arXiv: [2301.09711v2](http://arxiv.org/abs/2301.09711v2)
Author: Siyan Daniel Li-Huerta
Published: 2023-01-23
# Local-global compatibility over function fields

###### Abstract.

We prove that V. Lafforgue's global Langlands correspondence is compatible with Fargues-Scholze's semisimplified local Langlands correspondence. As a consequence, we canonically lift Fargues-Scholze's construction to a non-semisimplified local Langlands correspondence for local fields of characteristic \(p\geq 5\). We also deduce that Fargues-Scholze's construction agrees with that of Genestier-Lafforgue, answering a question of Fargues-Scholze, Hansen, Harris, and Kaletha. The proof relies on a uniformization morphism for moduli spaces of shtukas.

###### Contents

* 1 Recollections on affine Grassmannians
* 2 Formal moduli of local shtukas
* 3 Relative \(z\)-adic Hodge theory
* 4 Analytic moduli of local shtukas
* 5 Uniformizing the moduli spaces of global shtukas
* 6 Local-global compatibility
* 7 Applications

## Introduction

The Langlands program predicts a relationship between automorphic forms and Galois representations. More precisely, in the case of a connected reductive group \(\mathbf{G}\) over a global function field \(\mathbf{F}\) of characteristic \(p>0\), the Langlands program posits a canonical map

\[\mathrm{GLC}_{\mathbf{G}}:\left\{\begin{array}{c}\text{cuspidal automorphic}\\ \text{representations of }\mathbf{G}(\mathbb{A}_{\mathbf{F}})\end{array}\right\}\to\left\{\begin{array}{c}L\text{-parameters}\\ \text{for }\mathbf{G}\text{ over }\mathbf{F}\end{array}\right\},\]

where \(\mathbb{A}_{\mathbf{F}}\) denotes the adele ring of \(\mathbf{F}\), and all representations are taken with \(\overline{\mathbb{Q}}_{\ell}\)-coefficients for some \(\ell\neq p\). In a landmark result, such a map \(\mathrm{GLC}_{\mathbf{G}}\) was constructed by V. Lafforgue [32].

In the case of a connected reductive group \(G\) over a nonarchimedean local field \(F\), the Langlands program predicts a similar map

(\(\dagger\))
\[\mathrm{LLC}_{G}:\left\{\begin{array}{c}\text{irreducible smooth}\\ \text{representations of }G(F)\end{array}\right\}\to\left\{\begin{array}{c}L\text{-parameters}\\ \text{for }G\text{ over }F\end{array}\right\}.\]

Recent breakthrough work of Fargues-Scholze [11] constructs such a map up to semisimplification; namely, they construct a map

(\(\ddagger\))
\[\operatorname{LLC}_{G}^{\operatorname{ss}}:\left\{\begin{array}{c}\text{irreducible smooth}\\ \text{representations of }G(F)\end{array}\right\}\to\left\{\begin{array}{c}\text{semisimple }L\text{-parameters}\\ \text{for }G\text{ over }F\end{array}\right\}.\]

Our main result is that V. Lafforgue's global Langlands correspondence is compatible with Fargues-Scholze's semisimplified local Langlands correspondence.

**Theorem A**.: _Let \(v\) be a place of \(\mathbf{F}\). Then the square_

\[\begin{array}{ccc}\left\{\begin{array}{c}\text{cuspidal automorphic}\\ \text{representations of }\mathbf{G}(\mathbb{A}_{\mathbf{F}})\end{array}\right\}&\xrightarrow{\ \mathrm{GLC}_{\mathbf{G}}\ }&\left\{\begin{array}{c}L\text{-parameters}\\ \text{for }\mathbf{G}\text{ over }\mathbf{F}\end{array}\right\}\\ \Pi\mapsto\Pi_{v}\Big\downarrow&&\Big\downarrow\sigma\mapsto(\sigma|_{W_{\mathbf{F}_{v}}})^{\operatorname{ss}}\\ \left\{\begin{array}{c}\text{irreducible smooth}\\ \text{representations of }\mathbf{G}(\mathbf{F}_{v})\end{array}\right\}&\xrightarrow{\ \operatorname{LLC}^{\operatorname{ss}}_{\mathbf{G}_{\mathbf{F}_{v}}}\ }&\left\{\begin{array}{c}\text{semisimple }L\text{-parameters}\\ \text{for }\mathbf{G}\text{ over }\mathbf{F}_{v}\end{array}\right\}\end{array}\]

_commutes._

Since \(\operatorname{GLC}_{\mathbf{G}}\) [32, Theoreme 12.3] and \(\operatorname{LLC}_{G}^{\operatorname{ss}}\) [11, Theorem IX.0.5] are compatible with the Satake isomorphism at unramified places, for a given cuspidal automorphic representation this is already known at _unramified_ places. We actually prove a refinement of Theorem A on the level of _excursion algebras_; see Theorem 6.13.

_Remarks_.:

1. V. Lafforgue [32, Theoreme 13.2] and Fargues-Scholze [11, Proposition IX.4.1] prove a version of their results with \(\overline{\mathbb{F}}_{\ell}\)-coefficients, and the analogous version of Theorem A also holds in this mod-\(\ell\) context. See Theorem 6.15.
2. Once one constructs a non-semisimplified local Langlands correspondence as in Equation (\(\dagger\)) (e.g. see Theorem B below), one can ask whether Theorem A holds before semisimplification. The answer is already negative when \(\mathbf{G}\) is the units of a quaternion algebra [14, Remarque 0.3]. More generally, Arthur's conjecture [4] predicts that the answer is negative precisely for global \(A\)-packets where a local \(A\)-packet intersects more than one local \(L\)-packet. For instance, examples of Howe-Piatetski-Shapiro [25] show that the answer is also negative when \(\mathbf{G}\) is \(\operatorname{Sp}_{4}\).

We now turn to some consequences of Theorem A. When \(\operatorname{char}F>0\) is not too small, Theorem A enables us to remove the "up to semisimplification" ambiguity in Fargues-Scholze's construction.

**Theorem B**.: _Assume that \(\operatorname{char}F=p\geq 5\). Then \(\operatorname{LLC}_{G}^{\operatorname{ss}}\) canonically lifts to a non-semisimplified local Langlands correspondence \(\operatorname{LLC}_{G}\) as in Equation (\(\dagger\))._

Actually, we only need to assume that \(p\) is good for the non-simply laced absolute factors of \(G\); see Theorem 7.1. The proof that Theorem A implies Theorem B is due to Gan-Harris-Sawin [12]; roughly, the idea is to maneuver into a situation where Theorem A holds even before semisimplification. This uses a globalization result of Beuzart-Plessis [12], work of Heinloth-Ngo-Yun [24] on Kloosterman sheaves, and Deligne's purity theorem.

Our next result concerns previous work of Genestier-Lafforgue [14], who also constructed a map as in Equation (\(\ddagger\)) when \(\operatorname{char}F>0\). Genestier-Lafforgue obtained a version of Theorem A for their construction, and since this property basically uniquely characterizes such maps, we deduce the following result.

**Theorem C**.: _The Genestier-Lafforgue correspondence agrees with the Fargues-Scholze correspondence._

This answers a question of Fargues-Scholze [11], Hansen, Harris, and Kaletha [28]. We also prove a refinement of Theorem C on the level of Bernstein centers; see Theorem 6.15.

_Remark_.: Conversely, if we only had Theorem C, then work of Genestier-Lafforgue would imply Theorem A. However, our proof of Theorem A is independent of their results.

We conclude by showing that \(\operatorname{LLC}^{\operatorname{ss}}_{G}\) satisfies the expected compatibility with the local Jacquet-Langlands correspondence [5], which we denote by \(\operatorname{JL}\), when \(\operatorname{char}F>0\) and \(G\) is the units of a central simple algebra over \(F\).

**Theorem D**.: _Assume that \(\operatorname{char}F>0\) and \(G\) is the units of a central simple algebra over \(F\). For any irreducible essentially \(L^{2}\) representation \(\pi\) of \(G(F)\), we have \(\operatorname{LLC}^{\operatorname{ss}}_{G}(\pi)=\operatorname{LLC}^{\operatorname{ss}}_{\operatorname{GL}_{n}}(\operatorname{JL}(\pi))\)._

When \(\operatorname{char}F>0\), Theorem D was previously only known when \(G\) is \(\operatorname{GL}_{n}\) or the units of a central division algebra over \(F\) [11, Theorem IX.7.4]. The \(\operatorname{char}F=0\) analogue of Theorem D is due to Hansen-Kaletha-Weinstein [19, Theorem 6.6.1] as a consequence of their work on the local Kottwitz conjecture.

Let us discuss our proof of Theorem A. Elements of our strategy go back to Deligne's letter to Piatetski-Shapiro [10], which proves local-global compatibility for modular forms.
The Galois representations associated with modular forms are constructed via the cohomology of modular curves, and one of Deligne's key ideas was to restrict to the supersingular locus, using the uniformization of the latter by Lubin-Tate space to relate the local and global Langlands correspondences for \(\operatorname{GL}_{2}\). Deligne's proof, as well as subsequent works on local-global compatibility using basic uniformization [9, 21, 40, 35], also crucially relies on arguments specific to the particular group \(\mathbf{G}\) in question. However, our proof of Theorem A is uniform in all groups \(\mathbf{G}\).

We begin by observing that, since the correspondences of V. Lafforgue and Fargues-Scholze are constructed via _excursion operators_, it suffices to show that said operators are compatible. Let us recall their definition, which uses moduli spaces of shtukas. For simplicity, assume that \(\mathbf{G}\) is split, and write \(\widehat{\mathbf{G}}\) for the dual group of \(\mathbf{G}\) over \(\overline{\mathbb{Q}}_{\ell}\). For any finite set \(I\) and representation \(V\) of \(\widehat{\mathbf{G}}^{I}\), write \(\operatorname{Sht}^{I}_{\mathbf{G},V}\) for the associated moduli space of _global \(\mathbf{G}\)-shtukas_,1 which is a Deligne-Mumford stack. Work of Xue [44] naturally endows the compactly supported intersection cohomology \(H^{I}_{V}\) of its generic fiber with an action of \(W^{I}_{\mathbf{F}}\), where \(W_{\mathbf{F}}\) denotes the absolute Weil group. For any \(x\) and \(\xi\) in \(V\) and \(V^{\vee}\), respectively, that are fixed by the image of \(\Delta:\widehat{\mathbf{G}}\hookrightarrow\widehat{\mathbf{G}}^{I}\), and any \(\gamma_{\bullet}\) in \(W^{I}_{\mathbf{F}}\), the associated global excursion operator is

(\(\heartsuit\))
\[H^{*}_{\mathbf{1}}\xrightarrow{\ x\ }H^{*}_{V|_{\Delta(\widehat{\mathbf{G}})}}=H^{I}_{V}\xrightarrow{\ \gamma_{\bullet}\ }H^{I}_{V}=H^{*}_{V|_{\Delta(\widehat{\mathbf{G}})}}\xrightarrow{\ \xi\ }H^{*}_{\mathbf{1}},\]

where \(*\) denotes the singleton set, and \(\mathbf{1}\) denotes the trivial representation.

Footnote 1: In the introduction, we ignore convolution data and level structures in our notation.

In the local setting, write \(\mathcal{L}\mathrm{oc}\mathsf{Sht}^{I}_{\mathbf{G},V}\) for the associated moduli space of _local \(\mathbf{G}\)-shtukas_, which is an analytic adic space. Work of Fargues-Scholze [11] naturally endows the intersection homology \(H^{\mathrm{loc},I}_{V}\) of its generic fiber with an action of \(W^{I}_{\mathbf{F}_{v}}\), so when \(\gamma_{\bullet}\) lies in \(W^{I}_{\mathbf{F}_{v}}\), we can form local excursion operators using the same recipe as in Equation (\(\heartsuit\)).

We compare the local and global excursion operators using a uniformization morphism. To define it, first we construct a formal model \(\mathfrak{Loc}\mathfrak{Sht}^{I}_{\mathbf{G},V}\) for \(\mathcal{L}\mathrm{oc}\mathsf{Sht}^{I}_{\mathbf{G},V}\) at hyperspecial level. Stating the formal moduli problem is straightforward, although comparing it with our original definition of local \(\mathbf{G}\)-shtukas requires an equicharacteristic version of Kedlaya-Liu's results [31] on relative \(p\)-adic Hodge theory, which we prove.
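For orientation, we recall the mechanism by which excursion operators pin down parameters; this is a standard reformulation of V. Lafforgue's construction (stated loosely, for split \(\mathbf{G}\) and with normalizations suppressed), not a statement particular to this paper. If a cuspidal automorphic representation \(\Pi\) contributes to \(H^{*}_{\mathbf{1}}\), then the composite in Equation (\(\heartsuit\)), which we temporarily denote \(S_{I,V,x,\xi,\gamma_{\bullet}}\) for this sketch only, acts on the corresponding eigenspace by the scalar

\[\langle\xi,(\sigma_{\Pi}(\gamma_{i}))_{i\in I}\cdot x\rangle,\]

where \(\sigma_{\Pi}=\operatorname{GLC}_{\mathbf{G}}(\Pi):W_{\mathbf{F}}\to\widehat{\mathbf{G}}(\overline{\mathbb{Q}}_{\ell})\). Consequently, matching the local and global excursion operators for all \((I,V,x,\xi)\) and all \(\gamma_{\bullet}\in W^{I}_{\mathbf{F}_{v}}\subseteq W^{I}_{\mathbf{F}}\) pins down the restriction of \(\sigma_{\Pi}\) to \(W_{\mathbf{F}_{v}}\) up to semisimplification, which is the shape of Theorem A.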
Next, we use Beauville-Laszlo gluing to construct a formally etale morphism of formal stacks

\[\widehat{\Theta}:\mathfrak{Loc}\mathfrak{Sht}^{I}_{\mathbf{G},V}\to\widehat{\operatorname{Sht}}^{I}_{\mathbf{G},V}\]

when the level is hyperspecial at \(v\), where \(\widehat{\operatorname{Sht}}^{I}_{\mathbf{G},V}\) denotes the formal completion of \(\operatorname{Sht}^{I}_{\mathbf{G},V}\) along \(v^{I}\), and we assume that \(\deg v=1\) for simplicity. This generalizes results of Arasteh Rad-Hartl [3]. From here, we restrict to a Harder-Narasimhan truncation \(\operatorname{Sht}^{I,\leq s}_{\mathbf{G},V}\) of \(\operatorname{Sht}^{I}_{\mathbf{G},V}\) and enlarge the level away from \(v\). This yields a scheme that is locally of finite type, so we can use Huber's analytification [26, (3.8)] to extend \(\widehat{\Theta}\) to a morphism of analytic adic spaces

\[\Theta:\mathcal{L}\mathrm{oc}\mathsf{Sht}^{I,\leq s}_{\mathbf{G},V}\to(\operatorname{Sht}^{I,\leq s}_{\mathbf{G},V})_{(\operatorname{Spa}\mathbf{F}_{v})^{I}}\]

for deeper levels at \(v\). To prove that \(\Theta\) is etale, it suffices to consider the case of hyperspecial level. There, we prove that \(\mathfrak{Loc}\mathfrak{Sht}^{I}_{\mathbf{G},V}\) is a formal scheme that is locally formally of finite type, generalizing results of Arasteh Rad-Hartl [2]. After restricting to a Harder-Narasimhan truncation, this lets us upgrade the formal etaleness of \(\Theta\) to etaleness, as desired.

Since \(\Theta\) is etale, we can form the \(!\)-pushforward map

\[\Theta_{!}:H^{\mathrm{loc},I,\leq s}_{V}\to H^{I,\leq s}_{V}.\]

After restricting to a Harder-Narasimhan truncation, this induces a morphism from the analogous composition diagram for \(H^{\mathrm{loc},I}_{V}\) to the composition diagram in Equation (\(\heartsuit\)). We use this to prove that the global and local excursion operators are compatible, which concludes the proof of Theorem A.

With Theorem A in hand, let us return to the local context and sketch the proofs of Theorem B, Theorem C, and Theorem D. For Theorem B, compatibility with parabolic induction and the Langlands classification reduce us to the case of \(L^{2}\) representations \(\pi\). Then the Langlands program predicts \(\mathrm{LLC}_{G}(\pi)\) to be the unique pure \(L\)-parameter whose semisimplification is \(\mathrm{LLC}^{\mathrm{ss}}_{G}(\pi)\), if it exists. To construct this \(L\)-parameter, we use a globalization result of Beuzart-Plessis [12] to obtain a cuspidal automorphic representation \(\Pi\) that has the same cuspidal support as \(\pi\) at one place and is isomorphic to the cuspidal representation \(\pi^{\prime}\) considered by Gross-Reeder [16] at another place. Using Theorem A and work of Heinloth-Ngo-Yun [24], we show that the Fargues-Scholze parameter of \(\pi^{\prime}\) is irreducible. Therefore applying Deligne's purity theorem to the V. Lafforgue parameter of \(\Pi\) and using Theorem A again yield the desired result.

For Theorem C, we instead reduce to the case of cuspidal representations. Then a classical Poincare series argument and Theorem A give the desired result. Finally, for Theorem D we construct a cuspidal automorphic representation of \(\operatorname{GL}_{n}\) that globalizes \(\operatorname{JL}(\pi)\) and transfers to a suitable central division algebra under the global Jacquet-Langlands correspondence [6] by using the simple trace formula. From here, the Chebotarev density theorem and Theorem A imply the desired result.
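Schematically, and suppressing the Harder-Narasimhan truncations and identifications above, the key compatibility can be summarized as follows (the superscripts \(\mathrm{loc}\) and \(\mathrm{glob}\) are our ad hoc notation for this informal summary, not notation used in the body of the paper): for \(\gamma_{\bullet}\in W^{I}_{\mathbf{F}_{v}}\),

\[S^{\mathrm{glob}}_{I,V,x,\xi,\gamma_{\bullet}}\circ\Theta_{!}=\Theta_{!}\circ S^{\mathrm{loc}}_{I,V,x,\xi,\gamma_{\bullet}}\colon H^{\mathrm{loc},*}_{\mathbf{1}}\to H^{*}_{\mathbf{1}},\]

so eigenvalues of local excursion operators are matched with eigenvalues of global ones wherever \(\Theta_{!}\) is nonzero.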
### Outline

In §1, we recall some facts about loop groups and Beilinson-Drinfeld affine Grassmannians. In §2, we define the formal moduli problem and prove that it is a formal scheme that is locally formally of finite type. In §3, we prove the necessary results on \(z\)-adic Hodge theory. In §4, we define the analytic moduli problem, compare it with the formal moduli problem, and recall results of Fargues-Scholze [11] on its intersection homology. In §5, we recall the global moduli problem and construct the uniformization morphism. In §6, we use this to prove Theorem A. In §7, we use Theorem A to prove Theorem B, Theorem C, and Theorem D.

### Notation

Unless otherwise specified, all products are taken over \(\mathbb{F}_{q}\). In our definition of ind-schemes, we require the transition morphisms to be closed embeddings. When viewing an adic space \(X\) as a locally ringed space, we use \(\mathscr{O}_{X}\) for its structure sheaf. Starting in §3, we freely use definitions from perfectoid geometry as in [41] and [11]. For any adic space \(X\) over \(\mathbb{Z}_{p}\), write \(X^{\Diamond}\) for the associated v-sheaf over \(\mathbb{F}_{p}\) as in [42, Lemma 18.1.1].

### Acknowledgements

The author thanks Mark Kisin for his patience and advice. The author would also like to thank Michael Harris for giving a talk on [12] that motivated him to prove Theorem A, and to thank David Hansen for his interest and encouragement.

## 1. Recollections on affine Grassmannians

In this section, we begin by setting up our local context. We then establish some notation on loop groups, Beilinson-Drinfeld affine Grassmannians, and their affine Schubert varieties, as well as recall basic facts about these objects. Nothing in this section is new.

Let \(F\) be a local field of characteristic \(p>0\), and write \(\mathbb{F}_{q}\) for its residue field. Fix a separable closure \(\overline{F}\) of \(F\), and write \(\Gamma_{F}\) for \(\operatorname{Gal}(\overline{F}/F)\). Choose a uniformizer \(z\) of \(\mathcal{O}_{F}\), which yields an identification \(\mathcal{O}_{F}=\mathbb{F}_{q}[\![z]\!]\). Let \(G\) be a parahoric group scheme over \(\mathcal{O}_{F}\) as in [8, 5.2.6].

It will be convenient to use the following globalization of our local setup, although we will see that our constructions are independent of this globalization.

**Lemma**.: _There exists a geometrically connected smooth proper curve \(C\) over \(\mathbb{F}_{q}\), a nonempty open subspace \(U\subseteq C\), a parahoric group scheme \(G_{C}\) over \(C\) as in [38, Definition 2.18], a closed point \(v\) of \(C\), and an isomorphism \(\widehat{\mathscr{O}}_{C,v}\cong\mathcal{O}_{F}\) such that_

1. \(G_{C}|_{U}\) _is reductive over_ \(U\)_,_
2. \(G_{C}|_{\widehat{\mathscr{O}}_{C,v}}\) _is identified with_ \(G\) _as group schemes over_ \(\widehat{\mathscr{O}}_{C,v}\cong\mathcal{O}_{F}\)_._

_Moreover, there exists an \(\operatorname{SL}_{h}\)-bundle \(\mathscr{V}\) on \(C\) and a closed embedding_

\[\iota:G_{C}\to\underline{\operatorname{Aut}}(\mathscr{V})\]

_of group schemes over \(C\) such that \(\underline{\operatorname{Aut}}(\mathscr{V})/G_{C}\) is quasi-affine over \(C\)._

Proof.: By [38, Lemma 3.1], there exists a connected smooth curve \(\mathring{C}\) over \(\mathbb{F}_{q}\), a smooth affine group scheme \(\mathring{G}\) over \(\mathring{C}\) with geometrically connected fibers, a closed point \(v\) of \(\mathring{C}\), and an isomorphism \(\widehat{\mathscr{O}}_{\mathring{C},v}\cong\mathcal{O}_{F}\) such that \(\mathring{G}|_{\mathring{C}\smallsetminus v}\) is reductive over \(\mathring{C}\smallsetminus v\) and \(\mathring{G}|_{\widehat{\mathscr{O}}_{\mathring{C},v}}\) is identified with \(G\) as group schemes over \(\widehat{\mathscr{O}}_{\mathring{C},v}\cong\mathcal{O}_{F}\). Because \(\mathring{C}\) has an \(\mathbb{F}_{q}\)-point \(v\), it is geometrically connected. Write \(C\) for the associated smooth proper curve over \(\mathbb{F}_{q}\). Fpqc descent and [8, 5.1.9] yield a parahoric group scheme \(G_{C}\) over \(C\) as in [38, Definition 2.18] that extends \(\mathring{G}\), so we can take \(U=\mathring{C}\smallsetminus v\). Finally, the last claim follows from [3, Proposition 2.2(b)].

Let us recall some facts about loop groups and affine Grassmannians. Let \(S=\operatorname{Spec}R\) be an affine scheme over \(C^{I}\), and for all \(i\) in \(I\), write \(\Gamma_{i}\) for the graph of its \(i\)-th projection \(S\to C\), which is a relative effective Cartier divisor on \(C\times S\). Let \(I_{1},\ldots,I_{k}\) be an ordered partition of \(I\). Write \(\widehat{\mathcal{O}}_{C}(S)\) for the ring of global sections of the completion of \(\mathscr{O}_{C\times S}\) along \(\sum_{i\in I}\Gamma_{i}\). For all \(1\leq j\leq k\), write \(\widehat{\mathcal{O}}_{C}^{j,\circ}(S)\) for the version that is punctured along \(\sum_{i\in I_{j}}\Gamma_{i}\).

**Definition**.:

1. Write \(L_{I}^{n}(G_{C})\), \(L_{I}^{+}(G_{C})\), and \(L_{I}^{j,\circ}(G_{C})\) for the sheaves over \(C^{I}\) given by sending \(S\) to \(G_{C}(\mathscr{O}_{n\sum_{i\in I}\Gamma_{i}})\), \(G_{C}(\widehat{\mathcal{O}}_{C}(S))\), and \(G_{C}(\widehat{\mathcal{O}}_{C}^{j,\circ}(S))\), respectively.
2. Write \(\operatorname{Gr}_{G_{C}}^{(I_{1},\ldots,I_{k})}\) for the sheaf over \(C^{I}\) whose \(S\)-points parametrize data consisting of
   1. for all \(1\leq j\leq k\), a \(G_{C}\)-bundle \(\mathscr{G}_{j}\) on \(\operatorname{Spec}\widehat{\mathcal{O}}_{C}(S)\),
   2. for all \(1\leq j\leq k\), an isomorphism of \(G_{C}\)-bundles
      \[\phi_{j}:\mathscr{G}_{j}|_{\operatorname{Spec}\widehat{\mathcal{O}}_{C}^{j,\circ}(S)}\stackrel{\sim}{\to}\mathscr{G}_{j+1}|_{\operatorname{Spec}\widehat{\mathcal{O}}_{C}^{j,\circ}(S)},\]
      where \(\mathscr{G}_{k+1}\) denotes the trivial \(G_{C}\)-bundle.

Write \(L_{z}^{+}G\) and \(L_{z}G\) for the fiber at \(v\) of \(L_{*}^{+}(G_{C})\) and \(L_{*}^{1,\circ}(G_{C})\), respectively, where \(*\) denotes the singleton set. Also, write \(\operatorname{Gr}_{z,G}^{k}\) for the fiber at \(v^{I}\) of \(\operatorname{Gr}_{G_{C}}^{(\{1\},\ldots,\{k\})}\). The proof of [18, Lemma 3.2] shows that \(L_{I}^{n}(G_{C})\) is an affine scheme of finite type over \(C^{I}\), so \(L_{I}^{+}(G_{C})=\varprojlim_{n}L_{I}^{n}(G_{C})\) is an affine scheme over \(C^{I}\).
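For orientation, here is the standard unwinding over the closed point (a special case of the definitions above combined with Lemma 1.3 below; nothing here is new): for an \(\mathbb{F}_{q}\)-algebra \(R\),

\[L_{z}^{+}G(R)=G(R[\![z]\!]),\qquad L_{z}G(R)=G(R(\!(z)\!)),\]

and \(\operatorname{Gr}^{1}_{z,G}\) parametrizes a \(G\)-bundle on \(\operatorname{Spec}R[\![z]\!]\) together with a trivialization over \(\operatorname{Spec}R(\!(z)\!)\), recovering the usual affine Grassmannian \(L_{z}G/L_{z}^{+}G\), the quotient being taken as etale sheaves since \(L_{z}^{+}G\)-bundles trivialize after an etale cover.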
Recall that \(L_{I}^{j,\circ}(G_{C})\) is an ind-affine ind-scheme over \(C^{I}\) [18, Lemma 3.2(i)], and \(\operatorname{Gr}_{G_{C}}^{(I_{1},\ldots,I_{k})}\) is an ind-projective ind-scheme over \(C^{I}\) [3, Proposition 3.12]. Also, note that \(L_{z}^{+}G\), \(L_{z}G\), and \(\operatorname{Gr}_{z,G}^{k}\) are independent of the globalization from Lemma 1.1.

The following lemmas give an alternative description of the Beilinson-Drinfeld affine Grassmannian after completing at a point. Write \(\mathbb{D}\) for the formal scheme \(\operatorname{Spf}\mathcal{O}_{F}\), and let \(I\) be a finite set. Recall that \(\operatorname{Spec}\) yields an anti-equivalence from the category of \(\mathbb{F}_{q}[\zeta_{i}]_{i\in I}\)-algebras where the \(\zeta_{i}\) are nilpotent to the category of affine schemes over \(\mathbb{D}^{I}\). Let \(S=\operatorname{Spec}R\) be an affine scheme over \(\mathbb{D}^{I}\).

**Lemma**.: _The direct system \((n\sum_{i\in I}\Gamma_{i})_{n\geq 0}\) of schemes over \(C\times S\) is naturally isomorphic to \((nv\times S)_{n\geq 0}\). Consequently, \(\widehat{\mathcal{O}}_{C}(S)\) is naturally isomorphic to \(R[\![z]\!]\), and \(\widehat{\mathcal{O}}_{C}^{j,\circ}(S)=\widehat{\mathcal{O}}_{C}(S)[\frac{1}{z-\zeta_{i}}]_{i\in I_{j}}=R[\![z]\!][\frac{1}{z-\zeta_{i}}]_{i\in I_{j}}\) is naturally isomorphic to \(R(\!(z)\!)\)._

Proof.: As nilpotent thickenings are etale-local and \(C\) is smooth at \(v\), it suffices to replace \(C\) with \(\mathbb{A}_{\mathbb{F}_{q}}^{1}=\operatorname{Spec}\mathbb{F}_{q}[z]\) and \(v\) with the origin. Then \(n\sum_{i\in I}\Gamma_{i}\) is the vanishing locus of \(\prod_{i\in I}(z-\zeta_{i})^{n}\) in \(C\times S=\operatorname{Spec}R[z]\), and \(nv\times S\) is the vanishing locus of \(z^{n}\) in \(C\times S\). Choose positive integers \(n_{i}\) such that \(\zeta_{i}^{n_{i}}=0\) in \(R\). Set \(N_{1}\coloneqq\sum_{i\in I}(n+n_{i}-1)\) and \(N_{2}\coloneqq n+\max_{i\in I}\{n_{i}\}-1\). On \(n\sum_{i\in I}\Gamma_{i}\), we see

\[z^{N_{1}}=\prod_{i\in I}((z-\zeta_{i})+\zeta_{i})^{n+n_{i}-1}=\prod_{i\in I}\sum_{l=n}^{n+n_{i}-1}\binom{n+n_{i}-1}{l}(z-\zeta_{i})^{l}\zeta_{i}^{n+n_{i}-1-l}=0,\]

where the terms with \(l\leq n-1\) already vanish because \(\zeta_{i}^{n_{i}}=0\), and the remaining product is divisible by \(\prod_{i\in I}(z-\zeta_{i})^{n}=0\). So \(n\sum_{i\in I}\Gamma_{i}\) lies in \(N_{1}v\times S\). Conversely, on \(nv\times S\), we have

\[\prod_{i\in I}(z-\zeta_{i})^{N_{2}}=\prod_{i\in I}\sum_{l=0}^{n-1}\binom{N_{2}}{l}z^{l}(-\zeta_{i})^{N_{2}-l}=0,\]

where the terms with \(l\geq n\) already vanish because \(z^{n}=0\), and the remaining terms vanish because \(N_{2}-l\geq n_{i}\). So \(nv\times S\) lies in \(N_{2}\sum_{i\in I}\Gamma_{i}\).

Write \(\widehat{\operatorname{Gr}}_{G}^{(I_{1},\dots,I_{k})}\) for the formal completion of \(\operatorname{Gr}_{G_{C}}^{(I_{1},\dots,I_{k})}\) along \(v^{I}\) in \(C^{I}\).

**Lemma**.: _Our \(\widehat{\operatorname{Gr}}_{G}^{(I_{1},\dots,I_{k})}\) is an ind-projective ind-scheme over \(\mathbb{D}^{I}\), and it is naturally isomorphic to \(\operatorname{Gr}_{z,G}^{k}|_{\mathbb{D}^{I}}\)._

Thus \(\widehat{\operatorname{Gr}}_{G}^{(I_{1},\dots,I_{k})}\) is independent of the globalization from Lemma 1.1.

Proof.: This follows immediately from Lemma 1.3.
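As a sanity check on the numerology in the proof of Lemma 1.3, here is a toy case, included only for the reader's convenience: take \(I=\{1\}\), \(n=1\), and \(\zeta\coloneqq\zeta_{1}\) with \(\zeta^{2}=0\) in \(R\), so that \(n_{1}=2\) and \(N_{1}=N_{2}=2\). On \(\Gamma_{1}\) we have \(z-\zeta=0\), so

\[z^{2}=((z-\zeta)+\zeta)^{2}=(z-\zeta)^{2}+2\zeta(z-\zeta)+\zeta^{2}=0,\]

and \(\Gamma_{1}\) lies in \(2v\times S\); conversely, on \(v\times S\) we have \(z=0\), so \((z-\zeta)^{2}=z^{2}-2z\zeta+\zeta^{2}=0\), and \(v\times S\) lies in \(2\Gamma_{1}\).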
We now introduce affine Schubert varieties. Write \(\overline{\mathbb{F}_{q}(C)}\) for the separable closure of \(\mathbb{F}_{q}(C)\) in \(\overline{F}\), and write \(\Gamma_{\mathbb{F}_{q}(C)}\) for \(\operatorname{Gal}(\overline{\mathbb{F}_{q}(C)}/\mathbb{F}_{q}(C))\). Let \(T\) be a maximal subtorus of \(G_{C}|_{\mathbb{F}_{q}(C)}\), and write \(X_{*}^{+}(T)\) for the set of dominant cocharacters of \(T_{\overline{\mathbb{F}_{q}(C)}}\) with respect to a fixed Borel subgroup \(B\subseteq G_{C}|_{\overline{\mathbb{F}_{q}(C)}}\) containing \(T_{\overline{\mathbb{F}_{q}(C)}}\). Identify \(X_{*}^{+}(T)\) with the set of conjugacy classes of cocharacters of \(G_{C}|_{\overline{\mathbb{F}_{q}(C)}}\).

Let \(\mu_{\bullet}=(\mu_{i})_{i\in I}\) be in \(X_{*}^{+}(T)^{I}\). Identify the field of definition of \(\mu_{i}\) with \(\mathbb{F}_{q}(C_{i})\) for some finite cover \(C_{i}\to C\) that is etale over \(U\), and write \(U_{i}\) for the preimage of \(U\). Note that the closure \(F_{i}\) of \(\mathbb{F}_{q}(C_{i})\) in \(\overline{F}\) equals the completion of \(\mathbb{F}_{q}(C_{i})\) at the closed point \(v_{i}\) of \(C_{i}\) above \(v\) induced by \(\overline{\mathbb{F}_{q}(C)}\to\overline{F}\). Write \(\mathbb{D}_{i}\) for \(\operatorname{Spf}\mathcal{O}_{F_{i}}\).

**Definition**.:

1. Write \(\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}U_{i}}\subseteq\operatorname{Gr}_{G_{C}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}U_{i}}\) for the associated closed affine Schubert variety, and write \(\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}C_{i}}\) for its closure in \(\operatorname{Gr}_{G_{C}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}C_{i}}\).
2. Write \(\widehat{\operatorname{Gr}}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) for the formal completion of \(\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}C_{i}}\) along \(\prod_{i\in I}v_{i}\) in \(\prod_{i\in I}C_{i}\).
3. When \(I=*\), write \(\operatorname{Gr}_{z,G,\mu}^{1}|_{v_{*}}\) for the fiber at \(v_{*}\) of \(\operatorname{Gr}_{G_{C},\mu}^{(*)}|_{C_{*}}\).

Recall that \(\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}C_{i}}\) is a projective scheme over \(\prod_{i\in I}C_{i}\), and the natural \(L_{I}^{+}(G_{C})\)-action on \(\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}C_{i}}\) factors through \(L_{I}^{n}(G_{C})\) for large enough \(n\) [32, Proposition 1.10]. Therefore \(\widehat{\operatorname{Gr}}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is a formal scheme that is formally of finite type and adic over \(\prod_{i\in I}\mathbb{D}_{i}\), and its special fiber is projective over \(\prod_{i\in I}v_{i}\). Also, the proof of [46, Lemma 3.2] shows that \(\widehat{\operatorname{Gr}}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is independent of the globalization from Lemma 1.1.

Recall that we have an isomorphism

\[\operatorname{Gr}_{z,G}^{k}\stackrel{\sim}{\to}(\operatorname{Gr}_{z,G}^{1})^{k}\]

given by \(((\mathscr{G}_{j})_{j=1}^{k},(\phi_{j})_{j=1}^{k})\mapsto((\mathscr{G}_{k},\phi_{k}),\ldots,(\mathscr{G}_{1},\phi_{k}\circ\cdots\circ\phi_{1}))\).
**Definition**.: Under this identification, write \(\operatorname{Gr}_{z,\operatorname{SL}_{h},m}^{k}\) for the closed subsheaf of \(\operatorname{Gr}_{z,\operatorname{SL}_{h}}^{k}\) corresponding to \((\operatorname{Gr}_{z,\operatorname{SL}_{h},m2\rho^{\vee}}^{1})^{k}\subseteq(\operatorname{Gr}_{z,\operatorname{SL}_{h}}^{1})^{k}\), where \(2\rho^{\vee}\) denotes the sum of positive coroots in \(\operatorname{SL}_{h}\).

By 1.5, we see that \(\operatorname{Gr}_{z,\operatorname{SL}_{h},m}^{k}\) is a projective scheme over \(\mathbb{F}_{q}\).

We conclude by showing that, after pulling back to the loop group, affine Schubert varieties are affine. Write \(L_{I}(G_{C})_{\mu_{\bullet}}|_{\prod_{i\in I}C_{i}}\) for the pullback of \(\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}C_{i}}\) under the natural morphism \(\prod_{j=1}^{k}L_{I}^{j,\circ}(G_{C})\to\operatorname{Gr}_{G_{C}}^{(I_{1},\ldots,I_{k})}\).

**Lemma**.: _Our \(L_{I}(G_{C})_{\mu_{\bullet}}|_{\prod_{i\in I}C_{i}}\) is affine over \(\prod_{i\in I}C_{i}\)._

Proof.: Because \(\underline{\operatorname{Aut}}(\mathscr{V})/G_{C}\) is quasi-affine over \(C\), the proof of [47, Proposition 1.2.6] shows that \(\iota_{*}:\operatorname{Gr}_{G_{C}}^{(I_{1},\ldots,I_{k})}\to\operatorname{Gr}_{\operatorname{SL}_{h,C}}^{(I_{1},\ldots,I_{k})}\) is a locally closed embedding. Now 1.5 indicates that \(\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}C_{i}}\) is a quasi-compact scheme, so [23, Lemma 5.4] implies that its image under \(\iota_{*}\) lies in \(\operatorname{Gr}_{\operatorname{SL}_{h,C},(m2\rho^{\vee})_{i\in I}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}C_{i}}\) for large enough \(m\). Since \(\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}C_{i}}\) is projective over \(\prod_{i\in I}C_{i}\) by 1.5 and \(\iota_{*}\) is a monomorphism, we see that \(\iota_{*}:\operatorname{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}C_{i}}\to\operatorname{Gr}_{\operatorname{SL}_{h,C},(m2\rho^{\vee})_{i\in I}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}C_{i}}\) is a closed embedding. Combined with the fact that \(L_{I}^{+}(G_{C})\to L_{I}^{+}(\operatorname{SL}_{h,C})\) is a closed embedding, this implies that \(L_{I}(G_{C})_{\mu_{\bullet}}|_{\prod_{i\in I}C_{i}}\to L_{I}(\operatorname{SL}_{h,C})_{(m2\rho^{\vee})_{i\in I}}|_{\prod_{i\in I}C_{i}}\) is also a closed embedding. Now the argument in the proof of [2, Lemma 4.23] shows that \(L_{I}(\operatorname{SL}_{h,C})_{(m2\rho^{\vee})_{i\in I}}\) is affine over \(C^{I}\), so \(L_{I}(G_{C})_{\mu_{\bullet}}|_{\prod_{i\in I}C_{i}}\) is affine over \(\prod_{i\in I}C_{i}\).

## 2. Formal moduli of local shtukas

To define the uniformization morphism via Beauville-Laszlo gluing in §5, we need a formal variant of the moduli of local shtukas. Moreover, to show that the uniformization morphism is etale, we need some finitude properties of this formal moduli. Accomplishing these tasks is the goal of this section.

We start by defining local shtukas and their quasi-isogenies in the formal setting. After proving a rigidity property for quasi-isogenies, we define the formal moduli problem, and we dedicate the rest of this section to proving that it gives a formal scheme that is locally formally of finite type over \(\mathbb{D}^{I}\).
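Before the general definition, it may help to record its simplest instance (our gloss; compare the local shtukas of Hartl-Viehmann [23]): when \(I=\{*\}\) and \(k=1\), a local \(G\)-shtuka as defined below amounts to a \(G\)-bundle \(\mathscr{G}\) on \(\operatorname{Spec}R[\![z]\!]\) together with an isomorphism

\[\phi:\mathscr{G}|_{\operatorname{Spec}R[\![z]\!][\frac{1}{z-\zeta}]}\stackrel{\sim}{\to}{}^{\tau}\mathscr{G}|_{\operatorname{Spec}R[\![z]\!][\frac{1}{z-\zeta}]},\]

that is, a Frobenius-twisted modification with a single leg at \(z=\zeta\).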
Our strategy ultimately harks back to Rapoport-Zink's proof [36] of the analogous property for Rapoport-Zink spaces. The equicharacteristic incarnation of this argument is heavily based on work of Hartl-Viehmann [23] and Arasteh Rad-Hartl [2], although we generalize their results to the case of arbitrarily many legs.

Later, it will be useful to work in the following generality. Let \(R\) be a topological \(\mathbb{F}_{q}[\zeta_{i}]_{i\in I}\)-algebra that is adic with finitely generated ideal of definition, and write \(S\coloneqq\operatorname{Spec}R\). Write \(\tau:S\to S\) for the absolute \(q\)-Frobenius endomorphism. By abuse of notation, we also write \(\tau:R[\![z]\!]\to R[\![z]\!]\) for the canonical lift of absolute \(q\)-Frobenius. We use \({}^{\tau}(-)\) to denote pullback by \(\tau\). Write \(R[\![z,\frac{1}{z})\) for the completion of \(R(\!(z)\!)\) with respect to the topology induced from \(R\).

We now define _local \(G\)-shtukas_.

**Definition**.:

1. A _local \(G\)-shtuka_ over \(S\) consists of
   1. for all \(1\leq j\leq k\), a \(G\)-bundle \(\mathscr{G}_{j}\) on \(\operatorname{Spec}R[\![z]\!]\),
   2. for all \(1\leq j\leq k\), an isomorphism of \(G\)-bundles
      \[\phi_{j}:\mathscr{G}_{j}|_{\operatorname{Spec}R[\![z]\!][\frac{1}{z-\zeta_{i}}]_{i\in I_{j}}}\stackrel{\sim}{\to}\mathscr{G}_{j+1}|_{\operatorname{Spec}R[\![z]\!][\frac{1}{z-\zeta_{i}}]_{i\in I_{j}}},\]
      where \(\mathscr{G}_{k+1}\) denotes the \(G\)-bundle \({}^{\tau}\mathscr{G}_{1}\).
2. Suppose that \(\operatorname{Spf}R\) lies over \(\prod_{i\in I}\mathbb{D}_{i}\), and let \(\mathscr{G}=((\mathscr{G}_{j})_{j=1}^{k},(\phi_{j})_{j=1}^{k})\) be a local shtuka over \(S\). We say that \(\mathscr{G}\) is _bounded by \(\mu_{\bullet}\)_ if, for any affine etale cover \(\operatorname{Spf}\widetilde{R}\to\operatorname{Spf}R\) such that \({}^{\tau}\mathscr{G}_{1}|_{\operatorname{Spec}\widetilde{R}[\![z]\!]}\) is trivial and any trivialization \(t:{}^{\tau}\mathscr{G}_{1}|_{\operatorname{Spec}\widetilde{R}[\![z]\!]}\stackrel{\sim}{\to}G\), the \(\operatorname{Spf}\widetilde{R}\)-point of \(\widehat{\operatorname{Gr}}^{(I_{1},\dots,I_{k})}_{G}|_{\prod_{i\in I}\mathbb{D}_{i}}\) given by
   \[\mathscr{G}_{1}|_{\operatorname{Spec}\widetilde{R}[\![z]\!]}\xrightarrow{(\phi_{1})_{\widetilde{R}[\![z,\frac{1}{z})}}\cdots\xrightarrow{(\phi_{k-1})_{\widetilde{R}[\![z,\frac{1}{z})}}\mathscr{G}_{k}|_{\operatorname{Spec}\widetilde{R}[\![z]\!]}\xrightarrow{(t\circ\phi_{k})_{\widetilde{R}[\![z,\frac{1}{z})}}G\]
   lies in \(\widehat{\operatorname{Gr}}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i\in I}\mathbb{D}_{i}}\), using the description of \(\widehat{\operatorname{Gr}}^{(I_{1},\dots,I_{k})}_{G}\) from Lemma 1.4.

It suffices to check Definition 2.1.b) for a single \(\operatorname{Spf}\widetilde{R}\to\operatorname{Spf}R\) and \(t\).

For the rest of this section, assume that \(R\) is discrete, so that the \(\zeta_{i}\) are nilpotent in \(R\). In this setting, we use the following notion of quasi-isogenies.

**Definition**.: Let \(\mathscr{G}\) and \(\mathscr{G}^{\prime}\) be local \(G\)-shtukas over \(S\).
1. A _quasi-isogeny_ from \(\mathscr{G}\) to \(\mathscr{G}^{\prime}\) consists of, for all \(1\leq j\leq k\), an isomorphism of \(G\)-bundles
   \[\delta_{j}:\mathscr{G}_{j}|_{\operatorname{Spec}R(\!(z)\!)}\stackrel{\sim}{\to}\mathscr{G}^{\prime}_{j}|_{\operatorname{Spec}R(\!(z)\!)}\]
   such that \(\delta_{j+1}\circ\phi_{j}=\phi^{\prime}_{j}\circ\delta_{j}\) for all \(1\leq j\leq k\), where \(\delta_{k+1}\) denotes the isomorphism \({}^{\tau}\delta_{1}\), and we use Lemma 1.3 to identify \(R[\![z]\!][\frac{1}{z-\zeta_{i}}]_{i\in I_{j}}\) with \(R(\!(z)\!)\).
2. Let \(m\) be a non-negative integer, and let \(\delta\) be a quasi-isogeny from \(\mathscr{G}\) to \(\mathscr{G}^{\prime}\). We say that \(\delta\) is _bounded by \(m\)_ if, for all \(1\leq j\leq k\), the morphism \(\iota_{*}(\delta_{j})\) yields a point of \([L_{z}^{+}\operatorname{SL}_{h}\backslash\operatorname{Gr}^{1}_{z,\operatorname{SL}_{h},m2\rho^{\vee}}]\).

Since \(L_{z}^{+}G\)-bundles on \(\operatorname{Spec}R\) are trivial after an etale cover, [23, Lemma 5.4] implies that any quasi-isogeny is bounded by \(m\) for large enough \(m\).

We will need the following quantitative version of the rigidity of quasi-isogenies. Let \(J\) be an ideal of \(R\) satisfying \(J^{n}=0\), and write \(\operatorname{\mathsf{j}}:\overline{S}\to S\) for the associated closed embedding.

**Proposition**.: _For all local \(G\)-shtukas \(\mathscr{G}\) and \(\mathscr{G}^{\prime}\) over \(S\), pullback yields a bijection_

\[\{\text{quasi-isogenies from }\mathscr{G}\text{ to }\mathscr{G}^{\prime}\}\stackrel{\sim}{\to}\{\text{quasi-isogenies from }\operatorname{\mathsf{j}}^{*}\mathscr{G}\text{ to }\operatorname{\mathsf{j}}^{*}\mathscr{G}^{\prime}\}.\]

_Moreover, suppose that \(S\) lies over \(\prod_{i\in I}\mathbb{D}_{i}\) and that \(\mathscr{G}\) and \(\mathscr{G}^{\prime}\) are bounded by \(\mu_{\bullet}\). There exists a non-negative integer \(B\) such that, if \(\operatorname{\mathsf{j}}^{*}\delta\) is bounded by \(m\), then \(\delta\) is bounded by \(m+B\lceil\log_{q}n\rceil\)._

Proof.: By induction, it suffices to consider the case where \(n=q\). Then \(\tau:S\to S\) factors as \(\operatorname{\mathsf{j}}\circ\operatorname{\mathsf{i}}\) for a unique morphism \(\operatorname{\mathsf{i}}:S\to\overline{S}\). For any quasi-isogeny \(\delta\) from \(\mathscr{G}\) to \(\mathscr{G}^{\prime}\), note that \({}^{\tau}\delta_{1}=\operatorname{\mathsf{i}}^{*}\operatorname{\mathsf{j}}^{*}\delta_{1}\).
Therefore the commutativity constraints \(\delta_{j+1}\circ\phi_{j}=\phi^{\prime}_{j}\circ\delta_{j}\) show that \(\delta_{j}=(\phi^{\prime}_{j})^{-1}\circ\delta_{j+1}\circ\phi_{j}\) for all \(1\leq j\leq k\), where \(\delta_{k+1}={}^{\tau}\delta_{1}=\operatorname{\mathsf{i}}^{*}\operatorname{\mathsf{j}}^{*}\delta_{1}\) depends only on \(\operatorname{\mathsf{j}}^{*}\delta\). Hence \(\delta\) is uniquely determined by \(\operatorname{\mathsf{j}}^{*}\delta\), and running the same recursion starting from a quasi-isogeny from \(\operatorname{\mathsf{j}}^{*}\mathscr{G}\) to \(\operatorname{\mathsf{j}}^{*}\mathscr{G}^{\prime}\) constructs a preimage, which proves the first statement. Keeping track of relative position bounds in this recursion, using that \(\mathscr{G}\) and \(\mathscr{G}^{\prime}\) are bounded by \(\mu_{\bullet}\), yields the second statement.

We now define the formal moduli problem. Write \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G}\) for the presheaf over \(\mathbb{D}^{I}\) whose \(S\)-points parametrize pairs \((\mathscr{G},\delta)\), where \(\mathscr{G}\) is a local \(G\)-shtuka over \(S\) and \(\delta\) is a quasi-isogeny from \(\mathscr{G}\) to the trivial local \(G\)-shtuka \(G\), and write \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i\in I}\mathbb{D}_{i}}\) for the subsheaf of points where \(\mathscr{G}\) is bounded by \(\mu_{\bullet}\).
First, we naively stratify \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) by bounding the quasi-isogeny. Write \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) for the subsheaf of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) whose \(S\)-points consist of the \((\mathscr{G},\delta)\) such that \(\delta\) is bounded by \(m\).

**Proposition**.: _Our \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is a formal scheme that is formally of finite type and adic over \(\prod_{i\in I}\mathbb{D}_{i}\), its reduced subscheme is projective over \(\prod_{i\in I}v_{i}\), and \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) equals the direct limit \(\varinjlim_{m}\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\)._

Proof.: Note that we have a Cartesian square, where we use Proposition 2.5 to identify \(\mathfrak{Loc}\mathfrak{Sht}_{\operatorname{SL}_{h}}^{(I_{1},\dots,I_{k})}\) with \(\operatorname{Gr}_{z,\operatorname{SL}_{h}}^{k}|_{\mathbb{D}^{I}}\). Because \(\operatorname{SL}_{h}/G\) is quasi-affine over \(\mathcal{O}_{F}\), Proposition 2.5 and [47, Proposition 1.2.6] show that \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\to\operatorname{Gr}_{z,\operatorname{SL}_{h}}^{k}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is a closed embedding. Therefore its pullback \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\to\operatorname{Gr}_{z,\operatorname{SL}_{h},m}^{k}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is as well. Since \(\operatorname{Gr}_{z,\operatorname{SL}_{h},m}^{k}|_{\prod_{i\in I}nv_{i}}\) is projective over \(\prod_{i\in I}nv_{i}\) for any positive integer \(n\) by 1.6, the same holds for \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}nv_{i}}\).
Now the underlying topological space of \(\operatorname{Gr}_{z,\operatorname{SL}_{h},m}^{k}|_{\prod_{i\in I}nv_{i}}\) is independent of \(n\), so the \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}nv_{i}}\) have this property too. From here, [17, (1, 10.6.4)] indicates that \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is a noetherian formal scheme that is adic over \(\prod_{i\in I}\mathbb{D}_{i}\). Hence its reduced subscheme equals that of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}v_{i}}\), which is projective over \(\prod_{i\in I}v_{i}\), so \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is formally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\). Finally, the last statement follows from \(\operatorname{Gr}_{z,\operatorname{SL}_{h}}^{k}\) equaling the direct limit \(\varinjlim_{m}\operatorname{Gr}_{z,\operatorname{SL}_{h},m}^{k}\).

To obtain a more refined stratification of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\), we need the following algebraization lemma. Briefly, relax our assumption that \(R\) is discrete, since we plan to use this lemma later as well. Let \((S_{l})_{l\geq 0}\) be a direct system of affine schemes \(S_{l}=\operatorname{Spec}R_{l}\) over \(\prod_{i\in I}\mathbb{D}_{i}\) such that

* i) the morphisms \(S_{l}\to S_{l^{\prime}}\) are closed embeddings,
* ii) the associated ideals \(\ker(R_{l^{\prime}}\to R_{l})\) are nilpotent.

Take \(R\) to be the ring \(\varprojlim_{l}R_{l}\), and endow \(R\) with a topological ring structure such that \(\mathbb{F}_{q}[\zeta_{i}]_{i\in I}\to R\) is continuous, the \(R\to R_{l}\) are continuous for the discrete topology on \(R_{l}\), and \(R\) is adic with finitely generated ideal of definition.

**Lemma**.: _Pullback yields an equivalence of groupoids_

\[\left\{\begin{array}{c}\text{local }G\text{-shtukas over}\\ S\text{ bounded by }\mu_{\bullet}\end{array}\right\}\stackrel{\sim}{\longrightarrow}\varprojlim_{l}\left\{\begin{array}{c}\text{local }G\text{-shtukas over}\\ S_{l}\text{ bounded by }\mu_{\bullet}\end{array}\right\}.\]

Proof.: Let \((\mathscr{G}^{l})_{l\geq 0}\) be a compatible system of local \(G\)-shtukas over \(S_{l}\) bounded by \(\mu_{\bullet}\). We can form the \(G\)-bundles \(\mathscr{G}_{j}\coloneqq\varprojlim_{l}\mathscr{G}_{j}^{l}\) on \(\operatorname{Spec}R[\![z]\!]\), so now we just need to form the isomorphisms \(\phi_{j}\). Let \(\operatorname{Spec}\widetilde{R}_{0}\to S_{0}\) be an affine etale cover where \(\mathscr{G}_{j}^{0}|_{\operatorname{Spec}\widetilde{R}_{0}[\![z]\!]}\) is trivial for all \(1\leq j\leq k\), and fix trivializations of the \(\mathscr{G}_{j}^{0}|_{\operatorname{Spec}\widetilde{R}_{0}[\![z]\!]}\). By ii), there exists a unique affine etale cover \(\operatorname{Spec}\widetilde{R}_{l}\to S_{l}\) whose pullback to \(S_{0}\) is \(\operatorname{Spec}\widetilde{R}_{0}\), and there also exist compatible systems of trivializations of the \(\mathscr{G}_{j}^{l}|_{\operatorname{Spec}\widetilde{R}_{l}[\![z]\!]}\) [23, Proposition 2.2(c)].3
Under these identifications, the \((\phi_{j}^{l})_{\widetilde{R}_{l}(\!(z)\!)}\) correspond to compatible systems of \(b_{j}^{l}\) in \(G(\widehat{\mathcal{O}}_{C}^{j,\circ}(\operatorname{Spec}\widetilde{R}_{l}))\), where we use Lemma 1.3 to identify \(\widetilde{R}_{l}(\!(z)\!)\) with \(\widehat{\mathcal{O}}_{C}^{j,\circ}(\operatorname{Spec}\widetilde{R}_{l})\).

Footnote 3: While [23] only treats split reductive \(G\), the proof immediately adapts to any smooth \(G\).

For all \(i\) in \(I\), let \(V_{i}\) be an affine neighborhood of \(v_{i}\) in \(C_{i}\). Because the \(\mathscr{G}^{l}\) are bounded by \(\mu_{\bullet}\), our \((b_{j}^{l})_{j=1}^{k}\) yield \(\widetilde{R}_{l}\)-points of \(L_{I}(G_{C})_{\mu_{\bullet}}|_{\prod_{i\in I}V_{i}}\). The latter is affine by Lemma 1.7, so the compatible system of \((b_{j}^{l})_{j=1}^{k}\) yields an \(\widetilde{R}\coloneqq\varprojlim_{l}\widetilde{R}_{l}\)-point \((b_{j})_{j=1}^{k}\) of \(L_{I}(G_{C})_{\mu_{\bullet}}|_{\prod_{i\in I}V_{i}}\). By construction, the resulting local \(G\)-shtuka \(\widetilde{\mathscr{G}}\coloneqq((\mathscr{G}_{j}|_{\operatorname{Spec}\widetilde{R}[\![z]\!]})_{j=1}^{k},(b_{j})_{j=1}^{k})\) over \(\operatorname{Spec}\widetilde{R}\) is bounded by \(\mu_{\bullet}\). Since the \((\phi_{j}^{l})_{\widetilde{R}_{l}(\!(z)\!)}\) and thus \(b_{j}^{l}\) are compatible with the descent data of \(\mathscr{G}_{j}^{l}\) from \(\operatorname{Spec}\widetilde{R}_{l}\) to \(S_{l}\), we see that the \(b_{j}\) are compatible with the descent data of \(\mathscr{G}_{j}\) from \(\operatorname{Spec}\widetilde{R}\) to \(S\). Hence \(\widetilde{\mathscr{G}}\) naturally descends to a local \(G\)-shtuka \(\mathscr{G}\) over \(S\) bounded by \(\mu_{\bullet}\), as desired.

Resume our assumption that \(R\) is discrete. The following refined stratification of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) has better closure properties under formal completion. Write \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) for the formal completion of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) along the reduced subscheme of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\).

**Lemma**.: _Our \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is a formal scheme that is formally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\)._

Proof.: Proposition 2.6 and [23, Lemma 5.4] imply that \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) equals the direct limit \(\varinjlim_{l}\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},l}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\), where \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},l}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) denotes the formal completion of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m+l}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) along the reduced subscheme of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\).
The reduced subscheme of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is quasi-compact by Proposition 2.6, so it is covered by finitely many affine open subschemes \(U\). Proposition 2.6 indicates that \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},l}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) is a noetherian formal scheme with reduced subscheme equal to that of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},m}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\), so we can form the affine open formal subscheme \(\mathfrak{U}_{l}=\operatorname{Spf}A_{l}\) of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},l}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) with underlying topological space \(U\). The above shows that \(\varinjlim_{l}\mathfrak{U}_{l}\) is an open subsheaf of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\). Thus it suffices to prove that \(\varinjlim_{l}\mathfrak{U}_{l}\) is an affine formal scheme that is formally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\).

Because the

\[\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},l}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\to\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},l^{\prime}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\]

are closed embeddings, the \(A_{l^{\prime}}\to A_{l}\) are surjective. Write \(A\coloneqq\varprojlim_{l}A_{l}\). Write \(J_{0}\) for the largest ideal of definition of \(A_{0}\), and write \(J\) for its preimage in \(A\). For any positive integer \(c\), we claim that \(A_{l^{\prime}}/J^{c}\to A_{l}/J^{c}\) is an isomorphism for large enough \(l\) and \(l^{\prime}\). Note that the \(A_{l^{\prime}}/J^{c}\to A_{l}/J^{c}\) have nilpotent kernels, and the Mittag-Leffler criterion implies that \(A/J^{c}=\varprojlim_{l}A_{l}/J^{c}\). Endow \(A/J^{c}\) with the discrete topology. Because the \(\zeta_{i}\) vanish in \(A/J=A_{0}/J_{0}\), we see that the \(\zeta_{i}\) are nilpotent in \(A/J^{c}\). Thus \(\mathbb{F}_{q}[\zeta_{i}]_{i\in I}\to A/J^{c}\) is continuous, so we can apply Lemma 2.7 to the local \(G\)-shtukas \(\mathscr{G}^{l}\) over \(\operatorname{Spec}A_{l}/J^{c}\) obtained from the morphism

\[\operatorname{Spec}A_{l}/J^{c}\to\operatorname{Spf}A_{l}\to\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},l}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\]

to get a local \(G\)-shtuka \(\mathscr{G}\) over \(\operatorname{Spec}A/J^{c}\) bounded by \(\mu_{\bullet}\). Next, consider the quasi-isogeny \(\delta^{0}\) obtained from \(\operatorname{Spec}A_{0}/J_{0}\to\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},0}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\). Proposition 2.3 uniquely lifts \(\delta^{0}\) to a quasi-isogeny \(\delta\) from \(\mathscr{G}\) to \(G\), which implies that the resulting \(A/J^{c}\)-point \((\mathscr{G},\delta)\) of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) lies in \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\).
Therefore [23, Lemma 5.4] indicates that \((\mathscr{G},\delta)\) lies in \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet},\widehat{m},l}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}}\) for large enough \(l\). Pulling back to \(\operatorname{Spec}A_{0}/J_{0}\) shows that \((\mathscr{G},\delta)\) even lies in \(\operatorname{Spf}A_{l}\). The uniqueness of Proposition 2.3 implies that the pullback of \((\mathscr{G},\delta)\) to \(\operatorname{Spec}A_{l^{\prime}}/J^{c}\) equals \((\mathscr{G}^{l^{\prime}},\delta^{l^{\prime}})\), so \(A_{l^{\prime}}\to A_{l}\to A/J^{c}\to A_{l^{\prime}}/J^{c}\) equals the quotient map. Quotienting by the image of \(J^{c}\) in \(A_{l}\) shows that \(A_{l^{\prime}}/J^{c}\to A_{l}/J^{c}\) is an isomorphism, which concludes our proof of the claim.

Write \(\mathfrak{a}_{l}\coloneqq\ker(A\to A_{l})\). The claim indicates that the ideals \(\mathfrak{a}_{l}+J^{c}\) of \(A\) stabilize for any positive integer \(c\), and because the \(A_{l}\) are noetherian, we see that the \(\operatorname{im}(J/J^{2}\to A_{l}/J^{2})=J/(J^{2}+\mathfrak{a}_{l})\) are finite over \(A\). Therefore [36, proposition (2.5)] shows that \(A\) with the inverse limit topology is noetherian and \(J\)-adic, which implies that \(\varinjlim_{l}\mathfrak{U}_{l}=\operatorname{Spf}A\). Finally, the reduced subscheme of \(\operatorname{Spf}A\) is of finite type over \(\prod_{i\in I}v_{i}\) by Proposition 2.6, so \(\operatorname{Spf}A\) is formally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\).

We can use the quasi-isogeny to define the following distance function.

**Definition**.: Let \(K\) be a field over \(\mathbb{F}_{q}\), and let \(x=(\mathscr{G},\delta)\) and \(x^{\prime}=(\mathscr{G}^{\prime},\delta^{\prime})\) be \(K\)-points of \(\mathfrak{Loc}\mathfrak{Sht}_{G}^{(I_{1},\dots,I_{k})}\). Write \(d(x,x^{\prime})\) for the smallest non-negative integer \(m\) such that the quasi-isogeny \(\delta^{-1}\circ\delta^{\prime}\) of local \(G\)-shtukas over \(\operatorname{Spec}K\) is bounded by \(m\).

**Lemma**.: _As \(K\) runs over all fields over \(\mathbb{F}_{q}\), the maps \(d\) induce a metric on the underlying set \(|\mathfrak{Loc}\mathfrak{Sht}_{G}^{(I_{1},\dots,I_{k})}|\). For any \(x\) in \(|\mathfrak{Loc}\mathfrak{Sht}_{G}^{(I_{1},\dots,I_{k})}|\) and non-negative integer \(r\), the associated closed ball \(B_{r}(x)\) of radius \(r\) centered at \(x\) is closed with respect to the Zariski topology on \(|\mathfrak{Loc}\mathfrak{Sht}_{G}^{(I_{1},\dots,I_{k})}|\)._

Proof.: We immediately see that \(d\) is insensitive to field extensions, so \(d\) induces a map \(|\mathfrak{Loc}\mathfrak{Sht}_{G}^{(I_{1},\dots,I_{k})}|\times|\mathfrak{Loc}\mathfrak{Sht}_{G}^{(I_{1},\dots,I_{k})}|\to\mathbb{Z}_{\geq 0}\). Since relative position bounds along the same divisor are sub-additive under composition, \(d\) satisfies the triangle inequality, and because \(2\rho^{\vee}\) is fixed by the Chevalley involution, \(d\) is symmetric. Next, if \(d(x,x^{\prime})=0\), then \(\iota_{*}(\delta_{j}^{-1}\circ\delta_{j}^{\prime})\) extends to an isomorphism of \(\operatorname{SL}_{h}\)-bundles on \(\operatorname{Spec}K[\![z]\!]\) for all \(1\leq j\leq k\). Since \(\iota\) is a monomorphism, this implies that the \(\delta_{j}^{-1}\circ\delta_{j}^{\prime}\) extend to isomorphisms of \(G\)-bundles on \(\operatorname{Spec}K[\![z]\!]\), so \(x=x^{\prime}\).
For the last statement, note that \(B_{r}(x)\) equals, on the level of topological spaces, the preimage of the closed substack \([L_{z}^{+}\operatorname{SL}_{h}\backslash\operatorname{Gr}_{z,\operatorname{SL} _{h},r2\rho^{\vee}}^{1}]^{k}\) under the morphism \[\mathfrak{LOC}\mathfrak{SH}_{G}^{(I_{1},\dots,I_{k})}\operatorname{ \rightarrow}[L_{z}^{+}\operatorname{SL}_{h}\backslash\operatorname{Gr}_{z, \operatorname{SL}_{h}}^{1}]^{k}\] given by \((\mathscr{G}^{\prime},\delta^{\prime})\mapsto(\iota_{*}(\delta_{j}^{-1}\circ \delta_{j}^{\prime}))_{j=1}^{k}\) All points of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i \in I}\mathbb{D}_{i}}\) are close enough to one defined over a fixed finite field in the following sense. **Lemma**.: _There exists a finite extension \(\mathbb{F}_{q^{\prime}}\) of \(\mathbb{F}_{q}\) and a non-negative integer \(D\) such that, for every \(x\) in \(|\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_ {i\in I}\mathbb{D}_{i}}|\), there exists an \(\mathbb{F}_{q^{\prime}}\)-point \(y\) of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_ {i\in I}\mathbb{D}_{i}}\) satisfying \(d(x,y)\leq D\)._ Proof.: Suppose that \(x\) corresponds to a \(K\)-point \((\mathscr{G},\delta)\), where we can assume that \(K\) is an algebraically closed field over \(\mathbb{F}_{q}\). Then \(\mathscr{G}_{j}\) is trivial for all \(1\leq j\leq k\), and after fixing trivializations of the \(\mathscr{G}_{j}\), our \(\delta_{j}\) correspond to \(g_{j}\) in \(G(K(\!(z)\!))\). The commutativity of the diagram implies that \({}^{\tau}\delta_{1}^{-1}\circ\delta_{1}=\phi_{k}\circ\cdots\circ\phi_{1}\), so the image of \(\tau(g_{1})^{-1}g_{1}\) in \(\operatorname{Gr}^{1}_{z,G}|_{v_{*}}\) lies in \(\operatorname{Gr}^{1}_{z,G,\sum_{i\in I}\mu_{i}}|_{v_{*}}\). Now 1.5 indicates that \(\operatorname{Gr}^{1}_{z,G,\sum_{i\in I}\mu_{i}}|_{v_{*}}\) is a quasi-compact scheme, so [23, Lemma 5.4] shows that its image under \(\iota_{*}\) lies in \(\operatorname{Gr}^{1}_{z,\operatorname{SL}_{k},m}\) for large enough \(m\). Therefore [33, 2.2.1 (ii)] and [37, (2.1)] yield a non-negative integer \(D\) such that, for all such \(g_{1}\), there exists \(h_{1}\) in \(G(\mathbb{F}_{q}(\!(z)\!))\) such that the image of \(g_{1}h_{1}^{-1}\) in \(\operatorname{Gr}^{1}_{z,\operatorname{SL}_{h}}\) lies in \(\operatorname{Gr}^{1}_{z,\operatorname{SL}_{h},D2\rho^{\vee}}\). If \(\sum_{i\in I}\mu_{i}\) is not a coroot, then \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_ {i\in I}\mathbb{D}_{i}}\) is empty, and the result vacuously holds. So assume that \(\sum_{i\in I}\mu_{i}\) is a coroot. Then the image of 1 in \(\operatorname{Gr}^{(D)}_{G}|_{\prod_{i\in I}v_{i}}\) lies in \(\operatorname{Gr}^{(I)}_{G,\sum_{i\in I}\mu_{i}}|_{\prod_{i\in I}v_{i}}\). Since the convolution morphism \(\operatorname{Gr}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i\in I}v_ {i}}\to\operatorname{Gr}^{(I)}_{G,\sum_{i\in I}\mu_{i}}|_{\prod_{i\in I}v_{i}}\) is of finite type by 1.5 and surjective, its fiber at 1 has an \(\mathbb{F}_{q^{\prime}}\)-point \(b\) for some finite extension \(\mathbb{F}_{q^{\prime}}\) of \(\mathbb{F}_{q}\). Next, identify \(\operatorname{Gr}^{(I_{1},\ldots,I_{k})}_{G}|_{v^{I}}\) with \(\operatorname{Gr}^{k}_{z,G}|_{v^{I}}\). 
Because the fiber of \((L_{z}G)^{k}\to\operatorname{Gr}^{k}_{G}|_{\prod_{i\in I}v_{i}}\) at \(b\) is an \((L_{z}^{+}G)^{k}\)-bundle on \(\operatorname{Spec}\mathbb{F}_{q^{\prime}}\), Lang's lemma indicates that it has an \(\mathbb{F}_{q^{\prime}}\)-point \((b_{j})_{j=1}^{k}\). By construction, the local \(G\)-shtuka \(\mathscr{H}\coloneqq((G)_{j=1}^{k},(b_{j})_{j=1}^{k})\) over \(\operatorname{Spec}\mathbb{F}_{q^{\prime}}\) is bounded by \(\mu_{\bullet}\), and \(b_{k}\cdots b_{1}\) equals 1 up to right \(G(\mathbb{F}_{q^{\prime}}[\![z]\!])\)-translation. By replacing \(b_{1}\) with a right \(G(\mathbb{F}_{q^{\prime}}[\![z]\!])\)-translate, we can assume that \(b_{k}\cdots b_{1}=1\). Combined with the fact that \(h_{1}=\tau(h_{1})\), this shows that the diagram commutes for uniquely determined \(h_{2},\ldots,h_{k}\) in \(G(\mathbb{F}_{q^{\prime}}(\!(z)\!))\). Since \(b_{j}\) and \(\phi_{j}\) are bounded by \(\sum_{i\in I_{j}}\mu_{j}\) for \(1\leq j\leq k-1\), where the relative position bound is taken with respect to \(z\), a quasi-compactness argument as before shows that, after increasing \(D\) by an amount depending only on \(\mu_{\bullet}\), the image of \(g_{j}h_{j}^{-1}\) in \(\operatorname{Gr}^{1}_{z,\operatorname{SL}_{h}}\) lies in \(\operatorname{Gr}^{1}_{z,\operatorname{SL}_{h},D2\rho^{\vee}}\). Therefore the quasi-isogeny \(h\coloneqq(h_{j})_{j=1}^{k}\) from \(\mathscr{H}\) to \(G\) yields an \(\mathbb{F}_{q^{\prime}}\)-point \(y\coloneqq(\mathscr{H},h)\) of \(\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_ {i\in I}\mathbb{D}_{i}}\) with \(d(x,y)\leq D\), as desired. The following theorem is the main result of this section. Write \(B_{r}(x)_{\mu_{\bullet}}\) for the intersection of \(B_{r}(x)\) and \(|\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i \in I}\mathbb{D}_{i}}|\), and write \(\mathbf{1}\) for the \(\mathbb{F}_{q}\)-point \((G,(\mathrm{id})_{j=1}^{k})\) of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}\). Note that \(B_{m}(\mathbf{1})_{\mu_{\bullet}}\) equals \(|\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},m}|_{\prod_ {i\in I}\mathbb{D}_{i}}|\). **Theorem**.: _Our \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i \in I}\mathbb{D}_{i}}\) is a formal scheme that is locally formally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\)._ Proof.: Let \(\mathbb{F}_{q^{\prime}}\) and \(D\) be as in Lemma 2.11. Write \(Z_{m}^{s}\) for the union \[\bigcup_{y}B_{D}(y)_{\mu_{\bullet}}\cap B_{m}(\mathbf{1})_{\mu_{\bullet}},\] where \(y\) runs over \(\mathbb{F}_{q^{\prime}}\)-points of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}}\) satisfying \(d(\mathbf{1},y)\geq s\). The triangle inequality implies that it suffices to take \(y\) also satisfying \(d(\mathbf{1},y)\leq m+D\). Because \(B_{m+D}(\mathbf{1})_{\mu_{\bullet}}\) equals \(|\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},m+D}|_{ \prod_{i\in I}\mathbb{D}_{i}}|\), Proposition 2.6 implies that there are finitely many such \(y\). Hence Lemma 2.10 indicates that \(Z_{m}^{s}\) is a a finite union of Zariski closed subsets of \(|\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}}|\). 
Write \(\mathfrak{U}_{m}^{s}\) for the open formal subscheme of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},\widetilde {m}}|_{\prod_{i\in I}\mathbb{D}_{i}}\) with underlying topological space given by the complement of \(Z_{m}^{s}\). By Lemma 2.8, \(\mathfrak{U}_{m}^{s}\) is formally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\). Note that \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},\widetilde {m}}|_{\prod_{i\in I}\mathbb{D}_{i}}\) equals the formal completion of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},\widetilde {m}+1}|_{\prod_{i\in I}\mathbb{D}_{i}}\) along the reduced subscheme of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},\widetilde {m}}|_{\prod_{i\in I}\mathbb{D}_{i}}\), so \(\mathfrak{U}_{m+1}^{s}\) equals the formal completion of \(\mathfrak{U}_{m}^{s}\) along the reduced subscheme of \(\mathfrak{U}_{m}^{s}\). For any non-negative integer \(s\), we claim that \(\mathfrak{U}_{m}^{s}\) stabilizes. The above indicates that it suffices to check this on underlying sets, so suppose that there exists \(x\) in \(|\mathfrak{U}_{m+1}^{s}|\smallsetminus|\mathfrak{U}_{m}^{s}|\). Lemma 2.11 yields an \(\mathbb{F}_{q^{\prime}}\)-point \(y\) of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}}\) satisfying \(d(x,y)\leq D\). As \(x\) does not lie in \(Z_{m+1}^{s}\), we have \(d(\mathbf{1},y)<s\), so the triangle inequality yields \(m+1=d(\mathbf{1},x)<s+D\). Hence \(\mathfrak{U}_{m}^{s}\) stabilizes for \(m\geq s+D-1\), which concludes our proof of the claim. Set \(\mathfrak{U}^{s}\coloneqq\varinjlim_{m}\mathfrak{U}_{m}^{s}\). Proposition 2.6 implies that \(\mathfrak{U}^{s}\) is an open subsheaf of \[\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}}.\] The claim shows that \(\mathfrak{U}^{s}\) equals \(\mathfrak{U}_{m}^{s}\) for large enough \(m\), so \(\mathfrak{U}^{s}\) is formally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\). Now we just need to prove \(\varinjlim_{s}\mathfrak{U}^{s}=\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_ {k})}_{G,\mu_{\bullet}}|_{\prod_{i\in I}\mathbb{D}_{i}}\). It suffices to check this on underlying sets, so take \(x\) in \(|\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}}|\). Proposition 2.6 indicates that \(x\) lies in \[|\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},m}|_{\prod_ {i\in I}\mathbb{D}_{i}}|\] for large enough \(m\), so for all \(y\) in \(|\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G}|\) such that \(x\) lies in \(B_{D}(y)_{\mu_{\bullet}}\), the triangle inequality yields \(d(\mathbf{1},y)\leq m+D\). Therefore \(x\) lies in \(|\mathfrak{U}^{m+D+1}|\). Using representations of the dual group, we can index relative position bounds as follows. Let \(\widetilde{F}\) be the finite Galois extension of \(F\) such that \(\mathrm{Gal}(\widetilde{F}/F)\) equals the image of the \(\Gamma_{F}\)-action on \(X_{*}^{+}(T)\). Write \(\widetilde{\mathbb{D}}\) for \(\mathrm{Spd}\,\mathcal{O}_{\widetilde{F}}\). Let \(E\) be a finite extension of \(\mathbb{Q}_{\ell}(\sqrt{q})\), write \(\widehat{G}\) for the dual group of \(G_{F}\) over \(\mathcal{O}_{E}\), and write \({}^{L}G\) for \(\widehat{G}\rtimes\mathrm{Gal}(\widetilde{F}/F)\). Let \(V\) be an object of \(\operatorname{Rep}_{E}(^{L}G)^{I}\). 
Note that \(\coprod_{\mu_{\bullet}}\mathfrak{Loc}\mathfrak{Sht}_{G,\mu_{\bullet}}^{(I_{1}, \dots,I_{k})}\big{|}_{\bar{\mathbb{D}}^{I}}\) naturally descends to a sheaf \(\mathfrak{Loc}\mathfrak{Sht}_{G,V}^{(I_{1},\dots,I_{k})}\) over \(\mathbb{D}^{I}\), where \(\mu_{\bullet}\) runs over highest weights appearing in \(V_{\overline{\mathbb{Q}}_{t}}\big{|}_{\bar{G}^{I}}\). Theorem 2.12 and descent imply that \(\mathfrak{Loc}\mathfrak{Sht}_{G,V}^{(I_{1},\dots,I_{k})}\) is a formal scheme that is locally formally of finite type over \(\mathbb{D}^{I}\). Finally, we define partial Frobenius for the formal moduli of local \(G\)-shtukas. **Definition**.: Write \(\mathfrak{F}^{(I_{1},\dots,I_{k})}:\mathfrak{Loc}\mathfrak{Sht}_{G,V}^{(I_{1}, \dots,I_{k})}\to\mathfrak{Loc}\mathfrak{Sht}_{G,V}^{(I_{2},\dots,I_{k},I_{1})}\) for the morphism given by sending Note that \(\mathfrak{F}^{(I_{1},\dots,I_{k})}\) lies above the endomorphism of \(\mathbb{D}^{I}\) given by geometric \(q\)-Frobenius on the \(i\)-th factor for \(i\) in \(I_{1}\) and the identity on all other factors. ## 3. Relative \(z\)-adic Hodge theory The local shtukas defined in SS2 are (formal) algebraic, while the local shtukas used by Fargues-Scholze [11] are (non-archimedean) analytic in nature. To compare them, we need an equicharacteristic version of Kedlaya-Liu's results [31] on relative \(p\)-adic Hodge theory. Our goal in this section is to prove the necessary results on _relative \(z\)-adic Hodge theory_, in the spirit of work of Hartl [22]. We begin by recalling the equicharacteristic version of Fontaine's period ring \(A_{\inf}\). Using a theorem of Anschutz [1], we prove that an algebraization theorem for \(G\)-bundles on \(A_{\inf}\), at least pro-etale locally on the base. Finally, we relate \(G(\mathcal{O}_{F})\)-local systems to \(G\)-bundles on the equicharacteristic version of the (relative integral) Robba ring equipped with a Frobenius automorphism. Our arguments closely follow those of Kedlaya-Liu [31] and Scholze-Weinstein [42]. However, we have streamlined and simplified the presentation, both because we only prove what we need as well as because the arithmetic of formal power series is easier than that of Witt vectors. Let \(S=\operatorname{Spa}(R,R^{+})\) be an affinoid perfectoid space over \(\mathbb{F}_{q}\), and choose a pseudouniformizer \(\varpi\) of \(R\). Write \(\mathcal{Y}_{S}\) for the complement of the vanishing locus of \(\varpi\) and \(z\) in \(\operatorname{Spa}R^{+}[\![z]\!]\), and note that \(\mathcal{Y}_{S}\) is the analytic locus of the pre-adic space \(\operatorname{Spa}R^{+}[\![z]\!]\). We have a continuous map \(\operatorname{rad}:|\mathcal{Y}_{S}|\to[\![0,\infty]\!]\) given by \[x\mapsto\frac{\log|\varpi(\widetilde{x})|}{\log|z(\widetilde{x})|},\] where \(\widetilde{x}\) denotes the unique rank-\(1\) generalization of \(x\) in \(\mathcal{Y}_{S}\). For any closed interval \(\mathcal{I}\) in \([0,\infty]\) with rational endpoints, write \(\mathcal{Y}_{S,\mathcal{I}}=\operatorname{Spa}(B_{S,\mathcal{I}},B_{S, \mathcal{I}}^{+})\) for the associated rational open subspace of \(\operatorname{Spa}R^{+}[\![z]\!]\), which lies in \(\mathcal{Y}_{S}\). More generally, for any subset \(\mathcal{I}\) of \([0,\infty]\), write \(\mathcal{Y}_{S,\mathcal{I}}\) for the open subspace \(\bigcup_{\mathcal{I}^{\prime}}\mathcal{Y}_{S,\mathcal{I}^{\prime}}\) of \(\mathcal{Y}_{S}\), where \(\mathcal{I}^{\prime}\) runs over closed intervals in \(\mathcal{I}\) with rational endpoints. 
Note that \(\mathcal{Y}_{S,\mathcal{I}}\subseteq\operatorname{rad}^{-1}(\mathcal{I})\). We see that \(\mathcal{Y}_{S,[0,\infty)}\) and \(\mathcal{Y}_{S,(0,\infty)}\) are naturally isomorphic to \(\mathbb{D}\times S\) and \(\operatorname{Spa}F\times S\), respectively. Write \(\tau:S\to S\) for the absolute \(q\)-Frobenius automorphism, and by abuse of notation, write \(\tau:R[\![z]\to R[\![z]\!]\) for the canonical lift of absolute \(q\)-Frobenius. Note that \(\operatorname{rad}\circ\tau=q\cdot\operatorname{rad}\). Finally, write \(X_{S}\) for the quotient \(\mathcal{Y}_{S,(0,\infty)}/\tau^{\mathbb{Z}}\). When \(\mathcal{I}\) contains \(\infty\), we can describe \(B_{S,\mathcal{I}}\) using the following lemma. For any positive \(r\) in \(\mathbb{Z}[\frac{1}{p}]\), write \(R^{+}[\![z,\frac{\varpi^{r}}{z})\) for the \(\varpi\)-adic completion of \(R^{+}[\![z]\!][\frac{\varpi^{r}}{z}]\). **Lemma**.: _We can identify_ \[R^{+}[\![z,\frac{\varpi^{r}}{z})=\left\{\sum_{m=-\infty}^{\infty}a_{m}z^{m} \,\middle|\,\text{the}\,\,\,a_{m}\in R^{+}\,\,\text{and}\,\,\,\lim_{m\to-\infty }a_{m}\varpi^{rm}=0\right\}.\] _If we give \(R^{+}[\![z,\frac{\varpi^{r}}{z})\) the \((\varpi,z)\)-adic topology, then \(B_{S,[1/r,\infty]}\) equals \(R^{+}[\![z,\frac{\varpi^{r}}{z})[\frac{1}{z}]\)._ Proof.: The above description of \(R^{+}[\![z,\frac{\varpi^{r}}{z})\) follows immediately from the definition. This description shows that \(R^{+}[\![z,\frac{\varpi^{r}}{z})\) is \(z\)-adically complete as a ring, so \(R^{+}[\![z,\frac{\varpi^{r}}{z})\) equals the \((\varpi,z)\)-adic completion of \(R^{+}[\![z]\!][\frac{\varpi^{r}}{z}]\) as rings. Since \(\mathcal{Y}_{S,[1/r,\infty]}\) equals the rational open subspace \(\{|\varpi^{r}|\leq|z|\neq 0\}\) of \(\operatorname{Spa}R^{+}[\![z]\!]\), this identifies \(B_{S,[1/r,\infty]}\) with \(R^{+}[\![z,\frac{\varpi^{r}}{z})[\frac{1}{z}]\) if we give \(R^{+}[\![z,\frac{\varpi^{r}}{z})\) the \((\varpi,z)\)-adic topology. Sometimes, it will be convenient to ignore the topology induced from \(R\) as follows. Write \(A^{\prime}(R^{+})\) for \(R^{+}[\![z]\!]\) with the \(z\)-adic topology. **Lemma**.: _Our \(\operatorname{Spa}(A^{\prime}(R^{+})[\frac{1}{z}],A^{\prime}(R^{+}))\) is a sosuperfectoid adic space._ Proof.: The natural map \(A^{\prime}(R^{+})[\frac{1}{z}]\to R^{+}[\![z^{\pm 1/p^{\infty}}]\!]\) is a split injection of topological \(A^{\prime}(R^{+})[\frac{1}{z}]\)-modules, where we give \(R^{+}[\![z^{\pm 1/p^{\infty}}]\!]\) the \(z\)-adic topology. ### Proposition _Our \(\mathcal{Y}_{S}\) is a sousperfectoid adic space._ Proof.: Note that \(\mathcal{Y}_{S}\) is covered by \(\mathcal{Y}_{S,[0,\infty)}\) and \(\mathcal{Y}_{S,[1,\infty]}\). Now \(\mathcal{Y}_{S,[0,\infty)}\) is a sousperfectoid adic space by [11, Proposition II.1.1], so it suffices to prove that \(\mathcal{Y}_{S,[1,\infty]}\) is a sousperfectoid adic space. By Proposition 3.2, \(B_{S,[1,\infty]}\) equals \(R^{+}[\![z,\frac{\varpi}{z})[\frac{1}{z}]\), where \(R^{+}[\![z,\frac{\varpi}{z})\) has the \((\varpi,z)\)-adic topology. Now \(z\) divides \(\varpi\) in \(R^{+}[\![z]\!][\frac{\varpi}{z}]\), so the \((\varpi,z)\)-adic topology on \(R^{+}[\![z]\!][\frac{\varpi}{z}]\) equals the \(z\)-adic topology. This enables us to identify \(\mathcal{Y}_{S,[1,\infty]}\) with the rational open subspace \(\{|\varpi|\leq|z|\neq 0\}\) of \(\operatorname{Spa}(A^{\prime}(R^{+})[\frac{1}{z}],A^{\prime}(R^{+}))\). The latter is sousperfectoid by Lemma 3.3, so \(\mathcal{Y}_{S,[1,\infty]}\) is as well. 
Since a power of \(\varpi\) divides a power of \(z\) in \(R^{+}[\![z]\!][\frac{z}{\varpi^{r}}]\), the \((\varpi,z)\)-adic topology on \(R^{+}[\![z]\!][\frac{z}{\varpi^{r}}]\) equals the \(\varpi\)-adic topology. Therefore \(B_{S,[0,1/r]}\) equals the Tate algebra \(R\langle\frac{z}{\varpi^{r}}\rangle\). This argument lets us similarly identify \[B_{S,[1,1]}=\left\{\sum_{m=-\infty}^{\infty}a_{m}z^{m}\,\middle|\,\text{the}\,\,a _{m}\in R\,\,\text{and}\,\,\,\lim_{m\to\pm\infty}a_{m}\varpi^{m}=0\right\}.\] We will use the following result with the Tannakian description of \(G\)-bundles. **Proposition**.: _Pullback yields a fully faithful functor_ \[\{\text{vector bundles on}\,\,\operatorname{Spec}R^{+}[\![z]\!]\}\longleftrightarrow \{\text{vector bundles on}\,\,\mathcal{Y}_{S}\}.\] Proof.: Let \(f:M\!\to\!M^{\prime}\) be a map of finite projective \(R^{+}\llbracket z\rrbracket\)-modules, and consider its pullback \(g\) to \(\mathcal{Y}_{S}\). Now Proposition 3.4 and [31, Theorem 2.7.7] indicate that \(g|_{\mathcal{Y}_{S,[0,1]}}\), \(g|_{\mathcal{Y}_{S,[1,\infty]}}\), and \(g|_{\mathcal{Y}_{S,[1,1]}}\) correspond to maps of finite projective modules over \(B_{S,[0,1]}\), \(B_{S,[1,\infty]}\), and \(B_{S,[1,1]}\), respectively, which are given by tensoring with \(f\) over \(R^{+}\llbracket z\rrbracket\). Lemma 3.2 indicates that \(B_{S,[1,\infty]}\) equals \(R^{+}\llbracket z,\frac{\varpi}{z}\rangle[\frac{1}{z}]\) as rings, so we see that \(B_{S,[0,1]}\) and \(B_{S,[1,\infty]}\) inject into \(B_{S,[1,1]}\). Note that their intersection equals \(R^{+}\llbracket z\rrbracket\). Therefore the flatness of \(M\) yields a Cartesian square and the same holds for \(M^{\prime}\). In particular, we recover \(f\) as the restriction of \(g|_{\mathcal{Y}_{S,[0,1]}}\) (or of \(g|_{\mathcal{Y}_{S,[1,\infty]}}\)) to the intersection of \(M\otimes_{R^{+}\llbracket z\rrbracket}B_{S,[0,1]}\) and \(M\otimes_{R^{+}\llbracket z\rrbracket}B_{S,[1,\infty]}\) in \(M\otimes_{R^{+}\llbracket z\rrbracket}B_{S,[1,1]}\). We turn to the first main result of this section, which algebraizes \(G\)-bundles on \(\mathcal{Y}_{S}\) when \(S\) is a product of points as in [15, Definition 1.2]. Recall that \(\operatorname{Spa}\) yields an anti-equivalence from the category of perfectoid Huber pairs over \(\mathbb{F}_{q}\llbracket\zeta_{i}\rrbracket_{i\in I}\) to the category of affinoid perfectoid spaces over \(\mathbb{D}^{I}\). Let \(S=\operatorname{Spa}(R,R^{+})\) be an affinoid perfectoid space over \(\mathbb{D}^{I}\), and for all \(i\) in \(I\), write \(\Gamma_{i}\) for the graph of its \(i\)-th projection \(S\!\to\!\mathbb{D}\), which is a closed effective Cartier divisor on \(\mathcal{Y}_{S}\)[11, Proposition VI.1.2 (i)]. **Theorem**.: _Suppose that \(S\) is a product of points as in [15, Definition 1.2], and let \(1\leq j\leq k\) be an integer. Then pullback yields an equivalence of groupoids_ \[\{G\text{-bundles on }\operatorname{Spec}R^{+}\llbracket z\rrbracket\} \stackrel{{\sim}}{{\longrightarrow}}\{G\text{-bundles on }\mathcal{Y}_{S}\},\] _where morphisms on the left-hand side are given by isomorphisms of their pullbacks to \(\operatorname{Spec}R^{+}\llbracket z\rrbracket[\frac{1}{z-\zeta_{i}}]_{j\in I _{j}}\), and morphisms on the right-hand side are given by isomorphisms of their pullbacks to \(\mathcal{Y}_{S}\smallsetminus\sum_{i\in I_{j}}\Gamma_{i}\) that are meromorphic along \(\sum_{i\in I_{j}}\Gamma_{i}\)._ Proof.: First, we tackle full faithfulness. 
Write \(\mathscr{O}(\sum_{i\in I_{j}}\Gamma_{i})\) for the line bundle on \(\mathcal{Y}_{S}\) associated with the closed effective Cartier divisor \(\sum_{i\in I_{j}}\Gamma_{i}\), and let \(\mathscr{G}\) and \(\mathscr{G}^{\prime}\) be \(G\)-bundles on \(\mathcal{Y}_{S}\). The Tannakian description of \(G\)-bundles implies that an isomorphism \(\mathscr{G}|_{\mathcal{Y}_{S}\smallsetminus\sum_{i\in I_{j}}\Gamma_{i}} \stackrel{{\sim}}{{\rightarrow}}\mathscr{G}^{\prime}|_{\mathcal{ Y}_{S}\smallsetminus\sum_{i\in I_{j}}\Gamma_{i}}\) that is meromorphic along \(\sum_{i\in I_{j}}\Gamma_{i}\) corresponds to a family of morphisms of vector bundles over \(\mathcal{Y}_{S}\) \[\mathscr{G}(V)\!\to\!\mathscr{G}^{\prime}(V)\otimes\mathscr{O}(\sum_{i\in I _{j}}\Gamma_{i})^{\otimes n(V)}\] that is functorial in \(V\), compatible with tensor products, and compatible with duals, where \(V\) runs over objects of \(\operatorname{Rep}_{\mathcal{O}_{F}}(G)\) and \(n(V)\) is a large enough integer. Hence full faithfulness follows immediately from Proposition 3.5. As for essential surjectivity, let \(\mathscr{G}\) be a \(G\)-bundle on \(\mathcal{Y}_{S}\). By [31, Theorem 2.7.7], \(\mathscr{G}|_{\mathcal{Y}_{S,[0,1]}}\) and \(\mathscr{G}|_{\mathcal{Y}_{S,[1,\infty]}}\) correspond to \(G\)-bundles \(N_{0}\) and \(N_{\infty}\) on \(\operatorname{Spec}B_{S,[0,1]}\) and Spec \(B_{S,[1,\infty]}\), respectively. Note that the \(z\)-adic completion of \(R^{+}\llbracket z\rrbracket[\frac{z}{\varpi}]\) equals \(R^{+}\langle\frac{z}{\varpi}\rangle\) as rings, so the global sections of the rational open subspace \(\{|z|\leq|\varpi|\neq 0\}\) of \(\operatorname{Spa}(A^{\prime}(R^{+})[\frac{1}{z}],A^{\prime}(R^{+}))\) equals \(R\langle\frac{z}{\varpi}\rangle[\frac{1}{z}]=B_{S,[0,1]}[\frac{1}{z}]\) as rings. We have seen in the proof of Proposition 3.4 that the global sections of the rational open subspace \(\{|\varpi|\leq|z|\neq 0\}\) of \(\operatorname{Spa}(A^{\prime}(R^{+})[\frac{1}{z}],A^{\prime}(R^{+}))\) equals \(B_{S,[1,\infty]}\). Because these two rational open subspaces cover \(\operatorname{Spa}(A^{\prime}(R^{+})[\frac{1}{z}],A^{\prime}(R^{+}))\), Lemma 3.3 and [31, Theorem 2.7.7] enable us to glue \(N_{0}[\frac{1}{z}]^{4}\) and \(N_{\infty}\) into a \(G\)-bundle \(N_{\frac{1}{z}}\) on \(\operatorname{Spec}A^{\prime}(R^{+})[\frac{1}{z}]=\operatorname{Spec}R^{+}( \!(z)\!)\). Note that the \(z\)-adic completion of equals \(R[\![z]\!]\). Since \[N_{\frac{1}{z}}\otimes_{R^{+}(\!(z)\!)}B_{S,[0,1]}[\frac{1}{z}]=N_{0}[\frac{1} {z}],\] we see that \(N_{\frac{1}{z}}[\frac{1}{\varpi}]\otimes_{R^{+}(\!(z)\!)[\frac{1}{z}]}R(\!(z) \!)=N_{0}\otimes_{B_{S,[0,1]}}R(\!(z)\!)\). Therefore we can apply Beauville-Laszlo to the vanishing locus of \(z\) in \(\operatorname{Spec}R^{+}[\![z]\!][\frac{1}{\varpi}]\) to glue \(N_{\frac{1}{z}}[\frac{1}{\varpi}]\) and \(N_{0}\otimes_{B_{S,[0,1]}}R[\![z]\!]\) into a \(G\)-bundle \(N_{\frac{1}{\varpi}}\) on \(\operatorname{Spec}R^{+}[\![z]\!][\frac{1}{\varpi}]\). As \(N_{\frac{1}{\varpi}}[\frac{1}{z}]=N_{\frac{1}{z}}[\frac{1}{\varpi}]\), we can glue \(N_{\frac{1}{\varpi}}\) and \(N_{\frac{1}{z}}\) into a \(G\)-bundle \(\hat{N}\) on the complement of the vanishing locus of \(\varpi\) and \(z\) in \(\operatorname{Spec}R^{+}[\![z]\!]\). Finally, because \(S\) is a product of points, [1, Proposition 11.5] uniquely extends \(\hat{N}\) to a \(G\)-bundle \(N\) on \(\operatorname{Spec}R^{+}[\![z]\!]\). Let us verify that the pullback of \(N\) to \(\mathcal{Y}_{S}\) equals \(\mathscr{G}\). 
Because \(N[\frac{1}{z}]=\hat{N}[\frac{1}{z}]=N_{\frac{1}{z}}\), we see that \(N\otimes_{R^{+}[\![z]\!]}B_{S,[1,\infty]}=N_{\infty}\). Thus we just need to show \(N\otimes_{R^{+}[\![z]\!]}B_{S,[0,1]}=N_{0}\). We have \(N[\frac{1}{\varpi}]=\hat{N}[\frac{1}{\varpi}]=N_{\frac{1}{\varpi}}\), so \[N\otimes_{R^{+}[\![z]\!]}B_{S,[0,1]}[\frac{1}{z}]=N_{\frac{1}{z}}\otimes_{R^{ +}(\!(z)\!)}B_{S,[0,1]}[\frac{1}{z}]=N_{0}[\frac{1}{z}].\] Note that the \(z\)-adic completion of \(B_{S,[0,1]}=R\langle\frac{z}{\varpi}\rangle\) equals \(R[\![z]\!]\), and \[N\otimes_{R^{+}[\![z]\!]}R[\![z]\!]=N_{\frac{1}{\varpi}}\otimes_{R^{+}[\![z]\!] [\frac{1}{\varpi}]}R[\![z]\!]=N_{0}\otimes_{B_{S,[0,1]}}R[\![z]\!].\] Hence the desired result follows from applying the uniqueness of Beauville-Laszlo gluing to the vanishing locus of \(z\) in \(\operatorname{Spec}B_{S,[0,1]}\). We have the following version of non-abelian Artin-Schreier-Witt theory for \(\mathcal{O}_{F}\). Recall the terminology of \(\tau\)-modules as in [42, Definition 12.3.3], and let \(n\) be a positive integer. For any \(\mathcal{O}_{F}/z^{n}\)-local system \(\mathbb{L}\) on \(\operatorname{Spec}R\), write \(M(\mathbb{L})\) for the \(\tau\)-module over \(\operatorname{Spec}R[\![z]\!]/z^{n}\) given by \(\mathbb{L}\otimes_{\underline{\mathcal{O}_{F}/z^{n}}}(\mathscr{O}_{ \operatorname{Spec}R[\![z]\!]/z^{n}},\operatorname{id})\). Conversely, for any \(\tau\)-module \((M,\phi)\) over \(\operatorname{Spec}R[\![z]\!]/z^{n}\), write \(\mathbb{L}(M,\phi)\) for the \(\underline{\mathcal{O}_{F}/z^{n}}\)-sheaf over \(\operatorname{Spec}R\) given by \(\underline{\operatorname{Hom}_{\tau\operatorname{-mod}}}((\mathscr{O}_{ \operatorname{Spec}R[\![z]\!]/z^{n}},\operatorname{id}),(M,\phi))\). **Proposition**.: _Our \(M(-)\) yields an exact tensor equivalence of categories_ \[\{\underline{\mathcal{O}_{F}/z^{n}}\text{-local systems on }\operatorname{Spec}R\} \stackrel{{\sim}}{{\longrightarrow}}\{\tau\text{-modules over }\operatorname{Spec}R[\![z]\!]/z^{n}\}.\] _Consequently, \(\mathbb{L}\mapsto\mathbb{L}\otimes_{\underline{\mathcal{O}_{F}}}(\mathscr{O}_{ \operatorname{Spec}R[\![z]\!]},\operatorname{id})\) is an exact tensor equivalence of categories_ \[\{\underline{\mathcal{O}_{F}}\text{-local systems on }S\}\stackrel{{\sim}}{{ \longrightarrow}}\{\tau\text{-modules over }\operatorname{Spec}R[\![z]\!]\}.\] Proof.: Note that \(M(-)\) is left adjoint to \(\mathbb{L}(-)\), and the unit \(\operatorname{id}\to\mathbb{L}(M(-))\) is an isomorphism. So we just need to prove that \(M(-)\) is essentially surjective. Because \(\underline{\mathcal{O}_{F}/z^{n}}\)-local systems are trivial after a finite etale cover, it suffices to prove that the same holds for \(\tau\)-modules over \(\operatorname{Spec}R[\![z]\!]/z^{n}\). So let \((M,\phi)\) be a \(\tau\)-module over \(\operatorname{Spec}R[\![z]\!]/z^{n}\) such that \(M\) has rank \(h\). When \(n=1\), the desired result is [31, Lemma 3.2.7]. For \(n\geq 2\), by induction there exists a finite etale cover \(\operatorname{Spec}R^{\prime}\operatorname{\rightarrow}\operatorname{Spec}R\) such that the pullback of \((M,\phi)\) to \(\operatorname{Spec}R^{\prime}[\![z]\!]/z^{n-1}\) has a basis fixed by \(\phi_{R^{\prime}[\![z]\!]/z^{n-1}}\). Nakayama's lemma shows that any lift of this basis to \(R^{\prime}[\![z]\!]/z^{n}\) yields a basis of \(M\otimes_{R}R^{\prime}\). 
In these coordinates, we see that \(\phi^{-1}_{R^{\prime}[\underline{z}]/z^{n}}\) acts by \(A\circ\tau\), where \(A\) in \(\operatorname{GL}_{h}(R^{\prime}[\underline{z}]/z^{n})\) satisfies \(A\equiv 1\pmod{z^{n-1}}\). Write \(\operatorname{Spec}\widetilde{R}\) for the vanishing locus in \(\operatorname{Spec}R^{\prime}[u_{ab}]_{1\leq a,b\leq h}\) of the matrix \[\tau(U)-U-\tfrac{1}{z^{n-1}}(A-1),\] where \(U\) denotes the matrix with entries \(u_{ab}\). Examining entrywise shows that \(\widetilde{R}\) is finite over \(R^{\prime}\), the Jacobian criterion shows that \(\widetilde{R}\) is etale over \(R^{\prime}\), and checking on fibers shows that \(\operatorname{Spec}\widetilde{R}\mathop{\rightarrow}\operatorname{Spec}R^{\prime}\) is surjective. Finally, on \(\widetilde{R}[\underline{z}]/z^{n}\) we have \[(1+z^{n-1}U)A\tau(1+z^{n-1}U)^{-1}=(1+z^{n-1}U)(1+A-1)(1-z^{n-1}U-(A+1))=1,\] so the basis of \(M\otimes_{R}\widetilde{R}\) given by \(1+z^{n-1}U\) is fixed by \(\phi^{-1}_{\widetilde{R}}\). Therefore the pullback of \((M,\phi)\) to \(\operatorname{Spec}\widetilde{R}[\underline{z}]/z^{n}\) is trivial, as desired. We can upgrade Proposition 3.7 for \(G\)-bundles as follows. Briefly, let \(X\) be a scheme or a sousperfectoid adic space over \(\mathcal{O}_{F}\), and let \(\tau:X\mathop{\rightarrow}X\) be an endomorphism over \(\mathcal{O}_{F}\). By a \(\tau\)_-\(G\)-bundle_ over \(X\), we mean a \(G\)-bundle \(\mathscr{G}\) on \(X\) along with an isomorphism of \(G\)-bundles \(\phi:\mathscr{G}\mathop{\rightarrow}^{\tau}\mathscr{G}\). Let \(n\) be a positive integer or \(\infty\), and define \(z^{\infty}\) to be \(0\). For any \(G(\mathcal{O}_{F}/z^{n})\)-bundle \(\mathbb{P}\) on \(S\), by abuse of notation write \(M(\mathbb{P})\) for the \(\tau\)-\(G\)-bundle over \(\operatorname{Spec}R[\underline{z}]/z^{n}\) given by \(\mathbb{P}\times\!\tfrac{G(\mathcal{O}_{F}/z^{n})}{(G,\operatorname{id})}\). **Proposition**.: _Our \(M(-)\) yields an equivalence of groupoids_ \[\{\underline{G(\mathcal{O}_{F}/z^{n})}\text{-bundles on $S$}\}\mathop{ \longrightarrow}^{\sim}\{\tau\text{-}G\text{-bundles over }\operatorname{Spec}R[\underline{z}]/z^{n}\}.\] Proof.: The assignment \(\mathbb{P}\mapsto(V\mapsto\mathbb{P}\times\!\tfrac{G(\mathcal{O}_{F}/z^{n})}{ \operatorname{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text Let us recall the equicharacteristic version of the _(relative integral) Robba ring_. Write \(\left\|-\right\|\) for the spectral norm on \(R\), normalized such that \(\left\|\varpi\right\|=\frac{1}{q}\). For any positive rational \(b\), we have a map \(\left\|-\right\|_{b}:R\llbracket z\rrbracket\to\!\!\left[0,\infty\right]\) given by \[\sum_{m=0}^{\infty}a_{m}z^{m}\mapsto\sup_{m\geq 0}\{q^{-m}\|a_{m}\|^{b}\}.\] Evidently \(\left\|\tau(-)\right\|_{b}=\left\|-\right\|_{qb}\). When \(1/b\) lies in \(\mathbb{Z}[\frac{1}{p}]\), 3.5 shows that the restriction of \(\left\|-\right\|_{b}\) to \(B_{S,[0,b]}\subseteq R\llbracket z\rrbracket\) is a norm and induces the usual topology on \(B_{S,[0,b]}\). Moreover, \(\sum_{m=0}^{\infty}a_{m}z^{m}\) lies in \(B_{S,[0,b]}\) if and only if \(\left\|a_{m}z^{m}\right\|_{b}\to 0\). Write \(\widetilde{\mathcal{R}}_{R}^{\mathrm{int}}\) for \(\varinjlim_{b}B_{S,[0,b]}\), where \(b\) runs over positive rationals. 
Note that any multiple \(f\) of \(z\) in \(\widetilde{\mathcal{R}}_{R}^{\mathrm{int}}\) satisfies \(\left\|f\right\|_{b}<1\) for small enough \(b\), so the completeness of \(B_{S,[0,b]}\) implies that \(z\) lies in the Jacobson radical of \(\widetilde{\mathcal{R}}_{R}^{\mathrm{int}}\). Just like \(\mathcal{O}_{F}\)-local systems, we show that \(\tau\)-modules over the Robba ring are trivial after a pro-finite etale cover. **Lemma**.: _Let \((\widetilde{M},\widetilde{\phi})\) be a \(\tau\)-module over \(\operatorname{Spec}\widetilde{\mathcal{R}}_{R}^{\mathrm{int}}\) such that \(\widetilde{M}\) is free of rank \(h\). Then there exists a pro-finite etale cover \(\operatorname{Spa}(\widetilde{R},\widetilde{R}^{+})\to S\) such that the pullback of \((\widetilde{M},\widetilde{\phi})\) to \(\operatorname{Spec}\widetilde{\mathcal{R}}_{R}^{\mathrm{int}}\) is trivial._ Proof.: Proposition 3.7 enables us to assume that the pullback of \((\widetilde{M},\widetilde{\phi})\) to \(\operatorname{Spec}R\) has a basis fixed by \(\phi_{R}\). Now 3.9 and Nakayama's lemma show that any lift of this basis yields a basis of \(\widetilde{M}\), and in these coordinates, we see that \(\phi^{-1}\) acts by \(A\circ\tau\), where \(A\) in \(\operatorname{GL}_{h}(\widetilde{\mathcal{R}}_{R}^{\mathrm{int}})\) satisfies \(A\equiv 1\pmod{z}\). Proposition 3.7 yields a pro-finite etale cover \(\operatorname{Spa}(\widetilde{R},\widetilde{R}^{+})\to\operatorname{Spa}(R,R ^{+})\) such that the pullback of \((\widetilde{M},\widetilde{\phi})\) to \(\operatorname{Spec}\widetilde{R}\llbracket z\rrbracket\) has a basis fixed by \((\widetilde{\phi})_{\widetilde{R}\llbracket z\rrbracket}\). Since the pullback of \((\widetilde{M},\widetilde{\phi})\) to \(\operatorname{Spec}R\) is already trivial, we can choose this basis of \(\widetilde{M}\otimes_{\widetilde{\mathcal{R}}_{R}^{\mathrm{int}}}\widetilde{R }\llbracket z\rrbracket\) such that its matrix \(U\) in \(\operatorname{GL}_{h}(\widetilde{R}\llbracket z\rrbracket)\) satisfies \(U\equiv 1\pmod{z}\). Now we just need to prove that \(U\) lies in \(\operatorname{GL}_{h}(\widetilde{\mathcal{R}}_{R}^{\mathrm{int}})\). As \(A-1\) is divisible by \(z\), we have \(\left\|A-1\right\|_{b}<1\) for small enough positive rational \(b\). Write \(C\coloneqq\max\{q^{-1},\left\|A-1\right\|_{b}\}<1\), write \(U_{n}\) for the mod-\(z^{n}\) truncation of \(U\), and write \(X_{n}\) for the \(z^{n}\)-coefficient of \(U\). For any positive integer \(n\), we claim that \[\left\|z^{n}X_{n}\right\|_{qb},\,\left\|U_{n}-1\right\|_{b},\text{ and }\left\|U_{n}-1 \right\|_{qb}\leq C.\] When \(n=1\), the last two bounds hold because \(U_{1}=1\). For general \(n\), we have \[U_{n}+z^{n}X_{n}\equiv U\equiv A\tau(U)\equiv A(\tau(U_{n})+z^{n }\tau(X_{n}))\pmod{z^{n+1}}\] \[\implies z^{n}(X_{n}-A\tau(X_{n}))\equiv(A-1)\tau(U_{n})+(\tau(U _{n})-1)-(U_{n}-1)\pmod{z^{n+1}}\] \[\implies X_{n}-\tau(X_{n})\equiv\tfrac{1}{z^{n}}\big{[}(A-1)\tau (U_{n})+(\tau(U_{n})-1)-(U_{n}-1)\big{]}\pmod{z}.\] By evaluating this equation at rank-\(1\) points of \(S\) and considering the Newton polygon of its entries, induction on \(n\) implies that \[\left\|X_{n}\right\|_{b} \leq\max\{1,(q^{n}\|(A-1)\tau(U_{n})+\tau(U_{n}-1)-(U_{n}-1)\|_{b} )^{1/q}\}\] \[\leq\max\{1,(q^{n}C)^{1/q}\}\leq(q^{n}C)^{1/q}.\] Therefore \(\left\|z^{n}X_{n}\right\|_{qb}\leq C\), so \(\left\|U_{n+1}-1\right\|_{qb}\leq C\). 
Since \(C\geq q^{-n}\), we also get \[\left\|U_{n+1}-1\right\|_{b}\leq\max\{\left\|z^{n}X_{n}\right\|_{b},\left\|U_{n} -1\right\|_{b}\}\leq\max\{q^{-n}(q^{n}C)^{1/q},C\}\leq C,\] which concludes our proof of the claim. By 3.9, the claim implies that \(U\) has coefficients in \(B_{S,[0,b^{\prime}]}\) for any positive rational \(b^{\prime}<qb\) such that \(1/b^{\prime}\) lies in \(\mathbb{Z}[\frac{1}{p}]\). After decreasing \(b^{\prime}\) such that \(b^{\prime}<b\), the claim also implies that \(U\) is invertible over \(B_{S,[0,b^{\prime}]}\). Therefore \(U\) indeed lies in \(\operatorname{GL}_{h}(\widetilde{\mathcal{R}}^{\operatorname{int}}_{\widetilde {R}})\), as desired. Vector bundles on the Robba ring are local on \(S\) in the following sense. Let \((S_{\alpha})_{\alpha}\) be a finite cover of \(S\) by rational open subspaces, where \(S_{\alpha}=\operatorname{Spa}(R_{\alpha},R_{\alpha}^{+})\). Write \(S_{\alpha\beta}=\operatorname{Spa}(R_{\alpha\beta},R_{\alpha\beta}^{+})\) for their pairwise intersections, and write \(S_{\alpha\beta\gamma}=\operatorname{Spa}(R_{\alpha\beta\gamma},R_{\alpha\beta \gamma}^{+})\) for their triple intersections. **Lemma**.: _Pullback yields an equivalence from the category of vector bundles on \(\operatorname{Spec}\widetilde{\mathcal{R}}^{\operatorname{int}}_{R}\) to the category of vector bundles on the \(\operatorname{Spec}\widetilde{\mathcal{R}}^{\operatorname{int}}_{R_{\alpha}}\) with transition morphisms on the \(\operatorname{Spec}\widetilde{\mathcal{R}}^{\operatorname{int}}_{R_{\alpha \beta}}\) whose pullbacks to \(\operatorname{Spec}\widetilde{\mathcal{R}}^{\operatorname{int}}_{R_{\alpha \beta\gamma}}\) satisfy the cocycle condition. Moreover, for any vector bundle \(M\) on \(\operatorname{Spec}\widetilde{\mathcal{R}}^{\operatorname{int}}_{R}\), there exists \((S_{\alpha})_{\alpha}\) as above such that \(M|_{\operatorname{Spec}\widetilde{\mathcal{R}}^{\operatorname{int}}_{R_{\alpha }}}\) is trivial for all \(\alpha\)._ Proof.: Because \(\widetilde{\mathcal{R}}^{\operatorname{int}}_{R}=\varinjlim_{b}B_{S,[0,b]}\), we have an equivalence of categories \[\varinjlim_{b}\{\text{vector bundles on }\operatorname{Spec}B_{S,[0,b]}\} \stackrel{{\sim}}{{\longrightarrow}}\{\text{vector bundles on }\operatorname{Spec}\widetilde{\mathcal{R}}^{ \operatorname{int}}_{R}\}.\] When \(1/b\) lies in \(\mathbb{Z}[\frac{1}{p}]\), the \(B_{S,[0,b]}\) are Tate algebras over \(R\). Hence \(S\mapsto B_{S,[0,b]}\) commutes with rational localization on \(S\). Applying [31, Theorem 2.7.7] to the resulting open cover of \(\mathcal{Y}_{S,[0,b]}\) by \((\mathcal{Y}_{S_{\alpha},[0,b]})_{\alpha}\) shows that vector bundles on \(\operatorname{Spec}B_{S,[0,b]}\) are equivalent to vector bundles on the \(\operatorname{Spec}B_{S_{\alpha},[0,b]}\) with transition morphisms on the \(\operatorname{Spec}B_{S_{\alpha\beta},[0,b]}\) whose pullbacks to \(\operatorname{Spec}B_{S_{\alpha\beta\gamma},[0,b]}\) satisfy the cocycle condition. Because there are finitely many \(\alpha\), taking the directed limit over \(b\) yields the first claim. For the second claim, [31, Theorem 2.7.7] shows that there exists \((S_{\alpha})_{\alpha}\) as above such that the pullback of \(M\) to \(\operatorname{Spec}R_{\alpha}\) is trivial for all \(\alpha\). Since \(z\) lies in the Jacobson radical of \(\widetilde{\mathcal{R}}^{\operatorname{int}}_{R_{\alpha}}\), any trivialization lifts to \(\mathcal{R}^{\operatorname{int}}_{R_{\alpha}}\) by Nakayama's lemma. 
We conclude by showing that \(\tau\)-modules on \(R[\![z]\!]\) uniquely descend to the Robba ring. **Theorem**.: _Pullback yields an exact tensor equivalence of categories_ \[\{\text{$\tau$-modules over }\operatorname{Spec}\widetilde{\mathcal{R}}^{ \operatorname{int}}_{R}\}\stackrel{{\sim}}{{\longrightarrow}}\{ \text{$\tau$-modules over }\operatorname{Spec}R[\![z]\!]\}.\] _Consequently, pullback induces an equivalence of groupoids_ \[\{\text{$\tau$-$G$-bundles over }\operatorname{Spec}\widetilde{\mathcal{R}}^{ \operatorname{int}}_{R}\}\stackrel{{\sim}}{{\longrightarrow}}\{ \text{$\tau$-$G$-bundles over }\operatorname{Spec}R[\![z]\!]\}.\] Proof.: First, we tackle full faithfulness. By considering internal homs for \(\tau\)-modules, it suffices to prove that, for any \(\tau\)-module \((\widetilde{M},\widetilde{\phi})\) over \(\operatorname{Spec}\widetilde{\mathcal{R}}^{\operatorname{int}}_{R}\), any \(m\) in \(\widetilde{M}\otimes_{\widetilde{\mathcal{R}}^{\operatorname{int}}_{R}}\)\(R[\![z]\!]\) that is fixed by \(\widetilde{\phi}_{R[\![z]\!]}\) lies in \(\widetilde{M}\). Lemma 3.11 implies that it suffices to prove this after passing to an open cover of \(S\), so we can assume that \(\widetilde{M}\) is free of rank \(h\). Then Lemma 3.10 yields a pro-finite etale cover \(\operatorname{Spa}(\widetilde{R},\widetilde{R}^{+})\!\to\!S\) such that the pullback of \((\widetilde{M},\widetilde{\phi})\) to \(\operatorname{Spec}\widetilde{\mathcal{R}}^{\operatorname{int}}_{\widetilde {R}}\) has a basis fixed by \(\widetilde{\phi}_{\widetilde{\mathcal{R}}^{\operatorname{int}}_{\widetilde{R}}}\). In these coordinates, the entries of \(m\) lie in \((\widetilde{R}^{\tau})[\![z]\!]\), which lies in \(\widetilde{\mathcal{R}}^{\operatorname{int}}_{\widetilde{R}}\) by 3.9. Note that the intersection of \(R[\![z]\!]\) and \(\widetilde{\mathcal{R}}^{\operatorname{int}}_{\widetilde{R}}\) equals \(\widetilde{\mathcal{R}}^{\operatorname{int}}_{R}\), so the flatness of \(\widetilde{M}\) shows that \(m\) lies in \(\widetilde{M}\). As for essential surjectivity, let \((M,\phi)\) be a \(\tau\)-module over \(\operatorname{Spec}R\llbracket z\rrbracket\). By passing to a clopen cover of \(S\), we can assume that \(M\) has rank \(h\). Proposition 3.7, full faithfulness, and finite etale descent enable us to assume that the pullback of \((M,\phi)\) to \(\operatorname{Spec}R\) has a basis fixed by \(\phi_{R}\). Nakayama's lemma shows that any lift of this basis yields a basis of \(M\otimes_{\widetilde{\mathcal{R}}_{R}^{\operatorname{int}}}R\llbracket z\rrbracket\), and in these coordinates, we see that \(\phi_{R\llbracket z\rrbracket}^{-1}\) acts by \(A\circ\tau\), where \(A\) in \(\operatorname{GL}_{h}(R\llbracket z\rrbracket)\) satisfies \(A\equiv 1\pmod{z}\). Let \(n\) be a positive integer. We inductively construct certain \(C_{n}\), \(B_{n}\), and \(U_{n}\) in \(\operatorname{GL}_{h}(R\llbracket z\rrbracket)\) such that \(C_{n}-B_{n}\) is divisible by \(z^{n}\). First, set \(C_{1}\coloneqq A\) and \(B_{1}\coloneqq 1\). For general \(n\), write \(X_{n}\) for the \(z^{n}\)-coefficient of \(C_{n}-B_{n}\). 
There exists \(Y_{n}\) in \(\operatorname{Mat}_{h}(R)\) satisfying \(\left\lVert X_{n}+Y_{n}-\tau(Y_{n})\right\rVert_{1}<q^{n/2}\)[31, Lemma 8.5.2], which we use to define \[U_{n}\coloneqq 1+z^{n}Y_{n},\,C_{n+1}\coloneqq U_{n}C_{n}\tau(U_{n})^{-1},\text{ and }B_{n+1}\coloneqq B_{n}+z^{n}(X_{n}+Y_{n}-\tau(Y_{n})).\] By induction, we have \[C_{n+1} \equiv(1+z^{n}Y_{n})C_{n}(1-z^{n}\tau(Y_{n}))\] \[\equiv B_{n}+z^{n}(X_{n}+Y_{n}-\tau(Y_{n}))\equiv B_{n+1}\pmod{z^{n+ 1}},\] as desired. We see from 3.9 that the \(B_{n}\) converge to a matrix \(B\) in \(\operatorname{GL}_{h}(B_{S,[0,1]})\). Now the \(C_{n}\) converge to a matrix \(C\) in \(\operatorname{GL}_{h}(R\llbracket z\rrbracket)\), and because \(C_{n}-B_{n}\) is divisible by \(z^{n}\), we have \(C=B\). Moreover, the infinite product \(U\coloneqq U_{1}U_{2}\cdots\) converges to a matrix \(U\) in \(\operatorname{GL}_{h}(R\llbracket z\rrbracket)\), and the above shows that \(UA\tau(U)^{-1}=C=B\). Thus the basis of \(M\otimes_{\widetilde{\mathcal{R}}_{R}^{\operatorname{int}}}R\llbracket z\rrbracket\) given by \(U\) descends \((M,\phi)\) to a \(\tau\)-module over \(\operatorname{Spec}\widetilde{\mathcal{R}}_{R}^{\operatorname{int}}\), as desired. Finally, we show that pullback has an exact tensor quasi-inverse. Note that we have a commutative triangle Every arrow is an exact tensor functor, and \(M(-)\) is an exact tensor equivalence by Proposition 3.7. Hence its quasi-inverse \(\mathbb{L}(-)\) postcomposed with the left arrow yields an exact tensor quasi-inverse to pullback. ## 4. Analytic moduli of local shtukas In this section, we define local shtukas in the analytic setting and compare them with the formal variant from SS2. We start by giving an algebraic version of local shtukas over a perfectoid space, which is the equicharacteristic version of Breuil-Fargues-Kisin modules. This mediates between the formal variant and more analytic versions. Next, we define an analytic version of local shtukas, as well as the corresponding moduli problem. Using results from SS3, we show that the analytic moduli problem agrees with the formal moduli problem from SS2. From here, we define the covering tower for our analytic moduli problem. We conclude by recalling the moduli of local shtukas appearing in Fargues-Scholze [11], which is defined purely in terms of the Fargues-Fontaine curve. While this subtly differs from our analytic moduli problem, their intersection homology complexes are naturally isomorphic, which is all we need. Let \(S=\operatorname{Spa}(R,R^{+})\) be an affinoid perfectoid space over \(\mathbb{D}^{I}\). For any \(i\) in \(I\), if \(\zeta_{i}\) is an \(R^{\infty}\)-multiple of \(\varpi^{r}\), then \[\frac{1}{z-\zeta_{i}}=\frac{1}{z}\sum_{n=0}^{\infty}\left(\frac{\zeta_{i}}{z} \right)^{n}\] lies in \(R^{+}\llbracket z,\frac{\varpi^{r}}{z}\rangle[\frac{1}{z}]\). As \(\zeta_{i}\) is topologically nilpotent, this always holds for small enough \(r\). Recall the \(\mu_{i}\) and \(\mathbb{D}_{i}\) from 1.5, and recall Definition 2.1. We use Definition 2.1 to define an algebraic version of local \(G\)-shtukas over \(S\). **Definition**.: 1. An _algebraic local_ \(G\)_-shtuka_ over \(S\) is a local \(G\)-shtuka over \(\operatorname{Spec}R^{+}\). 2. Suppose that \(S\) lies over \(\prod_{i\in I}\mathbb{D}_{i}\), and let \(\mathscr{G}\) be an algebraic local shtuka over \(S\). We say that \(\mathscr{G}\) is _bounded by_\(\mu_{\bullet}\) if the corresponding local \(G\)-shtuka over \(\operatorname{Spec}R^{+}\) is bounded by \(\mu_{\bullet}\). 3. 
Let \(\mathscr{G}\) and \(\mathscr{G}^{\prime}\) be algebraic local \(G\)-shtukas over \(S\). A _quasi-isogeny_ from \(\mathscr{G}\) to \(\mathscr{G}^{\prime}\) consists of, for some small enough positive \(r\) in \(\mathbb{Z}[\frac{1}{p}]\) and all \(1\leq j\leq k\), an isomorphism of \(G\)-bundles \[\delta_{j}:\mathscr{G}_{j}|_{\operatorname{Spec}R^{+}\llbracket z,\frac{\varpi^ {r}}{z}\rangle[\frac{1}{x}]}\stackrel{{\sim}}{{\rightarrow}} \mathscr{G}^{\prime}_{j}|_{\operatorname{Spec}R^{+}\llbracket z,\frac{\varpi^ {r}}{x}\rangle[\frac{1}{x}]}\] such that the diagram \[\begin{CD}\mathscr{G}_{j}|_{\operatorname{Spec}R^{+}\llbracket z,\frac{\varpi^ {r}}{z}\rangle[\frac{1}{x}]}&\xrightarrow{\text{$(\phi_{j})_{R}+\llbracket z,\frac{\varpi^{r}}{z}\rangle[\frac{1}{x}]$}}&\mathscr{G}_{j+1}|_{\operatorname {Spec}R^{+}\llbracket z,\frac{\varpi^{r}}{z}\rangle[\frac{1}{x}]}\\ @V{}V{\mathscr{G}^{\prime}_{j}}|_{\operatorname{Spec}R^{+}\llbracket z, \frac{\varpi^{r}}{z}\rangle[\frac{1}{x}]}&\xrightarrow{\text{$(\phi^{\prime}_ {j})_{R}+\llbracket z,\frac{\varpi^{r}}{z}\rangle[\frac{1}{x}]$}}&\mathscr{G }^{\prime}_{j+1}|_{\operatorname{Spec}R^{+}\llbracket z,\frac{\varpi^{r}}{z} \rangle[\frac{1}{x}]}\\ \end{CD}\] commutes, where \(\delta_{k+1}\) denotes the isomorphism \({}^{\tau}\delta_{1}\). Let \(n\) be a non-negative integer, and note that \(R^{+}/\varpi^{n}\) is a discrete \(\mathbb{F}_{q}[\zeta_{i}]_{i\in I}\)-algebra. For any algebraic local shtuka \(\mathscr{G}\) over \(S\), write \(\mathscr{G}^{n}\) for the local shtuka over \(S_{n}\coloneqq\operatorname{Spec}R^{+}/\varpi^{n}\) given by pullback. Since \(R^{+}\llbracket z,\frac{\varpi^{r}}{z}\rangle[\frac{1}{z}]/\varpi^{n}\) equals \((R^{+}/\varpi^{n})(\langle z\rangle)\), quasi-isogenies of algebraic local \(G\)-shtukas over \(S\) pull back to quasi-isogenies of local \(G\)-shtukas over \(S_{n}\). Lemma 2.7 shows that bounded algebraic local \(G\)-shtukas are all captured by this limit process. The following lemma shows that quasi-isogenies between them are also all captured by this limit process. **Lemma**.: _Suppose that \(S\) lies over \(\prod_{i\in I}\mathbb{D}_{i}\), and let \(\mathscr{G}\) and \(\mathscr{G}^{\prime}\) be algebraic local \(G\)-shtukas over \(S\) bounded by \(\mu_{\bullet}\). Then pullback yields a bijection_ \[\{\text{quasi-isogenies from $\mathscr{G}$ to $\mathscr{G}^{\prime}$}\} \stackrel{{\sim}}{{\longleftarrow}}\varprojlim_{n}\{\text{quasi-isogenies from $\mathscr{G}^{n}$ to $\mathscr{G}^{\prime n}$}\}.\] Proof.: Let \((\delta^{n})_{n\geq 0}\) be a compatible system of quasi-isogenies from \(\mathscr{G}^{n}\) to \(\mathscr{G}^{\prime n}\). Because \(\varprojlim_{n}(R^{+}/\varpi^{n})(\langle z\rangle)\) equals \(R^{+}\llbracket z,\frac{1}{z}\rangle\), we see that \(\delta_{j}\coloneqq\varprojlim_{n}\delta_{j}^{n}\) yields an isomorphism of \(G\)-bundles \(\mathscr{G}_{j}|_{\operatorname{Spec}R^{+}\llbracket z,\frac{1}{z}\rangle} \stackrel{{\sim}}{{\rightarrow}}\mathscr{G}^{\prime}_{j}|_{ \operatorname{Spec}R^{+}\llbracket z,\frac{1}{z}\rangle}\) for all \(1\leq j\leq k\). Now \(\delta^{0}\) is bounded by \(m\) for some non-negative integer \(m\) as in Definition 2.2.b), so Proposition 2.3 yields a non-negative integer \(B\) such that \(\delta^{n}\) is bounded by \(B\lceil\log_{q}n\rceil\). 
From here, the Tannakian description of \(G\)-bundles implies that \(\delta_{j}\) naturally descends to an isomorphism of \(G\)-bundles \[\mathscr{G}_{j}|_{\operatorname{Spec}R^{+}\llbracket z,\frac{\pi^{r}}{\bar{z}} \rangle\llbracket\frac{1}{\bar{z}}\rangle}\mathop{\to}\limits^{\sim}\mathscr{G} ^{\prime}_{j}|_{\operatorname{Spec}R^{+}\llbracket z,\frac{\pi^{r}}{\bar{z}} \rangle\lfloor\frac{1}{\bar{z}}\rangle}\] for any positive \(r\) in \(\mathbb{Z}[\frac{1}{p}]\). By taking \(r\) small enough such that \(\frac{1}{z-\zeta_{i}}\) lies in \(R^{+}\llbracket z,\frac{\pi^{r}}{\bar{z}}\rangle\lfloor\frac{1}{z}\rangle\) for all \(i\) in \(I\), the commutativity of the square in Definition 4.1.c) follows from the commutativity of the analogous square in Definition 2.2.a). Before introducing the analytic version of local \(G\)-shtukas, we need some notation on the \(B_{\operatorname{dR}}\)-affine Grassmannian. Write \(B_{\operatorname{dR}}^{+}(S)\) for the ring of global sections of the completion of \(\mathscr{O}_{\mathcal{Y}_{S}}\) along \(\sum_{i\in I}\Gamma_{i}\), and write \(B_{\operatorname{dR}}^{j}(S)\) for the version that is punctured along \(\sum_{i\in I_{j}}\Gamma_{i}\). **Definition**.: 1. Write \(\mathcal{L}_{I}^{n}G\) and \(\mathcal{L}_{I}^{+}G\) for the small v-sheaves over \((\mathbb{D}^{I})^{\Diamond}\) given by sending \(S\) to \(G(\mathscr{O}_{n\sum_{i\in I}\Gamma_{i}})\) and \(G(B_{\operatorname{dR}}^{+}(S))\), respectively. 2. Write \(\mathcal{G}_{G}^{(I_{1},\dots,I_{k})}\) for the small v-sheaf over \((\mathbb{D}^{I})^{\Diamond}\) whose \(S\)-points parametrize data consisting of 1. for all \(1\leq j\leq k\), a \(G\)-bundle \(\mathscr{G}_{j}\) on \(\operatorname{Spec}B_{\operatorname{dR}}^{+}(S)\), 2. for all \(1\leq j\leq k\), an isomorphism of \(G\)-bundles \[\phi_{j}:\mathscr{G}_{j}|_{\operatorname{Spec}B_{\operatorname{dR}}^{j}(S)} \mathop{\to}\limits^{\sim}\mathscr{G}_{j+1}|_{\operatorname{Spec}B_{ \operatorname{dR}}^{j}(S)},\] where \(\mathscr{G}_{k+1}\) denotes the trivial \(G\)-bundle. In certain cases, we can describe the functor of points of (generalized) analytifications without analytically sheafifying. Briefly, let \(A\) be a noetherian ring, and let \(X\) be a scheme locally of finite type over \(Z\coloneqq\operatorname{Spec}A\). Let \(J\subseteq A\) be an ideal, write \(\widehat{A}\) for the completion of \(A\) with respect to \(J\), and write \(\widehat{Z}\) for the adic space \(\operatorname{Spa}\widehat{A}\). Write \(X_{\widehat{Z}}\) for the fiber product as in [26, (3.8)]. **Lemma**.: _Suppose that \(X\) is quasi-projective over \(Z\). For any analytic affinoid adic space \(S=\operatorname{Spa}(R,R^{+})\), the \(S\)-points of \(X_{\widehat{Z}}\) consist of the \(R\)-points of \(X\) such that the resulting ring homomorphism \(A\mathop{\to}\limits R\) is continuous for the \(J\)-adic topology on \(A\)._ Proof.: The universal property of \(X_{\widehat{Z}}\)[26, (3.8)] indicates that an \(S\)-point of \(X_{\widehat{Z}}\) is equivalent to a morphism \(S\mathop{\to}\limits\widehat{Z}\) of adic spaces along with a morphism \(S\mathop{\to}\limits X\) of locally ringed spaces such that, in the category of locally ringed spaces, the square commutes. The \(\operatorname{Spec}\)-global sections adjunction shows that \(S\mathop{\to}\limits X\mathop{\to}\limits Z\) yields a ring homomorphism \(A\mathop{\to}\limits R\), and note that the commutativity of this square is equivalent to \(A\mathop{\to}\limits R\) being continuous for the \(J\)-adic topology on \(A\). 
Now assume that \(X=\mathbb{P}_{Z}^{N}\). Since \(Z\) is affine, the \(\operatorname{Spec}\)-global sections adjunction implies that \(S\mathop{\to}\limits X\) is equivalent to the data of a line bundle \(\mathscr{L}\) on \(S\) along with sections \(s_{0},\dots,s_{N}\) that generate \(\mathscr{L}\). By [30, Theorem 1.4.2], this is equivalent to a finite projective \(R\)-module \(M\) of rank \(1\) along with elements \(r_{0},\dots,r_{N}\) that generate \(M\), which is precisely the data of an \(R\)-point of \(X\). In general, \(X\) is a locally closed subscheme of \(\mathbb{P}_{Z}^{N}\). Because \(Z\) is noetherian, there exist finitely many homogeneous polynomials \(f_{1},\dots,f_{l}\) and \(g_{1},\dots,g_{m}\) in \(A[t_{0},\dots,t_{N}]\) such that \(X\subseteq\mathbb{P}_{Z}^{N}\) is the locus where \(f_{a}(s_{0},\dots,s_{N})\) vanishes for all \(1\leq a\leq l\) and \(g_{b}(s_{0},\dots,s_{N})\) does not vanish for all \(1\leq b\leq m\). These properties are preserved by [30, Theorem 1.4.2], so we see that \(S\!\to\!X\) is equivalent to an \(R\)-point of \(X\). We check that the \(B_{\mathrm{dR}}\)-affine Grassmannian and its affine Schubert varieties are the analytifications of their algebraic counterparts. Write \(S^{\mathrm{alg}}\) for the \(R\)-point of \(C^{I}\) given by \(\mathrm{Spec}\,R\!\to\!\mathrm{Spec}\,\mathbb{F}_{q}[\![\zeta_{i}]\!]_{i\in I }\!\to\!C^{I}\), and write \(\Gamma_{i}^{\mathrm{alg}}\) for the resulting relative effective Cartier divisor on \(C\times S\) as in 1.2. Recall the \(F_{i}\) from 1.5. **Lemma**.: _We have a natural isomorphism of rings \(\mathscr{O}_{n\sum_{i\in I}\Gamma_{i}^{\mathrm{alg}}}\cong\mathscr{O}_{n\sum _{i\in I}\Gamma_{i}}\). Consequently, we obtain natural isomorphisms from \((L_{I}^{n}(G_{C}))_{\mathbb{D}^{I}}^{\diamond}\) and \((L_{I}^{+}(G_{C}))_{\mathbb{D}^{I}}^{\diamond}\) to \(\mathcal{L}_{I}^{n}(G)\) and \(\mathcal{L}_{I}^{+}(G)\), respectively, and we may view \((\widehat{\mathrm{Gr}}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I }\mathbb{D}_{i}})^{\diamond}\) as a closed subsheaf_ \[\mathcal{G}_{\Gamma_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I} \mathbb{D}_{i}^{\diamond}}}\subseteq\mathcal{G}_{\Gamma_{G}^{(I_{1},\dots,I_{k })}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}^{\diamond}}.\] _Finally, the \(S\)-points of \(\mathcal{G}_{\Gamma_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I} \mathrm{Spd}\,F_{i}}}\) consist of the \(((\mathscr{G}_{j})_{j=1}^{k},(\phi_{j})_{j=1}^{k})\) such that, for all geometric points \(\overline{s}\) of \(S\) and \(1\leq j\leq k\), the relative position of \(\phi_{j,\overline{s}}\) at \(\Gamma_{i,\overline{s}}\) is bounded by \(\sum_{i^{\prime}}\mu_{i^{\prime}}\), where \(i^{\prime}\) runs over elements of \(I\) satisfying \(\Gamma_{i^{\prime},\overline{s}}=\Gamma_{i,\overline{s}}\)._ Proof.: The first claim is immediate, which identifies \((L_{I}^{n}(G_{C}))_{\mathbb{D}^{I}}^{\diamond}\) with \(\mathcal{L}_{I}^{n}(G)\). The first claim also induces isomorphisms \(\widehat{\mathcal{O}}_{C}(S^{\mathrm{alg}})\cong B_{\mathrm{dR}}^{+}(S)\) and \(\widehat{\mathcal{O}}_{C}^{j,\diamond}(S^{\mathrm{alg}})\cong B_{\mathrm{dR} }(S)\), which identifies \((L_{I}^{+}(G_{C}))_{\mathbb{D}^{I}}^{\diamond}\) with \(\mathcal{L}_{I}^{+}(G)\). 
This also shows that, for any presentation of \(\mathrm{Gr}_{G_{C}}^{(I_{1},\dots,I_{k})}\) as a directed limit \(\varinjlim_{l}X_{l}\) of projective schemes \(X_{l}\) over \(C^{I}\), we have \[\mathcal{G}_{\Gamma_{G}^{(I_{1},\dots,I_{k})}}(S)=\mathrm{Gr}_{G_{C}}^{(I_{1}, \dots,I_{k})}(S^{\mathrm{alg}})=(\varinjlim_{l}X_{l})(S^{\mathrm{alg}})= \varinjlim_{l}X_{l}(S^{\mathrm{alg}})=\varinjlim_{l}(X_{l})_{\mathbb{D}^{I}}^{\diamond }(S),\] where the last two equalities follow from [23, Lemma 5.4] and Lemma 4.4, respectively. Now 1.5 indicates that \(\mathrm{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}C_{i}}\) is a closed subscheme of \(X_{l}|_{\prod_{i\in I}C_{i}}\) for large enough \(l\). Since \(\mathrm{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}C_{i}}\) is projective over \(\prod_{i\in I}C_{i}\), the natural morphism of adic spaces \(\widehat{\mathrm{Gr}}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I }\mathbb{D}_{i}}\!\to\!(\mathrm{Gr}_{G_{C},\mu_{\bullet}}^{(I_{1},\dots,I_{k} )})_{\prod_{i\in I}\mathbb{D}_{i}}\) is an isomorphism [26, (4.6.iv.d)]. Hence taking \((-)^{\diamond}\) yields the desired closed subsheaf \[\mathcal{G}_{\Gamma_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I} \mathbb{D}_{i}^{\diamond}}}\subseteq\mathcal{G}_{G}^{(I_{1},\dots,I_{k})}|_{ \prod_{i\in I}\mathbb{D}_{i}^{\diamond}}.\] Finally, the description of \(\mathcal{G}_{\Gamma_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I} \mathrm{Spd}\,F_{i}}}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathrm{Spd}\,F_{ i}}\) follows from 1.5. Now, we can define an analytic version of local \(G\)-shtukas over \(S\). Let \(a\) in \(\mathbb{Z}[\frac{1}{p}]\) be non-negative. For any \(i\) in \(I\), if \(\zeta_{i}^{a}\) is an \(R^{\circ\diamond}\)-multiples of \(\varpi\), then \(\mathrm{rad}(\Gamma_{i})\) lie in \([0,a)\). As \(\zeta_{i}\) is topologically nilpotent, this always holds for large enough \(a\). **Definition**.: 1. An _analytic local_ \(G\)_-shtuka_ over \(S\) consists of 1. for all \(1\leq j\leq k\), a \(G\)-bundle \(\mathscr{G}_{j}\) on \(\mathcal{Y}_{S,[0,\infty)}\), 2. for all \(1\leq j\leq k\), an isomorphism of \(G\)-bundles \[\phi_{j}:\mathscr{G}_{j}|_{\mathcal{Y}_{S,[0,\infty)}\smallsetminus\sum_{i\in I_{j} }\Gamma_{i}}\stackrel{{\sim}}{{\to}}\mathscr{G}_{j+1}|_{\mathcal{Y }_{S,[0,\infty)}\smallsetminus\sum_{i\in I_{j}}\Gamma_{i}},\] that is meromorphic along \(\sum_{i\in I_{j}}\Gamma_{i}\), where \(\mathscr{G}_{k+1}\) denotes the \(G\)-bundle \({}^{\tau}\mathscr{G}_{1}\). 2. Suppose that \(S\) lies over \(\prod_{i\in I}\mathbb{D}_{i}\), and let \(\mathscr{G}\) be an analytic local \(G\)-shtuka over \(S\). 
We say that \(\mathscr{G}\) is _bounded by \(\mu_{\bullet}\)_ if, for any affinoid perfectoid etale cover \(\operatorname{Spa}(\widetilde{R},\widetilde{R}^{+})\operatorname{\to}S\) where \({}^{\tau}\mathscr{G}_{1}|_{\mathcal{Y}_{\operatorname{Spa}(\widetilde{R}, \widetilde{R}^{+}),[0,\infty)}}\) is trivial and any trivialization \(t:{}^{\tau}\mathscr{G}_{1}|_{\mathcal{Y}_{\operatorname{Spa}(\widetilde{R}, \widetilde{R}^{+}),[0,\infty)}}\stackrel{{\sim}}{{\to}}G\), the \(\operatorname{Spa}(\widetilde{R},\widetilde{R}^{+})\)-point of \(\mathcal{G}_{G}^{(I_{1},\dots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}^{ \diamond}}\) given by \[\mathscr{G}_{1}|_{\operatorname{Spec}B_{\operatorname{dR}}^{+}(\widetilde{R})} \stackrel{{(\phi_{1})_{B_{\operatorname{dR}}^{\text{$\widetilde{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{ \cdot Proof.: Theorem 2.12 shows that \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i \in I}\mathbb{D}_{i}}\) is a locally noetherian formal scheme, so as an adic space it is the analytic sheafification of the presheaf \[\operatorname{Spa}(A,A^{+})\mapsto\operatorname{Hom}(\operatorname{Spa}(A^{+},A^ {+}),\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{ \prod_{i\in I}\mathbb{D}_{i}}).\] Because \(R^{+}\) is adic with ideal of definition generated by \(\varpi\), we have \[\operatorname{Hom}(S,\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i\in I}\mathbb{D}_{i}})\] \[=\operatorname{Hom}(\operatorname{Spf}R^{+},\mathfrak{Loc} \mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i\in I} \mathbb{D}_{i}})\] \[=\varprojlim_{n}\operatorname{Hom}(\operatorname{Spec}R^{+}/ \varpi^{n},\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet} }|_{\prod_{i\in I}\mathbb{D}_{i}}).\] From here, Lemma 2.7 and Lemma 4.2 yield the first claim. The second claim follows from the fact that \(\mathcal{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}^{\diamond}}\) is already a sheaf in the analytic topology, so pulling back \((\mathscr{G},\delta)\) induces a morphism \(\underline{\operatorname{an}}\) as desired. **4.9 Theorem**.: _Our \(\underline{\operatorname{an}}\) is an isomorphism. Consequently, \(\mathcal{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\operatorname{Spd}F_{i}}\) is a locally spatial diamond._ Proof.: First, we prove that \(\underline{\operatorname{an}}\) is an isomorphism. Because products of points as in [15, Definition 1.2] form a basis for the v-topology [15, Example 1.1]5 and both \((\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{ \prod_{i\in I}\mathbb{D}_{i}})^{\diamond}\) and \(\mathcal{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}^{\diamond}}\) are v-sheaves, it suffices to check this on \(S\)-points when \(S\) is a product of points. Products of points are totally disconnected [15, Proposition 1.6], so we do not need to analytically sheafify when evaluating \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}}\) on them. Footnote 5: However, in [15, Example 1.1] one must replace the \(k(x)\) with its completed algebraic closure \(C(x)\) and \(k(x)^{+}\) with its integral closure in \(C(x)\). 
So assume that \(S\) is a product of points, and let \((\mathscr{G},\delta)\) be an \(S\)-point of \[\mathcal{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}^{\diamond}}.\] For large enough rational \(a\) and all \(1\leq j\leq k\), we can use \(\delta_{j}|_{\mathcal{Y}_{S,[a,a]}}\) to glue \(\mathscr{G}_{j}|_{\mathcal{Y}_{S,[0,a]}}\) and \(G|_{\mathcal{Y}_{S,[a,\infty]}}\) into a \(G\)-bundle \(\overline{\mathscr{G}}_{j}\) on \(\mathcal{Y}_{S}\). The commutativity of the square in Definition 4.6.c) imply that \(\phi_{j}\) and id glue into an isomorphism of \(G\)-bundles \[\overline{\phi}_{j}:\overline{\mathscr{G}}_{j}|_{\mathcal{Y}_{S}\smallsetminus \sum_{i\in I_{j}}\Gamma_{i}}\stackrel{{\sim}}{{\rightarrow}} \overline{\mathscr{G}}_{j+1}|_{\mathcal{Y}_{S}\smallsetminus\sum_{i\in I_{j}} \Gamma_{i}},\] where \(\overline{\mathscr{G}}_{k+1}\) denotes the \(G\)-bundle \({}^{\tau}\overline{\mathscr{G}}_{1}\). Then Theorem 3.6 indicates that \(\overline{\mathscr{G}}_{j}\) and \(\overline{\phi}_{j}\) are uniquely pulled back from a \(G\)-bundle \(\mathscr{G}_{j}^{\operatorname{alg}}\) on \(\operatorname{Spec}R^{+}\llbracket z\rrbracket\) and an isomorphism of \(G\)-bundles \(\phi_{j}^{\operatorname{alg}}:\mathscr{G}_{j}^{\operatorname{alg}}|_{ \operatorname{Spec}R^{+}\llbracket z\rrbracket[\frac{1}{z-\zeta_{i}}]_{i\in I_{j}}} \stackrel{{\sim}}{{\rightarrow}}\mathscr{G}_{j+1}^{\operatorname{ alg}}|_{\operatorname{Spec}R^{+}\llbracket z\rrbracket[\frac{1}{z-\zeta_{i}}]_{i\in I_{j}}}\), where \(\mathscr{G}_{k+1}^{\operatorname{alg}}\) denotes the \(G\)-bundle \({}^{\tau}\mathscr{G}_{1}^{\operatorname{alg}}\). Altogether \(\mathscr{G}^{\operatorname{alg}}\coloneqq((\mathscr{G}_{j}^{\operatorname{alg}}) _{j=1}^{k},(\phi_{j}^{\operatorname{alg}})_{j=1}^{k})\) is an algebraic local \(G\)-shtuka over \(S\). Since \(\mathscr{G}\) is bounded by \(\mu_{\bullet}\), Lemma 4.5 shows that \(\mathscr{G}^{\operatorname{alg}}\) is too. Finally, take \(a\) for which \(r\coloneqq 1/a\) lies in \(\mathbb{Z}[\frac{1}{p}]\). Applying Lemma 3.2, Proposition 3.4, and [31, Theorem 2.7.7] to the canonical isomorphism \(\mathscr{G}_{j}|_{\mathcal{Y}_{S,[a,\infty]}}\stackrel{{\sim}}{{ \rightarrow}}G\) yields an isomorphism of \(G\)-bundles \(\delta_{j}^{\operatorname{alg}}:\mathscr{G}_{j}^{\operatorname{alg}}|_{ \operatorname{Spec}R^{+}\llbracket z,\frac{m^{\prime}}{z}\rrbracket}\stackrel{{ \sim}}{{\rightarrow}}G\), and we see that \(\delta^{\operatorname{alg}}\coloneqq(\delta_{j}^{\operatorname{alg}})_{j=1}^{k}\) is a quasi-isogeny from \(\mathscr{G}^{\operatorname{alg}}\) to \(G\). The uniqueness of Theorem 3.6 and [31, Theorem 2.7.7] imply that \((\mathscr{G},\delta)\) is uniquely the image of \((\mathscr{G}^{\operatorname{alg}},\delta^{\operatorname{alg}})\) under \(\underline{\operatorname{an}}\). Hence \(\underline{\operatorname{an}}\) is bijective on \(S\)-points, as desired. Finally, the last statement follows from \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i \in I}\operatorname{Spa}F_{i}}\) being an analytic adic space and [41, Lemma 15.6]. Next, we turn to level structures. Let \(n\) be a non-negative integer. **Definition**.: Suppose that \(S\) lies over \((\operatorname{Spa}F)^{I}\), and let \(\mathscr{G}\) be an analytic local \(G\)-shtuka over \(S\). 
A _level-\(n\) structure_ on \(\mathscr{G}\) consists of, for all \(1\leq j\leq k\), an isomorphism of \(G\)-bundles \[\psi_{j}:\mathscr{G}_{j}|_{\operatorname{Spec}R[\sharp]/z^{n}}\stackrel{{ \sim}}{{\to}}G\] such that the diagram commutes, where \(\mathscr{G}_{k+1}\) denotes \({}^{\tau}\mathscr{G}_{1}\), and \(\psi_{k+1}\) denotes \({}^{\tau}\psi_{1}\). Since \(S\) lies over \((\operatorname{Spa}F)^{I}\), the \((\phi_{j})_{R[\sharp]/z^{n}}\) are isomorphisms. Therefore \(\psi_{1}\) uniquely determines \(\psi_{j}\) for \(2\leq j\leq k\). We now define the covering tower of the generic fiber of \(\mathcal{L}\mathrm{oc}\mathcal{Sh}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}|_ {\prod_{i\in I}\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i} ^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{ i}^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{ \mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i}^{\mathbb{D}_{i} \mathbb{D}_{i}^{\mathbb{D}_{i}}^{\mathbb{D}_{i}}^{\mathbb{D}_{i}}^{\mathbb{D}_{i} \mathbb{D}_{i}}^{\mathbb{D}_{i}}}}}}}}}}}}}}}}}}}}}}}}}}}}} Then \(S^{\prime}\) parametrizes level-\(n^{\prime}\) structures \(\psi\) on \(\mathscr{G}\). Because \(\psi_{1}\) uniquely determines \(\psi_{j}\) for \(2\leq j\leq k\), we see that level-\(n^{\prime}\) structures on \(\mathscr{G}\) are equivalent to trivializations of the \(\tau\)-\(G\)-bundle \((\mathscr{G}_{1}|_{\operatorname{Spec}R[\sharp]/z^{n^{\prime}}},(\phi_{k} \circ\cdots\circ\phi_{1})_{R[\sharp]/z^{n^{\prime}}})\) over \(\operatorname{Spec}R[\sharp]/z^{n^{\prime}}\). Thus Proposition 3.8 and [41, Proposition 9.7] imply that \(S^{\prime}\to S\) is finite Galois with the desired Galois action. For general \(n\), the result follows from the commutative triangle and compatibility of the \(K_{n^{\prime},n}\)-action with changing \(n^{\prime}\) and \(n\). Finally, the last statement follows from Theorem 4.9 and [41, Lemma 11.21]. The covering tower enjoys the following Hecke correspondences. Write \[\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet},\infty v}^{(I_{1 },\ldots,I_{k})}\coloneqq\varprojlim_{n}\mathcal{L}\mathrm{oc}\mathcal{S} \mathrm{ht}_{G,\mu_{\bullet},nv}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I} \operatorname{Spd}F_{v}},\] and write \(K_{n}\) for the kernel of \(G(\mathcal{O}_{F})\to G(\mathcal{O}_{F}/z^{n})\). **Proposition**.: _We have a canonical \(G(F)\)-action on \(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet},\infty v}^{(I_{ 1},\ldots,I_{k})}\) over \(\prod_{i\in I}\operatorname{Spd}F_{i}\) that extends the \(G(\mathcal{O}_{F})\)-action from 4.12. Consequently, for any \(g\) in \(G(F)\), we have a canonical finite etale correspondence \(\mathbf{1}_{K_{n}gK_{n}}\) from \(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet},nv}^{(I_{1}, \ldots,I_{k})}|_{\prod_{i\in I}\operatorname{Spd}F_{i}}\) to itself._ Proof.: Let \((\mathscr{G},\delta)\) be an \(S\)-point of \(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}\operatorname{Spd}F_{i}}\), and let \((\psi^{n})_{n\geq 0}\) be a compatible system of level-\(n\) structures \(\psi^{n}\) on \(\mathscr{G}\). For all \(1\leq j\leq k\), we see that \(\psi_{j}\coloneqq\varprojlim_{n}\psi_{j}^{n}\) yields an isomorphism of \(G\)-bundles \(\mathscr{G}_{j}|_{\operatorname{Spec}R[\sharp]}\xrightarrow{\sim}G\). 
For any \(g\) in \(G(F)\), we get an isomorphism of \(G\)-bundles \(g\circ(\psi_{j})_{R(\sharp)}:\mathscr{G}_{j}|_{\operatorname{Spec}R(\sharp)} \xrightarrow{\sim}G\), which we use with Beauville-Laszlo to glue \(G|_{\operatorname{Spec}R[\sharp]}\) and \(\mathscr{G}_{j}|_{\mathcal{Y}_{S,(0,\infty)}}\) into a \(G\)-bundle \(g\cdot\mathscr{G}_{j}\) on \(\mathcal{Y}_{S,[0,\infty)}\). Since \((g\cdot\mathscr{G}_{j})|_{\mathcal{Y}_{S,(0,\infty)}\smallsetminus\sum_{i\in I_{ j}}\Gamma_{i}}\) is canonically isomorphic to \(\mathscr{G}_{j}|_{\mathcal{Y}_{S,(0,\infty)}\smallsetminus\sum_{i\in I_{j}}\Gamma_ {i}}\), the commutativity of the square in Definition 4.10 and Beauville-Laszlo let us glue \(\mathrm{id}\) and \((\phi_{j})_{\mathcal{Y}_{S,(0,\infty)}\smallsetminus\sum_{i\in I_{j}}\Gamma_{i}}\) into an isomorphism of \(G\)-bundles \[g\cdot\phi_{j}:(g\cdot\mathscr{G}_{j})|_{\mathcal{Y}_{S,[0,\infty)}\smallsetminus \sum_{i\in I_{j}}\Gamma_{i}}\xrightarrow{\sim}(g\cdot\mathscr{G}_{j+1})|_{ \mathcal{Y}_{S,[0,\infty)}\smallsetminus\sum_{i\in I_{j}}\Gamma_{i}},\] where \(g\cdot\mathscr{G}_{k+1}\) denotes \({}^{\tau}(g\cdot\mathscr{G}_{1})\). As \(\mathscr{G}\) is bounded by \(\mu_{\bullet}\), the analytic local \(G\)-shtuka \(g\cdot\mathscr{G}\coloneqq((g\cdot\mathscr{G}_{j})_{1}^{k},(g\cdot\phi_{j})_{ j=1}^{k})\) is too. Because \((g\cdot\mathscr{G}_{j})|_{\mathcal{Y}_{S,[a,\infty)}}\) is canonically isomorphic to \(\mathscr{G}_{j}|_{\mathcal{Y}_{S,[a,\infty)}}\), our \(\delta\) induces a quasi-isogeny from \(g\cdot\mathscr{G}\) to \(G\). Since \((g\cdot\mathscr{G}_{j})|_{\operatorname{Spec}R[\sharp]}\) is canonically trivial, we have the trivial level-\(n\) structure \(\mathrm{id}=(\mathrm{id})_{j=1}^{k}\) on \(g\cdot\mathscr{G}\). Altogether, we define the image of \((\mathscr{G},\delta,(\psi^{n})_{n\geq 0})\) under \(g\) to be \((g\cdot\mathscr{G},\delta,(\mathrm{id})_{n\geq 0})\). When \(g\) lies in \(G(\mathcal{O}_{F})\), our \(g\circ(\psi_{j})_{R(\sharp)}\) above extends to an isomorphism of \(G\)-bundles \(g\circ\psi_{j}:\mathscr{G}_{j}|_{\operatorname{Spec}R[\sharp]}\xrightarrow{ \sim}G\), and tracing through our identifications shows that this indeed recovers the action from 4.12. Finally, \(\mathbf{1}_{K_{n}gK_{n}}\) is given by and identifying \(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet},\infty v}^{(I_{1}, \dots,I_{k})}/K_{n}\) with \(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet},nv}^{(I_{1}, \dots,I_{k})}|_{\Pi_{i\in I}\operatorname{Spd}F_{i}}\). Recall the following variant of the moduli of local shtukas, which is defined purely in terms of the Fargues-Fontaine curve. Let \(K\) be a compact open subgroup of \(G(F)\). **Definition**.: Write \(\mathcal{M}^{I}_{G,\mu_{\bullet},K}|_{\Pi_{i\in I}\operatorname{Spd}F_{i}}\) for the small v-sheaf over \(\prod_{i\in I}\operatorname{Spd}F_{i}\) whose \(S\)-points parametrize data consisting of 1. a \(G\)-bundle \(\mathscr{E}\) on \(X_{S}\) such that, for all geometric points \(\overline{s}\) of \(S\), its pullback \(\mathscr{E}_{\overline{s}}\) to \(X_{\overline{s}}\) is trivial, 2. 
an isomorphism of \(G\)-bundles \[\alpha:\mathscr{E}|_{X_{S}\smallsetminus\sum_{i\in I}\Gamma_{i}}\stackrel{{ \sim}}{{\to}}G\] that is meromorphic along \(\sum_{i\in I}\Gamma_{i}\) such that, for all geometric points \(\overline{s}\) of \(S\), the relative position of \(\alpha_{\overline{s}}\) at \(\Gamma_{i}\),\(\overline{s}\) is bounded by \(\sum_{i^{\prime}}\mu_{i^{\prime}}\), where \(i^{\prime}\) runs over elements of \(I\) satisfying \(\Gamma_{i^{\prime},\overline{s}}=\Gamma_{i}\),\(\overline{s}\), 3. a \(\underline{K}\)-bundle \(\mathbb{P}\) on \(S\) whose pushforward along \(\underline{K}\operatorname{\to}G(F)\) equals the \(\underline{G(F)}\)-bundle on \(S\) corresponding to \(\mathscr{E}\) via [11, Theorem III.2.4]. Write \(f^{\mathcal{M}}:\mathcal{M}^{I}_{G,\mu_{\bullet},K}|_{\Pi_{i\in I} \operatorname{Spd}F_{i}}\operatorname{\to}\prod_{i\in I}\operatorname{Spd}F_ {i}\) for the structure morphism. Recall that \(\mathcal{M}^{I}_{G,\mu_{\bullet},K}|_{\Pi_{i\in I}\operatorname{Spd}F_{i}}\) is a locally spatial diamond. The analytic moduli of local \(G\)-shtukas is related to the Fargues-Fontaine variant as follows. **Proposition**.: _We have a canonical morphism_ \[c:\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet},nv}^{(I)}|_{ \Pi_{i\in I}\operatorname{Spd}F_{i}}\operatorname{\to}\mathcal{M}^{I}_{G,\mu_ {\bullet},K_{n}}|_{\Pi_{i\in I}\operatorname{Spd}F_{i}}\] _of locally spatial diamonds over \(\prod_{i\in I}\operatorname{Spd}F_{i}\)._ Proof.: Let \((\mathscr{G},\delta,\psi)\) be an \(S\)-point of \(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet},nv}^{(I)}\). Theorem 3.12 and Proposition 3.8 show that \((\mathscr{G}_{1}|_{\operatorname{Spec}\widetilde{\mathcal{R}}_{\mathrm{R}}^{ \mathrm{int}}},(\phi_{1})_{\widetilde{\mathcal{R}}_{\mathrm{R}}^{\mathrm{int}}})\) corresponds to a \(\underline{G(\mathcal{O}_{F})}\)-bundle on \(S\), and Proposition 3.8 implies that \(\psi_{1}\) corresponds to a reduction \(\mathbb{P}\) of this \(\underline{G(\mathcal{O}_{F})}\)-bundle to a \(\underline{K_{n}}\)-bundle. Note that \((\mathscr{G}_{1}|_{\mathcal{Y}_{S,(0,\infty)}},(\phi_{1})_{\mathcal{Y}_{S,(0, \infty)}})\) corresponds to the pushforward of \(\mathbb{P}\) along \(\underline{K_{n}}\operatorname{\to}\underline{G(F)}\). Therefore the pullback of the \(G\)-bundle \(\mathscr{E}\coloneqq(\mathscr{G}_{1}|_{\mathcal{Y}_{S,(0,\infty)}})/(\phi_{1} )_{\mathcal{Y}_{S,(0,\infty)}}^{\mathbb{Z}}\) from \(\overline{X_{S}}\) to \(X_{\overline{s}}\) is trivial for all geometric points \(\overline{s}\) of \(S\), and the pushforward of \(\mathbb{P}\) along \(\underline{K_{n}}\operatorname{\to}\underline{G(F)}\) equals the \(\underline{G(F)}\)-bundle on \(S\) corresponding to \(\mathscr{E}\) via [11, Theorem III.2.4]. Finally, continuation by Frobenius and Lemma 4.5 indicate that \(\delta_{1}\) induces an isomorphism of \(G\)-bundles \(\alpha:\mathscr{E}|_{X_{S}\smallsetminus\sum_{i\in I}\Gamma_{i}}\stackrel{{ \sim}}{{\to}}G\) with the desired relative position bound, so altogether \((\mathscr{E},\alpha,\mathbb{P})\) yields an \(S\)-point of \(\mathcal{M}^{I}_{G,\mu_{\bullet},K_{n}}\). We will need the following results of Fargues-Scholze [11] on the intersection homology of the moduli of local shtukas. Recall the notation of 2.13, and let \(V\) be an object of \(\operatorname{Rep}_{E}(^{L}G)^{I}\). 
Note that \[\coprod_{\mu_{\bullet}}\operatorname{\mathcal{L}oc}\operatorname{Sht}_{G,\mu_{ \bullet},nv}^{(I_{1},\dots,I_{k})}|_{(\operatorname{Spd}\widetilde{F})^{I}} \text{ and }\coprod_{\mu_{\bullet}}\operatorname{\mathcal{M}}_{G,\mu_{\bullet},K}^{I}|_{( \operatorname{Spd}\widetilde{F})^{I}}\] naturally descend to small v-sheaves \(\operatorname{\mathcal{L}oc}\operatorname{Sht}_{G,V,nv}^{(I_{1},\dots,I_{k})}\) and \(\operatorname{\mathcal{M}}_{G,V,K}^{I}\) over \((\operatorname{Spd}F)^{I}\), respectively, where \(\mu_{\bullet}\) runs over highest weights appearing in \(V_{\overline{\operatorname{Q}}_{\ell}}|_{\widetilde{G}^{I}}\). Proposition 4.12 and [41, Proposition 13.4 (iv)] imply that \(\operatorname{\mathcal{L}oc}\operatorname{Sht}_{G,V,nv}^{(I_{1},\dots,I_{k})}\) is a locally spatial diamond, and we see that \(\operatorname{\mathcal{M}}_{G,V,K}^{I}\) is also a locally spatial diamond. Let \(\Lambda\) be \(\mathcal{O}_{E}\) or \(E\), and now let \(V\) be an object of \(\operatorname{Rep}_{\mathcal{O}_{E}}(^{L}G)^{I}\). If \(\Lambda=\mathcal{O}_{E}\), then by abuse of notation write \(V\) for \(V_{E}\). Write \((\operatorname{Spd}\widetilde{F})^{I}\) for the \(I\)-th power of \(\operatorname{Spd}\widetilde{F}\) over \(\overline{\mathbb{F}}_{q}\), and write \({}^{\prime}\mathcal{F}_{V,K,\Lambda}^{I}\) for the object of \(D_{\blacksquare}(\operatorname{\mathcal{M}}_{G,V,K}^{I}|_{(\operatorname{Spd} \widetilde{F})^{I}},\Lambda)\) obtained from [11, Theorem VI.11.1] and \(V\) by first applying the double-dual embedding as in [11, p. 264] and then pulling back to \(\operatorname{\mathcal{M}}_{G,V,K}^{I}|_{(\operatorname{Spd}\widetilde{F})^{I}}\). Write \({}^{\prime}\mathcal{F}_{V,nv,\Lambda}^{(I_{1},\dots,I_{k})}\) for the pullback of \({}^{\prime}\mathcal{F}_{V,Kn,\Lambda}^{I}\) under the composition \[\operatorname{\mathcal{L}oc}\operatorname{Sht}_{G,V,nv}^{(I_{1},\dots,I_{k})} |_{(\operatorname{Spd}\widetilde{F})^{I}}\to\operatorname{\mathcal{L}oc} \operatorname{Sht}_{G,V,nv}^{(I)}|_{(\operatorname{Spd}\widetilde{F})^{I}} \overset{c}{\to}\operatorname{\mathcal{M}}_{G,V,Kn}^{I}|_{(\operatorname{ Spd}\widetilde{F})^{I}}.\] Write \(W_{F}\) for the absolute Weil group of \(F\). **Theorem**.: _Our \(c\) induces an isomorphism \(f_{\natural}^{\mathcal{L}}(^{\prime}\mathcal{F}_{V,nv,\Lambda}^{(I)})\overset{ \sim}{\to}f_{\natural}^{\mathcal{M}}(^{\prime}\mathcal{F}_{V,K,\Lambda}^{I})\). Consequently, the object \(f_{\natural}^{\mathcal{L}}(^{\prime}\mathcal{F}_{V,nv,\Lambda}^{(I)})\) of \(D_{\blacksquare}((\operatorname{Spd}\widetilde{F})^{I},\Lambda)\) naturally arises via pullback from \(D(W_{F}^{I},\Lambda)\)._ Proof.: Using Theorem 3.12 and Proposition 3.8, the argument in the proof of [11, Proposition IX.3.2] yields the first claim. For the second claim, [11, Proposition VII.3.1 (iii)] enables us to identify \(f_{\natural}^{\mathcal{M}}(^{\prime}\mathcal{F}_{V,K,\Lambda}^{I})\) with \(i_{1}^{*}T_{V}(i_{1!}(\operatorname{c\text{-}Ind}_{K_{n}}^{G(F)}\Lambda))\) as objects of \(D(\Lambda)\), where \(i_{1}:[*/G(F)]\to\operatorname{Bun}_{G}\) is the canonical open embedding, and \(T_{V}\) is the geometric Hecke operator associated with \(V\). Therefore [11, Corollary IX.2.3] yields the desired result. Finally, we define partial Frobenii for the analytic moduli of local \(G\)-shtukas and relate them to partial Frobenii on the Fargues-Fontaine variant as follows. 
Write \(\mathcal{F}_{\Gamma^{(I_{1},\dots,I_{k})}}:\operatorname{\mathcal{L}oc} \operatorname{Sht}_{G,V,nv}^{(I_{1},\dots,I_{k})}\to\operatorname{\mathcal{L}oc }\operatorname{Sht}_{G,V,nv}^{(I_{2},\dots,I_{k},I_{1})}\) for the morphism that sends Note that \(\operatorname{\mathcal{M}}_{G,V,K}^{I}\) naturally descends to a v-sheaf over \((\operatorname{Div}_{F}^{1})^{I}\), where \(\operatorname{Div}_{F}^{1}\) denotes the small v-sheaf over \(\operatorname{Spd}\mathbb{F}_{q}\) whose \(S\)-points parametrize degree-1 relative effective Cartier divisors of \(X_{S}\). Write \(\varphi_{I_{1}}:\operatorname{\mathcal{M}}_{G,V,K}^{I}\to\operatorname{ \mathcal{M}}_{G,V,K}^{I}\) for the resulting endomorphism given by geometric \(q\)-Frobenius on the \(i\)-th factor of \((\operatorname{Spd}F)^{I}\) for \(i\) in \(I_{1}\) and the identity on all other factors. **Lemma**.: _We have a commutative diagram_ Proof.: This follows immediately from the proof of Proposition 4.15. ## 5. Uniformizing the moduli spaces of global shtukas At this point, we shift focus from local to global considerations. Our goal in this section is to define the uniformization morphism, which is essential for our main results. First, we recall some facts about global shtukas and their moduli spaces. We then take formal completions at a fixed place and define the uniformization morphism on the level of formal stacks. By restricting to a Harder-Narasimhan truncation on the global moduli and using results from SS2 on the local moduli, we can pass from formal stacks to formal schemes that are locally formally of finite type over \(\mathbb{D}^{I}\). This enables us to upgrade the formal etaleness of our uniformization morphism to etaleness, as well as to avoid questions about analytifying stacks. Finally, we extend the uniformization theorem to the covering tower on generic fibers. We start by switching our notation to a global context. Let \(C\) be a geometrically connected smooth proper curve over a finite field \(\mathbb{F}_{q}\), and write \(F\) for \(\mathbb{F}_{q}(C)\). Fix a separable closure \(\overline{F}\) of \(F\), and write \(\Gamma_{F}\) for \(\operatorname{Gal}(\overline{F}/F)\). Write \(\mathbb{A}\) for the adele ring of \(C\), and write \(\mathbb{O}\) for its subring of integral adeles. Let \(G\) be a parahoric group scheme over \(C\) as in [38, Definition 2.18], and write \(Z\) for the center of \(G\). By [3, Proposition 2.2(b)], there exists an \(\operatorname{SL}_{h}\)-bundle \(\mathscr{V}\) on \(C\) and a closed embedding of group schemes \(\iota:G^{\operatorname{ad}}\operatorname{\rightarrow}\underline{ \operatorname{Aut}}(\mathscr{V})\) of group schemes over \(C\) such that \(\underline{\operatorname{Aut}}(\mathscr{V})/G^{\operatorname{ad}}\) satisfies [3, (2.1)]. Let \(T\) be a maximal subtorus of \(G_{F}\), and write \(X_{*}^{+}(T)\) for the set of dominant cocharacters of \(T_{\overline{F}}\) with respect to a fixed Borel subgroup \(B\subseteq G_{\overline{F}}\) containing \(T_{\overline{F}}^{\cdot}\). Identify \(X_{*}^{+}(T)\) with the set of conjugacy classes of cocharacters of \(G_{\overline{F}}\). Let \(\mu_{\bullet}=(\mu_{i})_{i\in I}\) be in \(X_{*}^{+}(T)\), and identify the field of definition of \(\mu_{i}\) with \(\mathbb{F}_{q}(C_{i})\) for some finite generically etale cover \(C_{i}\operatorname{\rightarrow}C\). Write \(\operatorname{Gr}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}\big{|}_{\prod_{i \in I}C_{i}}\) for the closed affine Schubert variety as in 1.5. 
Let us recall the definition of global \(G\)-shtukas. Let \(S\) be an affine scheme over \(C^{I}\), and adopt the notation of 1.2. Write \(\tau:S\operatorname{\rightarrow}S\) for the absolute \(q\)-Frobenius endomorphism, and by abuse of notation, write \(\tau:C\times S\operatorname{\rightarrow}C\times S\) for the identity times \(\tau\). **Definition**.: 1. A _global_ \(G\)_-shtuka_ over \(S\) consists of 1. for all \(1\leq j\leq k\), a \(G\)-bundle \(\mathscr{G}_{j}\) on \(C\times S\), 2. for all \(1\leq j\leq k\), an isomorphism of \(G\)-bundles \[\phi_{j}:\mathscr{G}_{j}|_{C\times S\smallsetminus\sum_{i\in I_{j}}\Gamma_{i}} \stackrel{{\sim}}{{\rightarrow}}\mathscr{G}_{j+1}|_{C\times S \smallsetminus\sum_{i\in I_{j}}\Gamma_{i}},\] where \(\mathscr{G}_{k+1}\) denotes the \(G\)-bundle \({}^{\tau}\mathscr{G}_{1}\). 2. Suppose that \(S\) lies over \(\prod_{i\in I}C_{i}\), and let \(\mathscr{G}=((\mathscr{G}_{j})_{j=1}^{k},(\phi_{j})_{j=1}^{k})\) be a global \(G\)-shtuka over \(S\). We say that \(\mathscr{G}\) is _bounded by \(\mu_{\bullet}\)_ if the \(S\)-point of \[[L_{I}^{+}(G)\backslash\mathrm{Gr}_{G}^{(I_{1},\ldots,I_{k})}\,|_{\prod_{i \in I}C_{i}}]\] given by \(((\mathscr{G}_{j}|_{\operatorname{Spec}\widehat{\mathcal{O}}_{C}(S)})_{j=1}^{ k},((\phi_{j})_{\widehat{\mathcal{O}}_{C}^{\downarrow,\circ}(S)})_{j=1}^{k})\) lies in \([L_{I}^{+}(G)\backslash\mathrm{Gr}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}\,|_ {\prod_{i\in I}C_{i}}]\). 3. Let \(\mathscr{G}\) be a global \(G\)-shtuka over \(S\). We say that \(\mathscr{G}\) has _Harder-Narasimhan polygon bounded by \(s\)_ if the \(\mathrm{SL}_{r}\)-bundle \(\iota_{*}(\mathscr{G}_{1}^{\mathrm{ad}})\) has Harder-Narasimhan polygon bounded by \(s2\rho^{\vee}\), where \(2\rho^{\vee}\) denotes the sum of positive coroots in \(\mathrm{SL}_{h}\). Next, we turn to level structures. Let \(N\) be a finite closed subscheme of \(C\). **Definition**.: Suppose that \(S\) lies over \((C\smallsetminus N)^{I}\), and let \(\mathscr{G}\) be a global \(G\)-shtuka over \(S\). A _level-\(N\) structure_ on \(\mathscr{G}\) consists of, for all \(1\leq j\leq k\), an isomorphism of \(G\)-bundles \[\psi_{j}:\mathscr{G}_{j}|_{N\times S}\mathop{\to}^{\sim}G\] such that the diagram commutes, where \(\mathscr{G}_{k+1}\) denotes \({}^{\tau}\mathscr{G}_{1}\), and \(\psi_{k+1}\) denotes \({}^{\tau}\psi_{1}\). Since \(S\) lies over \((C\smallsetminus N)^{I}\), the \((\phi_{j})_{N}\) are isomorphisms. Therefore \(\psi_{1}\) uniquely determines \(\psi_{j}\) for \(2\leq j\leq k\). We now recall the moduli of global \(G\)-shtukas and its associated structures. Write \(N_{i}\) for the preimage of \(N\) in \(C_{i}\). **Definition**.: Write \(\mathrm{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k})}\,|_{\prod_{i\in I}C_{ i}\smallsetminus N_{i}}\) for the stack over \(\prod_{i\in I}C_{i}\smallsetminus N_{i}\) whose \(S\)-points parametrize data consisting of 1. a global \(G\)-shtuka \(\mathscr{G}\) over \(S\) bounded by \(\mu_{\bullet}\), 2. a level-\(N\) structure \(\psi=(\psi_{j})_{j=1}^{k}\) on \(\mathscr{G}\). Write \(\mathrm{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k}),\leq s}\,|_{\prod_{i \in I}C_{i}\smallsetminus N_{i}}\) for the open substack of \(\mathrm{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k})}\,|_{\prod_{i\in I}C_ {i}\smallsetminus N_{i}}\) whose \(S\)-points consist of the \((\mathscr{G},\psi)\) such that \(\mathscr{G}\) has Harder-Narasimhan polygon bounded by \(s\). 
Write \(f^{\mathrm{S}}:\mathrm{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k})}\,|_{ \prod_{i\in I}C_{i}\smallsetminus N_{i}}\,{\to}\,{\prod_{i\in I}C_{i} \smallsetminus N_{i}}\) for the structure morphism. Our \(\mathrm{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k})}\,|_{\prod_{i\in I}C_ {i}\smallsetminus N_{i}}\) has an action of \(Z(F)\backslash Z(\mathbb{A})\) by twisting. Since the image of \(Z\) in \(\underline{\mathrm{Aut}}(\mathscr{V})\) is trivial, \(\mathrm{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k}),\leq s}\,|_{\prod_{i \in I}C_{i}\smallsetminus N_{i}}\) is preserved by the \(Z(F)\backslash Z(\mathbb{A})\)-action. Finally, note that \(\mathrm{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k})}\,|_{\prod_{i\in I}C_ {i}\smallsetminus N_{i}}\) is the increasing union of the \(\mathrm{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k}),\leq s}\,|_{\prod_{i \in I}C_{i}\smallsetminus N_{i}}\). For finite closed subschemes \(N^{\prime}\supseteq N\) of \(C\), we have morphisms \[\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},N^{\prime}}\left|{}_{ \prod_{i\in I}C_{i}\smallsetminus N^{\prime}_{i}}\right.\to\operatorname{Sht}^{(I _{1},\dots,I_{k})}_{G,\mu_{\bullet},N}\left|{}_{\prod_{i\in I}C_{i} \smallsetminus N^{\prime}_{i}}\right.\] given by pulling back \(\psi_{j}\) to \(N\times S\) for all \(1\leq j\leq k\). Write \(K_{N^{\prime},N}\) for the kernel of \(G(\mathscr{O}_{N^{\prime}})\operatorname{\to}G(\mathscr{O}_{N})\), and note that \(K_{N^{\prime},N}\) acts on \(\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},N^{\prime}}\left|{}_ {\prod_{i\in I}C_{i}\smallsetminus N^{\prime}_{i}}\right.\) over \(\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},N}\left|{}_{\prod_ {i\in I}C_{i}\smallsetminus N^{\prime}_{i}}\right.\) via postcomposition with \(\psi_{j}\) for all \(1\leq j\leq k\). **Proposition**.: _The morphism \(\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},N^{\prime}}\left|{}_ {\prod_{i\in I}C_{i}\smallsetminus N^{\prime}_{i}}\operatorname{\to} \operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},N}\left|{}_{\prod_ {i\in I}C_{i}\smallsetminus N^{\prime}_{i}}\right.\) is finite Galois, where the Galois action is given by that of \(K_{N^{\prime},N}\)._ Proof.: When \(N=\varnothing\), the result follows from the proof of [43, Proposition 2.16 b)]. For general \(N\), the result follows from the commutative triangle and compatibility of the \(K_{N^{\prime},N}\)-action with changing \(N^{\prime}\) and \(N\). **Proposition**.: _Our \(\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},N}\left|{}_{\prod_ {i\in I}C_{i}\smallsetminus N_{i}}\right.\) is a Deligne-Mumford stack that is separated and locally of finite type over \(\prod_{i\in I}C_{i}\smallsetminus N_{i}\). Moreover, for large enough \(\deg N\), our \(\operatorname{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{G,\mu_{\bullet},N}\left|{}_ {\prod_{i\in I}C_{i}\smallsetminus N_{i}}\right.\) is a scheme that is separated and locally of finite type over \(\prod_{i\in I}C_{i}\smallsetminus N_{i}\)._ Proof.: The second claim follows from the proof of [32, Lemme 12.19]. Using Proposition 5.5, the first claim follows from the argument in [45, SS5.1.5]. Let \(\widetilde{F}\) be the finite Galois extension of \(F\) such that \(\operatorname{Gal}(\widetilde{F}/F)\) equals the image of the \(\Gamma_{F}\)-action on \(X_{*}^{+}(T)\), and identify \(\widetilde{F}\) with \(\mathbb{F}_{q}(\widetilde{C})\) for some finite generically etale cover \(\widetilde{C}\operatorname{\to}C\). 
Write \(\widetilde{N}\) for the preimage of \(N\) in \(\widetilde{C}\). Write \(\widehat{G}\) for the dual group of \(G_{F}\) over \(\mathcal{O}_{E}\), and write \({}^{L}G\) for \(\widehat{G}\rtimes\operatorname{Gal}(\widetilde{F}/F)\). Let \(V\) be an object of \(\operatorname{Rep}_{E}({}^{L}G)^{I}\). Note that \(\coprod_{\mu_{\bullet}}\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{ \bullet},N}\left|{}_{(\widetilde{C}\smallsetminus\widetilde{N})^{I}}\right.\) and \(\coprod_{\mu_{\bullet}}\operatorname{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{G,\mu _{\bullet},N}\left|{}_{(\widetilde{C}\smallsetminus\widetilde{N})^{I}}\right.\) naturally descend to stacks \[\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,V,N}\ \text{ and }\operatorname{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{G,V,N}\] over \((C\smallsetminus N)^{I}\), respectively, where \(\mu_{\bullet}\) runs over highest weights appearing in \(V_{\overline{Q}_{\ell}}\left|{}_{\widehat{G}^{I}}\right.\). Proposition 5.6 and descent imply that \(\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,V,N}\) is a Deligne-Mumford stack that is separated and locally of finite type over \((C\smallsetminus N)^{I}\), and for large enough \(\deg N\), our \(\operatorname{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{G,V,N}\) is a scheme that is separated and locally of finite type over \((C\smallsetminus N)^{I}\). Write \(K_{N}\) for the kernel of \(G(\mathbb{O})\operatorname{\to}G(\mathscr{O}_{N})\). For any \(g\) in \(G(\mathbb{A})\), recall that we have a canonical finite etale correspondence \(\mathbf{1}_{K_{N}gK_{N}}\) from \(\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},N}\left|{}_{\prod_ {i\in I}\mathbb{F}_{q}(C_{i})}\right.\) to itself [32, Construction 2.20]6. Note that \(\mathbf{1}_{K_{N}gK_{N}}\) commutes with the \(Z(F)\backslash Z(\mathbb{A})\)-action. **5.9 Definition**.: Write \(\operatorname{Fr}^{(I_{1},\dots,I_{k})}:\operatorname{Sht}^{(I_{1},\dots,I_{k})}_{G, V,N}\mathop{\rightarrow}\operatorname{Sht}^{(I_{2},\dots,I_{k},I_{1})}_{G,V,N}\) for the morphism given by Note that \(\operatorname{Fr}^{(I_{1},\dots,I_{k})}\) lies above the endomorphism of \((C\smallsetminus N)^{I}\) given by geometric \(q\)-Frobenius on the \(i\)-th factor for \(i\) in \(I_{1}\) and the identity on all other factors. By [32, Lemme 3.1]7, there exists a non-negative integer \(\kappa(V)\) such that Footnote 7: While [32, Lemme 3.1] only treats the case of split \(G\), it extends to the general case. Indeed, this is already implicitly used in [32, (12.15)]. \[(\operatorname{Fr}^{(I_{1},\dots,I_{k})})^{-1}(\operatorname{Sht}^{(I_{2}, \dots,I_{k},I_{1}),\leq s}_{G,V,N}) \subseteq\operatorname{Sht}^{(I_{1},\dots,I_{k}),\leq s+\kappa(V)}_{G,V,N}\text { and }\] \[\operatorname{Fr}^{(I_{1},\dots,I_{k})}(\operatorname{Sht}^{(I_{1},\dots,I_{ k}),\leq s}_{G,V,N}) \subseteq\operatorname{Sht}^{(I_{2},\dots,I_{k},I_{1}),\leq s+\kappa(V)}_{G, V,N}.\] At this point, we fix a place of \(F\) and begin exploring the interplay between the local and global situations. Let \(v\) be a closed point of \(C\), write \(r\) for the degree of \(v\), and write \(\mathcal{O}_{v}\) for \(\widehat{\mathcal{O}}_{C,v}\). Choose a uniformizer \(z\) of \(\mathcal{O}_{v}\), which yields an identification \(\mathcal{O}_{v}=\mathbb{F}_{q^{r}}[\![z]\!]\). Write \(F_{v}\) for the fraction field of \(\mathcal{O}_{v}\), and write \(\mathbb{D}\) for the formal scheme \(\operatorname{Spf}\mathcal{O}_{v}\). 
Fix a separable closure \(\overline{F}_{v}\) of \(F_{v}\), and fix an embedding \(\overline{F}\mathop{\rightarrow}\overline{F}_{v}\). By abuse of notation, write \(G\) for the pullback of \(G\) to \(\mathcal{O}_{v}\). Using \(T_{F_{v}}\) for our maximal subtorus of \(G_{F_{v}}\) and \(B_{\overline{F}_{v}}\) for our Borel subgroup of \(G_{\overline{F}_{v}}\), we can identify \(F_{i}\) from 1.5 with the closure of \(\mathbb{F}_{q}(C_{i})\) in \(\overline{F}_{v}\) as well as identify \(\mathbb{D}_{i}\) from 1.5 with the formal completion of \(C_{i}\) at the closed point \(v_{i}\) of \(C_{i}\) above \(v\) induced by \(\overline{F}\mathop{\rightarrow}\overline{F}_{v}\). The following two lemmas explain how to resolve the clash between our local and global base fields. Write \(\mathbb{D}^{I}\) for the \(I\)-th power of \(\mathbb{D}\) over \(\mathbb{F}_{q^{r}}\). Adopt the notation of 1.3, and let \(S=\operatorname{Spec}R\) be an affine scheme over \(\mathbb{D}^{I}\). **Lemma**.: _We have a natural isomorphism of affine formal schemes_ \[\coprod\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\mathop{\rightarrow}\mathbb{ D}\times S,\] _where \(\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\) denotes the product of \(S\mathop{\rightarrow}\operatorname{Spec}\mathbb{F}_{q^{r}}\xrightarrow{ \tau^{d}}\operatorname{Spec}\mathbb{F}_{q^{r}}\) and \(\mathbb{D}\) over \(\mathbb{F}_{q^{r}}\), and \(d\) runs over \(\mathbb{Z}/r\). Under this identification, \(\tau:\mathbb{D}\times S\mathop{\rightarrow}\mathbb{D}\times S\) on the right-hand side corresponds to the disjoint union of \(\tau:\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\mathop{\rightarrow}\mathbb{ D}\times_{\mathbb{F}_{q^{r}},d-1}S\) on the left-hand side._ Proof.: Take \(\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\mathop{\rightarrow}\mathbb{D}\times S\) to be the natural morphism. Since \(\mathbb{F}_{q^{r}}\) is finite Galois over \(\mathbb{F}_{q}\) with \(\operatorname{Gal}(\mathbb{F}_{q^{r}}/\mathbb{F}_{q})=\tau^{\mathbb{Z}/r}\), the induced morphism above is an isomorphism. The last statement follows immediately. **5.12 Lemma**.: _A local \(G\)-shtuka over \(S\) is equivalent to data consisting of_ 1. _for all_ \(1\leq j\leq k\)_, a_ \(G\)_-bundle_ \(\mathscr{H}_{j}\) _on_ \(\mathbb{D}\times S\)_,_ 2. _for all_ \(1\leq j\leq k\)_, an isomorphism of_ \(G\)_-bundles_ \[\chi_{j}:\mathscr{H}_{j}|_{\mathbb{D}\times S\smallsetminus\sum_{i\in I_{j}} \Gamma_{i}}\xrightarrow{\sim}\mathscr{H}_{j+1}|_{\mathbb{D}\times S \smallsetminus\sum_{i\in I_{j}}\Gamma_{i}},\] _where_ \(\mathscr{H}_{k+1}\) _denotes the_ \(G\)_-bundle_ \({}^{\tau}\mathscr{H}_{k}\)_._ Proof.: Let \(\mathscr{G}\) be a local \(G\)-shtuka over \(S\), and for all \(1\leq j\leq k\), view \(\mathscr{G}_{j}\) as a \(G\)-bundle on \(\mathbb{D}\times_{\mathbb{F}_{q^{r}}}S\). Using Lemma 5.11, we can form \(\mathscr{H}_{j}\) by taking \(\tau^{{}^{\mathcal{G}}}\mathscr{G}_{1}\) on \(\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\) for \(1\leq d\leq r-1\) and \(\mathscr{G}_{j}\) on \(\mathbb{D}\times_{\mathbb{F}_{q^{r}}}S\). Note that \(\tau^{{}^{\mathcal{H}}}\mathscr{H}_{1}\) is given by \(\tau^{{}^{\mathcal{G}}}\mathscr{G}_{1}\) on \(\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\) for all \(1\leq d\leq r\). Therefore we can form \(\chi_{j}\) by taking \(\mathrm{id}\) on \(\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\) for \(1\leq d\leq r-1\) and \(\phi_{j}\) on \(\mathbb{D}\times_{\mathbb{F}_{q^{r}}}S\). Conversely, let \(\mathscr{H}\coloneqq((\mathscr{H}_{j})_{j=1}^{k},(\chi_{j})_{j=1}^{k})\) be as above. 
Write \((-)|_{d}\) for restrictions to \(\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\). Since \(\Gamma_{i}\) lies in \(\mathbb{D}\times_{\mathbb{F}_{q^{r}}}S\) for all \(i\) in \(I\), our \(\chi_{j}|_{d}\) is an isomorphism for all \(1\leq j\leq k\) and \(1\leq d\leq r-1\). By repeatedly using Lemma 5.11, this identifies \(\mathscr{H}_{j}|_{d}\) with \(\tau^{{}^{\mathcal{G}}}\mathscr{H}_{1}|_{r}\). Hence this also identifies \(\mathscr{H}_{k+1}|_{r}\) with \(\tau^{{}^{\mathcal{r}}}\mathscr{H}_{1}|_{r}\), so altogether we see that \(\mathscr{H}|_{r}\) yields a local \(G\)-shtuka over \(S\). In our study of the uniformization morphism, we start by defining it on the level of formal stacks. Write \(\prod_{i\in I}\mathbb{D}_{i}\) for the product of the \(\mathbb{D}_{i}\) over \(\mathbb{F}_{q^{r}}\), and write \(\prod_{i\in I}v_{i}\) for the product of the \(v_{i}\) over \(\mathbb{F}_{q^{r}}\). Assume that \(N\) and \(v\) are disjoint, and write \(\widehat{\mathrm{Sht}}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet},N}|_{\prod_{i \in I}\mathbb{D}_{i}}\) for the formal completion of \(\mathrm{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet},N}\left|_{\prod_{i\in I}C _{i}\smallsetminus N_{i}}\right.\) along \(\prod_{i\in I}v_{i}\) in \(\prod_{i\in I}C_{i}\smallsetminus N_{i}\). **Proposition**.: _We have a canonical morphism_ \[\widehat{\Theta}:\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{ \bullet}}|_{\prod_{i\in I}\mathbb{D}_{i}}\to\widehat{\mathrm{Sht}}^{(I_{1}, \ldots,I_{k})}_{G,\mu_{\bullet},N}|_{\prod_{i\in I}\mathbb{D}_{i}}\] _of stacks over \(\prod_{i\in I}\mathbb{D}_{i}\) that is formally etale._ This result generalizes cases of [3, Theorem 5.3]. Proof.: First, we define \(\widehat{\Theta}\). Let \((\mathscr{G},\delta)\) be an \(S\)-point of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_ {i\in I}\mathbb{D}_{i}}\), and let \(((\mathscr{H}_{j})_{j=1}^{k},(\chi_{j})_{j=1}^{k})\) be the data corresponding to \(\mathscr{G}\) as in Lemma 5.12. For all \(1\leq j\leq k\), Lemma 5.11 shows that taking \(\delta_{j}\) on \(\mathbb{D}\times_{\mathbb{F}_{q^{r}}}S\) and \(\tau^{{}^{\mathcal{A}}}\delta_{1}\) on \(\mathbb{D}\times_{\mathbb{F}_{q^{r}},d}S\) for \(1\leq d\leq r-1\) yields an isomorphism of \(G\)-bundles \[\epsilon_{j}:\mathscr{H}_{j}|_{\mathbb{D}\times S\smallsetminus v\times S} \stackrel{{\sim}}{{\to}}G.\] Beauville-Laszlo lets us use \(\epsilon_{j}\) to glue \(\mathscr{H}_{j}\) and \(G|_{C\times S\smallsetminus v\times S}\) into a \(G\)-bundle \(\mathscr{G}_{j}^{\Theta}\) on \(C\times S\). Because the square in Definition 2.2.b) commutes, Beauville-Laszlo also lets us glue \(\chi_{j}\) and \(\mathrm{id}\) into an isomorphism of \(G\)-bundles \[\phi_{j}^{\Theta}:\mathscr{G}_{j}^{\Theta}|_{C\times S\smallsetminus\sum_{i\in I _{j}}\Gamma_{i}}\stackrel{{\sim}}{{\to}}\mathscr{G}_{j+1}^{ \Theta}|_{C\times S\smallsetminus\sum_{i\in I_{j}}\Gamma_{i}},\] where we use Lemma 1.3 to identify \(R[\![z]\!]\) with \(\widehat{\mathcal{O}}_{C}(S)\), and \(\mathscr{G}_{k+1}^{\Theta}\) denotes the \(G\)-bundle \(\tau^{\mathscr{G}_{1}}\). As \(\mathscr{G}\) is bounded by \(\mu_{\bullet}\), the global \(G\)-shtuka \(\mathscr{G}^{\Theta}\coloneqq((\mathscr{G}_{j}^{\Theta})_{j=1}^{k},(\phi_{j}^{ \Theta})_{j=1}^{k})\) is too. 
Because \(N\) and \(v\) are disjoint, \(\mathscr{G}_{j}^{\Theta}|_{N\times S}\) and \(\phi_{j}^{\Theta}|_{N\times S}\) are canonically trivial, so we have the trivial level-\(N\) structure \(\mathrm{id}=(\mathrm{id})_{i=1}^{k}\) on \(\mathscr{G}^{\Theta}\). Altogether, we define \(\widehat{\Theta}(\mathscr{G},\delta)\) to be the \(S\)-point \((\mathscr{G}^{\Theta},\mathrm{id})\) of \(\widehat{\mathrm{Sht}}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet},N}|_{\prod_{i \in I}\mathbb{D}_{i}}\). To see that \(\widehat{\Theta}\) is formally etale, let \(J\) be an ideal of \(R\) satisfying \(J^{n}=0\), and write \(\overline{S}\!\to\!S\) for the associated closed embedding. For any commutative square write \((\overline{\mathscr{G}},\overline{\delta})\) for the \(\overline{S}\)-point of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i \in I}\mathbb{D}_{i}}\), and write \((\mathscr{F},\psi)\) for the \(S\)-point of \(\widehat{\operatorname{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}}|_{\prod_{ i\in I}\mathbb{D}_{i}}\). The restriction of \(\mathscr{F}\) to \(\mathbb{D}\times S\) yields data as in Lemma 5.12, which corresponds to a local \(G\)-shtuka \(\mathscr{G}\) over \(S\). As \(\mathscr{F}\) is bounded by \(\mu_{\bullet}\), our \(\mathscr{G}\) is too. Because the pullback of \(\mathscr{F}\) to \(\overline{S}\) is \(\widehat{\Theta}(\overline{\mathscr{G}},\overline{\delta})\), we see that the pullback of \(\mathscr{G}\) to \(\overline{S}\) is \(\overline{\mathscr{G}}\). Therefore Proposition 2.3 yields a unique quasi-isogeny \(\delta\) from \(\mathscr{G}\) to \(G\) whose pullback to \(\overline{S}\) is \(\overline{\delta}\). Consider the \(S\)-point of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{ i\in I}\mathbb{D}_{i}}\) given by \((\mathscr{G},\delta)\). The top triangle commutes by construction, and the bottom triangle commutes by the uniqueness of Beauville-Laszlo gluing. Finally, the uniqueness of Proposition 2.3 and Beauville-Laszlo gluing also imply that \((\mathscr{G},\delta)\) is the unique such morphism, as desired. By restricting to a Harder-Narasimhan truncation and letting the (tame) level be large enough, we can pass from formal stacks to formal schemes. Maintain the assumptions of 5.13, Write \(\widehat{\operatorname{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k}),\leq s}}|_{ \prod_{i\in I}\mathbb{D}_{i}}\) for the formal completion of \[\operatorname{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{G,\mu_{\bullet},N}|_{\prod_{ i\in I}C_{i}\smallsetminus N_{i}}\] along \(\prod_{i\in I}v_{i}\) in \(\prod_{i\in I}C_{i}\smallsetminus N_{i}\), and write \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{G,\mu_{\bullet}}|_ {\prod_{i\in I}\mathbb{D}_{i}}\) for the preimage of \(\widehat{\operatorname{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k}),\leq s}}|_{ \prod_{i\in I}\mathbb{D}_{i}}\) under \(\widehat{\Theta}\). **Proposition**.: _For large enough \(\deg N\), the restriction_ \[\widehat{\Theta}:\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{ \prod_{i\in I}\mathbb{D}_{i}}\to\widehat{\operatorname{Sht}_{G,\mu_{\bullet} }^{(I_{1},\dots,I_{k}),\leq s}}|_{\prod_{i\in I}\mathbb{D}_{i}}\] _is an etale morphism of formal schemes._ Proof.: Proposition 5.13 shows that the restriction \[\widehat{\Theta}:\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{ \prod_{i\in I}\mathbb{D}_{i}}\to\widehat{\operatorname{Sht}_{G,\mu_{\bullet} }^{(I_{1},\dots,I_{k}),\leq s}}|_{\prod_{i\in I}\mathbb{D}_{i}}\] is formally etale. 
Because \(\widehat{\operatorname{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k}),\leq s}}|_ {\prod_{i\in I}\mathbb{D}_{i}}\) is an open substack of \[\widehat{\operatorname{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k}),\leq s}}|_ {\prod_{i\in I}\mathbb{D}_{i}},\] we see that \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k}),\leq s}_{\prod_{i\in I} \mathbb{D}_{i}}\) is an open subsheaf of \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{ i\in I}\mathbb{D}_{i}}\), so Theorem 2.12 implies that \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\dots,I_{k})}_{G,\mu_{\bullet},N}|_{\prod_{ i\in I}\mathbb{D}_{i}}\) is a formal scheme that is locally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\). For large enough \(\deg N\), Proposition 5.6 implies that \(\widehat{\operatorname{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k}),\leq s}}|_{ \prod_{i\in I}\mathbb{D}_{i}}\) is also a formal scheme that is formally of finite type over \(\prod_{i\in I}\mathbb{D}_{i}\). Hence the above restriction is formally of finite type, so it is an etale morphism of formal schemes. To add level at \(v\), we need to pass to generic fibers as follows. Maintain the assumptions of 5.14, and assume that \(\deg N\) is large enough as in Proposition 5.14. Proposition 5.6 shows that \(\operatorname{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\dots,I_{k}),\leq s}\left|{ \prod_{i\in I}C_{i}\smallsetminus N_{i}}\right.\) is separated over \(\prod_{i\in I}C_{i}\smallsetminus N_{i}\), so the natural morphism of adic spaces \[\widehat{\operatorname{Sht}}_{G,\mu_{\bullet},N}^{(I_{1},\dots,I_{k}),\leq s} \left|{\prod_{i\in I}\mathbb{D}_{i}}\to(\operatorname{Sht}_{G,\mu_{\bullet},N}^ {(I_{1},\dots,I_{k}),\leq s}){\prod_{i\in I}\mathbb{D}_{i}}\right.\] is an open embedding [26, (4.6.iv.c)]. Write \(\prod_{i\in I}\operatorname{Spa}F_{i}\) for the product of the \(\operatorname{Spa}F_{i}\) over \(\mathbb{F}_{q^{r}}\). For any non-negative integer \(n\), write \(\widehat{\operatorname{Sht}}_{G,\mu_{\bullet},nv+N}^{(I_{1},\dots,I_{k}), \leq s}\left|{\prod_{i\in I}\operatorname{Spa}F_{i}}\right.\) for the preimage of \(\widehat{\operatorname{Sht}}_{G,\mu_{\bullet},N}^{(I_{1},\dots,I_{k}),\leq s} \left|{\prod_{i\in I}\operatorname{Spa}F_{i}}\right.\) in \((\operatorname{Sht}_{G,\mu_{\bullet},nv+N}^{(I_{1},\dots,I_{k}),\leq s}){\prod _{i\in I}\operatorname{Spa}F_{i}}\). Write \(\prod_{i\in I}\operatorname{Spd}F_{i}\) for the product of the \(\operatorname{Spd}F_{i}\) over \(\mathbb{F}_{q^{r}}\). Write \[\mathcal{L}\text{oc}\mathcal{Sht}_{G,\mu_{\bullet},nv}^{(I_{1},\dots,I_{k}), \leq s}\left|{\prod_{i\in I}\operatorname{Spd}F_{i}}\right.\] for the preimage of \((\mathfrak{Loc}\mathcal{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k}),\leq s} \left|{\prod_{i\in I}\operatorname{Spa}F_{i}}\right.)^{\Diamond}\) in \(\mathcal{L}\text{oc}\mathcal{Sht}_{G,\mu_{\bullet},nv}^{(I_{1},\dots,I_{k})} \left|{\prod_{i\in I}\operatorname{Spd}F_{i}}\right.\), where we use Theorem 4.9 to identify \((\mathfrak{Loc}\mathcal{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})}\left|{ \prod_{i\in I}\mathbb{D}_{i}}\right.)^{\Diamond}\) with \[\mathcal{L}\text{oc}\mathcal{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k})} \left|{\prod_{i\in I}\mathbb{D}_{i}^{\Diamond}}.\] We can now define the uniformization morphism on generic fibers. Maintain the assumptions of 5.15, and let \(S=\operatorname{Spa}(R,R^{+})\) be an affinoid perfectoid space over \(\prod_{i\in I}\operatorname{Spa}F_{i}\). 
**Theorem**.: _We have a canonical morphism_ \[\Theta_{n}:\mathcal{L}\text{oc}\mathcal{Sht}_{G,\mu_{\bullet},nv}^{(I_{1}, \dots,I_{k}),\leq s}\left|{\prod_{i\in I}\operatorname{Spd}F_{i}}\right.\to( \widehat{\operatorname{Sht}}_{G,\mu_{\bullet},nv+N}^{(I_{1},\dots,I_{k}), \leq s}\left|{\prod_{i\in I}\operatorname{Spa}F_{i}}\right.)^{\Diamond}\] _of locally spatial diamonds over \(\prod_{i\in I}\operatorname{Spd}F_{i}\) that is etale._ Proof.: First, we define \(\Theta_{n}\). By Theorem 4.9, an \(S\)-point of \[\mathcal{L}\text{oc}\mathcal{Sht}_{G,\mu_{\bullet},nv}^{(I_{1},\dots,I_{k}), \leq s}\left|{\prod_{i\in I}\operatorname{Spd}F_{i}}\right.\] corresponds to a cover \((S_{\alpha})_{\alpha}\) of \(S\) by rational open subspaces \(S_{\alpha}=\operatorname{Spa}(R_{\alpha},R_{\alpha}^{+})\) with pairwise intersections \(S_{\alpha\beta}=\operatorname{Spa}(R_{\alpha\beta},R_{\alpha\beta}^{+})\), a family \((\mathscr{G}^{\alpha},\delta^{\alpha})\) of \(\operatorname{Spf}R_{\alpha}^{+}\)-points of \(\mathfrak{Loc}\mathcal{Sht}_{G,\mu_{\bullet}}^{(I_{1},\dots,I_{k}),\leq s} \left|{\prod_{i\in I}\mathbb{D}_{i}}\right.\) that agree on \(\operatorname{Spf}R_{\alpha\beta}^{+}\), and a level-\(n\) structure \(\psi\) on the analytic local \(G\)-shtuka over \(S\) obtained from gluing the \((\mathscr{G}^{\alpha})^{\text{an}}\). Proposition 5.6 indicates that \(\operatorname{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\dots,I_{k}),\leq s}\left|{ \prod_{i\in I}C_{i}\smallsetminus N_{i}}\right.\) is locally of finite type over \(\prod_{i\in I}C_{i}\smallsetminus N_{i}\), so for all \(\alpha\), our \(\Theta(\mathscr{G}^{\alpha},\delta^{\alpha})\) yields an \(R_{\alpha}^{+}\)-point of \[\operatorname{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\dots,I_{k}),\leq s}\left|{ \prod_{i\in I}C_{i}\smallsetminus N_{i}}.\right.\] Write \(\mathscr{G}^{\alpha,\Theta}\) for the resulting global \(G\)-shtuka over \(\operatorname{Spec}R_{\alpha}\), which is bounded by \(\mu_{\bullet}\) and has Harder-Narasimhan polygon bounded by \(m\). Note that the pullback \(\psi^{\alpha}\) of \(\psi\) to \(S_{\alpha}\) is precisely a level-\(nv\) structure on \(\mathscr{G}^{\alpha,\Theta}\), so we can form a level-\((nv+N)\) structure \(\psi^{\alpha,\Theta}\) on \(\mathscr{G}^{\alpha,\Theta}\) by taking \(\psi^{\alpha}\) on \(nv\) and \(\operatorname{id}\) on \(N\). Then \((\mathscr{G}^{\alpha,\Theta},\psi^{\alpha,\Theta})\) induces an \(S_{\alpha}\)-point of \(\widehat{\operatorname{Sht}}_{G,\mu_{\bullet},nv+N}^{(I_{1},\dots,I_{k}),\leq s }\left|{\prod_{i\in I}\operatorname{Spa}F_{i}}\right.\), and because the \(\mathscr{G}^{\alpha,\Theta}\) and \(\psi^{\alpha,\Theta}\) agree on \(\operatorname{Spec}R_{\alpha\beta}\), the resulting family glues into an \(S\)-point. We define this \(S\)-point to be the value of \(\Theta_{n}\). To see that \(\Theta_{n}\) is etale, note that we have a commutative square Theorem 4.9 and Proposition 4.12 imply that the top arrow is etale, and Proposition 5.5 and [41, Lemma 15.6] imply that the bottom arrow is etale. Proposition 5.14 and [41, Lemma 15.6] show that \(\widehat{\Theta}^{\Diamond}\) is etale, so the \(2\)-out-of-\(3\) property [41, Proposition 11.30] concludes that \(\Theta_{n}\) is etale. As before, we reindex everything in terms of representations of the dual group. Maintain the assumptions of 5.15. 
Let \(\widetilde{F}_{v}\) be the extension of \(F_{v}\) as in 2.13, and identify \(\widetilde{F}_{v}\) with the completion of \(\widetilde{F}\) at the place \(\widetilde{v}\) of \(\widetilde{F}\) above \(v\) induced by \(\overline{F}\!\to\!\overline{F}_{v}\). Identify \(\widehat{G}\) with the dual group of \(G_{F_{v}}\) over \(\mathcal{O}_{E}\), and write \({}^{L}G_{v}\) for \(\widehat{G}\rtimes\operatorname{Gal}(\widetilde{F}_{v}/F_{v})\). Note that we have a natural inclusion \({}^{L}G_{v}\!\to\!^{L}G\). Let \(V\) be an object of \(\operatorname{Rep}_{E}({}^{L}G_{v})^{I}\). Write \(\widehat{\operatorname{Sht}}^{(I_{1},\ldots,I_{k})}_{G,V,N}\) and \(\widehat{\operatorname{Sht}}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,N}\) for the formal completions of \(\operatorname{Sht}^{(I_{1},\ldots,I_{k})}_{G,V,N}\) and \(\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,N}\), respectively, along \(v^{I}\) in \((C\smallsetminus N)^{I}\). Proposition 5.13 and descent yield a canonical morphism \[\widehat{\Theta}:\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,V} \!\to\!\widehat{\operatorname{Sht}}^{(I_{1},\ldots,I_{k})}_{G,V,N}\] that is formally etale. Write \(\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V}\) for the preimage of \(\widehat{\operatorname{Sht}}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,N}\) under \(\Theta\). Write \(\widehat{\operatorname{Sht}}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,nv+N}\) for the preimage of \(\widehat{\operatorname{Sht}}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,nv+N}\) in \((\operatorname{Sht}^{(I_{1},\ldots,I_{k})}_{G,V,nv+N})_{(\operatorname{Spa} F_{v})^{I}}\), and write \(\mathcal{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,nv}\) for the preimage of \((\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V})^{\Diamond}_ {(\operatorname{Spa}F_{v})^{I}}\) in \(\mathcal{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,V,nv}\), where we use Theorem 4.9 to identify \((\mathfrak{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V})^{\Diamond}_ {(\operatorname{Spa}F_{v})^{I}}\) with \(\mathcal{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k})}_{G,V,0v}\). Theorem 5.16 and Galois descent yield a canonical morphism \[\Theta_{n}:\mathcal{Loc}\mathfrak{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,nv} \!\to\!\!(\widehat{\operatorname{Sht}}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,nv+ N})^{\Diamond}\] of locally spatial diamonds over \((\operatorname{Spd}F_{v})^{I}\) that is etale. Finally, we show that the uniformization morphism is compatible with partial Frobenii. Maintain the assumptions of 5.15. 
**Lemma**.: _Our \(\mathcal{F}\Gamma^{(I_{1},\ldots,I_{k})}\) restricts to a morphism_ \[\mathcal{F}\Gamma^{(I_{1},\ldots,I_{k})}:\mathcal{Loc}\mathfrak{Sht}^{(I_{1}, \ldots,I_{k}),\leq s}_{G,V,nv}\!\to\!\mathcal{Loc}\mathfrak{Sht}^{(I_{1}, \ldots,I_{k}),\leq s+r\kappa(V)}_{G,V,nv}.\] _After enlarging \(\deg N\), we can also form the \(r\)-fold composition_ \[(\operatorname{Fr}^{(I_{1},\ldots,I_{k})})_{\tau^{r-1}(\operatorname{Spa}F_{ v})^{I_{1}}\times(\operatorname{Spa}F_{v})^{I\smallsetminus I_{1}}}\circ\cdots \circ(\operatorname{Fr}^{(I_{1},\ldots,I_{k})})_{(\operatorname{Spa}F_{v})^{I}},\] _which yields a morphism_ \[(\operatorname{Fr}^{(I_{1},\ldots,I_{k})})^{r}_{(\operatorname{Spa}F_{v})^{I} }:(\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,V,nv+N})_{(\operatorname {Spa}F_{v})^{I}}\!\to\!(\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s+r \kappa(V)}_{G,V,nv+N})_{(\operatorname{Spa}F_{v})^{I}}.\] _Finally, we have \(\Theta_{n}\circ\mathcal{F}\Gamma^{(I_{1},\ldots,I_{k})}=(\operatorname{Fr}^{(I _{1},\ldots,I_{k})})^{r,\Diamond}_{(\operatorname{Spa}F_{v})^{I}}\circ\Theta_{n}\)._ Proof.: Write \(\widehat{\operatorname{Sht}}_{G,V,N}^{(I_{1},\ldots,I_{k})}|_{\neg\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{}}}}}}}}}}}}} \times\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\cdotcdot{ \cdot}}}}}}}}}}}}}\) for the formal completion of \(\operatorname{Sht}_{G,V,N}^{(I_{1},\ldots,I_{k})}\) along \(\tau(v)^{I_{1}}\times v^{I\smallsetminus I_{1}}\) in \((C\smallsetminus N)^{I}\). We see from 5.9 that \(\operatorname{Fr}^{(I_{1},\ldots,I_{k})}\) induces a morphism \[\widehat{\operatorname{Fr}}^{(I_{1},\ldots,I_{k})}:\widehat{\operatorname{Sht }}_{G,V,N}^{(I_{1},\ldots,I_{k})}\to\widehat{\operatorname{Sht}}_{G,V,N}^{(I_{ 2},\ldots,I_{k},I_{1})}|_{\neg\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{ \operatorname{ \ We will also take cohomology after quotienting by a lattice \(\Xi\) of \(Z(F)\backslash Z(\mathbb{A})\), where by a _lattice_ we mean a discrete torsionfree cocompact subgroup, so we proceed as follows. Note that \(L_{I}^{+}(Z)\) acts trivially on \(\operatorname{Gr}_{G,\boldsymbol{\mu}_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{ \prod_{i\in I}C_{i}}\), so the natural \(L_{I}^{+}(G)\)-action factors through \(L_{I}^{+}(G^{\operatorname{ad}})\). For large enough \(e\), \(1.5\) indicates that this factors through \(L_{I}^{e}(G^{\operatorname{ad}})\). Now \(L_{I}^{e}(Z)\) acts trivially on the objects of \[D(\operatorname{Gr}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}C _{i}},E)\] obtained from geometric Satake [32, Theoreme 12.16], so these objects are \(L_{I}^{e}(G^{\operatorname{ad}})\)-equivariant. 
Adapting the construction in 6.1 yields an object \(\mathcal{F}_{\Xi,\mu_{\bullet},N,E}^{(I_{1},\ldots,I_{k})}\) of \[D(\operatorname{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k})}/\Xi\,|_{\prod_{i\in I}C_{i}\smallsetminus N_{i}},E),\] and we see that the pullback of \(\mathcal{F}_{\Xi,\mu_{\bullet},N,E}^{(I_{1},\ldots,I_{k})}\) to \(\operatorname{Sht}_{G,\mu_{\bullet},N}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}C_{i}\smallsetminus N_{i}}\) equals \(\mathcal{F}_{\mu_{\bullet},N,E}^{(I_{1},\ldots,I_{k})}\). Next, we describe the coefficient sheaves used for the homology of the moduli of local \(G\)-shtukas. Recall \(\mathcal{L}_{I}^{e}(G)\) and \(\mathcal{L}_{I}^{+}(G)\) from Definition 4.3. For large enough \(e\), 1.5 and Lemma 4.5 indicate that the natural \(\mathcal{L}_{I}^{+}(G)\)-action on \(\mathcal{G}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}\mathbb{D}_{i}^{\circ}}\) factors through \(\mathcal{L}_{I}^{e}(G)\). Write \(\mathcal{A}_{G,\mu_{\bullet},nv}^{(I_{1},\ldots,I_{k})}\) for the \(\mathcal{L}_{I}^{e}(G)\)-bundle on \[\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}_{G,\mu_{\bullet},nv}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}\operatorname{Spd}F_{i}}\] whose fiber over \((\mathscr{G},\delta,\psi)\) parametrizes trivializations of the \(G\)-bundle \({}^{\tau^{r}}\mathscr{G}_{1}|_{e\sum_{i\in I}\Gamma_{i}}\). Note that we have a natural \(\mathcal{L}_{I}^{e}(G)\)-equivariant morphism \[\mathcal{A}_{G,\mu_{\bullet},nv}^{(I_{1},\ldots,I_{k})}\!\to\!\mathcal{G}_{G,\mu_{\bullet}}^{(I_{1},\ldots,I_{k})}|_{\prod_{i\in I}\operatorname{Spd}F_{i}}.\] Since \((\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}\) is an analytic adic space, [41, Lemma 15.6] and [41, Remark 14.14] indicate that \((\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv+N,E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}\) yields an object \((\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv+N,E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\) of \[D_{\operatorname{et}}((\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond},E).\]

**Lemma**.: \((\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv+N,E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}\) _is universally locally acyclic over \(\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}\)._
Moreover, its image \({}^{\prime}(\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv+N,E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\) in \(D_{\blacksquare}((\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond},E)\) under the double-dual embedding as in [11, p. 260] satisfies_ \[\Theta_{n}^{*}\big[\,{}^{\prime}(\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv+N,E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\big]={}^{\prime}\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv,E}.\]

Proof.: We start by rewriting \((\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv+N,E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\) as follows. Since \[(\operatorname{Gr}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}\] is an analytic adic space, [41, Lemma 15.6] and [41, Remark 14.14] indicate that \((\mathcal{S}^{(I_{1},\ldots,I_{k})}_{\mu_{\bullet},E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}\) yields an object \((\mathcal{S}^{(I_{1},\ldots,I_{k})}_{\mu_{\bullet},E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\) of \(D_{\operatorname{et}}((\operatorname{Gr}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond},E)\). By first pulling back \((\mathcal{S}^{(I_{1},\ldots,I_{k})}_{\mu_{\bullet},E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\) to \((\operatorname{A}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\) and then using \(\mathcal{L}^{e}_{I}(G)\)-equivariance and [41, Proposition 17.3] to descend along \[(\operatorname{A}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\to(\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond},\] where we use Lemma 4.5 to identify \((L^{e}_{I}(G))_{\mathbb{D}^{I}}^{\diamond}\) with \(\mathcal{L}^{e}_{I}(G)\), we see that the resulting object of \(D_{\operatorname{et}}((\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond},E)\) equals \((\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv+N,E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\). Let us prove the first claim. By using the explicit description in [11, Proposition VI.7.9] and the fiberwise criterion for perversity [11, Corollary VI.7.6], we see that \((\mathcal{S}^{(I_{1},\ldots,I_{k})}_{\mu_{\bullet},E})^{\diamond}\) equals the object obtained from [11, Theorem VI.11.1] and \(V_{\mu_{\bullet}}\), where we use Lemma 4.5 to identify \((\operatorname{Gr}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\) with \(\mathcal{G}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}}|_{\prod_{i\in I}\operatorname{Spd}\bar{F}_{i}}\). Hence \((\mathcal{S}^{(I_{1},\ldots,I_{k})}_{\mu_{\bullet},E})^{\diamond}\) is universally locally acyclic over \(\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}\).
Now 6.1 and [41, Proposition 24.4] show that \[(\operatorname{A}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\to(\operatorname{Gr}^{(I_{1},\ldots,I_{k})}_{G,\mu_{\bullet}})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\] is \(\ell\)-cohomologically smooth, so [11, Proposition IV.2.13 (i)] implies that the pullback of \((\mathcal{S}^{(I_{1},\ldots,I_{k})}_{\mu_{\bullet},E})^{\diamond}\) to \((\operatorname{A}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\) remains universally locally acyclic over \(\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}\). Applying [41, Proposition 24.4] again shows that \[(\operatorname{A}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\to(\operatorname{Sht}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\] is \(\ell\)-cohomologically smooth, so [11, Proposition IV.2.13 (ii)] implies that \[(\mathcal{F}^{(I_{1},\ldots,I_{k}),\leq s}_{\mu_{\bullet},nv+N,E})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\] is universally locally acyclic over \(\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}\), as desired. For the second claim, note that \(\Theta_{n}\) naturally induces a morphism \[\mathcal{A}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv}\to(\operatorname{A}^{(I_{1},\ldots,I_{k}),\leq s}_{G,\mu_{\bullet},nv+N})_{\prod_{i\in I}\operatorname{Spa}\bar{F}_{i}}^{\diamond}\] such that the diagram commutes. Therefore the above discussion yields the desired result.

We now consider the cohomology of the moduli of global \(G\)-shtukas. Let \(V\) be an object of \(\operatorname{Rep}_{E}({}^{L}G)^{I}\). Note that the \(\mathcal{F}_{\mu_{\bullet},N,E}^{(I_{1},\dots,I_{k})}\) and \(\mathcal{F}_{\mu_{\bullet},N,E}^{(I_{1},\dots,I_{k}),\leq s}\) naturally descend to objects \(\mathcal{F}_{V,N,E}^{(I_{1},\dots,I_{k})}\) and \(\mathcal{F}_{V,N,E}^{(I_{1},\dots,I_{k}),\leq s}\) of \(D(\operatorname{Sht}_{G,V,N}^{(I_{1},\dots,I_{k})},E)\) and \(D(\operatorname{Sht}_{G,V,N}^{(I_{1},\dots,I_{k}),\leq s},E)\), respectively, where \(\mu_{\bullet}\) runs over the highest weights appearing in \(V_{\overline{\mathbb{Q}}_{\ell}}|_{\widehat{G}^{I}}\) with multiplicity. Recall that \(f_{!}^{\mathrm{S}}\mathcal{F}_{V,N,E}^{(I_{1},\dots,I_{k}),\leq s}\) is independent of the ordered partition \(I_{1},\dots,I_{k}\) [32, p. 868], so we write it as \(\mathcal{H}_{V,N,E}^{I,\leq s}\). The same holds for \(f_{!}^{\mathrm{S}}\mathcal{F}_{V,N,E}^{(I_{1},\dots,I_{k})}\), so we write it as \(\mathcal{H}_{V,N,E}^{I}\). Because \(\operatorname{Sht}_{G,V,N}^{(I_{1},\dots,I_{k})}\) is the increasing union of the \(\operatorname{Sht}_{G,V,N}^{(I_{1},\dots,I_{k}),\leq s}\), we have \(\mathcal{H}_{V,N,E}^{I}=\varinjlim_{s}\mathcal{H}_{V,N,E}^{I,\leq s}\). Note that 5.8 yields an action of \(C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)\) on \(\mathcal{H}_{V,N,E}^{I}\). Recall the following smoothness result of Xue [44]. Write \(\overline{\eta}\) for \(\operatorname{Spec}\overline{F}\), and write \(\Delta\) for diagonal morphisms. Write \(W_{F}\) for the absolute Weil group of \(F\), and write \(\operatorname{val}_{F}:W_{F}\!\to\!\mathbb{Z}\) for the homomorphism that sends geometric \(q\)-Frobenius to \(1\).
**Theorem**.: _The cohomology sheaves of \(\mathcal{H}_{V,N,E}^{I}\) are ind-smooth, and the cohomology sheaves of \(\mathcal{H}_{V,N,E}^{I}|_{\Delta(\overline{\eta})}\) have a natural action of \(W_{F}^{I}\). For any \(\gamma_{\bullet}=(\gamma_{i})_{i\in I}\) in \(W_{F}^{I}\), the \(\gamma_{\bullet}\)-action sends the image of the cohomology groups of \(\mathcal{H}_{V,N,E}^{I,\leq s}|_{\Delta(\overline{\eta})}\) to the image of the cohomology groups of \(\mathcal{H}_{V,N,E}^{I,\leq s^{\prime}}|_{\Delta(\overline{\eta})}\) for \(s^{\prime}\geq s+\sum_{i\in I}\max\{0,\operatorname{val}_{F}(\gamma_{i})\}\)._

Proof.: The first claim follows from the proof of [44, Theorem 6.0.12], and the \(W_{F}^{I}\)-action follows from the proof of [44, Proposition 6.0.10]. The last claim follows from 5.9.

Let us record the analogous results after quotienting by \(\Xi\). Let \(V\) be an object of \(\operatorname{Rep}_{E}({}^{L}G)^{I}\), and note that the \(\mathcal{F}_{\Xi,\mu_{\bullet},N,E}^{(I_{1},\dots,I_{k})}\) naturally descend to an object \(\mathcal{F}_{\Xi,V,N,E}^{(I_{1},\dots,I_{k})}\) of \(D(\operatorname{Sht}_{G,V,N}^{(I_{1},\dots,I_{k})}/\Xi,E)\), where \(\mu_{\bullet}\) runs over the highest weights appearing in \(V_{\overline{\mathbb{Q}}_{\ell}}|_{\widehat{G}^{I}}\) with multiplicity. Recall that \(f_{!}^{\mathrm{S}}\mathcal{F}_{\Xi,V,N,E}^{(I_{1},\dots,I_{k})}\) is independent of the ordered partition \(I_{1},\dots,I_{k}\) [32, p. 868], so we write it as \(\mathcal{H}_{\Xi,V,N,E}^{I}\). Note that 5.8 yields an action of \[C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)\] on \(\mathcal{H}^{I}_{\Xi,V,N,E}\). Recall that the cohomology sheaves of \(\mathcal{H}^{I}_{\Xi,V,N,E}\) are ind-smooth [44, Theorem 6.0.12], and the cohomology sheaves of \(\mathcal{H}^{I}_{\Xi,V,N,E}|_{\Delta(\overline{\eta})}\) have a natural action of \(W^{I}_{F}\) [44, Proposition 6.0.10]. Next, we consider the homology of the moduli of local \(G\)-shtukas. Let \(V\) be an object of \(\operatorname{Rep}_{E}({}^{L}G_{v})^{I}\). Note that the \({}^{\prime}\mathcal{F}^{(I_{1},\dots,I_{k})}_{\mu_{\bullet},nv,\Lambda}\) and \({}^{\prime}\mathcal{F}^{(I_{1},\dots,I_{k}),\leq s}_{\mu_{\bullet},nv,\Lambda}\) naturally descend to objects \({}^{\prime}\mathcal{F}^{(I_{1},\dots,I_{k})}_{V,nv,\Lambda}\) and \({}^{\prime}\mathcal{F}^{(I_{1},\dots,I_{k}),\leq s}_{V,nv,\Lambda}\) of \(D_{\blacksquare}(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}^{(I_{1},\dots,I_{k})}_{G,V,nv}|_{(\operatorname{Spd}\bar{F}_{v})^{I}},\Lambda)\) and \(D_{\blacksquare}(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}^{(I_{1},\dots,I_{k}),\leq s}_{G,V,nv}|_{(\operatorname{Spd}\bar{F}_{v})^{I}},\Lambda)\), respectively, where \(\mu_{\bullet}\) runs over the highest weights appearing in \(V_{\overline{\mathbb{Q}}_{\ell}}|_{\widehat{G}^{I}}\) with multiplicity. Recall the notation of 4.16. Since the square commutes, where \(\mathcal{G}\mathcal{F}^{(I)}_{G,V}\) denotes the natural descent of \(\coprod_{\mu_{\bullet}}\mathcal{G}^{(I)}_{G,\mu_{\bullet}}|_{(\widetilde{\mathbb{D}}^{I})^{\circ}}\) to \((\mathbb{D}^{I})^{\circ}\), the \({}^{\prime}\mathcal{F}^{(I_{1},\dots,I_{k})}_{V,nv,\Lambda}\) defined in 4.16 agrees with the \({}^{\prime}\mathcal{F}^{(I_{1},\dots,I_{k})}_{V,nv,\Lambda}\) defined here. The smallness of convolution implies that \(f^{\mathcal{M}}_{\sharp}({}^{\prime}\mathcal{F}^{(I_{1},\dots,I_{k}),\leq s}_{V,nv,\Lambda})\) is independent of the ordered partition \(I_{1},\dots,I_{k}\), so we write it as \(\mathcal{H}^{\operatorname{loc},I,\leq s}_{V,nv,\Lambda}\).
The same holds for \(f^{\mathcal{M}}_{\sharp}({}^{\prime}\mathcal{F}^{(I_{1},\dots,I_{k})}_{V,nv,\Lambda})\), so we write it as \(\mathcal{H}^{\operatorname{loc},I}_{V,nv,\Lambda}\). Because \(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}^{(I_{1},\dots,I_{k})}_{G,V,nv}\) is the increasing union of the \(\mathcal{L}\mathrm{oc}\mathcal{S}\mathrm{ht}^{(I_{1},\dots,I_{k}),\leq s}_{G,V,nv}\), we have \(\mathcal{H}^{\operatorname{loc},I}_{V,nv,\Lambda}=\varinjlim_{s}\mathcal{H}^{\operatorname{loc},I,\leq s}_{V,nv,\Lambda}\). Note that Proposition 4.13 yields an action of \(C_{c}(K_{n}\backslash G(F_{v})/K_{n},E)\) on \(\mathcal{H}^{\operatorname{loc},I}_{V,nv,\Lambda}\). Write \(\mathbb{C}_{v}\) for the completion of \(\overline{F}_{v}\), and write \(\overline{\eta}_{v}\) for \(\operatorname{Spd}\mathbb{C}_{v}\). Theorem 4.16 yields a natural action of \(W^{I}_{F_{v}}\) on the cohomology groups of \(\mathcal{H}^{\operatorname{loc},I}_{V,nv,\Lambda}|_{\Delta(\overline{\eta}_{v})}\). For any \(\gamma_{\bullet}\) in \(W_{F_{v}}^{I}\), Lemma 5.18 and Lemma 4.17 imply that the \(\gamma_{\bullet}\)-action sends the image of the cohomology groups of \(\mathcal{H}^{\operatorname{loc},I,\leq s}_{V,nv,\Lambda}\) to the image of the cohomology groups of \(\mathcal{H}^{\operatorname{loc},I,\leq s^{\prime}}_{V,nv,\Lambda}\) for \(s^{\prime}\geq s+\sum_{i\in I}\max\{0,\operatorname{val}_{F}(\gamma_{i})\}\). Let us recall some facts about excursion algebras. For any abstract group \(W\), finite group \(Q\) with a pinned action on \(\widehat{G}\), and group homomorphism \(W\!\to\!Q\), write \(\operatorname{Exc}(W,\widehat{G})\) for the excursion algebra over \(\mathcal{O}_{E}\) as in [11, Definition VIII.3.4]. Recall that \(\operatorname{Exc}(W,\widehat{G})\) is flat over \(\mathcal{O}_{E}\) and has canonical generators \(S_{I,V,x,\xi,\gamma_{\bullet}}\) subject to explicit relations, where \(I\) runs over finite sets, \(V\) runs over objects of \(\operatorname{Rep}_{\mathcal{O}_{E}}((\widehat{G}\rtimes Q)^{I})\), \(x\) runs over morphisms \(\mathbf{1}\!\to\!V|_{\Delta(\widehat{G})}\), \(\xi\) runs over morphisms \(V|_{\Delta(\widehat{G})}\!\to\!\mathbf{1}\), and \(\gamma_{\bullet}\) runs through \(W^{I}\).

**Proposition**.: _Let \(L\) be an algebraically closed field over \(\mathcal{O}_{E}\). We have a unique bijection_ \[\left\{\begin{array}{c}\mathcal{O}_{E}\text{-algebra homomorphisms}\\ \chi:\operatorname{Exc}(W,\widehat{G})\!\to\!L\end{array}\right\}\overset{\sim}{\to}\left\{\begin{array}{c}\text{semisimple homomorphisms}\\ \rho:W\!\to\!\widehat{G}(L)\rtimes Q\text{ over }Q\end{array}\right\}\!\Big{/}\widehat{G}(L)\text{-conj.}\] _such that \(\chi(S_{I,V,x,\xi,\gamma_{\bullet}})\) equals the composition_ \[L\xrightarrow{x}V(L)\xrightarrow{(\rho(\gamma_{i}))_{i\in I}}V(L)\xrightarrow{\xi}L.\]

Proof.: This follows immediately from [11, Corollary VII.4.3].

The following theorem summarizes the work of V. Lafforgue [32] and Xue [44] on global excursion operators. Write \(\operatorname{Bun}_{G,N}(\mathbb{F}_{q})\) for the groupoid of \(G\)-bundles on \(C\) equipped with a trivialization along \(N\).
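Before stating it, we record a standard special case of the Proposition above, purely for orientation: take \(\widehat{G}=\mathrm{GL}_{n}\) with trivial \(Q\), let \(I=\{1,2\}\), let \(V=\mathrm{std}\boxtimes\mathrm{std}^{\vee}\) for the standard representation \(\mathrm{std}\), let \(x:\mathbf{1}\!\to\!V|_{\Delta(\widehat{G})}\) be the coevaluation, let \(\xi:V|_{\Delta(\widehat{G})}\!\to\!\mathbf{1}\) be the evaluation, and let \(\gamma_{\bullet}=(\gamma,1)\). If \(\chi\) corresponds to \(\rho\) under the bijection, then \[\chi(S_{\{1,2\},V,x,\xi,(\gamma,1)})=\xi\big((\rho(\gamma)\otimes 1)\cdot x\big)=\operatorname{tr}\rho(\gamma),\] so the excursion operators recover the trace functions of semisimple parameters.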
**Theorem**.: _There exists a unique \(E\)-algebra homomorphism_ \[\operatorname{Exc}(W_{F},\widehat{G})_{E}\to\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E))\] _that sends \(S_{I,V,x,\xi,\gamma_{\bullet}}\) to the composition_ \[C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E)=\mathcal{H}^{*,0}_{\mathbf{1},N,E}|_{\overline{\eta}}\xrightarrow{x}\mathcal{H}^{*,0}_{V|_{\Delta(\widehat{G})},N,E}|_{\overline{\eta}}=\mathcal{H}^{I,0}_{V,N,E}|_{\Delta(\overline{\eta})}\xrightarrow{\gamma_{\bullet}}\mathcal{H}^{I,0}_{V,N,E}|_{\Delta(\overline{\eta})}=\mathcal{H}^{*,0}_{V|_{\Delta(\widehat{G})},N,E}|_{\overline{\eta}}\xrightarrow{\xi}\mathcal{H}^{*,0}_{\mathbf{1},N,E}|_{\overline{\eta}}=C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E).\] _Moreover, the image of \(\operatorname{Exc}(W_{F},\widehat{G})_{E}\) in \(\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E))\) preserves the kernel of the surjective \(C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)\)-equivariant map_ \[C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E)\to C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q})/\Xi,E),\] _so we obtain an \(E\)-algebra homomorphism_ \[\operatorname{Exc}(W_{F},\widehat{G})_{E}\to\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q})/\Xi,E)).\]

Proof.: Arguing as in [32, p. 870] shows that the images of the \(S_{I,V,x,\xi,\gamma_{\bullet}}\) satisfy the explicit relations, so we get the desired \(E\)-algebra homomorphism \[\operatorname{Exc}(W_{F},\widehat{G})_{E}\to\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E)).\] Next, because \(\operatorname{Sht}_{G,V,N}^{(I_{1},\dots,I_{k})}\to\operatorname{Sht}_{G,V,N}^{(I_{1},\dots,I_{k})}/\Xi\) is etale, 6.2 yields a natural \(!\)-pushforward morphism \(\mathcal{H}^{I}_{V,N,E}\to\mathcal{H}^{I}_{\Xi,V,N,E}\), which induces a morphism from the composition diagram above to the analogous composition diagram for \(\mathcal{H}^{I}_{\Xi,V,N,E}\). Note that, when \(I=*\) and \(V=\mathbf{1}\), the natural \(!\)-pushforward morphism recovers \[C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E)\to C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q})/\Xi,E)\] on fibers. Thus the image of \(S_{I,V,x,\xi,\gamma_{\bullet}}\) in \(\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E))\) satisfies the desired property.

We now elaborate on variants of Theorem 6.10. Recall that \[\operatorname{Bun}_{G,N}(\mathbb{F}_{q})\cong\coprod_{\alpha}G_{\alpha}(F)\backslash G_{\alpha}(\mathbb{A})/K_{N}\] as groupoids [32, Remarque 12.2], where \(\alpha\) runs over \(G\)-bundles on \(\operatorname{Spec}F\) whose pullback to \(\operatorname{Spec}F_{c}\) is trivial for all closed points \(c\) of \(C\), and \(G_{\alpha}\) denotes the inner twist of \(G_{F}\) over \(F\) associated with \(\alpha\). Hence \(C_{c}(G(F)\backslash G(\mathbb{A})/K_{N},E)\) and \(C_{c}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},E)\) are \(C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)\)-stable direct summands of \[C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q}),E)\text{ and }C_{c}(\operatorname{Bun}_{G,N}(\mathbb{F}_{q})/\Xi,E),\] respectively, so Theorem 6.10 induces \(E\)-algebra homomorphisms \[\operatorname{Exc}(W_{F},\widehat{G})_{E}\to\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{c}(G(F)\backslash G(\mathbb{A})/K_{N},E)),\] \[\operatorname{Exc}(W_{F},\widehat{G})_{E}\to\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{c}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},E)).\] 6.12. For us, the most convenient interpretation of Fargues-Scholze [11] is the following theorem.
Write \(\mathfrak{z}_{K_{n}}(G(F_{v}),\Lambda)\) for the center of \(C_{c}(K_{n}\backslash G(F_{v})/K_{n},\Lambda)\).

**Theorem**.: _There exists a unique \(\Lambda\)-algebra homomorphism_ \[\operatorname{Exc}(W_{F_{v}},\widehat{G})_{\Lambda}\!\to\!\mathfrak{z}_{K_{n}}(G(F_{v}),\Lambda)\] _that sends \(S_{I,V,x,\xi,\gamma_{\bullet}}\) to the composition_ \[\mathcal{H}^{\operatorname{loc},*,0}_{\mathbf{1},nv,\Lambda}|_{\overline{\eta}_{v}}\xrightarrow{x}\mathcal{H}^{\operatorname{loc},*,0}_{V|_{\Delta(\widehat{G})},nv,\Lambda}|_{\overline{\eta}_{v}}=\mathcal{H}^{\operatorname{loc},I,0}_{V,nv,\Lambda}|_{\Delta(\overline{\eta}_{v})}\xrightarrow{\gamma_{\bullet}}\mathcal{H}^{\operatorname{loc},I,0}_{V,nv,\Lambda}|_{\Delta(\overline{\eta}_{v})}=\mathcal{H}^{\operatorname{loc},*,0}_{V|_{\Delta(\widehat{G})},nv,\Lambda}|_{\overline{\eta}_{v}}\xrightarrow{\xi}\mathcal{H}^{\operatorname{loc},*,0}_{\mathbf{1},nv,\Lambda}|_{\overline{\eta}_{v}}.\]

Proof.: This follows from [11, Corollary IX.2.4] and [11, Theorem VIII.4.1].

6.13. We now prove local-global compatibility on the level of algebras over \(E\). Write \(\mathbb{A}^{v}\) for the away-from-\(v\) adeles, write \(K_{N}^{v}\) for \(\mathbb{A}^{v}\cap K_{N}\), and let \(n\) be the multiplicity of \(v\) in \(N\), so that \(K_{N}=K_{n}K_{N}^{v}\).

**Theorem**.: _The square_ \[\begin{array}{ccc}\operatorname{Exc}(W_{F_{v}},\widehat{G})_{E}&\longrightarrow&\mathfrak{z}_{K_{n}}(G(F_{v}),E)\\ \big\downarrow&&\big\downarrow\\ \operatorname{Exc}(W_{F},\widehat{G})_{E}&\longrightarrow&\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{c}(G(F)\backslash G(\mathbb{A})/K_{N},E))\end{array}\] _commutes._

Proof.: It suffices to check commutativity on the canonical generators \(S_{I,V,x,\xi,\gamma_{\bullet}}\) of \(\operatorname{Exc}(W_{F_{v}},\widehat{G})_{E}\), where \(I\) is a finite set, \(V\) is an object of \(\operatorname{Rep}_{E}((\widehat{G}\rtimes\operatorname{Gal}(\widetilde{F}/F))^{I})\), \(x\) is a morphism \(\mathbf{1}\!\to\!V|_{\Delta(\widehat{G})}\), \(\xi\) is a morphism \(V|_{\Delta(\widehat{G})}\!\to\!\mathbf{1}\), and \(\gamma_{\bullet}\) is in \(W_{F_{v}}^{I}\). This amounts to computing certain actions on \(C_{c}(G(F)\backslash G(\mathbb{A})/K_{N},E)\), which we check on the basis given by \(\mathbf{1}_{G(F)gK_{N}}\) for \(g\) in \(G(\mathbb{A})\). Since the \(C_{c}(K_{n}\backslash G(F_{v})/K_{n},E)\)-action commutes with the \(C_{c}(K_{N}^{v}\backslash G(\mathbb{A}^{v})/K_{N}^{v},E)\)-action, we can assume that the away-from-\(v\) components of \(g\) equal \(1\). Then \(\mathbf{1}_{G(F)gK_{N}}\) equals the image of \(\mathbf{1}_{g_{v}K_{n}}\) under the natural pushforward map \[C_{c}(G(F_{v})/K_{n},E)\to C_{c}(G(F)\backslash G(\mathbb{A})/K_{N},E).\] Because this map commutes with the \(C_{c}(K_{n}\backslash G(F_{v})/K_{n},E)\)-action, it also commutes with the action of the image of \(S_{I,V,x,\xi,\gamma_{\bullet}}\) in \(\mathfrak{z}_{K_{n}}(G(F_{v}),E)\). Hence we can compute the latter for \(\mathbf{1}_{G(F)gK_{N}}\) by computing it for \(\mathbf{1}_{g_{v}K_{n}}\). Fix \(s\) such that \(\mathbf{1}_{g_{v}K_{n}}\) lies in the image of \(\mathcal{H}_{\mathbf{1},nv,E}^{\operatorname{loc},*,\leq s,0}\) in \(\mathcal{H}_{\mathbf{1},nv,E}^{\operatorname{loc},*,0}=C_{c}(G(F_{v})/K_{n},E)\). By Theorem 6.12 and 6.8, the image of \(S_{I,V,x,\xi,\gamma_{\bullet}}\) in \(\mathfrak{z}_{K_{n}}(G(F_{v}),E)\) acts on \(\mathbf{1}_{g_{v}K_{n}}\) via the composition ( \[\star\] ) \[\mathcal{H}_{\mathbf{1},nv,E}^{\operatorname{loc},*,\leq s,0}\big|_{\overline{\eta}_{v}}\xrightarrow{x}\mathcal{H}_{V|_{\Delta(\widehat{G})},nv,E}^{\operatorname{loc},*,\leq s,0}|_{\overline{\eta}_{v}}=\mathcal{H}_{V,nv,E}^{\operatorname{loc},I,\leq s,0}|_{\Delta(\overline{\eta}_{v})}\xrightarrow{\gamma_{\bullet}}\mathcal{H}_{V,nv,E}^{\operatorname{loc},I,\leq s^{\prime},0}|_{\Delta(\overline{\eta}_{v})}=\mathcal{H}_{V|_{\Delta(\widehat{G})},nv,E}^{\operatorname{loc},*,\leq s^{\prime},0}|_{\overline{\eta}_{v}}\xrightarrow{\xi}\mathcal{H}_{\mathbf{1},nv,E}^{\operatorname{loc},*,\leq s^{\prime},0}|_{\overline{\eta}_{v}}\] for large enough \(s^{\prime}\). By enlarging the away-from-\(v\) part of \(N\) and using the action of \(C_{c}(K_{N}^{v}\backslash G(\mathbb{A}^{v})/K_{N}^{v},E)\) as before, we can assume that \(\deg N\) is large enough.
Then Lemma 6.4 shows that \(\Theta_{n}\) yields a natural \(\natural\)-pushforward morphism \[\mathcal{H}_{V,nv,E}^{\operatorname{loc},I,\leq s}|_{\Delta(\overline{\eta}_{v})}\to\mathcal{H}_{V,N,E}^{I,\leq s}|_{\Delta(\overline{\eta})},\] where we use Lemma 6.4, [11, Proposition VII.5.2], and [27, (5.7.2)] to identify \[(f^{\operatorname{S}})_{\Delta(\overline{\eta}_{v})\natural}^{\diamond}\big[(\mathcal{F}_{V,N,E}^{(I_{1},\dots,I_{k}),\leq s})_{\Delta(\overline{\eta}_{v})}^{\diamond}\big]=\mathcal{H}_{V,N,E}^{I,\leq s}|_{\Delta(\overline{\eta})}.\] Lemma 5.18 and Lemma 4.17 imply that \(\mathcal{H}_{V,nv,E}^{\operatorname{loc},I,\leq s}|_{\Delta(\overline{\eta}_{v})}\to\mathcal{H}_{V,N,E}^{I,\leq s}|_{\Delta(\overline{\eta})}\) induces a morphism from the composition diagram in Equation (\(\star\)) to the composition diagram ( \[\star\star\] ) \[\mathcal{H}_{\mathbf{1},N,E}^{*,\leq s,0}|_{\overline{\eta}}\xrightarrow{x}\mathcal{H}_{V|_{\Delta(\widehat{G})},N,E}^{*,\leq s,0}|_{\overline{\eta}}=\mathcal{H}_{V,N,E}^{I,\leq s,0}|_{\Delta(\overline{\eta})}\xrightarrow{\gamma_{\bullet}}\mathcal{H}_{V,N,E}^{I,\leq s^{\prime},0}|_{\Delta(\overline{\eta})}=\mathcal{H}_{V|_{\Delta(\widehat{G})},N,E}^{*,\leq s^{\prime},0}|_{\overline{\eta}}\xrightarrow{\xi}\mathcal{H}_{\mathbf{1},N,E}^{*,\leq s^{\prime},0}|_{\overline{\eta}}.\] When \(I=*\) and \(V=\mathbf{1}\), the natural \(\natural\)-pushforward morphism recovers \[C_{c}(G(F_{v})/K_{n},E)\to C_{c}(G(F)\backslash G(\mathbb{A})/K_{N},E)\] on fibers, so we see that the image of \(S_{I,V,x,\xi,\gamma_{\bullet}}\) in \(\mathfrak{z}_{K_{n}}(G(F_{v}),E)\) acts on \(\mathbf{1}_{G(F)gK_{N}}\) via Equation (\(\star\star\)). But Theorem 6.10 and 6.5 indicate that this is precisely how the image of \(S_{I,V,x,\xi,\gamma_{\bullet}}\) in \(\operatorname{Exc}(W_{F},\widehat{G})_{E}\) acts on \(\mathbf{1}_{G(F)gK_{N}}\), as desired.

Let us recall the elements of the Bernstein center constructed by Genestier-Lafforgue [14]. Write \(\mathfrak{m}_{E}\) for the maximal ideal of \(\mathcal{O}_{E}\), and let \(c\) be a non-negative integer. Write \(\mathfrak{z}_{K_{n}}(G(F_{v}),\mathcal{O}_{E}/\mathfrak{m}_{E}^{c})\) for the center of \(C_{c}(K_{n}\backslash G(F_{v})/K_{n},\mathcal{O}_{E}/\mathfrak{m}_{E}^{c})\). For any finite set \(I\), algebraic function \(f\) on \(\widehat{G}\backslash({}^{L}G)^{I}/\widehat{G}\), element \(\gamma_{\bullet}\) of \(W_{F_{v}}^{I}\), and positive integer \(n\), write \(\mathfrak{z}_{n,c,I,f,\gamma_{\bullet}}^{\operatorname{GL}}\) for the element of \(\mathfrak{z}_{K_{n}}(G(F_{v}),\mathcal{O}_{E}/\mathfrak{m}_{E}^{c})\) constructed in [14, Theoreme 1.1]8. Footnote 8: While [14, Theoreme 1.1] is stated for split \(G\), the proof adapts for all \(G\). Indeed, this is implicitly used in [14, Theoreme 8.1]. We prove that the elements of the Bernstein center constructed by Fargues-Scholze coincide with those constructed by Genestier-Lafforgue.
Recall that the image of \(\operatorname{Exc}(W_{F},\widehat{G})\) in \(\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},E)}(C_{\operatorname{cusp}}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},E))\) preserves \(C_{\operatorname{cusp}}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},\mathcal{O}_{E})\) [32, Proposition 13.1], so 6.11 induces an \(\mathcal{O}_{E}\)-algebra homomorphism \[\operatorname{Exc}(W_{F},\widehat{G})\to\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},\mathcal{O}_{E})}(C_{\operatorname{cusp}}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},\mathcal{O}_{E})).\] For any object \(V\) of \(\operatorname{Rep}_{\mathcal{O}_{E}}({}^{L}G)^{I}\), morphism \(x:\mathbf{1}\!\to\!V|_{\Delta(\widehat{G})}\), and morphism \(\xi:V|_{\Delta(\widehat{G})}\!\to\!\mathbf{1}\), write \(f\) for the algebraic function on \(\widehat{G}\backslash({}^{L}G)^{I}/\widehat{G}\) given by \(g_{\bullet}\mapsto\xi(g_{\bullet}\cdot x)\).

**Theorem**.: _The square_ \[\begin{array}{ccc}\operatorname{Exc}(W_{F_{v}},\widehat{G})&\longrightarrow&\mathfrak{z}_{K_{n}}(G(F_{v}),\mathcal{O}_{E})\\ \big\downarrow&&\big\downarrow\\ \operatorname{Exc}(W_{F},\widehat{G})&\longrightarrow&\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},\mathcal{O}_{E})}(C_{\operatorname{cusp}}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},\mathcal{O}_{E}))\end{array}\] _commutes. Consequently, the image of \(S_{I,V,x,\xi,\gamma_{\bullet}}\) in \(\mathfrak{z}_{K_{n}}(G(F_{v}),\mathcal{O}_{E}/\mathfrak{m}_{E}^{c})\) equals \(\mathfrak{z}_{n,c,I,f,\gamma_{\bullet}}^{\operatorname{GL}}\)._

Proof.: Since Theorem 6.12 is compatible with changing \(\Lambda\), the first claim follows immediately from 6.11 and Theorem 6.13. From here, tensoring with \(\mathcal{O}_{E}/\mathfrak{m}_{E}^{c}\) shows that the image of \(S_{I,V,x,\xi,\gamma_{\bullet}}\) in \(\mathfrak{z}_{K_{n}}(G(F_{v}),\mathcal{O}_{E}/\mathfrak{m}_{E}^{c})\) has the same action on \[C_{\operatorname{cusp}}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},\mathcal{O}_{E}/\mathfrak{m}_{E}^{c})\] as the image of \(S_{I,V,x,\xi,\gamma_{\bullet}}\) in \(\operatorname{Exc}(W_{F},\widehat{G})\) does. Now \(\mathfrak{z}_{n,c,I,f,\gamma_{\bullet}}^{\operatorname{GL}}\) enjoys the same property by [14, Proposition 1.3], so they must be equal by [14, Lemma 1.4].

We conclude this section by proving Theorem A. By a _cuspidal automorphic representation_ of \(G(\mathbb{A})\), we mean an irreducible summand of \(C_{\operatorname{cusp}}^{\infty}(G(F)\Xi\backslash G(\mathbb{A}),\overline{\mathbb{Q}}_{\ell})\) for some lattice \(\Xi\) of \(Z(F)\backslash Z(\mathbb{A})\).

**Theorem**.: _The square_ _commutes._

Proof.: Let \(\Pi\) be an irreducible summand of \(C_{\operatorname{cusp}}^{\infty}(G(F)\Xi\backslash G(\mathbb{A}),\overline{\mathbb{Q}}_{\ell})\), and let \(N\) be large enough such that \(\Pi^{K_{N}}\) is nonzero. Schur's lemma shows that \(\Pi^{K_{N}}\) induces a \(\overline{\mathbb{Q}}_{\ell}\)-algebra homomorphism \[\chi_{\Pi}:\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},\overline{\mathbb{Q}}_{\ell})}(C_{\operatorname{cusp}}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},\overline{\mathbb{Q}}_{\ell}))\to\overline{\mathbb{Q}}_{\ell}.\] Adapt the notation of 6.13, and note that \(\Pi^{K_{n}}_{v}\) induces a \(\overline{\mathbb{Q}}_{\ell}\)-algebra homomorphism \(\chi_{\Pi_{v}}:\mathfrak{z}_{K_{n}}(G(F_{v}),\overline{\mathbb{Q}}_{\ell})\to\overline{\mathbb{Q}}_{\ell}\).
By Theorem 6.15, the square \[\begin{array}{ccc}\operatorname{Exc}(W_{F_{v}},\widehat{G})_{\overline{\mathbb{Q}}_{\ell}}&\longrightarrow&\mathfrak{z}_{K_{n}}(G(F_{v}),\overline{\mathbb{Q}}_{\ell})\\ \big\downarrow&&\big\downarrow\\ \operatorname{Exc}(W_{F},\widehat{G})_{\overline{\mathbb{Q}}_{\ell}}&\longrightarrow&\operatorname{End}_{C_{c}(K_{N}\backslash G(\mathbb{A})/K_{N},\overline{\mathbb{Q}}_{\ell})}(C_{\operatorname{cusp}}(G(F)\Xi\backslash G(\mathbb{A})/K_{N},\overline{\mathbb{Q}}_{\ell}))\end{array}\] commutes. The composition of \(\chi_{\Pi}\) with the bottom arrow corresponds to \(\operatorname{GLC}_{G}(\Pi)\) under Proposition 6.9, and further composition with the left arrow corresponds to \(\operatorname{GLC}_{G}(\Pi)|_{W_{F_{v}}}^{\operatorname{ss}}\) under Proposition 6.9. On the other hand, the composition of \(\chi_{\Pi}\) with the right arrow equals \(\chi_{\Pi_{v}}\), and further composition with the top arrow corresponds to \(\operatorname{LLC}_{G_{F_{v}}}^{\operatorname{ss}}(\Pi_{v})\) under Proposition 6.9. Hence commutativity of the square yields the desired result.

## 7. Applications

We revert our notation to the local context. Let \(F\) be a local field of characteristic \(p>0\), let \(G\) be a connected reductive group over \(F\), and write \(C\) for its radical. Our goal in this section is to prove Theorem B, Theorem C, and Theorem D. The proofs all proceed by appropriately embedding local representations into global ones. We now prove Theorem B. Fix an isomorphism \(\overline{\mathbb{Q}}_{\ell}\cong\mathbb{C}\).

**Theorem**.: _If \(G_{\overline{F}}^{\mathrm{ad}}\) has any \(\mathsf{B}_{n}\) or \(\mathsf{C}_{n}\) factors, assume that \(p\geq 3\). If \(G_{\overline{F}}^{\mathrm{ad}}\) has any \(\mathsf{F}_{4}\) or \(\mathsf{G}_{2}\) factors, assume that \(p\geq 5\). Then \(\mathrm{LLC}_{G}^{\mathrm{ss}}\) uniquely lifts to a family of maps_ \[\mathrm{LLC}_{G}:\left\{\begin{array}{c}\text{irreducible smooth}\\ \text{representations of $G(F)$}\end{array}\right\}\to\left\{\begin{array}{c}\text{$L$-parameters}\\ \text{for $G$ over $F$}\end{array}\right\},\] _where \(G\) runs over connected reductive groups over \(F\), that is compatible with twisting by characters, compatible with parabolic induction for essentially \(L^{2}\) representations as in [29, Conjecture 4.1 (5)], and whose value on \(L^{2}\) representations with finite order central character is pure._

Proof.: By compatibility with parabolic induction for essentially \(L^{2}\) representations, \(\mathrm{LLC}_{G}\) is determined by its values on essentially \(L^{2}\) representations \(\pi\). By compatibility with twisting by characters, we can assume that \(\pi\) also has finite order central character \(\omega_{\pi}:C(F)\!\to\!\overline{\mathbb{Q}}_{\ell}^{\times}\). There exists at most one pure \(L\)-parameter for \(G\) over \(F\) whose semisimplification equals \(\mathrm{LLC}_{G}^{\mathrm{ss}}(\pi)\) [12, Lemma 3.5.(b)], so we just need to construct it. Let \(\mathbf{F}\), \(v\), \(\mathbf{G}\), and \(\mathbf{C}\) be as in the proof of Theorem 7.2. Let \(v^{\prime}\neq v\) be a place of \(\mathbf{F}\) where \(\mathbf{G}_{\mathbf{F}_{v^{\prime}}}\) is unramified, write \(\mathbb{F}_{q^{\prime}}\) for the residue field of \(\mathbf{F}_{v^{\prime}}\), and identify \(\mathbf{F}_{v^{\prime}}\) with \(\mathbb{F}_{q^{\prime}}((\frac{1}{z}))\). By [39, Lemma 2.1], \(\mathbf{G}_{\mathbf{F}_{v^{\prime}}}\) is the pullback of a connected reductive group \(\mathbf{H}\) over \(\mathbb{F}_{q^{\prime}}\). Let \(\phi\) be a generic character as in [24, Section 1.3], write \(\Pi^{\prime}\) for the cuspidal automorphic representation of \(\mathbf{H}(\mathbb{A}_{\mathbb{F}_{q^{\prime}}(z)})\) associated with the automorphic sheaf \(A_{\phi}\) as in [24, Definition 2.6], and write \(\rho^{\prime}\) for the \(L\)-parameter for \(\mathbf{H}\) over \(\mathbb{F}_{q^{\prime}}(z)\) associated with the \({}^{L}\mathbf{H}\)-local system \(\mathrm{Kl}_{{}^{L}\mathbf{H}}(\phi)\) as in [24, Theorem 1(1)].
Since \(A_{\phi}\) is a Hecke eigensheaf with eigenvalue \(\mathrm{Kl}_{{}^{L}\mathbf{H}}(\phi)\), we see that \(\Pi^{\prime}\) and \(\rho^{\prime}\) are associated via the Satake isomorphism at cofinitely many places of \(\mathbb{F}_{q^{\prime}}(z)\). Now \(\rho^{\prime}|_{W_{\mathbb{F}_{q^{\prime}}((\frac{1}{z}))}}\) is irreducible by [24, Theorem 2], so [32, Theoreme 12.3] and [7, Proposition 6.4]9 indicate that \(\rho^{\prime}=\mathrm{GLC}_{\mathbf{H}}(\Pi^{\prime})\). From here, Theorem 6.16 shows that \(\mathrm{LLC}_{\mathbf{H}_{\mathbb{F}_{q^{\prime}}((\frac{1}{z}))}}^{\mathrm{ss}}(\Pi^{\prime}_{\infty})=\rho^{\prime}|_{W_{\mathbb{F}_{q^{\prime}}((\frac{1}{z}))}}^{\mathrm{ss}}=\rho^{\prime}|_{W_{\mathbb{F}_{q^{\prime}}((\frac{1}{z}))}}\). Footnote 9: While [7] only considers split \(G\), [7, Proposition 6.4] immediately extends to general \(G\). By [13, p. 2829], there exists a finite order character \(\omega:\mathbf{C}(\mathbf{F})\backslash\mathbf{C}(\mathbb{A}_{\mathbf{F}})\!\to\!\overline{\mathbb{Q}}_{\ell}^{\times}\) such that \(\omega_{v}\) is identified with \(\omega_{\pi}\) and \(\omega_{v^{\prime}}\) is identified with an unramified twist of \(\omega_{\Pi^{\prime}_{\infty}}\). Note that \(\ker\omega\) contains a lattice \(\Xi\) of \(\mathbf{C}(\mathbf{F})\backslash\mathbf{C}(\mathbb{A}_{\mathbf{F}})\). Then [12, Lemma A.1] and [13, Lemma 8.1] yield an irreducible summand \(\Pi\) of \(C_{\mathrm{cusp}}^{\infty}(\mathbf{G}(\mathbf{F})\Xi\backslash\mathbf{G}(\mathbb{A}_{\mathbf{F}}),\overline{\mathbb{Q}}_{\ell})\) such that * \(\Pi_{v}\) has the same cuspidal support as \(\pi\), * \(\Pi_{v^{\prime}}\) is isomorphic to an unramified twist of \(\Pi^{\prime}_{\infty}\) via \(\mathbf{F}_{v^{\prime}}\cong\mathbb{F}_{q^{\prime}}((\frac{1}{z}))\). Theorem 6.16 and [11, p. 326] indicate that \(\mathrm{GLC}_{\mathbf{G}}(\Pi)|_{W_{\mathbf{F}_{v^{\prime}}}}^{\mathrm{ss}}\) equals an unramified twist of \(\mathrm{LLC}_{\mathbf{H}_{\mathbb{F}_{q^{\prime}}((\frac{1}{z}))}}^{\mathrm{ss}}(\Pi^{\prime}_{\infty})\). This shows that \(\mathrm{GLC}_{\mathbf{G}}(\Pi)\) is irreducible, so [32, Lemme 16.2] and [39, Lemma 11.4] indicate that \(\mathrm{GLC}_{\mathbf{G}}(\Pi)\) is pure.10 Hence \(\mathrm{GLC}_{\mathbf{G}}(\Pi)|_{W_{\mathbf{F}_{v}}}\) is pure as in [12, Definition 3.3.(b)]. Finally, Theorem 6.16 and [11, Corollary IV.7.3] show that \(\mathrm{GLC}_{\mathbf{G}}(\Pi)|_{W_{\mathbf{F}_{v}}}^{\mathrm{ss}}=\mathrm{LLC}_{G}^{\mathrm{ss}}(\Pi_{v})=\mathrm{LLC}_{G}^{\mathrm{ss}}(\pi)\), so \(\mathrm{GLC}_{\mathbf{G}}(\Pi)|_{W_{\mathbf{F}_{v}}}\) is the unique pure \(L\)-parameter for \(G\) over \(F\) whose semisimplification equals \(\mathrm{LLC}_{G}^{\mathrm{ss}}(\pi)\). Footnote 10: While [32, Lemme 16.2] and [39, Lemma 11.4] are stated for split \(G\), they hold in general.

We give the following abstract proof of Theorem C.

**Theorem**.: _There exists at most one family of maps_ \[\mathcal{LLC}_{G}^{\mathrm{ss}}:\left\{\begin{array}{c}\text{irreducible smooth}\\ \text{representations of }G(F)\end{array}\right\}\to\left\{\begin{array}{c}\text{semisimple $L$-parameters}\\ \text{for $G$ over $F$}\end{array}\right\},\] _where \(G\) runs over connected reductive groups over \(F\), that is compatible with twisting by characters as in [20, Property 2.8], compatible with parabolic induction as in [20, Property 2.13], and satisfies the conclusion of Theorem 6.16.
Consequently, the Genestier-Lafforgue correspondence agrees with the Fargues-Scholze correspondence._

Proof.: By compatibility with parabolic induction, \(\mathcal{LLC}_{G}^{\mathrm{ss}}\) is determined by its values on cuspidal representations \(\pi\). By compatibility with twisting by characters, we can assume that \(\pi\) also has finite order central character \(\omega_{\pi}:C(F)\to\overline{\mathbb{Q}}_{\ell}^{\times}\). By [13, Lemma 3.2], there exists a global field \(\mathbf{F}\) of characteristic \(p\), a place \(v\) of \(\mathbf{F}\), a connected reductive group \(\mathbf{G}\) over \(\mathbf{F}\), and an isomorphism \(\mathbf{F}_{v}\cong F\) such that * \(\mathbf{G}_{\mathbf{F}_{v}}\) is identified with \(G\) as group schemes over \(\mathbf{F}_{v}\cong F\), * the radical \(\mathbf{C}\) of \(\mathbf{G}\) has \(\mathbf{F}\)-split rank equal to the \(F\)-split rank of \(C\). Write \(\mathbb{A}_{\mathbf{F}}\) for the adele ring of \(\mathbf{F}\). By [13, Lemma 3.3], there exists a finite order character \(\omega:\mathbf{C}(\mathbf{F})\backslash\mathbf{C}(\mathbb{A}_{\mathbf{F}})\to\overline{\mathbb{Q}}_{\ell}^{\times}\) such that \(\omega_{v}\) is identified with \(\omega_{\pi}\). Note that \(\ker\omega\) contains a lattice \(\Xi\) of \(\mathbf{C}(\mathbf{F})\backslash\mathbf{C}(\mathbb{A}_{\mathbf{F}})\). Poincare series yield an irreducible summand \(\Pi\) of \(C_{\mathrm{cusp}}^{\infty}(\mathbf{G}(\mathbf{F})\Xi\backslash\mathbf{G}(\mathbb{A}_{\mathbf{F}}),\overline{\mathbb{Q}}_{\ell})\) such that \(\Pi_{v}\) is identified with \(\pi\) [14, Theorem 1.1], so the conclusion of Theorem 6.16 uniquely determines \(\mathcal{LLC}_{G}^{\mathrm{ss}}(\pi)\) as \(\mathrm{GLC}_{\mathbf{G}}(\Pi)|_{W_{\mathbf{F}_{v}}}^{\mathrm{ss}}\). The Fargues-Scholze correspondence satisfies the aforementioned properties by [11, p. 326], [11, Corollary IX.7.3], and Theorem 6.16. The Genestier-Lafforgue correspondence also satisfies these properties by [14, Theoreme 8.1], so the above shows that it agrees with the Fargues-Scholze correspondence.

Finally, we prove Theorem D. Let \(D\) be a central simple algebra over \(F\) of degree \(n\).

**Theorem**.: _The triangle_ _commutes, where \(\mathrm{JL}\) denotes the local Jacquet-Langlands correspondence as in [5, (th. 1.1)]._

Proof.: Because both JL [5, (th. 1.1)] and \(\operatorname{LLC}^{\operatorname{ss}}_{G}\) [11, p. 326] are compatible with twisting by characters, it suffices to check commutativity on \(L^{2}\) representations \(\pi\) with finite order central character \(\omega_{\pi}:F^{\times}\to\overline{\mathbb{Q}}_{\ell}^{\times}\). Let \(\mathbf{F}\) be a global field of characteristic \(p\) along with a place \(v\) of \(\mathbf{F}\) and an isomorphism \(\mathbf{F}_{v}\cong F\), and let \(\mathbf{D}\) be a central division algebra over \(\mathbf{F}\) such that \(\mathbf{D}_{\mathbf{F}_{v}}\) is identified with \(D\) as central simple algebras over \(\mathbf{F}_{v}\cong F\).
Using the pseudo-coefficient for \(\operatorname{JL}(\pi)\) constructed in [6, Section 5], the proof of [34, (15.10)] yields a lattice \(\Xi\) of \(\mathbf{F}^{\times}\backslash\mathbb{A}_{\mathbf{F}}^{\times}\) and an irreducible summand \(\widetilde{\Pi}\) of \(C^{\infty}_{\operatorname{cusp}}(\operatorname{GL}_{n}(\mathbf{F})\Xi\backslash\operatorname{GL}_{n}(\mathbb{A}_{\mathbf{F}}),\overline{\mathbb{Q}}_{\ell})\) such that * \(\widetilde{\Pi}_{v}\) is isomorphic to \(\operatorname{JL}(\pi)\), * for all places \(v^{\prime}\neq v\) of \(\mathbf{F}\) where \(\mathbf{D}_{\mathbf{F}_{v^{\prime}}}\) is ramified, \(\widetilde{\Pi}_{v^{\prime}}\) is cuspidal. Therefore we can apply the global Jacquet-Langlands correspondence [6, Theorem 3.2] to \(\widetilde{\Pi}\), which yields an irreducible summand \(\Pi\) of \(C^{\infty}_{\operatorname{cusp}}(\mathbf{D}^{\times}\Xi\backslash(\mathbf{D}\otimes_{\mathbf{F}}\mathbb{A}_{\mathbf{F}})^{\times},\overline{\mathbb{Q}}_{\ell})\) such that * \(\Pi_{v}\) is isomorphic to \(\pi\), * for all places \(w\) of \(\mathbf{F}\) where \(\mathbf{D}_{\mathbf{F}_{w}}\) is split, \(\Pi_{w}\) is isomorphic to \(\widetilde{\Pi}_{w}\). Then [32, Theoreme 12.3] and the Chebotarev density theorem imply that \[\operatorname{GLC}_{\mathbf{D}^{\times}}(\Pi)=\operatorname{GLC}_{\operatorname{GL}_{n}}(\widetilde{\Pi}),\] so Theorem 6.16 enables us to conclude that \[\operatorname{LLC}^{\operatorname{ss}}_{D^{\times}}(\pi)=\operatorname{GLC}_{\mathbf{D}^{\times}}(\Pi)|_{W_{\mathbf{F}_{v}}}^{\operatorname{ss}}=\operatorname{GLC}_{\operatorname{GL}_{n}}(\widetilde{\Pi})|_{W_{\mathbf{F}_{v}}}^{\operatorname{ss}}=\operatorname{LLC}^{\operatorname{ss}}_{\operatorname{GL}_{n}}(\operatorname{JL}(\pi)).\qed\]
2303.07145
Sublinear drag regime at mesoscopic scales in viscoelastic materials
Stressed soft materials commonly present viscoelastic signatures in the form of power-law or exponential decay. Understanding the origins of such rheologic behaviors is crucial to find proper technological applications. Using an elastic network model of macromolecules immersed in a viscous fluid, we numerically reproduce those characteristic viscoelastic relaxations and show how the microscopic interactions determine the rheologic response. We find that exponential relaxations are indeed the most common behavior. However, power laws may arise when drag forces between the macromolecules and the fluid are sublinear, which is related to micro-deformations of the macromolecules.
A. E. O. Ferreira, J. L. B. de Araújo, W. P. Ferreira, J. S. de Sousa, C. L. N. Oliveira
2023-03-13T14:13:28Z
http://arxiv.org/abs/2303.07145v1
# Sublinear drag regime at mesoscopic scales in viscoelastic materials

###### Abstract

Stressed soft materials commonly present viscoelastic signatures in the form of power-law or exponential decay. Understanding the origins of such rheologic behaviors is crucial to find proper technological applications. Using an elastic network model of macromolecules immersed in a viscous fluid, we numerically reproduce those characteristic viscoelastic relaxations and show how the microscopic interactions determine the rheologic response. We find that exponential relaxations are indeed the most common behavior. However, power laws may arise when drag forces between the macromolecules and the fluid are sublinear, which is related to micro-deformations of the macromolecules.

Purely elastic and purely viscous behaviors are limiting cases of the constitutive equations of materials [1]. Actual substances may deform and flow, but one of these attributes usually dominates the other, depending on the applied conditions. This solid-liquid duality has teased researchers since at least the \(19^{th}\) century. Back then, pioneers such as James Maxwell and Ludwig Boltzmann proposed analytical models based on series and parallel associations of springs and dashpots to explain the peculiar characteristics observed in silk, glass fibers, and steel wires [2; 3]. The effective response of such early models invariably presents exponential relaxation decays, regardless of how the springs and dashpots are connected. Nowadays, however, these simple approaches suit only a fraction of known viscoelastic materials. In modern society, soft matter is ubiquitous and broadly accessible. The emergence of such complex materials has triggered new theoretical models and the improvement of proper experimental techniques to explain and control their viscoelastic properties [4; 5]. Nanoindentation methods, such as Atomic Force Microscopy, have become essential to characterize viscoelastic features at micro and nanometer scales by probing materials with nano-sized indenters [6]. Characterizing a viscoelastic material amounts to determining its relaxation function, which carries both qualitative and quantitative information. Exponential and power-law relaxation functions are the two major types of experimentally probed responses. Polyacrylamide gels [7; 8] and aqueous solutions of cationic surfactants [9], for instance, present exponential-like responses with a relaxation time for the material to achieve a new equilibrium configuration. On the other hand, living cells [10], microgel dispersions [11], soft glassy materials [12], and hydrogels [13] present a time-invariant power-law-like behavior. As observed in elastic materials [14; 15], macroscopic physical parameters are intrinsically connected to the material's microscopic interactions and structures [16; 17; 18]. Power laws and exponentials arise in many physical phenomena, with deep origins in the underlying dynamic processes. For instance, in systems with non-additive entropy, many physical variables are described by power-law distributions instead of the exponential functions found in the additive counterpart [19; 20]. Exponential and power-law canonical distributions emerge naturally according to whether the heat capacity of the heat bath is constant or diverges [21]. Moreover, power laws are associated with emergent phenomena, where exponents display scaling behaviors as criticality is approached [22].
Systems with precisely the same critical exponents belong to the same universality class, and a small set of universality classes describes almost all material phase transitions. One of the challenges in materials science is linking the physical mechanisms at microscopic scales to macroscopic functional behavior. This approach is especially relevant for soft matter because properties on the molecular scale are linked to conformational and compositional fluctuations on the nanometer and micrometer scales and, in addition, span many orders of magnitude in length [23; 24]. Soft matter holds rich structures and various interactions at the mesoscale, where the thermal energy per unit volume is negligible, in contrast with the high energy density stored in the atomic bonds of crystalline structures [25]. While exponential materials can be modeled by associations of springs and dashpots, such as the so-called standard linear solid model, power-law materials are usually described by fractional rheology [26; 27] or glassy rheology models [28; 29]. However, these models cannot explain the connection between macroscopic responses and their underlying elastic and viscous components. We design a model of viscoelastic materials composed of an immersed elastic network of macromolecules to study how mesoscopic interactions influence macroscopic rheological behavior in soft materials [30]. We assume that non-linear hydrodynamic drag forces act between the macromolecules and the fluid, with the contributions of elastic and viscous interactions controlled at the mesoscopic level. By changing the physical parameters of the elastic and drag forces, we obtain materials with exponential or power-law relaxations, or an intermediate behavior between the two.
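The mechanism can be sketched numerically. The following minimal one-dimensional bead-spring chain is not the network model of this work, and all parameter values are hypothetical; it only illustrates how a drag force that is sublinear in the bead speed, \(F_{drag}=-\gamma\,\mathrm{sgn}(v)|v|^{\alpha}\) with \(\alpha<1\), slows the tail of a step-strain relaxation:

```python
import numpy as np

# Illustrative 1D bead-spring chain relaxing against sublinear drag.
# Overdamped motion: at each instant the drag balances the elastic force,
# gamma * |v|**alpha = |f|, so v = sign(f) * (|f|/gamma)**(1/alpha).
N, k, gamma = 20, 1.0, 1.0      # beads, spring constant, drag coefficient
alpha = 0.5                     # 1.0 -> linear drag; < 1 -> sublinear drag
dt, steps = 0.01, 200_000       # hypothetical integration settings

x = np.arange(N, dtype=float)   # rest positions with unit spacing
x[-1] += 1.0                    # step strain: displace the right wall
tension = np.empty(steps)
for s in range(steps):
    f = np.zeros(N)
    f[1:-1] = k * (x[2:] - 2.0 * x[1:-1] + x[:-2])  # elastic force on interior beads
    v = np.sign(f) * (np.abs(f) / gamma) ** (1.0 / alpha)
    x[1:-1] += dt * v[1:-1]                         # walls stay clamped
    tension[s] = k * (x[-1] - x[-2] - 1.0)          # stress proxy at the wall

# With alpha = 1 the wall tension decays exponentially toward its new
# equilibrium; lowering alpha below 1 pushes the decay toward a power law.
print(tension[[0, 100, 1_000, 10_000, 100_000, 199_999]])
```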
2304.03384
Beyond NeRF Underwater: Learning Neural Reflectance Fields for True Color Correction of Marine Imagery
Underwater imagery often exhibits distorted coloration as a result of light-water interactions, which complicates the study of benthic environments in marine biology and geography. In this research, we propose an algorithm to restore the true color (albedo) in underwater imagery by jointly learning the effects of the medium and neural scene representations. Our approach models water effects as a combination of light attenuation with distance and backscattered light. The proposed neural scene representation is based on a neural reflectance field model, which learns albedos, normals, and volume densities of the underwater environment. We introduce a logistic regression model to separate water from the scene and apply distinct light physics during training. Our method avoids the need to estimate complex backscatter effects in water by employing several approximations, enhancing sampling efficiency and numerical stability during training. The proposed technique integrates underwater light effects into a volume rendering framework with end-to-end differentiability. Experimental results on both synthetic and real-world data demonstrate that our method effectively restores true color from underwater imagery, outperforming existing approaches in terms of color consistency.
Tianyi Zhang, Matthew Johnson-Roberson
2023-04-06T21:29:34Z
http://arxiv.org/abs/2304.03384v2
Beyond NeRF Underwater: Learning Neural Reflectance Fields for True Color Correction of Marine Imagery ###### Abstract Underwater imagery often exhibits distorted coloration as a result of light-water interactions, which complicates the study of benthic environments in marine biology and geography. In this research, we propose an algorithm to restore the true color (albedo) in underwater imagery by jointly learning the effects of the medium and neural scene representations. Our approach models water effects as a combination of light attenuation with distance and backscattered light. The proposed neural scene representation is based on a neural reflectance field model, which learns albedos, normals, and volume densities of the underwater environment. We introduce a logistic regression model to separate water from the scene and apply distinct light physics during training. Our method avoids the need to estimate complex backscatter effects in water by employing several approximations, enhancing sampling efficiency and numerical stability during training. The proposed technique integrates underwater light effects into a volume rendering framework with end-to-end differentiability. Experimental results on both synthetic and real-world data demonstrate that our method effectively restores true color from underwater imagery, outperforming existing approaches in terms of color consistency. Our code and data are released at [https://github.com/tyz1030/neuralsea.git](https://github.com/tyz1030/neuralsea.git)

## I Introduction

Optical imaging is widely used in exploring the benthic world together with modern underwater robotic systems. The visual information presented in RGB format reveals rich details about underwater ecosystems and artifacts. For example, images collected by an underwater robot can be used to assess the health of coral reefs and segment live corals from dead samples [1]. However, the colors displayed in underwater images are consistently distorted due to wavelength-dependent attenuation and veiling effects resulting from light-water interactions. Such effects alter the visual appearance of images, as well as the performance of downstream tasks such as detection, classification, or segmentation [2]. Restoring the color in underwater imagery is therefore of great interest to communities working on marine ecology, biology, and geography. The formation of underwater color distortion has been studied extensively, with two kinds of light-water interaction commonly considered: attenuation and scattering [3, 4]. Attenuation describes the process whereby water absorbs light at rates that vary with wavelength. Red light is absorbed most quickly, leading to a loss of the red part of the visual spectrum in typical underwater images [5]. Underwater light scattering refers to the process by which light is dispersed in various directions as it interacts with water molecules, suspended particles, and other microscopic elements within the underwater environment [3]. While in graphics multiple scattering is typically modeled, in water photons reflected to the camera without striking the scene, i.e. backscatter, have a major impact on image formation by creating a veiling effect. Although our understanding of water optics has advanced, restoring color in underwater images is still challenging: while these effects are well modeled, accurately estimating them from real data in uncontrolled environments remains an open problem.
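As a quick numerical illustration of the wavelength-dependent attenuation just described (the coefficients below are hypothetical stand-ins, not measured water properties), consider a white surface viewed through increasing water columns:

```python
import numpy as np

# Beer-Lambert-style per-channel attenuation: I(d) = I0 * exp(-beta * d).
beta = np.array([0.40, 0.07, 0.04])   # hypothetical attenuation (R, G, B), 1/m
I0 = np.ones(3)                       # true (albedo-like) color: white
for d in [0.0, 2.0, 5.0, 10.0]:
    print(d, I0 * np.exp(-beta * d))  # red collapses first -> blue-green cast
```

Even this crude sketch reproduces the characteristic blue-green cast of underwater imagery and hints at why color correction must account for the camera-to-scene range.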
Early studies on marine optics developed underwater image formation models [3, 6] and measured absorption and scattering functions from different types of water samples [5, 7]. With the above work, images can be synthesized with underwater effects [8]. However, this approach is insufficient for accurately correcting the color of real-world underwater images, as measurements of a finite number of water optic properties cannot be reliably applied to novel field data. Recent progress in structure-from-motion (SfM) and deep learning has inspired the development of data-driven algorithms for underwater color correction. The SfM-based method of [9] estimates the true color (albedo) with multiple-view geometry constraints, but is only able to generate sparse results on feature points. Deep-learning-based methods [2, 10, 11] are able to correct the color with physical information, but the result depends on prior color distributions or pre-training. Fig. 1: Observing an underwater scene from different altitudes results in varying color distributions over the RGB channels. Such observations encode the physics of light-water interactions. Our proposed model leverages this cue to restore the true color of underwater scenes by learning water effects together with neural scene representations. Combining insights from both types of methods, we developed a unified model that effectively restores the true color in underwater imagery (Fig. 1). Our proposed model optimizes the attenuation and backscatter coefficients together with a neural reflectance field [12] from a sequence of observations without any assumptions on prior color distributions. Based on the observation that water and scene are separable given volume density, we embed a logistic regression function in our neural scene representation, which allows us to apply different light-transmitting physics to water and the scene while maintaining end-to-end differentiability of our model. Our experiments demonstrate that our method is able to generate photo-realistic results with restored true color in a dense format, outperforming previous studies, particularly when the underlying albedo of the scene has a biased color distribution in the RGB space.

## II Related Work

### _Underwater Image Formation Model_

According to the Jaffe-McGlamery model [3, 6], the formation of underwater images can be decomposed into direct signals, forward-scattering, and backscatter. Direct signals refer to the light that is reflected from the underwater scene. Backscatter refers to the phenomenon in which light enters the camera without being reflected directly from the scene. The trajectory of a photon after interacting with a particle in water is characterized by volume scattering functions (VSFs) [7]. These empirical functions depend on both viewing and lighting directions. Forward-scattering occurs when a photon deviates from its direct path before reaching the sensor, resulting in a blurred image. This effect can be modeled by convolution operations [3] or Gaussian blurring [8]. In this work, we face the challenge of modeling VSFs for robots with different camera-light configurations. To overcome this challenge, we propose several approximations for backscatter that are applicable to cases where the camera and light source move as a rigid body. Our scene representations do not model forward scattering, as the error introduced by forward scattering is zero-mean and negligible [13].
### _Neural Implicit Representations_

Neural implicit representations, which encode signals as continuous functions instead of discrete samples, have been widely used in learning visual appearances and structures. NeRF [14] is a neural implicit representation that learns a 3D scene in the form of a neural field of volume density and radiance. The volume rendering equations in NeRF, which are based on the radiative transfer equation (RTE), are not only good for inferring the 3D geometry of objects but also have the power to model water effects such as absorption and scattering. Based on NeRF's framework, the neural reflectance field [12] and its variants [15] model the reflectance of the scene, which enables high-quality rendering under novel lighting conditions. For underwater scenes illuminated by light sources attached to the robot, the appearance of the scene changes due to the robot's movement. To accommodate these appearance changes resulting from varying illumination conditions, it is necessary to model the reflectance properties of the scene instead of the radiance. Therefore, we opt to use a neural reflectance field [12] as the foundational model for 3D underwater scene representations.

### _Underwater Color Correction_

Early studies on underwater color correction make assumptions about the underlying color distributions, e.g. histogram equalization [16], grayworld [17], or dark-channel prior [18]. However, colors balanced under the above assumptions lack consistency when the same scene is observed from multiple views, due to range-dependent water effects. Bryson et al. [9] leverage the physical constraints from multiple-view geometry to estimate the true color of the scene. However, this method only estimates the true color of feature points and is unable to directly generate color-corrected images in a dense format. Further progress in this field has been made with deep learning approaches. WaterGAN [10] proposes to generate a synthetic dataset with ground truth depth and colors by training a GAN, then train a color correction network to restore the color together with depth estimations. FUnIEGAN [2] employs a GAN, emphasizing image quality for downstream tasks rather than adhering to physical constraints, and as such is able to achieve real-time performance. GAN-based methods, such as those mentioned above, require pre-training on a dataset. These methods can exhibit biases if the underlying color distribution differs from that of the training set. In contrast, our approach does not require any pre-training on pre-collected datasets. Rather, it restores color by creating neural scene representations using a series of observations from multiple perspectives. WaterNeRF [11] utilizes mip-NeRF [19] to model the underwater scene. Based on depth estimation from mip-NeRF, WaterNeRF learns the absorption and backscatter coefficients by optimizing the Sinkhorn loss between the rendered image and a histogram-equalized image. Our approach diverges from WaterNeRF in that we model the scene as a reflectance field, which accounts for changes in illuminance, as opposed to a radiance field. Furthermore, we do not make any assumptions regarding the underlying color distributions. Lastly, all the approaches mentioned above [9, 10, 11] use the model proposed in [20] to account for backscatter, which assumes natural and ambient light to be the major illumination source of scattering.
In other words, their formulations assume that the intensity of scattering is spatially constant, which does not hold for underwater robots equipped with light sources once light fall-off is taken into consideration. In our work, we depart from the model of [20] and propose several approximations of backscatter for underwater robots.

## III Methodology

### _Neural Scene Representation_

We employ a neural reflectance field [12] to model the underwater scene observed by an underwater robot with onboard lights. The continuous scene is represented as a function of the 3D location \(\mathbf{x}=(x,y,z)\) in the global coordinate frame. The outputs of the function are the rendering properties \((\sigma,\mathbf{\alpha},\mathbf{n})\), where \(\sigma\) is the volume density, \(\mathbf{\alpha}=(\alpha_{r},\alpha_{g},\alpha_{b})\) is the albedo, and \(\mathbf{n}=(n_{x},n_{y},n_{z})\) is the surface normal (see Fig. 2). In practice, we first sample 3D points \(\mathbf{x}\) on camera rays in the global coordinate frame. We then use hash encoding \(\gamma\) to map the input \(\mathbf{x}\) into a higher-dimensional space [21] before feeding it into a nested multilayer perceptron (MLP):

\[(\sigma,\mathbf{\alpha},\mathbf{n})=\mathrm{MLP}(\gamma(\mathbf{x})) \tag{1}\]

### _Rendering Equations_

The volume rendering equation [22, 23] maps a camera ray \(\mathbf{x}=\mathbf{o}-t\mathbf{\omega}\) into the radiance \(L_{\lambda}\) captured at location \(\mathbf{o}\) in direction \(\mathbf{\omega}\):

\[L_{\lambda}(\mathbf{o},\mathbf{\omega})=\int_{t=0}^{d}T_{\lambda}(\mathbf{x})\sigma(\mathbf{x})l_{\lambda}(\mathbf{x})dt \tag{2}\]

Here \(T_{\lambda}\) is the transmittance from \(\mathbf{x}\) to \(\mathbf{o}\), \(\sigma\) is the volume density, \(l_{\lambda}\) is the scattered radiance from \(\mathbf{x}\) to \(\mathbf{o}\) along the ray, and \(\lambda\) indicates the wavelength. In this study, the wavelength is discretized into RGB space, i.e., \(\lambda\in\{r,g,b\}\)[24]. For a light beam emitted from \(\mathbf{x}\) to \(\mathbf{o}\), the fraction of light that reaches the camera is described by the transmittance \(T_{\lambda}\):

\[T_{\lambda}(\mathbf{x})=\text{exp}(-\int_{s=0}^{t}\sigma_{\lambda}(\mathbf{o}-s\mathbf{\omega})ds) \tag{3}\]

Here, \(\sigma_{\lambda}\) denotes the attenuation coefficient as a function of the 3D location \(\mathbf{o}-s\mathbf{\omega}\), which combines the extinction of light due to both volume-density-dependent out-scattering and wavelength-dependent absorption [3, 23]. The formulation of \(\sigma_{\lambda}\) will be further discussed in III-C. The scattered radiance \(l_{\lambda}\) from the scene, as a part of the integrand in Eq. 2, is formulated as follows:

\[l_{\lambda}(\mathbf{x})=\int_{S^{2}}f_{\lambda}(\mathbf{x},\mathbf{\omega},\mathbf{\omega}_{i})I_{\lambda}(\mathbf{x},\mathbf{\omega}_{i})d\mathbf{\omega}_{i} \tag{4}\]

where \(S^{2}\) represents the spherical domain around point \(\mathbf{x}\), \(f_{\lambda}\) is the phase function that governs the distribution of light scattered at \(\mathbf{x}\), and \(I_{\lambda}\) is the incident radiance from direction \(\mathbf{\omega}_{i}\) into \(\mathbf{x}\). In practice, we follow the assumption in [9] that object surfaces underwater are Lambertian, scattering light into all directions equally. Following Lambert's cosine law, the phase function for objects underwater is described as \(f_{\lambda}(\mathbf{x},\mathbf{\omega},\mathbf{\omega}_{i})=\alpha_{\lambda}(\mathbf{x})\cos(\mathbf{n}(\mathbf{x}),\mathbf{\omega}_{i})\). Here \(\alpha_{\lambda}(\mathbf{x})\) and \(\mathbf{n}(\mathbf{x})\) are the albedo and normal at \(\mathbf{x}\) estimated by the neural network. In other words, we do not model any specular reflection, which is rare underwater.
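The following is a minimal PyTorch-style sketch of the scene representation of Eq. (1) together with the Lambertian phase function above. It is illustrative only: the class and argument names (`SceneField`, `encode`, `hidden`) are our stand-ins, and it uses a single MLP head, whereas our actual implementation (Section IV-B) uses three sub-MLPs with hash encoding [21].

```python
import torch
import torch.nn as nn

class SceneField(nn.Module):
    """Maps 3D points to (sigma, albedo, normal) as in Eq. (1).
    `encode` stands in for a multiresolution hash encoding gamma(x) [21]."""
    def __init__(self, encode, feat_dim, hidden=64):
        super().__init__()
        self.encode = encode  # callable: (N, 3) -> (N, feat_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, 7),  # 1 (sigma) + 3 (albedo) + 3 (normal)
        )
        self.softplus = nn.Softplus()  # non-negative outputs, as in IV-B

    def forward(self, x):
        h = self.mlp(self.encode(x))
        sigma = self.softplus(h[..., :1])
        albedo = self.softplus(h[..., 1:4])
        normal = nn.functional.normalize(h[..., 4:7], dim=-1)
        return sigma, albedo, normal

def lambertian_phase(albedo, normal, omega_i):
    """f = albedo * cos(n, omega_i); clamped to front-facing as a guard."""
    cos = (normal * omega_i).sum(-1, keepdim=True).clamp(min=0.0)
    return albedo * cos
```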
Inferring the phase function \(f_{\lambda}\) of water volumes, i.e., the VSF, is challenging and does not scale to real robots with different light and camera configurations. To address this, we propose approximating the backscatter in the image as a constant and moving away from estimating the VSF (see III-D), which significantly reduces the complexity of our approach while still achieving accurate and realistic rendering results. Similar to [9], we only consider direct illumination from onboard lights. While natural and ambient light also impact the lighting in shallow water, they are out of the scope of this work. The direct illumination on point \(\mathbf{x}\) from the light source is expressed by:

\[I_{\lambda}(\mathbf{x},\mathbf{\omega}_{i})=T_{\lambda}^{i}(\mathbf{x})E_{\lambda}^{i}(\mathbf{x}) \tag{5}\]

Here \(i\) indicates the light source from direction \(\mathbf{\omega}_{i}\), \(T_{\lambda}^{i}\) is the transmittance from the light source to \(\mathbf{x}\) (calculated similarly to Eq. 3), and \(E_{\lambda}^{i}(\mathbf{x})\) is the intensity of light source \(i\) evaluated at \(\mathbf{x}\), taking light fall-off with distance into account.

Fig. 2: Our proposed model: Sample points \(\mathbf{x}\) are first mapped into positional encoding \(\gamma(\mathbf{x})\), which is the input of an MLP. The output of the MLP consists of albedo \(\mathbf{\alpha}\), surface normal \(\mathbf{n}\), and volume density \(\sigma\). Backscatter \(S_{\lambda}\) and absorption coefficient \(\beta_{\lambda}\) are global parameters optimized along with the MLP. With \(\mathbf{\alpha}\) and \(\mathbf{n}\) we can calculate the reflected radiance \(l_{\lambda}\) from the scene. We apply a sigmoid function on \(\sigma\) to separate water from scene and calculate transmittance \(T_{\lambda}\) through the scene and water using different coefficients. With \(S_{\lambda}\), \(T_{\lambda}\), \(\sigma\), and \(l_{\lambda}\), our rendering model predicts the pixel values in the image.

### _Unified Transmittance Model_

The attenuation of light in water can be modeled with a transmittance term \(T_{\lambda}\) given the attenuation coefficient \(\sigma_{\lambda}\) and distance \(t\):

\[T_{\lambda}=\text{exp}(-\int_{s=0}^{t}\sigma_{\lambda}ds)=\text{exp}(-\sigma_{\lambda}t) \tag{6}\]

Given the emitted radiance \(E\), the arriving radiance is \(T_{\lambda}E\). The attenuation coefficient \(\sigma_{\lambda}\) for water can be decomposed into the out-scattering coefficient \(\sigma\) and the absorption coefficient \(\beta_{\lambda}\)[3]. Notably, the out-scattering coefficient \(\sigma\) is independent of the wavelength of the light [25] and can be represented as the volume density in the rendering equations. In the neural reflectance field, volume density is a function of the spatial location \(\mathbf{x}\), so we have:

\[\sigma_{\lambda}(\mathbf{x})=\sigma(\mathbf{x})+\beta_{\lambda} \tag{7}\]

where \(\sigma(\mathbf{x})\) is predicted by the neural implicit functions and \(\beta_{\lambda}\) is optimized as a global parameter that does not change with spatial location. On a camera ray, points in the water attenuate light through both absorption and out-scattering, as described by Eq. 7. In contrast, points on objects have no wavelength-dependent absorption effects.
So for underwater scenes \(\sigma_{\lambda}(\mathbf{x})\) can be formulated as follows:

\[\sigma_{\lambda}(\mathbf{x})=\begin{cases}\sigma(\mathbf{x})+\beta_{\lambda},&\text{if $\mathbf{x}$ is in water}\\ \sigma(\mathbf{x}),&\text{if $\mathbf{x}$ is on objects}\end{cases} \tag{8}\]

When sampling points from non-transparent objects, the volume density \(\sigma(\mathbf{x})\) should typically be large enough that, regardless of whether \(\mathbf{x}\) is in water or on objects, \(\sigma(\mathbf{x})\approx\sigma(\mathbf{x})+\beta_{\lambda}\). However, it is still important to maintain the separate attenuation coefficients in Eq. 8 during training until the prediction of \(\sigma(\mathbf{x})\) has converged. To apply Eq. 8, we need to differentiate water from the rest of the scene. We experimentally observe that the value of \(\sigma(\mathbf{x})\) for objects is at least 10 times greater than that in clear water. This observation also aligns with the measurements by Jerlov [26]. Assuming that there are no highly transparent objects in the scene other than water, we define the following logistic regression functions using the sigmoid function:

\[m_{o}(\mathbf{x}) =\text{sigmoid}(a(\sigma(\mathbf{x})-b)) \tag{9}\]
\[m_{w}(\mathbf{x}) =1-m_{o}(\mathbf{x})\]

where \(m_{o}\) and \(m_{w}\) indicate the probabilities of the query point \(\mathbf{x}\) being on non-transparent objects and in water, respectively. Specifically, \(a\) controls the steepness of the sigmoid function; a higher value of \(a\) results in higher confidence in the prediction, but it may also increase the risk of vanishing gradients. \(b\) determines the density threshold used to distinguish water from objects. With \(m_{o}\) and \(m_{w}\), we can express \(\sigma_{\lambda}(\mathbf{x})\) in the following form:

\[\sigma_{\lambda}(\mathbf{x}) =m_{w}(\mathbf{x})(\sigma(\mathbf{x})+\beta_{\lambda})+m_{o}(\mathbf{x})\sigma(\mathbf{x}) \tag{10}\]
\[=\sigma(\mathbf{x})+m_{w}(\mathbf{x})\beta_{\lambda}\]

In other words, \(m_{o}\) and \(m_{w}\) can be considered as masks on sample points, exposing those in the water and on objects to distinct light-transmitting physics.

### _Approximating Water Effects_

The backscatter effects in water can be described using a VSF. However, in learning neural scene representations from real underwater data, we encounter difficulties in modeling VSFs. Firstly, backscatter from the closer regions of the field of view has a greater impact on imaging (Fig. 3). We would need a precise imaging system model to accurately infer the VSF in this area, which requires detailed information about the dimensions and poses of the camera and light source. However, calibrating such a system complicates the deployment of our algorithm on real robots and is hard to scale across different robot platforms. Secondly, estimating the VSF along the ray prevents us from using bounding planes, which could significantly enhance the sampling efficiency and avoid overfitting by constraining the viewing frustum from multiple views. To address the issues mentioned above, we propose several approximations to avoid modeling VSFs:

#### III-D1 Backscatter as a constant

The backscatter captured in the image can be approximated as a constant \(S_{\lambda}\), as the majority of backscatter comes from the region close to the light source, which is not affected when the images are taken from different depths and perspectives (see Fig. 3).
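A minimal sketch of the soft water/object separation of Eqs. (9)-(10) is given below; the function names are ours, and \(a=b=3\) are the values we report in Section IV-B.

```python
import torch

def water_object_masks(sigma, a=3.0, b=3.0):
    """Soft masks from Eq. (9): m_o ~ P(point on object), m_w ~ P(point in water).
    a controls the sigmoid steepness, b the density threshold."""
    m_o = torch.sigmoid(a * (sigma - b))
    return m_o, 1.0 - m_o

def attenuation_coeff(sigma, beta):
    """Unified attenuation from Eq. (10): sigma_lambda = sigma + m_w * beta.
    sigma: (N, 1) volume densities; beta: (3,) learnable per-channel absorption.
    Broadcasts to a per-sample, per-channel coefficient of shape (N, 3)."""
    _, m_w = water_object_masks(sigma)
    return sigma + m_w * beta
```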
#### III-D2 Co-centered camera and light source

Points are only sampled between the near and far bounding planes, and their distances to the camera are sufficiently large compared to the typical dimensions of the imaging system components. Therefore, we model the light source as a single point light source that is co-centered with the camera, similar to [12]. We use the inverse-square law to calculate the incident radiance \(E_{\lambda}(\mathbf{x})\).

We design a loss function that enforces the model to output \(\sigma(\mathbf{x})=0\) if \(\mathbf{x}\) is in water (see III-F). With this constraint, we avoid double-counting backscatter with both \(S_{\lambda}\) and Eq. 2, since the integrand in Eq. 2 will be zero for \(\mathbf{x}\) in the water. Additionally, constraining \(\sigma(\mathbf{x})=0\) for \(\mathbf{x}\) in water allows us to calculate the attenuation between the near bounding plane and the camera without sampling points.

Fig. 3: A side view of scattering generated from an LED light (left) reflects the intensity distribution of incident radiance. We observe significant light fall-off with the distance from the light source. The plot on the right sketches a typical light fall-off curve. \(d_{n}\) and \(d_{f}\) indicate the typical positions of the near and far bounding planes. When the distance is close to the dimensions of the lighting component, we would need to precisely calibrate the lighting and imaging components to approximate the curve. The rest of the curve can be approximated with the inverse-square law.

As a parameter to be optimized in training, \(\beta_{\lambda}\) will approach \(\sigma_{\lambda}(\mathbf{x})\) as \(\sigma(\mathbf{x})\) approaches 0, according to Eq. 7. Then the transmittance between the near bounding plane and the camera is \(T_{\lambda}^{n}=\text{exp}(-\beta_{\lambda}d_{n})\) according to Eq. 6, and Eq. 2 can be written as:

\[L_{\lambda}(\mathbf{o},\boldsymbol{\omega})=S_{\lambda}+T_{\lambda}^{n}\int_{t=d_{n}}^{d_{f}}T_{\lambda}(\mathbf{x})\sigma(\mathbf{x})l_{\lambda}(\mathbf{x})dt \tag{11}\]

Here \(d_{n}\) and \(d_{f}\) are the distances from the camera to the near and far bounding planes, respectively.

### _Ray Marching_

We numerically estimate Eq. 11 by ray marching. Rays are sampled from the center of the camera and pass through uniformly sampled points on the image plane in training. Points are then sampled along the ray between the near and far bounding planes. The rendering equation is discretized as follows:

\[\begin{split} L_{\lambda}(\mathbf{o},\boldsymbol{\omega})&=S_{\lambda}+T_{\lambda}^{n}\sum\nolimits_{i=0}^{N}T_{\lambda}(x_{i})\Phi_{\lambda}(x_{i})l_{\lambda}(x_{i})\\ T_{\lambda}(x_{i})&=\text{exp}(-\sum\nolimits_{j=0}^{i}\sigma_{\lambda}(x_{j})\delta_{j})\\ \Phi_{\lambda}(x_{i})&=\frac{\sigma(x_{i})}{\sigma_{\lambda}(x_{i})}(1-\text{exp}(-\sigma_{\lambda}(x_{i})\delta_{i}))\\ l_{\lambda}(x_{i})&=T_{\lambda}^{n}T_{\lambda}(x_{i})E_{\lambda}(x_{i})\alpha_{\lambda}\cos(\mathbf{n}(x_{i}),\boldsymbol{\omega})\end{split} \tag{12}\]

where \(\delta_{i}\) denotes the step size at sample point \(x_{i}\). It is worth noticing that the transmittance terms \(T_{\lambda}^{n}\) and \(T_{\lambda}(x_{i})\) are used in both the calculation of the reflected radiance \(l_{\lambda}\) and the sensed radiance \(L_{\lambda}\), according to approximation III-D2. The opacity \(\Phi_{\lambda}\) corresponds to the term \(1-\text{exp}(-\sigma(x_{i})\delta_{i})\) in NeRF and its variants. A sketch of this discretized renderer follows.
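The sketch below evaluates Eqs. (11)-(12) for one ray in PyTorch; the function name and the epsilon guard on the \(\sigma/\sigma_{\lambda}\) ratio are ours, and all tensor shapes are assumptions stated in the docstring.

```python
import torch

def render_ray(sigma, sigma_lam, albedo, normal, omega, E, delta,
               S, beta, d_n, eps=1e-10):
    """Numerically evaluates Eqs. (11)-(12) for one ray.
    sigma: (N, 1) densities; sigma_lam: (N, 3) from Eq. (10); albedo: (N, 3);
    normal: (N, 3); omega: (3,) ray direction; E: (N, 3) inverse-square
    light fall-off; delta: (N, 1) step sizes; S, beta: (3,) global
    backscatter and absorption; d_n: near-plane distance."""
    T_n = torch.exp(-beta * d_n)                       # Eq. (6) at the near plane
    # cumulative transmittance T(x_i) = exp(-sum_{j<=i} sigma_lam_j * delta_j)
    T = torch.exp(-torch.cumsum(sigma_lam * delta, dim=0))
    # opacity with the sigma / sigma_lambda ratio of Eq. (12); eps guards
    # against the division-by-zero issue discussed after Eq. (12)
    Phi = sigma / (sigma_lam + eps) * (1 - torch.exp(-sigma_lam * delta))
    cos = (normal * omega).sum(-1, keepdim=True).clamp(min=0.0)
    l = T_n * T * E * albedo * cos                     # reflected radiance l_lam
    return S + T_n * (T * Phi * l).sum(dim=0)          # Eq. (11), per RGB channel
```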
In NeRF, the volume density \(\sigma\) governs both the emission and attenuation of the radiance, making it sufficient to model objects in the air, haze, and even transparent glowing gas [27]. In our study, we need to model the wavelength-dependent attenuation, which requires both the volume density \(\sigma\) and the attenuation coefficient \(\sigma_{\lambda}\) to play a role together in \(\Phi_{\lambda}\). However, if \(\sigma_{\lambda}\) in the denominator approaches \(0\) in training, the model will encounter numerical issues. To avoid this, we take advantage of our proposition in III-D that enforces \(\sigma(x_{i})=0\) if \(x_{i}\) is in the water, so \(\Phi_{\lambda}(x_{i})=0=1-\text{exp}(-\sigma(x_{i})\delta_{i})\). When \(x_{i}\) falls on objects, \(\sigma_{\lambda}(x_{i})=\sigma(x_{i})\) according to Eq. 10, so \(\Phi_{\lambda}(x_{i})=1-\text{exp}(-\sigma(x_{i})\delta_{i})\). We can therefore simplify \(\Phi_{\lambda}(x_{i})\) into the following form, which is identical to the opacity term in NeRF [14]:

\[\Phi_{\lambda}(x_{i})=1-\text{exp}(-\sigma(x_{i})\delta_{i}) \tag{13}\]

### _Loss Function_

We use the \(L_{2}\) loss to optimize the rendered radiance against the captured pixel values from the raw image, which has linear color. As a result, the \(L_{2}\) loss will be dominated by errors in the brighter parts of the image, and the darker parts will have low rendering quality. To achieve better visual results, we apply a stronger penalization on errors in the darker parts of the image by applying tone mapping \(\psi\) to both the model output and the raw pixel values before passing them into the loss function, as suggested by [13]:

\[\mathcal{L}=\sum_{\lambda}\sum_{r\in R}\lVert\psi(\hat{L}_{\lambda}(r))-\psi(L_{\lambda}(r))\rVert_{2}^{2} \tag{14}\]

Here \(R\) is the sampled ray batch, \(\hat{L}\) is the raw pixel value, and \(L\) is the radiance predicted by the model. We use the gamma correction proposed in [28] as our \(\psi\) function to map the linear color to sRGB space. As proposed in III-D, we want to constrain the volume density to \(\sigma(\mathbf{x})=0\) for \(\mathbf{x}\) in the water. We first set \(\sigma(\mathbf{x})=0\) for \(\mathbf{x}\) in water by multiplying by \(m_{o}(\mathbf{x})\). This gives us the refined volume density \(\bar{\sigma}(\mathbf{x})\):

\[\bar{\sigma}(\mathbf{x})=m_{o}(\mathbf{x})\sigma(\mathbf{x}) \tag{15}\]

Then we are able to calculate the refined radiance \(\bar{L}_{\lambda}(r)\) with the equations in III-E, using \(\bar{\sigma}(\mathbf{x})\) in place of \(\sigma(\mathbf{x})\). The refined loss is calculated similarly to Eq. 14:

\[\bar{\mathcal{L}}=\sum_{\lambda}\sum_{r\in R}\lVert\psi(\hat{L}_{\lambda}(r))-\psi(\bar{L}_{\lambda}(r))\rVert_{2}^{2} \tag{16}\]

The total loss is \(\mathcal{L}_{total}=\mathcal{L}+\bar{\mathcal{L}}\). By optimizing \(\mathcal{L}_{total}\), we encourage the model to generate the same results with \(\sigma(\mathbf{x})\) and \(\bar{\sigma}(\mathbf{x})\), so the prediction of \(\sigma(\mathbf{x})\) from the network will converge to \(\bar{\sigma}(\mathbf{x})\), where for \(\mathbf{x}\) in the water, \(\sigma(\mathbf{x})=0\).

### _Re-rendering with True Color_

To re-render the image with true color, we just need to remove the backscatter \(S_{\lambda}\), the wavelength-dependent absorption \(\beta_{\lambda}\), and the volume density \(\sigma(\mathbf{x})\) for \(\mathbf{x}\) in water. We only need to use \(\bar{\sigma}(\mathbf{x})\) in calculating the transmittance \(T\) and opacity \(\Phi\).
The rendering equation in III-E becomes the following:

\[\begin{split} L_{\lambda}(\mathbf{o},\boldsymbol{\omega})&=\sum\nolimits_{i=0}^{N}T(x_{i})\Phi(x_{i})l_{\lambda}(x_{i})\\ T(x_{i})&=\text{exp}(-\sum\nolimits_{j=0}^{i}\bar{\sigma}(x_{j})\delta_{j})\\ \Phi(x_{i})&=1-\text{exp}(-\bar{\sigma}(x_{i})\delta_{i})\\ l_{\lambda}(x_{i})&=T(x_{i})E_{\lambda}(x_{i})\alpha_{\lambda}\cos(\mathbf{n}(x_{i}),\boldsymbol{\omega})\end{split} \tag{17}\]

## IV Experiments

### _Dataset_

We collect our underwater data in a water tank with 1.3m water depth. Our imaging system consists of a Sony ILCE-7M3 camera with a 40mm prime lens and LED lights. The maximum distance between the lights and the camera does not exceed 20cm, and their centerlines are parallel to each other. The imaging system is housed in a waterproof case and fully submerged when collecting data. The images are captured using 1/250s exposure time, \(f/5.6\) aperture, and ISO 1600. The raw image files with 14-bit pixel values in HDR space are decoded, denoised, and scaled into 8-bit images with linear values using RawPy [29]. We placed artificial decorations with various colors on the bottom of the tank, together with a Macbeth ColorChecker [30]. We use the manufacturer's (X-rite) software to balance the image color as ground truth, which is only used for comparison purposes and does not play a role in our proposed algorithm. We acquire camera poses from COLMAP [31] with post-processed JPEG images to ensure high feature quality. We also build our synthetic data based on implementations from [8, 32] and measurements from [7, 26]. The ground-truth color is obtained by rendering images without any water effects. Although synthetic data may not be sufficient to reflect complex underwater lighting effects, it is useful in demonstrating that our method is able to decompose the different underwater image formation components from each other. In addition, we are able to obtain absolute ground truth from the synthetic dataset by rendering with the same illumination setup, whereas calibrating the color in a real-world image with the ColorChecker could change the brightness level in the image. Both synthetic and real-world data are displayed in the second row of Fig. 4.

### _Implementations_

Our code is developed using the PyTorch3D library [33]. We use the hash encoding proposed in Instant-NGP [21] for positional encoding. We choose \(a=3\) and \(b=3\) empirically for our sigmoid function in Eq. 9. Our neural implicit function consists of 3 sub-MLPs predicting \(\sigma\), \(\boldsymbol{\alpha}\), and \(\mathbf{n}\) respectively, similar to \(S^{3}\)-NeRF [15]. We use LeakyReLU as the activation function between consecutive linear layers and SoftPlus as the final layer in predicting \(\sigma\) and \(\boldsymbol{\alpha}\) to guarantee non-negative outputs. The model is trained on an Nvidia RTX 4090 GPU with 24GB memory. In each training iteration, we sample 1000 rays from one image and 100 points on each ray. The model is trained for \(50k\) epochs for each scene.

### _Comparisons_

We compare our results on both synthetic and real-world data with the grayworld algorithm [17], histogram equalization [16], FUnIE-GAN [2], and WaterNeRF [11] (we use the open-sourced Sinkhorn loss implementation from the GeomLoss library [34]). The color restoration results are shown in Fig. 4. The grayworld and histogram equalization algorithms only correct color well on the Synthetic 1 data sequence, in which the object's albedo is dominated by low-saturation colors.
Under such circumstances, the grayworld and histogram-equalizing assumptions align well with the underlying color distribution of the scene, so they are able to generate good results.

Fig. 4: Visualizations of color restoration. For good visualization quality, real images are visualized in sRGB space.

However, when we change the body color of the bulldozer to bright yellow (Synthetic 2), the grayworld and histogram equalization algorithms degrade as their assumptions fail. We can observe the same in Real images 1-4, where the albedo of the scene is dominated by a sand-colored rock: both algorithms tend to balance it toward gray. We also observe that the predictions from both grayworld and histogram equalization unpredictably add veiling-light effects to the raw image, as shown in the synthetic data for histogram equalization and the real data for the grayworld algorithm. As one of the latest GAN-based methods, FUnIE-GAN is pre-trained on annotated underwater images. In our experiments, we find that FUnIE-GAN overshoots in the red channel, as shown in Fig. 4, implying that the color distributions in its training data are less red than ours. In other words, instead of relying on naive assumptions such as histogram equalization, GAN-based methods learn a color distribution from pre-collected datasets, and the inherent color distribution in the pretraining data can deviate from the observations as well. Overall, the results from FUnIE-GAN reflect the fact that methods relying on pretraining will have problems when generalized to scenes with different underlying color distributions. WaterNeRF tackles the problem by applying the physical constraints from the Jaffe-McGlamery model while approaching the histogram-equalized image. We acknowledge that this is not an entirely fair comparison, since WaterNeRF works for any kind of illumination, while our algorithm and data are only for situations where the light source moves with the camera as a rigid body. We observe that when the histogram-equalized image is flawed, e.g., with our synthetic data, the performance of WaterNeRF can be significantly degraded. We also find that our method outperforms WaterNeRF in color consistency on real data. For example, comparing the Real 1 and Real 2 images in Fig. 4, which are from the same image sequence, our method restores the color of the rock with better consistency, since we model the albedo and light reflection of the scene, while WaterNeRF models the scene with constant radiance, which fails when the light source moves. In general, from the comparisons, our method restores color in both synthetic and real-world data with the most consistent performance.

We present two metrics for quantitative evaluation: the mean-squared error (MSE) of each CIELAB channel (Table I) and the mean angular error [11] in sRGB space (Table II). CIELAB is designed to approximate human vision in a uniform space [35], and sRGB is the standard colorspace in which the image is presented. For the synthetic dataset, we use the ground truth from the renderer and calculate both metrics directly. For real data, since color corrected with calibration software changes the brightness of images, we scale the images under comparison to have the same brightness before we calculate the MSE of each LAB channel. As revealed by the MSE (Table I), our method performs the best on synthetic data on all LAB channels, while among the 4 real images evaluated, our method performs best on the Real 1 and Real 3 images, which exhibit heavier water effects.
However, on the Real 2 and Real 4 images with less distortion, our method performs slightly weaker than or comparably to other approaches. This suggests that our method maintains consistency as water effects increase with altitude, while other approaches may experience greater degradation. Besides evaluating the LAB channels separately, the angular error (Table II) reflects the color similarity in the entire RGB space. The results show that our method performs the best across all data, which is consistent with the visualizations in Fig. 4. Nevertheless, it is important to note that the error in pixel values comes not only from the deviation of color but also from the structural quality of image reconstruction. For example, grayworld-corrected images retain all the features, while images reconstructed with our method are subject to loss of detail due to errors in pose estimation, refraction in water, lens effects, approximations, etc.

\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{Synthetic 1} & \multicolumn{3}{c}{Synthetic 2} & \multicolumn{3}{c}{Real 1} & \multicolumn{3}{c}{Real 2} & \multicolumn{3}{c}{Real 3} & \multicolumn{3}{c}{Real 4} \\
\cline{2-19}
 & L & A & B & L & A & B & L & A & B & L & A & B & L & A & B & L & A & B \\
\hline
Grayworld & 110.3 & 4.664 & 19.72 & 104.4 & 22.14 & 91.36 & 96.65 & 10.70 & 82.34 & 79.05 & **15.13** & 109.59 & 91.44 & 10.72 & 77.05 & 91.51 & **12.40** & 34.01 \\
Hist. Eq. & 124.1 & 5.939 & 23.72 & 124.6 & 21.47 & 63.45 & 78.51 & 15.69 & 87.02 & 108.6 & 43.64 & 110.3 & 73.43 & 20.84 & 85.68 & 113.1 & 39.67 & 69.64 \\
FUnIE-GAN [2] & 108.4 & 75.49 & 29.99 & 106.2 & 61.18 & 36.13 & 62.90 & 33.81 & 61.46 & 83.95 & 97.36 & 49.45 & 87.62 & 24.91 & 76.29 & 96.89 & 89.23 & 54.03 \\
WaterNeRF [11] & 120.3 & 55.65 & 10.26 & 117.6 & 60.17 & 13.72 & 88.68 & 20.86 & 77.59 & 79.61 & 15.42 & 31.05 & 90.80 & 13.51 & 82.91 & **84.72** & 24.79 & 28.46 \\
Ours & **49.73** & **1.146** & **2.390** & **42.36** & **4.076** & **9.012** & **60.50** & **9.678** & **42.49** & **78.44** & 19.18 & **30.40** & **73.02** & **10.29** & **56.85** & 84.86 & 13.56 & **22.05** \\
\hline \hline
\end{tabular}
\end{table} TABLE I: MSE in CIELAB Space \(\downarrow\) (pixel values range 0-255)

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & Syn. 1 & Syn. 2 & Real 1 & Real 2 & Real 3 & Real 4 \\
\hline
Grayworld & 0.0724 & 0.2186 & 0.1381 & 0.0962 & 0.1351 & 0.0475 \\
Hist. Eq. & 0.0758 & 0.2482 & 0.1421 & 0.1916 & 0.1352 & 0.1931 \\
FUnIE-GAN [2] & 0.1107 & 0.1166 & 0.1221 & 0.1655 & 0.2056 & 0.1597 \\
WaterNeRF [11] & 0.1403 & 0.1748 & 0.1303 & 0.0596 & 0.1408 & 0.0567 \\
Ours & **0.0361** & **0.0458** & **0.0837** & **0.0591** & **0.1136** & **0.0412** \\
\hline \hline
\end{tabular}
\end{table} TABLE II: Angular Error in sRGB Space \(\downarrow\) (radians)

## V Discussions

This work is directly applicable to underwater imagery collected when dominant light sources move with the camera as a rigid body, such as in deep water, ice-covered water, or cave water. However, it may fail in the following scenarios:

* When the light source is a combination of onboard strobes (point light sources), natural light, and ambient light, our model is inadequate for accurately representing water effects from mixed light sources.
* In the presence of highly turbid and layered water, scattering effects vary more significantly with depth, and the robot will have to observe the scene at a closer range (breaking approximation III-D1). Modeling backscatter as a constant could potentially lead to failure.
* When the baseline between the camera and onboard light source is long, creating shadows in the observed scene, our model, which assumes co-centered light and camera, cannot accurately represent shadows (breaking approximation III-D2). This issue also arises with robots equipped with multiple cameras or light sources.

## VI Conclusions

This work proposes a unified framework that learns underwater neural scene representations together with water effects. We demonstrate that our method is able to restore the true color of the underwater scene from a sequence of observations at different ranges and perspectives. By approximating the backscatter and simplifying the ray tracing, we avoid estimating the VSF, which is numerically unstable and requires precise calibration of the lighting and imaging system. Additionally, our proposed method generates dense results with end-to-end differentiability and does not rely on any pre-training or assumptions about prior color distributions. Future work will extend our model to address the issues discussed in V. Our long-term goal is to achieve true color correction for all types of underwater lighting conditions.
2308.13242
Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness
In learning-to-rank (LTR), optimizing only the relevance (or the expected ranking utility) can cause representational harm to certain categories of items. Moreover, if there is implicit bias in the relevance scores, LTR models may fail to optimize for true relevance. Previous works have proposed efficient algorithms to train stochastic ranking models that achieve fairness of exposure to the groups ex-ante (or, in expectation), which may not guarantee representation fairness to the groups ex-post, that is, after realizing a ranking from the stochastic ranking model. Typically, ex-post fairness is achieved by post-processing, but previous work does not train stochastic ranking models that are aware of this post-processing. In this paper, we propose a novel objective that maximizes expected relevance only over those rankings that satisfy given representation constraints to ensure ex-post fairness. Building upon recent work on an efficient sampler for ex-post group-fair rankings, we propose a group-fair Plackett-Luce model and show that it can be efficiently optimized for our objective in the LTR framework. Experiments on three real-world datasets show that our group-fair algorithm guarantees fairness alongside usually having better relevance compared to the LTR baselines. In addition, our algorithm also achieves better relevance than post-processing baselines, which also ensures ex-post fairness. Further, when implicit bias is injected into the training data, our algorithm typically outperforms existing LTR baselines in relevance.
Sruthi Gorantla, Eshaan Bhansali, Amit Deshpande, Anand Louis
2023-08-25T08:27:43Z
http://arxiv.org/abs/2308.13242v1
# Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness ###### Abstract In learning-to-rank (LTR), optimizing only the relevance (or the expected ranking utility) can cause representational harm to certain categories of items. Moreover, if there is implicit bias in the relevance scores, LTR models may fail to optimize for true relevance. Previous works have proposed efficient algorithms to train stochastic ranking models that achieve fairness of exposure to the groups ex-ante (or, in expectation), which may not guarantee representation fairness to the groups ex-post, that is, after realizing a ranking from the stochastic ranking model. Typically, ex-post fairness is achieved by post-processing, but previous work does not train stochastic ranking models that are aware of this post-processing. In this paper, we propose a novel objective that maximizes expected relevance only over those rankings that satisfy given representation constraints to ensure ex-post fairness. Building upon recent work on an efficient sampler for ex-post group-fair rankings, we propose a group-fair Plackett-Luce model and show that it can be efficiently optimized for our objective in the LTR framework. Experiments on three real-world datasets show that our group-fair algorithm guarantees fairness alongside usually having better relevance compared to the LTR baselines. In addition, our algorithm also achieves better relevance than post-processing baselines, which also ensures ex-post fairness. Further, when implicit bias is injected into the training data, our algorithm typically outperforms existing LTR baselines in relevance. ## 1 Introduction Rankings of people, places, news, and products have critical real-world applications that influence our worldview. Ranking systems that optimize for relevance alone can amplify societal biases learned from their training data and reinforce certain stereotypes Castillo (2019); Zehlike et al. (2022, 2022). Thus, the field of fairness in learning-to-rank (LTR) has emerged as a response to these concerns, aiming to develop methodologies that ensure equitable and unbiased ranking outcomes. Stochastic ranking models have gained popularity in LTR Cao et al. (2007); Xia et al. (2008); Oosterhuis and de Rijke (2018), primarily due to off-the-shelf gradient-based methods that can be used to optimize these models efficiently. Further, they provide fairness guarantees that deterministic rankings for LTR cannot, e.g., ensuring that multiple items or groups have an equal (or some guaranteed minimum) probability of appearing at the top. There are two types of fairness guarantees one could ask for in a stochastic ranking: _ex-ante_ and _ex-post_. Ex-ante fairness asks for satisfying fairness in expectation, i.e., before the stochastic ranking model realizes a ranking. In contrast, ex-post fairness, often referred to as outcome fairness, requires fairness of the actual ranking after one is generated by the stochastic ranking model1. To the best of our knowledge, recent works can guarantee ex-post fairness only by deterministic or randomized post-processing of the stochastic ranking model. But, in-processing for ex-post fairness has not been studied before this work. Footnote 1: The choice between ex-post and ex-ante fairness depends on the context, available data, and additional ethical considerations. We consider the well-known Plackett-Luce (PL) model as our stochastic ranking model. 
PL model has been used in many fields, such as statistics Plackett (1975), Gormley and Murphy (2009), psychology Luce (1959), social choice theory Soufiani et al. (2012), econometrics Beggs et al. (1981), amongst others. Recent work has increased the popularity, scope, and efficiency of the PL model in LTR Singh and Joachims (2019), Diaz et al. (2020), Oosterhuis (2022). It is also shown to be robust Bruch et al. (2020) and effective for exploration in online LTR Oosterhuis and de Rijke (2018, 2021). Faster and practical algorithms for novel and unbiased estimators of the gradient of the expected ranking utility-for the PL model-have been proposed recently Oosterhuis (2021, 2022). These algorithms can efficiently optimize PL models for not just relevance (e.g., discounted cumulative gain) but also certain fairness notions that can be expressed as an expectation over the stochastic rankings (e.g., fair exposure). Due to the inherent randomization in the PL model, ex-post fairness guarantees are more challenging to incorporate in the training process such that the resultant model can be optimized efficiently. ### Motivating Example for Ex-Post Fairness We start by demonstrating the importance of ex-post fairness in real-world ranking systems. Consider a job recommendation platform such as _LinkedIn Talent Search2_, where a stochastic ranking algorithm determines the order in which potential interview candidates from different demographic groups are recommended to recruiters. Let us say there are candidates from two groups - \(G_{1}\), a majority group with high merit scores, and \(G_{2}\), a minority group (usually underprivileged) whose merit scores are underestimated due to biases present in the training data used for LTR. These biases may originate from historical imbalances, social prejudices, or systemic inequalities in the data. Footnote 2: [https://business.linkedin.com/talent-solutions](https://business.linkedin.com/talent-solutions) The stochastic ranking model must output the top-\(10\) candidates every time a recruiter queries for a list of suitable candidates. Consider a particular stochastic ranking that (1) chooses a group \(G_{1}\) or \(G_{2}\) with probability \(0.5\) each, and (2) shows the top \(10\) candidates from the group chosen in Step 1. This ensures _equal representation_ of both the groups ex-ante because there will be \(5\) candidates in the top \(10\) from each group, in expectation. However, none of the rankings output by the stochastic ranking satisfies equal representation ex-post. Such rankings may not be aligned with the ethical and diversity hiring policies of the recruiters (or companies). ### Our Contributions The main contribution of our work is a novel objective that maximizes relevance for a Group-Fair-PL model, where the relevance (or the expected ranking utility) is taken over only those rankings that satisfy given representation constraints for certain sensitive categories or groups of items. We show that a recent post-processing sampler for ex-post group-fair rankings Gorantla et al. (2022) combined with recent ideas to optimize the group-wise PL model Oosterhuis (2021, 2022) can be used to optimize this model efficiently. As a result, we get the best of both worlds: the efficiency of optimization in a fairness-aware in-processing objective and the ex-post fairness guarantees of post-processing methods. 
Our experiments on three real-world datasets, HMDA, German Credit, and MovieLens, show that our model guarantees ex-post fairness and achieves higher relevance compared to the baselines. Implicit bias in training data can often negatively affect ranking models optimized for relevance Celis et al. (2020). When implicit bias is injected into the training data as a stress test or audit for fair ranking algorithms, our algorithm outperforms existing baselines in fairness and relevance. The rest of the paper is organized as follows: In Section 2, we discuss closely related work in fair ranking to point out the significance and novelty of our results. Section 3 defines our novel relevance objective with ex-post fairness guarantees. In Section 4, we show how to optimize the Group-Fair-PL model for our objective. Section 5 contains an experimental validation of our relevance and fairness guarantees. ## 2 Related Work Stochastic ranking models have been widely studied in LTR, as they can be differentiable, and thus one can compute the gradient of a ranking utility to be optimized (e.g., discounted cumulative gain). In particular, the PL ranking model has been a popular model in recent work for optimizing relevance and fairness Oosterhuis (2021, 2022), Singh and Joachims (2019). Recent work has proposed efficient and practical algorithms, namely, PL-Rank and its variants, for optimizing PL ranking models using estimates of the gradient Oosterhuis (2021). In addition to optimizing ranking utility, the PL-Rank algorithm also optimizes _fairness of exposure_ - an ex-ante fairness metric Singh and Joachims (2019). Yadav et al. (2021) also optimize a PL ranking model for both utility and fairness of exposure in the presence of position bias, where items that are ranked higher receive more positive relevance feedback. Similar to these works, we too study the PL ranking model for LTR. However, we propose a variant that incorporates ex-post fairness rather than just ex-ante fairness. Broadly, the fair ranking algorithms can be divided into two groups: _post-processing_ and _in-processing_. Post-processing algorithms process the output of a given ranking model to incorporate group-fairness guarantees about sufficient representation of every group (especially, underprivileged demographic groups) in the top positions or top prefixes Celis et al. (2018), Geyik et al. (2019), Asudeh et al. (2019). As a result, the underlying ranking model may not be optimized in anticipation of the post-processing. In-processing algorithms, on the other hand, incorporate fairness controls to modify the objective in learning-to-rank Singh and Joachims (2018, 2019), Oosterhuis (2021). As a consequence, previous work on post-processing algorithms in fair ranking can provide ex-post (actual) guarantees on the group-wise representation in the top ranks Celis et al. (2018), Geyik et al. (2019), whereas in-processing algorithms can only provide ex-ante (expected) guarantees on group-wise exposure Singh and Joachims (2018) or amortized individual fairness Biega et al. (2018). The major drawback of the existing LTR algorithms is that none of them optimize relevance while ensuring that every output ranking satisfies group-wise representation guarantees in the top ranks. Our work aims to address this gap. Recently Gorantla et al. (2022) proposed a randomized post-processing algorithm that gives ex-post group-fairness guarantees. 
Their algorithm works in two steps: the first step generates a random group-fair allocation of the top-\(k\) positions that satisfies given group-wise representation constraints, and the second step fills these positions consistently with the intra-group ranking within each group. Their algorithm only requires the ordinal ranking within each group as input, not the individual scores for items or across-group comparisons. Their motivation for studying this setting was unreliable comparisons, implicit bias, and incomplete information in ranking. However, their first step of sampling a group-fair allocation is closely related to our work, and we apply it in the optimization of our ex-post group-fair Plackett-Luce model. Therefore, it is also the most relevant post-processing baseline chosen in our experiments.

## 3 Ex-Post Fairness in Ranking

Preliminaries. Let \(\mathcal{I}\) denote the set of items (or documents). Let \(\mathsf{S}_{k}(\mathcal{I})\) be the set of all \(k\)-sized permutations of the items in \(\mathcal{I}\). In the learning-to-rank setup, for any query \(q\), the goal is to output the top-\(k\) ranking of relevant items. Let \(R_{q,d}\) be an indicator random variable that takes the value of \(1\) if item \(d\) is relevant to \(q\) and \(0\) otherwise. The probability of \(d\) being relevant to \(q\) is represented as \(\rho_{d}:=P(R_{q,d}=1)\). Let \(\sigma\in\mathsf{S}_{k}(\mathcal{I})\) represent a ranking and let \(\sigma(i)\) represent the item in rank \(i\). We use \(\sigma(i:i^{\prime})\) for any \(1\leqslant i<i^{\prime}\leqslant k\) to represent the set of items in ranks \(i\) to \(i^{\prime}\) included in ranking \(\sigma\), that is, \(\sigma(i:i^{\prime}):=\{\sigma(i),\sigma(i+1),\ldots,\sigma(i^{\prime})\}\). Note that \(\sigma(1:k)\) represents the items in the ranking as a _set_, whereas \(\sigma\) itself is an ordered representation of this set of items. In the following, we drop \(q\) from the notation since, in the rest of the paper, we will be working with a fixed query \(q\). Previous works have considered stochastic ranking models since they offer equity in attention distribution across items Singh and Joachims (2019). They are preferred over deterministic rankings for diversity Xia et al. (2017) and robustness Bruch et al. (2020). We also study stochastic ranking models. We use \(\pi\) to denote a stochastic ranking model (or policy) and \(\Pi\) to denote the set of all stochastic ranking policies. The expected relevance metric for \(\pi\in\Pi\) is defined as follows,

\[\mathcal{R}(\pi):=\sum_{\sigma\in\mathsf{S}_{k}(\mathcal{I})}\pi[\sigma]\sum_{i\in[k]}\theta_{i}\rho_{\sigma(i)}, \tag{1}\]

where \(\theta_{i}\in\mathbb{R}_{\geqslant 0}\) are the position discounts associated with each rank \(i\in[k]\) and \(\pi[\sigma]\) represents the probability of sampling \(\sigma\) according to \(\pi\).

Policy Gradients for Plackett-Luce. We use \(\pi^{\text{PL}}\) to represent the Plackett-Luce (PL) model. This is a popular stochastic ranking model that, given a prediction model \(m\) that predicts log scores \(m(d)\) for each item \(d\), samples a ranking from the distribution defined by the individual scores of the items as follows,

\[\forall\sigma\in\mathsf{S}_{k}(\mathcal{I}),\qquad\pi^{\text{PL}}[\sigma]:=\prod_{i=1}^{k}\frac{e^{m(\sigma(i))}}{\sum\limits_{d\in\mathcal{I}\setminus\sigma(1:i-1)}e^{m(d)}}.\]

Singh and Joachims (2019) have proposed using policy gradients to train a PL ranking model to maximize expected relevance.
They utilize the famous log trick from the REINFORCE algorithm Williams (2004) to compute the gradients of the expected relevance metric and use stochastic gradient descent to update the parameters of the PL model. Oosterhuis (2022) has developed a computationally efficient way to compute the gradients of the expected relevance metric for the PL model. As a result, PL models can now be trained efficiently to maximize the expected relevance. Group-Fair Ranking.Suppose that the set of items \(\mathcal{I}\) can be partitioned into \(\ell\) groups \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots,\mathcal{I}_{\ell}\) based on the group membership (based on age, race, gender, etc.). For any integer \(t\), we use \([t]\) to denote the set \(\{1,2,\ldots,t\}\). We consider the representation-based group fairness constraints in the top-\(k\) rankings, where, for each group \(j\in[\ell]\), we are given a lower bound \(L_{j}\) and an upper bound \(U_{j}\) on the number of items that can appear in the top-\(k\) ranking from that group. Let \(\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\) represent all possible group-fair top-\(k\) rankings. That is, \[\mathsf{S}_{k}^{\text{fair}}(\mathcal{I}):=\left\{\sigma\in\mathsf{S}_{k}( \mathcal{I}):L_{j}\leqslant|\sigma(1:k)\cap\mathcal{I}_{j}|\leqslant U_{j}, \forall j\in[\ell]\right\}.\] Let \(\mathsf{G}_{k}(\ell)\) represent the set of all _group assignments_ of the top-\(k\) rankings for \(\ell\) groups. That is, for any element \(\gamma\in\mathsf{G}_{k}(\ell)\), \(\gamma(i)\) represents the group of the item in rank \(i\). \[\mathsf{G}_{k}(\ell):=\left\{\gamma\in[\ell]^{k}\right\},\;\text{ or equivalently, }\;\mathsf{G}_{k}(\ell):=[\ell]^{k}.\] Let \(g:\mathcal{I}\rightarrow[\ell]\) be the group membership function for the items. For an item \(d\), \(g(d)\) represents its group membership. We use \(g(\sigma)\) to represent the vector of group memberships of the items in the ranking \(\sigma\). Note that for any \(\sigma\), \(g(\sigma)\in\mathsf{G}_{k}(\ell)\). We can then define \(\mathsf{G}_{k}^{\text{fair}}(\ell)\) as the set of group assignments that satisfy the group fairness constraints. \[\mathsf{G}_{k}^{\text{fair}}(\ell):=\left\{g(\sigma)\in[\ell]^{k}:\sigma\in \mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\right\}.\] Then for any \(\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\), \(g(\sigma)\in\mathsf{G}_{k}^{\text{fair}}(\ell)\). There is a many-to-one correspondence between \(\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\) and \(\mathsf{G}_{k}^{\text{fair}}(\ell)\). **Definition 3.1** (Ex-Post Fair Policy).: _A policy \(\pi\in\Pi\) is ex-post fair if each ranking \(\sigma\) sampled by the policy \(\pi\) satisfies representation-based fairness constraints. That is,_ \[\forall\sigma\sim\pi,\;\;g(\sigma)\not\in\mathsf{G}_{k}^{\text{fair}}(\ell)\; \implies\;\pi[\sigma]=0.\] Limitations of Plackett-Luce.There have been two significant contributions toward fair ranking with PL models. We list them and point out their limitations below. 1. **In-processing.**Asudeh et al. (2019) and Oosterhuis (2021) have proposed policy gradients-based optimization for expected relevance and equity of expected exposure of groups of items for PL models. The major drawback of these methods is that fairness is measured in expectation. Therefore, the trained PL model may not satisfy ex-post fairness. 2. **Post-processing.**Singh and Joachims (2018); Celis et al. (2018); Gorantla et al. (2022); Geyik et al. 
(2019) and many other previous works have proposed algorithms to post-process the scores or the ranking output by any LTR model (or specifically PL) to satisfy fairness. Ex-post fairness is satisfied in this case, but the trained LTR model is unaware of the post-processing that will be applied to its scores. Hence, it may end up learning a bad solution. We overcome these limitations by incorporating ex-post fairness during the training process of PL-based LTR. Towards this end, we propose a different objective function for stochastic ranking models.

Proposed Optimization Objective. We ask for maximizing expected relevance over ex-post group-fair rankings. Then the fair expected relevance can be written as follows,

\[\mathcal{R}^{\text{fair}}(\pi):=\begin{cases}\sum\limits_{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})}\pi[\sigma]\sum\limits_{i\in[k]}\theta_{i}\rho_{\sigma(i)},&\text{if $\pi$ is ex-post fair}\\ 0&\text{otherwise}.\end{cases} \tag{2}\]

In general, the PL model may not satisfy ex-post fairness. Consider the case where the predicted scores of all the items by model \(m\) are non-zero. Then every ranking in \(\mathsf{S}_{k}(\mathcal{I})\) is sampled with a non-zero probability in the PL model based on these scores. Therefore, even if an optimal PL model that maximizes \(\mathcal{R}^{\text{fair}}\) is ex-post fair, the intermediate PL models during the training process may not be ex-post fair. Then \(\mathcal{R}^{\text{fair}}\) for intermediate PL models will evaluate to \(0\), resulting in all the gradients being set to \(0\). Hence, we cannot train the PL model with \(\mathcal{R}^{\text{fair}}\). In fact, the only way to train a model for \(\mathcal{R}^{\text{fair}}\) is to make sure that the model always samples fair rankings.

Other approaches. We could also optimize a different relevance metric \(\widehat{\mathcal{R}}\) defined over \(\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\),

\[\widehat{\mathcal{R}}(\pi):=\sum_{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})}\pi[\sigma]\sum_{i\in[k]}\theta_{i}\rho_{\sigma(i)}.\]

For any ex-post fair policy \(\pi\), \(\widehat{\mathcal{R}}(\pi)=\mathcal{R}^{\text{fair}}(\pi)\). Moreover, if the fairness constraints are vacuous, that is, \(L_{j}=0\) and \(U_{j}=k\) for all \(j\in[\ell]\), then \(\widehat{\mathcal{R}}(\pi)=\mathcal{R}^{\text{fair}}(\pi)=\mathcal{R}(\pi)\). Note that \(\widehat{\mathcal{R}}\) does not strictly enforce ex-post fairness while training. Hence the PL model can be trained to optimize \(\widehat{\mathcal{R}}\). One could use _rejection sampling_ to enforce ex-post fairness during and after training. That is, to output a ranking from this model, we need to sample rankings from this model until we see a fair ranking. However, in general, the probability of seeing a fair ranking may be very small. For example, if the fairness constraints are such that \(L_{j}=U_{j}\) for all but a constant number of groups in \([\ell]\), and the predicted scores of the items are such that from each group \(j\in[\ell]\), \(k\) items have a score of \(1\) and the others have a score of \(0\), then the probability of seeing a fair ranking is \(\frac{k^{c}}{k^{\ell}}\), where \(c\) is a constant. This means that, in the PL model, one needs to sample \(O(k^{\ell})\) many rankings in expectation before seeing a fair ranking3, which is computationally inefficient. This also affects the training process, since the estimate of the gradient only makes sense if we have enough samples that are fair rankings.
Footnote 3: This follows from the fact that the expected value of a geometric random variable with parameter \(p:=\frac{k^{c}}{k^{\ell}}\) is \(1/p\).

For these reasons, asking for stochastic ranking models that can be trained with \(\mathcal{R}^{\text{fair}}\) as an objective is well-motivated. In the next section, we describe our model, which satisfies ex-post fairness and from which we can sample group-fair rankings efficiently. As a result, we get an efficient algorithm to compute gradients of our proposed model for optimizing \(\mathcal{R}^{\text{fair}}\). We can then use the stochastic gradient descent method to train our model.

## 4 Group-Fair Plackett-Luce Model

Let \(\pi^{\text{fair}}\) represent the Group-Fair-PL model we propose. In \(\pi^{\text{fair}}\), we have a two-step process to sample ex-post group-fair rankings: 1. Sample a top-\(k\) group assignment \(\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)\). 2. Sample a top-\(k\) ranking \(\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\) such that \(g(\sigma)=\gamma\). Then the Group-Fair-PL model can be written as,

\[\pi^{\text{fair}}[\sigma]=\mu[g(\sigma)]\pi^{\text{fair}}[\sigma\mid g(\sigma)], \tag{3}\]

where \(\mu[\cdot]\) is the distribution over \(\{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)\}\), and \(\pi^{\text{fair}}[\cdot\mid\gamma]\) is a conditional distribution over \(\{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I}):g(\sigma)=\gamma\}\). It is clear that, to achieve ex-post fairness, we can only sample group-fair group assignments in Step 1. For Step 2, we use a PL model over the items within each group, restricted to the ranks assigned to that group according to \(\gamma\). Therefore, in the Group-Fair-PL model, any group-fair ranking \(\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\) is sampled with probability,

\[\pi^{\text{fair}}[\sigma]:=\mu[g(\sigma)]\prod_{i=1}^{k}\frac{e^{m(\sigma(i))}}{\sum\limits_{d\in\underbrace{\mathcal{I}_{g(\sigma(i))}}_{\text{items from group}}\setminus\sigma(1:i-1)}e^{m(d)}}, \tag{4}\]

and any non-group-fair ranking is sampled with probability \(0\). Therefore, \(\mathcal{R}^{\text{fair}}\) defined in (2) is always evaluated in the **if** case for our Group-Fair-PL model. Let \(\sigma_{j}\) be the (sub-)ranking of items from group \(j\) in \(\sigma\). We use \(\pi_{j}^{\text{PL}}\) to represent the group-wise PL model for group \(j\). Note that for any \(j,j^{\prime}\in[\ell]\), \(\sigma_{j}\) and \(\sigma_{j^{\prime}}\) are sampled independently from \(\pi_{j}^{\text{PL}}\) and \(\pi_{j^{\prime}}^{\text{PL}}\), respectively. Given a group assignment \(\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)\), let \(\psi_{j}(\gamma)\subseteq[k]\) be the subset of the ranks assigned to group \(j\) according to \(\gamma\). Since \(\mathcal{I}_{1},\ldots,\mathcal{I}_{\ell}\) form a partition of \(\mathcal{I}\), \(\psi_{1}(\gamma),\ldots,\psi_{\ell}(\gamma)\) form a partition of \([k]\). Therefore, Equation (4) can be written as,

\[\pi^{\text{fair}}[\sigma]:=\mu[g(\sigma)]\prod_{j=1}^{\ell}\prod_{i\in\psi_{j}(g(\sigma))}\frac{e^{m(\sigma(i))}}{\sum\limits_{d\in\mathcal{I}_{g(\sigma(i))}\setminus\sigma(1:i-1)}e^{m(d)}}=\mu[g(\sigma)]\prod_{j=1}^{\ell}\pi_{j}^{\text{PL}}[\sigma_{j}]. \tag{5}\]

A sketch of Step 2 of this sampling process is given below.
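The following NumPy sketch samples a ranking from Eq. (4) given a fair group assignment \(\gamma\): at each rank \(i\), it performs a softmax (Plackett-Luce) draw over the not-yet-ranked items of group \(\gamma(i)\). The function and argument names are our illustrative choices, not part of the formal model.

```python
import numpy as np

def sample_groupfair_pl(scores, item_groups, gamma, rng=None):
    """Step 2 of the Group-Fair-PL sampler (Eq. (4)).
    scores: dict item -> log score m(d); item_groups: dict item -> group id;
    gamma: length-k fair group assignment (one group id per rank)."""
    rng = rng or np.random.default_rng()
    remaining = {}
    for d, j in item_groups.items():
        remaining.setdefault(j, []).append(d)
    ranking = []
    for j in gamma:                          # gamma fixes the group at each rank
        pool = remaining[j]
        logits = np.array([scores[d] for d in pool])
        p = np.exp(logits - logits.max())    # numerically stable softmax
        p /= p.sum()
        idx = rng.choice(len(pool), p=p)
        ranking.append(pool.pop(idx))        # sample without replacement
    return ranking
```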
**Lemma 4.1**.: \(\pi^{\text{fair}}\) _is a valid probability distribution over \(\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\)._

Proof.: It is clear that \(\pi^{\text{fair}}[\sigma]\geqslant 0\) for each \(\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\). Moreover, non-group-fair group assignments are sampled from \(\mu\) with probability \(0\). Therefore, \(\pi^{\text{fair}}[\sigma]=0\) for every \(\sigma\not\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\). Further,

\[\sum_{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})}\pi^{\text{fair}}[\sigma]=\sum_{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})}\mu[g(\sigma)]\prod_{j\in[\ell]}\pi_{j}^{\text{PL}}[\sigma_{j}\mid g(\sigma)]=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\sum_{\begin{subarray}{c}\sigma\in\mathsf{S}_{k}(\mathcal{I})\\ \text{s.t. }g(\sigma)=\gamma\end{subarray}}\prod_{j\in[\ell]}\pi_{j}^{\text{PL}}[\sigma_{j}\mid\gamma]\]
\[=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\left(\sum_{\sigma_{1}\in\mathsf{S}_{|\psi_{1}(\gamma)|}(\mathcal{I}_{1})}\pi_{1}^{\text{PL}}[\sigma_{1}\mid\gamma]\right)\left(\sum_{\sigma_{2}\in\mathsf{S}_{|\psi_{2}(\gamma)|}(\mathcal{I}_{2})}\pi_{2}^{\text{PL}}[\sigma_{2}\mid\gamma]\right)\cdots\left(\sum_{\sigma_{\ell}\in\mathsf{S}_{|\psi_{\ell}(\gamma)|}(\mathcal{I}_{\ell})}\pi_{\ell}^{\text{PL}}[\sigma_{\ell}\mid\gamma]\right)\]
\[=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]=1,\]

where the second-to-last equality holds because the group-wise rankings \(\sigma_{1},\ldots,\sigma_{\ell}\) are sampled independently, and the last equality holds since each \(\pi_{j}^{\text{PL}}[\cdot\mid\gamma]\) and \(\mu\) are probability distributions. Hence \(\pi^{\text{fair}}\) is a valid probability distribution over \(\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})\). ∎

It remains to specify the distribution \(\mu\) over fair group assignments in Step 1. We use the sampler of Gorantla et al. (2022), which proceeds as follows: 1. It first samples a representation vector \((x_{1},x_{2},\ldots,x_{\ell})\) with \(L_{j}\leqslant x_{j}\leqslant U_{j}\) and \(x_{1}+\cdots+x_{\ell}=k\), where \(x_{j}\) is the number of items from group \(j\) to be sampled in the top-\(k\) ranking. 2. Then it samples a group assignment \(\gamma=(\gamma_{1},\gamma_{2},\ldots,\gamma_{k})\) to be a uniform random permutation of the vector \((\underbrace{1,1,\ldots,1}_{x_{1}\text{ times}},\underbrace{2,2,\ldots,2}_{x_{2}\text{ times}},\ldots,\underbrace{\ell,\ell,\ldots,\ell}_{x_{\ell}\text{ times}})\). Below, we re-state their theorem about the time taken to sample a fair group assignment from this distribution.

**Theorem 4.2** (Theorem 4.1 in Gorantla et al. (2022)).: _There is a dynamic programming-based algorithm that samples a group assignment \(\gamma\) in time \(O(k^{2}\ell)\)._

We sketch one such sampler below.
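To make Theorem 4.2 concrete, the following is a minimal sketch of a dynamic-programming sampler, under the illustrative assumption that \(\mu\) is the uniform distribution over fair group assignments; Gorantla et al. (2022) specify the exact distribution and guarantees. The table \(f[j][t]\) counts length-\(t\) assignments using only the first \(j\) groups, filled in time \(O(k^{2}\ell)\), and the counts are then back-sampled.

```python
import math
import random

def sample_fair_gamma(k, L, U, seed=None):
    """Samples gamma of length k with each group j (0-indexed) appearing
    between L[j] and U[j] times, uniformly over all such assignments.
    f[j][t] = number of valid length-t assignments over groups 0..j-1."""
    ell, rng = len(L), random.Random(seed)
    C = [[math.comb(t, x) for x in range(k + 1)] for t in range(k + 1)]
    f = [[0] * (k + 1) for _ in range(ell + 1)]
    f[0][0] = 1
    for j in range(1, ell + 1):                   # O(k^2 * ell) table fill
        for t in range(k + 1):
            f[j][t] = sum(C[t][x] * f[j - 1][t - x]
                          for x in range(L[j - 1], min(U[j - 1], t) + 1))
    if f[ell][k] == 0:
        raise ValueError("no fair assignment exists")
    counts, t = [0] * ell, k
    for j in range(ell, 0, -1):                   # back-sample group counts
        lo, hi = L[j - 1], min(U[j - 1], t)
        weights = [C[t][x] * f[j - 1][t - x] for x in range(lo, hi + 1)]
        x = lo + rng.choices(range(len(weights)), weights)[0]
        counts[j - 1], t = x, t - x
    gamma = [j for j in range(ell) for _ in range(counts[j])]
    rng.shuffle(gamma)                            # Step 2: uniform permutation
    return gamma
```

Sampling counts with weight \(\binom{k}{x_{\ell}}f[\ell-1][k-x_{\ell}]\) and then uniformly permuting the resulting multiset makes every fair assignment equally likely, since each count vector contributes exactly \(k!/\prod_{j}x_{j}!\) distinct assignments.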
Therefore, this distribution is efficiently samplable. Moreover, this distribution also satisfies additional desirable properties, which we discuss further in Section 5. Our following result then shows that using the distribution given by Gorantla et al. (2022) for \(\mu\) in our Group-Fair-PL model gives us an efficient algorithm to compute the gradient of \(\mathcal{R}^{\text{fair}}\) with respect to the predicted scores \(m\).

**Theorem 4.3.** _Algorithm 1 estimates the gradient of the relevance metric \(\mathcal{R}^{\text{fair}}\) in the Group-Fair-PL model in time \(O\left(Mk^{2}\ell+M\left(|\mathcal{I}|+k\ell\log|\mathcal{I}|\right)\right)\)._

Proof of Theorem 4.3. Note that given a group assignment \(\gamma\), the probability of sampling an item \(d\) at rank \(i\) depends only on the items from group \(\gamma(i)\) that appear in ranks \(1\) to \(i-1\), since in our Group-Fair-PL model only items from group \(\gamma(i)\) are sampled at rank \(i\). Let \(\psi_{j}(\gamma)\) represent the set of ranks assigned to group \(j\) according to the group assignment \(\gamma\), for each \(j\in[\ell]\). Then,
\[\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})=\sum_{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})}\pi^{\text{fair}}[\sigma]\left(\sum_{i\in[k]}\theta_{i}\rho_{\sigma(i)}\right)=\sum_{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})}\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\cdot\pi^{\text{fair}}[\sigma\mid\gamma]\left(\sum_{i\in[k]}\theta_{i}\rho_{\sigma(i)}\right)=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\sum_{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})}\pi^{\text{fair}}[\sigma\mid\gamma]\left(\sum_{i\in[k]}\theta_{i}\rho_{\sigma(i)}\right).\]
Equation (5) gives us
\[\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\left(\sum_{\sigma\in\mathsf{S}_{k}^{\text{fair}}(\mathcal{I})}\left(\prod_{j\in[\ell]}\pi_{j}^{\text{PL}}[\sigma_{j}\mid\gamma]\right)\left(\sum_{i\in[k]}\theta_{i}\rho_{\sigma(i)}\right)\right).\]
Now, since \(\psi_{1}(\gamma),\psi_{2}(\gamma),\ldots,\psi_{\ell}(\gamma)\) form a partition of \([k]\), we can split the inner reward sum by group and factor the sum over \(\sigma\) into nested sums over the group-wise sub-rankings:
\[\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\left(\sum_{\sigma_{1}\in\mathsf{S}_{|\psi_{1}(\gamma)|}(\mathcal{I}_{1})}\pi_{1}^{\text{PL}}[\sigma_{1}\mid\gamma]\sum_{\sigma_{2}\in\mathsf{S}_{|\psi_{2}(\gamma)|}(\mathcal{I}_{2})}\pi_{2}^{\text{PL}}[\sigma_{2}\mid\gamma]\cdots\sum_{\sigma_{\ell}\in\mathsf{S}_{|\psi_{\ell}(\gamma)|}(\mathcal{I}_{\ell})}\pi_{\ell}^{\text{PL}}[\sigma_{\ell}\mid\gamma]\left(\sum_{j\in[\ell]}\sum_{i\in\psi_{j}(\gamma)}\theta_{i}\rho_{\sigma(i)}\right)\right).\]
The innermost sum can be written as
\[\sum_{\sigma_{\ell}\in\mathsf{S}_{|\psi_{\ell}(\gamma)|}(\mathcal{I}_{\ell})}\pi_{\ell}^{\text{PL}}[\sigma_{\ell}\mid\gamma]\left(\sum_{i\in\psi_{\ell}(\gamma)}\theta_{i}\rho_{\sigma(i)}\right)+\sum_{j\in[\ell-1]}\sum_{i\in\psi_{j}(\gamma)}\theta_{i}\rho_{\sigma(i)},\]
using \(\sum_{\sigma_{\ell}}\pi_{\ell}^{\text{PL}}[\sigma_{\ell}\mid\gamma]=1\). Taking the summation \(\sum_{\sigma_{\ell-1}\in\mathsf{S}_{|\psi_{\ell-1}(\gamma)|}(\mathcal{I}_{\ell-1})}\pi_{\ell-1}^{\text{PL}}[\sigma_{\ell-1}\mid\gamma]\) on both sides, and again using that \(\pi_{\ell-1}^{\text{PL}}[\cdot\mid\gamma]\) sums to one, yields
\[\sum_{\sigma_{\ell}\in\mathsf{S}_{|\psi_{\ell}(\gamma)|}(\mathcal{I}_{\ell})}\pi_{\ell}^{\text{PL}}[\sigma_{\ell}\mid\gamma]\left(\sum_{i\in\psi_{\ell}(\gamma)}\theta_{i}\rho_{\sigma(i)}\right)+\sum_{\sigma_{\ell-1}\in\mathsf{S}_{|\psi_{\ell-1}(\gamma)|}(\mathcal{I}_{\ell-1})}\pi_{\ell-1}^{\text{PL}}[\sigma_{\ell-1}\mid\gamma]\left(\sum_{i\in\psi_{\ell-1}(\gamma)}\theta_{i}\rho_{\sigma(i)}\right)+\sum_{j\in[\ell-2]}\sum_{i\in\psi_{j}(\gamma)}\theta_{i}\rho_{\sigma(i)}.\]
Repeating the above until we reach \(j\in[1]\) in the last term, we get
\[\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\left(\sum_{j\in[\ell]}\sum_{\sigma_{j}\in\mathsf{S}_{|\psi_{j}(\gamma)|}(\mathcal{I}_{j})}\pi_{j}^{\text{PL}}[\sigma_{j}\mid\gamma]\left(\sum_{i\in\psi_{j}(\gamma)}\theta_{i}\rho_{\sigma(i)}\right)\right)=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\left(\sum_{j\in[\ell]}\mathcal{R}_{j}(\pi_{j}^{\text{PL}})\right), \tag{6}\]
where \(\mathcal{R}_{j}(\pi_{j}^{\text{PL}})\) is the reward obtained from the group-wise Plackett-Luce model \(\pi_{j}^{\text{PL}}\) for the ranks assigned to group \(j\) according to the group assignment \(\gamma\). That is,
\[\mathcal{R}_{j}(\pi_{j}^{\text{PL}}):=\sum_{\sigma_{j}\in\mathsf{S}_{|\psi_{j}(\gamma)|}(\mathcal{I}_{j})}\pi_{j}^{\text{PL}}[\sigma_{j}\mid\gamma]\left(\sum_{i\in\psi_{j}(\gamma)}\theta_{i}\rho_{\sigma(i)}\right).\]
Now, for a fixed item \(d\), the derivative with respect to the score of \(d\) is
\[\frac{\delta}{\delta m(d)}\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})=\frac{\delta}{\delta m(d)}\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\Bigg(\sum_{j\in[\ell]}\mathcal{R}_{j}(\pi_{j}^{\text{PL}})\Bigg).\]
Since in our Group-Fair-PL model the group assignment \(\gamma\) is sampled independently of the score \(m(d)\), we have
\[\frac{\delta}{\delta m(d)}\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})=\sum_{\gamma\in\mathsf{G}_{k}^{\text{fair}}(\ell)}\mu[\gamma]\Bigg(\sum_{j\in[\ell]}\frac{\delta}{\delta m(d)}\mathcal{R}_{j}(\pi_{j}^{\text{PL}})\Bigg)=\mathbb{E}_{\gamma\sim\mu}\left[\sum_{j\in[\ell]}\frac{\delta}{\delta m(d)}\mathcal{R}_{j}(\pi_{j}^{\text{PL}})\right]. \tag{7}\]
Naively applying PL-Rank-3 with \(N\) samples can estimate the gradient \(\frac{\delta}{\delta m(d)}\mathcal{R}_{j}(\pi_{j}^{\text{PL}})\), for a fixed \(\gamma\), in time \(O\left(N\left(|\mathcal{I}_{j}|+k\log|\mathcal{I}_{j}|\right)\right)\) for group \(j\in[\ell]\). Let us say we take \(M\) samples to estimate the outer expectation. From Theorem 4.2 we have that the time taken to sample one group assignment \(\gamma\) is \(O\left(k^{2}\ell\right)\); therefore, sampling \(M\) group assignments takes time \(O\left(Mk^{2}\ell\right)\). Then, the total time taken to compute \(\frac{\delta}{\delta m(d)}\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})\) is
\[O\left(M\left(k^{2}\ell+\sum_{j\in[\ell]}N\left(|\mathcal{I}_{j}|+k\log|\mathcal{I}_{j}|\right)\right)\right)=O\left(Mk^{2}\ell+MN\left(|\mathcal{I}|+k\ell\log|\mathcal{I}|\right)\right). \tag{8}\]

Correctness for \(N=1\). Let \(\text{rank}(\sigma,d)\) represent the rank assigned to item \(d\) in \(\sigma\).
Then, from the PL-Rank-3 algorithm in Oosterhuis (2022), we know that
\[\frac{\delta}{\delta m(d)}\mathcal{R}_{j}(\pi_{j}^{\text{PL}})=\mathbb{E}_{\sigma_{j}\mid\gamma}\left[PR_{\sigma,d}^{(j)}+e^{m(d)}\left(\rho_{d}DR_{\sigma,d}^{(j)}-RI_{\sigma,d}^{(j)}\right)\right], \tag{9}\]
where
\[PR^{(j)}_{\sigma,i}=\sum_{i^{\prime}\in[i+1,k]\cap\psi_{j}(\gamma)}\theta_{i^{\prime}}\rho_{\sigma(i^{\prime})}\quad\text{and}\quad PR^{(j)}_{\sigma,d}=PR^{(j)}_{\sigma,\text{rank}(\sigma,d)},\]
\[RI^{(j)}_{\sigma,i}=\sum_{i^{\prime}\in[i+1,k]\cap\psi_{j}(\gamma)}\frac{PR^{(j)}_{\sigma,i^{\prime}}}{\sum_{d^{\prime}\in\mathcal{I}_{j}\setminus\sigma(1:i^{\prime}-1)}e^{m(d^{\prime})}}\quad\text{and}\quad RI^{(j)}_{\sigma,d}=RI^{(j)}_{\sigma,\text{rank}(\sigma,d)},\]
\[DR^{(j)}_{\sigma,i}=\sum_{i^{\prime}\in[i+1,k]\cap\psi_{j}(\gamma)}\frac{\theta_{i^{\prime}}}{\sum_{d^{\prime}\in\mathcal{I}_{j}\setminus\sigma(1:i^{\prime}-1)}e^{m(d^{\prime})}}\quad\text{and}\quad DR^{(j)}_{\sigma,d}=DR^{(j)}_{\sigma,\text{rank}(\sigma,d)}.\]
Note that for a fixed ranking \(\sigma\), PL-Rank-3 computes the term inside the expectation efficiently, in time \(O(|\mathcal{I}_{j}|+k\log|\mathcal{I}_{j}|)\). Hence, even if the position discount values vary between different samples, or if the length of the ranking \(|\psi_{j}(\gamma)|\) changes between different samples, we can still use the PL-Rank-3 algorithm to compute the term inside the expectation for each sample independently and efficiently. Therefore, substituting Equation (9) in Equation (7), we get
\[\frac{\delta}{\delta m(d)}\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})=\mathbb{E}_{\gamma\sim\mu}\left[\sum_{j\in[\ell]}\frac{\delta}{\delta m(d)}\mathcal{R}_{j}(\pi_{j}^{\text{PL}})\right]=\mathbb{E}_{\gamma\sim\mu}\left[\sum_{j\in[\ell]}\mathbb{E}_{\sigma_{j}\mid\gamma\sim\pi_{j}^{\text{PL}}}\left[PR^{(j)}_{\sigma,d}+e^{m(d)}\left(\rho_{d}DR^{(j)}_{\sigma,d}-RI^{(j)}_{\sigma,d}\right)\right]\right].\]
By linearity of expectation,
\[\frac{\delta}{\delta m(d)}\mathcal{R}^{\text{fair}}(\pi^{\text{fair}})=\sum_{j\in[\ell]}\mathbb{E}_{\gamma\sim\mu}\left[\mathbb{E}_{\sigma_{j}\mid\gamma\sim\pi_{j}^{\text{PL}}}\left[PR^{(j)}_{\sigma,d}+e^{m(d)}\left(\rho_{d}DR^{(j)}_{\sigma,d}-RI^{(j)}_{\sigma,d}\right)\right]\right]=\sum_{j\in[\ell]}\mathbb{E}_{\gamma,\sigma_{j}\sim\pi^{\text{fair}}}\left[PR^{(j)}_{\sigma,d}+e^{m(d)}\left(\rho_{d}DR^{(j)}_{\sigma,d}-RI^{(j)}_{\sigma,d}\right)\right]. \tag{10}\]
Hence, we can estimate each term in Equation (10) by taking an empirical average over \(M\) samples of the group-wise rankings. For this, we take \(M\) samples of \(\gamma\) and \(1\) sample each of \(\sigma_{j}\). From Oosterhuis (2022) we know that for group \(j\) we can compute the corresponding term in the summation in time \(O(|\mathcal{I}_{j}|+k\log|\mathcal{I}_{j}|)\), resulting in a total time complexity of \(O(Mk^{2}\ell+M(|\mathcal{I}|+k\ell\log|\mathcal{I}|))\). Note that this is the same as setting \(N=1\) in Equation (8). For comparison, PL-Rank-3 itself takes time \(O\left(M\left(|\mathcal{I}|+k\log|\mathcal{I}|\right)\right)\) to compute the gradients.
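To make the overall structure of the estimator concrete, the sketch below implements the outer loop of Equation (7): draw \(M\) group assignments from \(\mu\), and for each one form an unbiased per-group gradient estimate. For brevity, the PL-Rank-3 bookkeeping (the \(PR\)/\(DR\)/\(RI\) terms above) is replaced by a plain log-derivative (REINFORCE) estimate of \(\frac{\delta}{\delta m(d)}\mathcal{R}_{j}(\pi_{j}^{\text{PL}})\), which is also unbiased but has higher variance; all names are illustrative.

```python
import math
import random
from collections import defaultdict

def grad_R_fair(groups, scores, theta, rho, sample_gamma, M=50):
    """Monte-Carlo estimate of the gradient of R^fair w.r.t. each m(d).

    theta: position discounts theta_1..theta_k (index 0 is rank 1)
    rho:   dict item -> relevance
    """
    grad = defaultdict(float)
    for _ in range(M):
        gamma = sample_gamma()                         # gamma ~ mu
        pools = {j: list(items) for j, items in groups.items()}
        reward = defaultdict(float)                    # per-group reward R_j
        glogp = defaultdict(float)                     # d(log pi_j)/dm(d)
        for i, j in enumerate(gamma):                  # rank i+1 is for group j
            pool = pools[j]
            w = [math.exp(scores[d]) for d in pool]
            total = sum(w)
            d = random.choices(pool, weights=w, k=1)[0]
            reward[j] += theta[i] * rho[d]
            for dd, wd in zip(pool, w):                # softmax log-derivative
                glogp[dd] += (1.0 if dd == d else 0.0) - wd / total
            pool.remove(d)
        for j, items in groups.items():                # sum_j R_j * dlog(pi_j)
            for d in items:
                grad[d] += reward[j] * glogp[d] / M
    return dict(grad)
```

Because \(\gamma\) is sampled independently of \(m\), only the inner group-wise PL factors contribute log-derivative terms, exactly as in Equation (7).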
## 5 Experiments

We conduct experiments on real-world datasets to evaluate our algorithm empirically. First, we compare Group-Fair-PL against the unconstrained PL model and observe that Group-Fair-PL is competitive in optimizing relevance. Second, we use bias-injected data to verify that Group-Fair-PL can mitigate bias by ensuring ex-post fairness, while achieving higher true utility than the PL model. Finally, we also compare our algorithm with post-processing baselines for relevance and ex-post fairness.

**Metrics.** We use NDCG as the ranking utility metric, with position discounts \(\theta_{i}=\frac{1}{\log_{2}(i+1)}\) for all \(i\in[k]\) (first row in all the figures). The second row in the figures shows the _fraction of rankings_ sampled from the stochastic ranking models in which an item from the minority group is placed at rank \(i\), for each rank \(i\in[k]\). The minority group is as mentioned in Table 1. The lower and upper bound lines in the figures show \((p\pm\delta)k\), where \(p\) is the proportion of the minority group in the dataset and \(\delta\) is a small number (see Table 1).

**Baselines.** Apart from PL-Rank-3, we consider **PL-Rank-3 + GDL22** and **PL-Rank-3 + GAK19** as baselines, to compare our fair in-processing algorithm Group-Fair-PL with the post-processing baselines of Gorantla et al. (2022) and Geyik et al. (2019), respectively. GAK19 is a fairness post-processing method that also simultaneously optimizes for relevance, unlike GDL22, which only optimizes for relevance within the groups and completely ignores inter-group comparisons. We also compare results with **PL-Rank-3 (true)**, which is the PL model trained with PL-Rank-3 on the unbiased (or true) relevance scores. For more details regarding the parameter choices, see Table 1.

**Hyperparameters.** We use a two-layered neural network of \(32\) hidden units each to predict relevance scores. We use stochastic gradient descent with a learning rate of \(0.001\) and batch size \(512\) to optimize our relevance metric. We report aggregate results over \(10\) runs of each algorithm. We selected the other hyperparameters after searching for \(\delta\) in the range \(0.01\) to \(0.1\), \(M\) in the range \(10\) to \(100\), and \(k\) in the range \(10\) to \(30\); we chose the final values to be the smallest in the range where implicit bias had a significant impact on the output.

**Implementation.** The unconstrained PL model was trained using the PL-Rank-3 algorithm from Oosterhuis (2022). All the experiments were run on an Intel(R) Xeon(R) Silver 4110 CPU (8 cores, 2.1 GHz clock speed, and 128GB DRAM). For reproducibility, our data and implementation of Group-Fair-PL will be uploaded at github.com/sruthigorantla/Group-Fair-PL.

**Datasets.** We perform experiments on datasets for which several past works have raised fairness concerns and demonstrated the performance of their fair ranking algorithms Singh and Joachims (2019); Yadav et al. (2021); Oosterhuis (2022); Cooper et al. (2023). The **German Credit** dataset encodes users' creditworthiness as a \(0/1\) label Hofmann (1994). To put this data into query-document pairs, we followed preprocessing similar to Singh and Joachims (2019). The **MovieLens** dataset consists of user ratings of movies from the movielens.org website Harper and Konstan (2016). We first performed a singular value decomposition to generate \(50\)-dimensional features.
We then chose the largest \(5\) genres (see Table 1) and kept users that rated at least \(50\) movies. The **HMDA** dataset consists of data regarding home mortgage loans in the US Federal Financial Institutions Examination Council (2017). We used the preprocessed dataset released by Cooper et al. (2023). The HMDA dataset is available for every year since \(2007\), for all \(50\) US states. We used the data for Alaska (**AK**) from \(2017\) and created a train and test split. For a more rigorous test of our algorithms, we also used Connecticut's (**CT**) data, using years \(2013-2016\) as training data and year \(2017\) as test data. We did a PCA pre-processing to reduce the feature dimension to \(50\) Ding and He (2004) and created query-document pairs similar to the German Credit pre-processing in Singh and Joachims (2019). The details of the datasets are in Table 1.

| Name | #queries | max #items per query | Relevance | Sensitive attribute | Groups | Minority | \(M\) (Alg. 1) | \(k\) | \(\delta\) | Avg. running time (Group-Fair-PL) | Avg. running time (PL-Rank-3) | Reference |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MovieLens | 2290 | 588 | 1, 2, 3 | Genre | Action (33%), Crime (12%), … | Crime | 10 | 10 | 0.02 | 4285 | 118 | Figure 1 |
| German Credit | 500 | 25 | 0, 1 | Gender | Male (74%), Female (26%) | Female | 50 | 20 | 0.05 | 3008 | 59 | Figure 2 |
| HMDA (AK) | 75 | 25 | 0, 1 | Gender | Male (74%), Female (26%) | Female | 100 | 25 | 0.06 | 1528 | 50 | Figure 3 |
| HMDA (CT) | 731 | 100 | 0, 1 | Gender | Male (67%), Female (33%) | Female | 10 | 25 | 0.06 | 7850 | 673 | Figure 4 |

Table 1: Parameters and results of the experiments on various datasets.

**Datasets with implicit bias.** For each dataset, we inject multiplicative implicit bias into the relevance scores of the items from the minority group as a stress test for ranking algorithms. In the HMDA dataset, we multiply the relevance scores of the _female_ candidates by \(\beta\), where \(\beta\) is varied between \(0\) and \(1\) across the columns of Figure 3. For datasets with more than two groups, such as _MovieLens_, we use different values of bias for different groups; we report the bias values for all the groups other than the _Action_ group. This model of bias is inspired by Celis et al. (2020), a practical model that gives useful insights about the correct optimization objective to consider.

Figure 1: Results on the MovieLens dataset.
Figure 2: Results on the German Credit dataset.
Figure 3: Results on the HMDA (AK) dataset.
Figure 4: Results on the HMDA (CT) dataset.

### Key Observations

**Group-Fair-PL gets the best of both fairness and NDCG.** In the presence of implicit bias, Group-Fair-PL outperforms PL-Rank-3 in the NDCG computed on the true scores and achieves almost the same NDCG as PL-Rank-3 (true). Compared to just post-processing for ex-post fairness (PL-Rank-3 + GDL22 and PL-Rank-3 + GAK19), our algorithm almost always achieves better NDCG. This suggests that by explicitly enforcing ex-post fairness during training, we are able to overcome implicit bias by eliminating unreliable comparisons of items from different groups, which is the main motivation of Gorantla et al. (2022). Even when there is no bias, our Group-Fair-PL still outputs ex-post fair rankings while not compromising on NDCG.
**Group-Fair-PL preserves the fairness properties of Gorantla et al. (2022).** Theorem 3.5 of Gorantla et al. (2022) says that with their group assignment sampling, the probability of each group being ranked at _any_ of the top \(k\) ranks is between the lower and the upper bound (see Figure 3, row 2), even though the constraints are only on the representation of the group in the whole of the top \(k\) ranks. This guarantee is achieved by GDL22 post-processing anyway; but even without further post-processing, Group-Fair-PL preserves this property. In contrast, PL-Rank-3 and PL-Rank-3 + GAK19 push the protected group's items to the bottom of the top-\(k\) ranking in the presence of bias (see Figure 3, row 2), even when the true representation of the minority group is uniform across the ranks (see PL-Rank-3 (true)).

**Running time.** The running time of the algorithms is as shown in Table 1. We note that Group-Fair-PL spends most of its time sampling the group assignments. Finding faster algorithms for sampling group assignments is an interesting open problem.

## 6 Conclusion

We propose a novel group-fair Plackett-Luce model for stochastic ranking and show how one can optimize it efficiently to achieve high relevance along with guaranteed ex-post group fairness, instead of the ex-ante fairness known from previous literature on fair learning-to-rank. We experimentally validate the fairness and relevance guarantees of our ranking models on real-world datasets. Extending our results to more stochastic ranking models in random utility theory is an important direction for future work.

## Acknowledgements

SG was supported by a Google PhD Fellowship. AL is grateful to Microsoft Research for supporting this collaboration.
2303.16048
Amortized Analysis via Coinduction
Amortized analysis is a program cost analysis technique for data structures in which the cost of operations is specified in aggregate, under the assumption of continued sequential use. Typically, amortized analyses are presented inductively, in terms of finite sequences of operations. We give an alternative coinductive formulation and prove that it is equivalent to the standard inductive definition. We describe a classic amortized data structure, the batched queue, and outline a coinductive proof of its amortized efficiency in $\textbf{calf}$, a dependent type theory for cost analysis.
Harrison Grodin, Robert Harper
2023-03-28T15:27:10Z
http://arxiv.org/abs/2303.16048v3
# Amortized Analysis via Coinduction

###### Abstract

Amortized analysis is a program cost analysis technique for data structures in which the cost of operations is specified in aggregate, under the assumption of continued sequential use. Typically, amortized analyses are presented inductively, in terms of finite sequences of operations; we demonstrate that coinduction provides an equivalent but more natural characterization. We describe a classic amortized data structure, the batched queue, and outline a coinductive proof of its amortized efficiency in **calf**, a type theory for cost analysis.

**Keywords:** amortized analysis, coinduction, data structure, mechanized proof

###### Contents

* 1 Program Cost Analysis in the calf Framework
* 2 Cofree Comonads for Abstract Data Types
* 3 Amortized Analysis

## 1 Program Cost Analysis in the calf Framework

The **calf** framework is a dependent type theory that supports verification of both correctness conditions and cost bounds [10], based on dependent call-by-push-value [9, 12]. In **calf**, the primitive effect \(\mathsf{step}^{c}(-)\) incurs \(c\) units of abstract cost. Value types are interpreted in **Set**, and computation types are interpreted in the Eilenberg-Moore category \(\mathbf{Set}^{W}\) of the writer monad on some cost monoid \(\mathbb{C}\), \(W=\mathbb{C}\times(-)\).

## 2 Cofree Comonads for Abstract Data Types

Queues are an abstract data type representing an ordered collection with a first-in-first-out data policy. Let \(E\) be the type of elements, and let \(Q\) be the queue representation type; the destructor signature can be written as follows, using \(F\) and \(U\) of call-by-push-value:
\[\mathsf{enqueue}:^{-}Q\to(E\Rightarrow Q)\]
\[\mathsf{dequeue}:^{-}Q\to F((E+1)\times UQ)\]
We may thus find the corresponding cofree comonad [6] to be the following coinductive type:
\[\mathsf{queue}_{E}\triangleq\nu Q.\;(\mathsf{quit}:F1)\times(\mathsf{enqueue}:E\Rightarrow Q)\times(\mathsf{dequeue}:F((E+1)\times UQ))\]
Here, we fix the comonad parameter to be \(F1\), requiring that queues terminate with an element of \(F1\) (i.e., a cost in \(\mathbb{C}\)); a definition using **calf**-inspired pseudocode is shown in Listing 1. The type \(\mathsf{queue}_{E}\) can be understood as "object-oriented" [2, 7, 3].

One simple implementation of a queue, called \(\mathsf{QUEUE}\), is given in Listing 2 by coinduction, using a single list as the underlying representation type. The enqueue operation is annotated with one unit of cost; however, this is unrealistic, since a full traversal of the list is performed for each enqueue operation. We will use this implementation as a specification, later defining a queue whose implementation actually reflects this idealized cost model in a suitable sense.
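Although the development itself is carried out in calf, the specification queue can be mirrored in ordinary Python for intuition. The sketch below is ours, not taken from the paper's listings: a persistent single-list queue with a cost counter, charging one abstract unit per enqueue and zero per dequeue, with quit returning the accumulated cost (playing the role of the terminal \(F1\) value).

```python
class SpecQueue:
    """Single-list queue with the idealized cost model of Listing 2."""

    def __init__(self, items=(), cost=0):
        self.items = tuple(items)   # front of the queue is items[0]
        self.cost = cost            # abstract cost accumulated so far

    def enqueue(self, x):
        # step^1: the specification charges one unit per enqueue
        return SpecQueue(self.items + (x,), self.cost + 1)

    def dequeue(self):
        # free: returns the head (or None) and the rest of the queue
        if not self.items:
            return None, SpecQueue(self.items, self.cost)
        return self.items[0], SpecQueue(self.items[1:], self.cost)

    def quit(self):
        return self.cost            # terminate with the final cost
```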
## 3 Amortized Analysis

Amortized analysis is a cost analysis technique for data structures in which the operation costs are specified in aggregate, under the assumption of sequential use of the data structure [13]. As discussed, the single-list implementation of a queue is slow. Thus, we may wish to use a different implementation that only incurs one large cost infrequently; this approach will have low amortized cost, equivalent to the specification in Listing 2 [4, 5, 1, 11].

The underlying representation type of the "batched" implementation is two lists: the "front list", \(\mathsf{fl}\), and the "back list", \(\mathsf{bl}\). Elements are enqueued to \(\mathsf{bl}\) and dequeued from \(\mathsf{fl}\); if \(\mathsf{fl}\) is empty when attempting to dequeue, the current \(\mathsf{bl}\) is reversed and used in place of \(\mathsf{fl}\) going forward. The \(\mathsf{calf}\) implementation, called batched-queue, is shown in Listing 3. The cost specification for batched queues claims that an enqueue incurs one cost and a dequeue incurs zero cost, _amortized_. To prove this, one may use a potential function \(\Phi(\mathsf{bl},\mathsf{fl})=\mathsf{length}(\mathsf{bl})\) to track how much cost is "owed" for future dequeues. It remains to show that the cost of an operation is as stated, up to changes in \(\Phi\). To make the cost align with the specification precisely, the potential \(\Phi\) is explicitly spent in the quit method.

Amortized analysis is typically framed algebraically, describing the cost incurred after a finite sequence of operations. However, we take the perspective that the analysis is more naturally viewed as _coalgebraic_. We define a relation \(\approx\) on \(\mathsf{queue}_{E}\) relating queues with the same behavior and cost, up to amortization. The relation, given in Listing 4, is a relaxation of the standard bisimulation; the dequeue component allows cost to be deferred for amortization. Eventually, the implementation must "pay up" to satisfy the quit requirement.

**Theorem.** _For all lists \(\mathsf{bl}\) and \(\mathsf{fl}\),_
\[\mathsf{batched\text{-}queue}\;\mathsf{bl}\;\mathsf{fl}\approx\mathsf{step}^{\Phi(\mathsf{bl},\mathsf{fl})}(\mathsf{QUEUE}\,(\mathsf{fl}\mathbin{+\!\!+}\mathsf{rev}\;\mathsf{bl})).\]
The proof by coinduction is in Listing 5, mirroring the usual argument via \(\Phi\).

This technique can be elegantly related to the existing definition of amortized analysis, as well. Define the free monad corresponding to the constructor signature dual to the presentation of queues:
\[\mathsf{program}_{E}(A)\triangleq\mu P.\;(\mathsf{return}:A)+(\mathsf{enqueue}:P\times E)+(\mathsf{dequeue}:U((E+1)\Rightarrow FP))\]
An element of \(\mathsf{program}_{E}(A)\) is a finite sequence of queue instructions terminated by returning a value of type \(A\). We may evaluate a program on a queue, by induction on the program:
\[\psi:^{-}\mathsf{program}_{E}(A)\times U(\mathsf{queue}_{E})\Rightarrow FA\]
This expresses the usual notion of running a sequence of operations on a data structure; the code is in Listing 6. We note a similarity to monad-comonad interaction laws [8], here adjusted for call-by-push-value. For all \(c\), \(p\), and \(q\), it is the case that \(\mathsf{step}^{c}(\psi(p,q))=\psi(p,\mathsf{step}^{c}(q))\).

**Theorem** (Amortizing Sequences of Operations). _Suppose \(\mathbb{C}\) is commutative, and let \(q_{1},q_{2}:^{-}\mathsf{queue}_{E}\). Then \(q_{1}\approx q_{2}\) iff for all \(A\) and \(p:^{+}\mathsf{program}_{E}(A)\), \(\psi(p,q_{1})=\psi(p,q_{2})\)._

Thus, the coalgebraic notion of amortized equivalence is equivalent to the classical algebraic notion.
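For intuition, the Python sketch below (continuing the illustrative SpecQueue above) implements the batched representation and tests the amortized equivalence on random operation sequences. One accounting choice is made explicit here as an assumption: the batched enqueue defers its unit of cost, the reversal charges \(\mathsf{length}(\mathsf{bl})\), and quit spends the remaining potential \(\Phi\), so that answers agree step by step and total costs agree after quit. The calf listings may distribute the charges differently.

```python
import random

class BatchedQueue:
    """Two-list queue: enqueue onto bl, dequeue from fl, reverse on demand."""

    def __init__(self, bl=(), fl=(), cost=0):
        self.bl, self.fl, self.cost = tuple(bl), tuple(fl), cost

    def enqueue(self, x):
        # cost deferred: Phi(bl, fl) = length(bl) grows by one instead
        return BatchedQueue((x,) + self.bl, self.fl, self.cost)

    def dequeue(self):
        bl, fl, cost = self.bl, self.fl, self.cost
        if not fl:                                  # batch: reverse bl into fl
            fl, cost, bl = tuple(reversed(bl)), cost + len(bl), ()
        if not fl:
            return None, BatchedQueue(bl, fl, cost)
        return fl[0], BatchedQueue(bl, fl[1:], cost)

    def quit(self):
        return self.cost + len(self.bl)             # spend the potential Phi

def check_amortized_equivalence(steps=1000):
    spec, bq = SpecQueue(), BatchedQueue()
    for _ in range(steps):
        if random.random() < 0.5:
            x = random.randint(0, 9)
            spec, bq = spec.enqueue(x), bq.enqueue(x)
        else:
            a, spec = spec.dequeue()
            b, bq = bq.dequeue()
            assert a == b                           # identical answers
    assert spec.quit() == bq.quit()                 # identical total cost

check_amortized_equivalence()
```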
In future work, we hope to generalize and more precisely characterize the given constructions, especially in the context of call-by-push-value. We also hope to extend this approach to support abstract data types with binary and parallel operations.
2308.03051
TARJAMAT: Evaluation of Bard and ChatGPT on Machine Translation of Ten Arabic Varieties
Despite the purported multilingual proficiency of instruction-finetuned large language models (LLMs) such as ChatGPT and Bard, the linguistic inclusivity of these models remains insufficiently explored. Considering this constraint, we present a thorough assessment of Bard and ChatGPT (encompassing both GPT-3.5 and GPT-4) regarding their machine translation proficiencies across ten varieties of Arabic. Our evaluation covers diverse Arabic varieties such as Classical Arabic (CA), Modern Standard Arabic (MSA), and several country-level dialectal variants. Our analysis indicates that LLMs may encounter challenges with dialects for which minimal public datasets exist, but on average are better translators of dialects than existing commercial systems. On CA and MSA, instruction-tuned LLMs, however, trail behind commercial systems such as Google Translate. Finally, we undertake a human-centric study to scrutinize the efficacy of the relatively recent model, Bard, in following human instructions during translation tasks. Our analysis reveals a circumscribed capability of Bard in aligning with human instructions in translation contexts. Collectively, our findings underscore that prevailing LLMs remain far from inclusive, with only limited ability to cater for the linguistic and cultural intricacies of diverse communities.
Karima Kadaoui, Samar M. Magdy, Abdul Waheed, Md Tawkat Islam Khondaker, Ahmed Oumar El-Shangiti, El Moatez Billah Nagoudi, Muhammad Abdul-Mageed
2023-08-06T08:29:16Z
http://arxiv.org/abs/2308.03051v2
# TARJAMAT: Evaluation of Bard and ChatGPT on Machine Translation of Ten Arabic Varieties

###### Abstract

Despite the purported multilingual proficiency of instruction-finetuned large language models (LLMs) such as ChatGPT and Bard, the linguistic inclusivity of these models remains insufficiently explored. Considering this constraint, we present a thorough assessment of Bard and ChatGPT (encompassing both GPT-3.5 and GPT-4) regarding their machine translation proficiencies across ten varieties of Arabic. Our evaluation covers diverse Arabic varieties such as Classical Arabic (CA), Modern Standard Arabic (MSA), and several country-level dialectal variants. Our analysis indicates that LLMs may encounter challenges with dialects for which minimal public datasets exist, but on average are better translators of dialects than existing commercial systems. On CA and MSA, instruction-tuned LLMs, however, trail behind commercial systems such as Google Translate. Finally, we undertake a human-centric study to scrutinize the efficacy of the relatively recent model, Bard, in following human instructions during translation tasks. Our analysis reveals a circumscribed capability of Bard in aligning with human instructions in translation contexts. Collectively, our findings underscore that prevailing LLMs remain far from inclusive, with only limited ability to cater for the linguistic and cultural intricacies of diverse communities.

## 1 Introduction

Large language models (LLMs) finetuned to follow instructions Wei et al. (2021); Wang et al. (2022); Ouyang et al. (2022) have recently emerged as powerful systems for handling a wide range of NLP tasks. In accordance with the scaling law (i.e., pretraining larger models will continue to result in better performance) Kaplan et al. (2020), a number of LLMs such as GPT-3 Brown et al. (2020), Chinchilla Hoffmann et al. (2022), Claude Anthropic (2023), ChatGPT1 OpenAI (2022), GPT-4 OpenAI (2023), and Bard Google (2023) have been introduced. Most of these models, however, are 'closed'. That is, little-to-no information about them is known. This includes details about model architectures, pretraining data, languages involved, and training configurations. LLMs are also expensive both to pretrain and to deploy. To alleviate these concerns, 'open' LLMs such as BLOOM Scao et al. (2022), LLaMA-1 Touvron et al. (2023), Falcon Almazrouei et al. (2023), and LLaMA-2 Touvron et al. (2023) were introduced. These more open models can facilitate research and (non-)commercial deployment.

Footnote 1: In this work, we refer to gpt-3.5-turbo as ChatGPT.

In spite of drawbacks such as their closed nature, computational costs Dasgupta et al. (2023), and the biases they exhibit Ferrara (2023), closed LLMs remain attractive primarily due to their remarkable performance Bang et al. (2023); Laskar et al. (2023). It is thus important to fully understand the capabilities of these closed models. Although there has been a recent flurry of works attempting to evaluate the ability of LLMs to carry out NLP tasks, many of these models remain opaque. This is especially the case when it comes to understanding how LLMs fare on different varieties and dialects of several popular languages and on vital tasks such as machine translation (MT). For example, the extent to which LLMs can handle MT from Arabic varieties into other languages is unknown.

Figure 1: Experimental setup for our evaluation. We evaluate multiple language models on different Arabic varieties.
Another challenge is that more recent models, such as Google's Bard, are yet to be evaluated and understood. Bard was released in \(41\) different languages, which makes it a particularly attractive target for MT evaluation. This is also the case given Google's strong history of investment in MT (Wu et al., 2016). In this work, we offer a thorough evaluation of LLMs on MT from major Arabic varieties into English (Figure 1). Namely, we evaluate ChatGPT, GPT-4, and Bard on MT of ten Arabic varieties into English. Since there are usually concerns about downstream evaluation data leaking into LLM pretraining, which involves data collected from the web, we benchmark the models on new test sets that we manually prepare for this work. Our evaluation targets diverse varieties of Arabic. Namely, we evaluate on Classical Arabic (CA), Modern Standard Arabic (MSA), and several country-level Arabic dialects such as Algerian and Egyptian Arabic (Section 3).

Bard provides three different drafts for each text input we ask it to translate. Contents of the three drafts are diverse, providing us with excellent contexts to analyze the degree to which the model adheres to our prompts. We leverage these contexts to carry out a human evaluation study investigating the _helpfulness_ of the model, allowing us to reveal a number of Bard's limitations. We carefully analyze these limitations against the different Arabic varieties we target, thus affording an even better understanding of the model's ability to translate from Arabic. Overall, our work offers the following contributions:

1. We offer a detailed MT evaluation of instruction-finetuned LLMs on ten diverse varieties of Arabic.
2. To the best of our knowledge, our work is the first to assess the performance of Bard on NLP tasks in any language, and on Arabic MT in particular.
3. We introduce a new manually created multi-Arabic dataset for MT evaluation that has never been exposed to any existing LLM.
4. We extensively evaluate Bard through a human study to analyze its behavior in terms of _helpfulness_. We examine how well the model follows human instructions when tasked with translating across ten different Arabic varieties.

The rest of the paper is organized as follows: In Section 2, we review previous research evaluating LLMs on NLP tasks in general and MT in particular. In Section 3, we introduce our newly developed multi-Arabic MT dataset. In Section 4, we describe our evaluation methods. In Section 5, we present our results and the main findings obtained from comparing ChatGPT and Bard to various commercial MT products. In Section 6, we present our human study analyzing Bard's helpfulness, particularly in terms of its ability to follow human instructions in MT. We conclude in Section 7.

## 2 Related Work

**Evaluation of ChatGPT and Other LLMs.** A growing body of literature has focused on evaluating ChatGPT and other LLMs on NLP tasks. Laskar et al. (2023) find ChatGPT effective on many tasks. Other works find it either on par with supervised models (Ziems et al., 2023) or, in some cases (e.g., sequence tagging), falling behind these models (Qin et al., 2023). Both Jiao et al. (2023) and Ogundare and Araya (2023) find that GPT-4 is competitive with commercial systems for high-resource languages but lags behind for low-resource languages. Bang et al. (2023) find a similar pattern for ChatGPT. Guerreiro et al. (2023) find complex translation scenarios, such as in the low-resource setting, to be prone to hallucination.
Peng et al. (2023) demonstrate that ChatGPT can surpass Google Translate on many translation pairs, but Zhu et al. (2023) show it is outperformed by NLLB (NLLB et al., 2022) on at least \(83\)% of the English-centric pairs they study. Wang et al. (2023); Karpinska and Iyyer (2023), however, show that ChatGPT can match the performance of fully supervised models for document-level translation. Peng et al. (2023) find that adding task- and domain-specific information in the prompt can improve the robustness of the MT system, which corroborates the findings of Gao et al. (2023). Huang et al. (2023) propose a prompting technique called cross-lingual-thought prompting (XLT) to improve cross-lingual performance for a wide range of tasks, including MT. Similarly, Lu et al. (2023) ask ChatGPT to correct its own mistakes as a way to improve the model's translation quality. Lu et al. (2023) propose Chain-of-Dictionary (CoD) prompting to solve rare-word translation issues. Prompting with CoD improves the performance of ChatGPT for both X-En and En-X language directions.

**Evaluation of ChatGPT on Arabic.** Khondaker et al. (2023) evaluate ChatGPT and other contemporary LLMs such as BloomZ Muennighoff et al. (2022) in few-shot settings (0, 1, 3, 5, and 10) on four X-Arabic and two code-mixed Arabic-X language sets. They show that providing in-context examples to ChatGPT achieves comparable results to a supervised baseline. Alyafeai et al. (2023) evaluate ChatGPT and GPT-4 on \(4,000\) Arabic-English sentence pairs from Ziemski et al. (2016) and find a supervised SoTA model to outperform ChatGPT and GPT-4 by a significant margin. These works, however, only consider a limited number of Arabic varieties, and they do not conduct a thorough analysis of the LLMs for MT. Additionally, none of the works evaluate Bard. Our work bridges these gaps by performing a comprehensive evaluation of these systems on a wide range of Arabic varieties. We also conduct our study on novel in-house data for which we guarantee no leakage (i.e., our data cannot have been seen by ChatGPT, GPT-4, or Bard, since we create the data for this work). Other works have focused on evaluating smaller-sized Arabic language models Abu Farha and Magdy (2021); Inoue et al. (2021); Alammary (2022), including on recent benchmarks Nagoudi et al. (2023); Elmadany et al. (2023).

**Arabic MT.** There are several works on Arabic MT itself, including rule-based Bakr et al. (2008); Mohamed et al. (2012); Salloum and Habash (2013), statistical Habash and Hu (2009); Salloum and Habash (2011); Ghoneim and Diab (2013), and neural Junczys-Dowmunt et al. (2016); Almahairi et al. (2016); Durrani et al. (2017); Alrajeh (2018). While these systems focus on MSA, others target Arabic dialects Zbib et al. (2012); Sajjad et al. (2013); Salloum et al. (2014); Guellil et al. (2017); Baniata et al. (2018); Sajjad et al. (2020); Farhan et al. (2020); Nagoudi et al. (2021, 2022). We provide a more detailed review of related literature in Appendix A, with a summary in Table 7.

## 3 Coverage and Datasets

### Arabic Varieties

Our goal is to provide a comprehensive evaluation of MT on ChatGPT, GPT-4, and Google Bard, focusing on their performance across ten different varieties of Arabic. These can vary across _time_ (i.e., old vs. modern day) and _space_ (e.g., country-level geography) as well as in their _sociopragmatic_ functions (e.g., standard use in government communication vs. everyday street language).
Before introducing our dataset, we provide a brief background about Arabic and its varieties. Arabic, the collection of languages spoken by approximately \(450\) million people across the Arab world, encompasses a broad spectrum of varieties. Classical Arabic (CA), known as Quranic Arabic, the language of the Quran Rabin (1955), emerged from the medieval dialects of the Arab tribes. It was spoken early in Mecca around \(1,500\) years ago, in the sixth or seventh century AD. CA is considered the most eloquent form of Arabic and is preserved notably in the Holy Quran and pre-Islamic epic poems Versteegh (2014). It is often described as exhibiting archaic words, figurative speech, and rhyming sentences that are no longer (or less frequently) used in MSA and dialectal Arabic varieties. Modern Standard Arabic (MSA) Holes (2004), on the contrary, is deeply rooted in CA but has been simplified to a great extent to encompass modern uses in literature, poetry, and official statements. MSA additionally serves as the standardized language for formal events, news broadcasts, sermons, and formal communication. We now explain how we acquire our dataset for each Arabic variety.

Table 1: Example sentences from the Arabic varieties with their English translations.

### Datasets

**CA.** We manually curate \(200\) sentences from the Open Islamic Texts Initiative (OpenITI) (Nigst et al., 2020) dataset, namely from the latest 2022.16 version. It includes a collection of premodern Arabic works featuring a comprehensive library of \(10,342\) books. The sentences were chosen based on a set of specified criteria: Initially, we identify books originating from the first and second century Anno Hegirae (in the year of the Hijra), excluding those written after this period. Then we compile a collection of \(15\) distinctive books, including notable works like Abdullah Ibn AlMuqfaa's "Al-Adab Al-Kabir" and "Al-Adab Al-Saghir", and Mohamed Idris Al-Shafi's "Al-Umm", "Al-Risala", and "Al-Adab Wal-Muraa", among others. We subsequently extract sentences of a minimum of ten words. We provide the list of the \(15\) books we sample from in Appendix B (Table 9).

**MSA.** We collect a total of \(200\) sentences from current-event news picked from two online news websites: Aljazeera2 and BBC Arabic3. The curated sentences showcase various news genres, including political, social, and sports.

Footnote 2: https://aljazeera.net/news

Footnote 3: https://bbc.com/arabic

**Various Dialects.** We manually select a dataset of dialectal Arabic from an in-house project in which we transcribe TV series collected from YouTube videos belonging to Arabic dialects. Again, we use \(200\) sentences from each dialect, resulting in a total of \(1,600\) sentences across eight dialects, each transcribed and translated by their respective native speakers. The dialects belong to North African countries such as Algeria, Morocco, and Mauritania; Gulf-area dialects, namely Emirati; Levantine Arabic (focusing on Palestinian and Jordanian); and Egyptian Arabic. For all varieties, we collect sentences that are _at least ten words_ long. We present one sample from some of the dataset in Table 1. Statistics of the datasets across the Arabic varieties are presented in Appendix B (Table 8).

## 4 Methodology

### Prompt Design

The term _prompt_ refers to the set of instructions used to program an LLM with a goal to steer and enhance its purpose and capabilities (White et al., 2023). Prompts can influence subsequent interactions with the model as well as its generated outputs. Therefore, it is important to clearly identify the right prompts to obtain the desired outcome for a particular task. To determine the right prompt for our translation task, we set up a pilot experiment that we now describe.

**Pilot experiment.** In our pilot experiment, we investigate three prompt candidates. To limit the search space, we perform this experiment only with ChatGPT. We experiment with both Arabic and English prompts to _concisely_ instruct ChatGPT to translate from an Arabic variety into English, again restricting our search space to MSA as a variety that is known to overlap with other varieties at all linguistic levels (Abdul-Mageed et al., 2020; Habash, 2022). We also experiment with an _elaborate_ English prompt that clearly defines the role and the objective of ChatGPT before asking the model to carry out the translation task. We then evaluate the performance of ChatGPT on \(100\) MSA\(\rightarrow\)English samples.
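Concretely, the request loop used for such ChatGPT experiments can be sketched as below, using the legacy openai Python client that was current at the time of writing. The prompt string shown is only a stand-in for the actual templates (which are given in Table 2), and the few-shot message formatting is likewise illustrative; the decoding settings match the ones described below (temperature \(0.0\) and a \(512\)-token limit).

```python
import openai  # legacy (pre-1.0) client; assumes openai.api_key is configured

def translate(sentence, variety="MSA", shots=(), model="gpt-3.5-turbo"):
    """Ask ChatGPT for an Arabic->English translation (0-shot or few-shot)."""
    # Stand-in for the concise English prompt template of Table 2.
    prompt = f"Translate the following {variety} Arabic sentence into English: "
    messages = []
    for src, tgt in shots:          # optional in-context examples
        messages.append({"role": "user", "content": prompt + src})
        messages.append({"role": "assistant", "content": tgt})
    messages.append({"role": "user", "content": prompt + sentence})
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0.0,            # deterministic, reproducible outputs
        max_tokens=512,
    )
    return response["choices"][0]["message"]["content"].strip()
```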
We present the prompt templates and the corresponding performance we acquire in Table 2. **Evaluation.** As evident, the concise English prompt outperforms the other two prompts, including the Arabic counterpart (by 1\(\sim\)2 BLEU scores). This result substantiates findings in prior works (Khondaker et al., 2023; Lai et al., 2023) regarding the superiority of English prompts on ChatGPT over non-English prompts. Therefore, in the rest of the paper we employ the concise and direct English prompt to conduct our experiments. ### _N_-Shot Experiments We run ChatGPT MT generation under \(0\)-shot, \(1\)-shot, \(3\)-shot, and \(5\)-shot settings. For a particular translation task, we always select the samples for these in-context learning experiments from the same set of training examples. This means that for a \(k\)-shot setting, we make sure that if a training sample is selected then it will also be selected for \(n\)-shot settings where \(n>k\). We generate translation with ChatGPT (gpt-3.5-turbo4, an optimized version of GPT-3.5 series) by setting the temperature to \(0.0\) to ensure _deterministic and reproducible results_. In addition, we restrict the maximum token length to \(512\) for all the generation tasks. For GPT-4, we use the web interface for MT generation under \(0\)-shot and \(5\)-shot settings. For Bard5, we use the web interface but opt out of gen erating any few-shot response because it lacks an API and its outputs can be problematic requiring intensive manual preprocessing (Section 6). ### Evaluation and Baselines **Evaluation metrics.** Different evaluation metrics are usually employed to automatically evaluate MT systems. These metrics are often based on word overlap and/or context similarity between references and model outputs. In our work, we employ both types of metrics to evaluate the quality of various translation systems that we consider in our study. Namely, we use BLEU [14], COMET [15], ChrF [16], ChrF++, and TER [21]. We provide a detailed description of each metric in Appendix 4.1. **Baselines.** We compare instruction-tuned LLMs to a number of MT systems, including both commercial services [1] as well as the supervised NLLB-200 system (NLLB et al., 2022)6. We provide more details about each of these systems in Appendix 4.2. Footnote 6: For NLLB-200, we use the distilled 1.3B ## 5 Results and Discussion We evaluate all models on X-English translation direction where X is an Arabic variety (MSA and CA). As mentioned earlier, we evaluate LLMs (ChatGPT, GPT-4, and Bard) in _n_-shot settings. We report BLEU, COMET, and ChrF++ in Table 3. We report additional metrics in Appendix C. We summarize our main findings here. **Is GPT-4 better than ChatGPT?**_In most cases, yes_. GPT-4 consistently outperforms ChatGPT on many dialects and varieties. However, for JOR and UAE, ChatGPT 0-shot performs better than 0-shot GPT-4. Overall, on average, GPT-4 0-shot outperforms ChatGPT 0-shot by \(1\sim 3\) points on all metrics. Additionally, GPT-4 in 0-shot setting is on par with ChatGPT in the 5-shot setting. When comparing ChatGPT with GPT-4 under \(5\)-shot setting, we observe that ChatGPT substantially closes the performance gap, even outperforming GPT-4 in \(6\) out of \(10\) varieties in terms of BLEU score. Although GPT-4 marginally outperforms ChatGPT on average BLEU score, _this result shows that by providing few-shot examples, it is possible for ChatGPT to achieve comparable performance to GPT-4 on Arabic MT_. **Is ChatGPT/GPT4 better than Bard?**_In most cases, yes_. 
For fairness, we compare Bard, ChatGPT, and GPT-4 only under the 0-shot condition. In the majority of the varieties, either ChatGPT or GPT-4 outperforms the best Bard draft (i.e., Draft 1). Our results show that Bard is better than both of these models in only three cases (i.e., CA, EGY, and JOR). Overall, GPT-4 ranks best (BLEU score at \(23.12\)), followed by ChatGPT (\(21.77\) BLEU points), which in turn is followed by Bard (\(20.47\) BLEU points).

**Is ChatGPT/GPT-4 better than commercial systems?** _Yes, but only on dialects_. We evaluate three commercial translation systems, namely Amazon, Microsoft, and Google Translate. Among commercial systems, we find Google Translate to outperform the other commercial systems across all varieties except YEM. The average score for Google Translate is \(22.29/64.89/43.11\) (BLEU/COMET/ChrF++) compared to \(18.80/63.68/41.55\) and \(17.77/62.85/39.76\) for the Microsoft and Amazon systems, respectively. From our evaluation results in Table 3, we observe that commercial systems are better at translating CA and MSA but fail to produce high-quality translations when it comes to dialectal Arabic. ChatGPT and GPT-4 in 0-shot and few-shot settings are on par with or better than the best-performing commercial system (i.e., Google Translate) for all Arabic dialects except JOR. The average BLEU score of ChatGPT and GPT-4 in the few-shot setting is \(23.62\) (\(5\)-shot) and \(23.64\) (\(5\)-shot), respectively, compared to \(22.29\) for Google Translate. However, we notice that Google Translate outperforms ChatGPT and GPT-4 on MSA by a significant margin (while staying behind on the dialects). Hence, we conclude that _ChatGPT and GPT-4 are better translators of Arabic dialects than the commercial Google Translate system_. We find similar patterns in the other metrics.

**Is ChatGPT/GPT-4 better than the supervised baseline?** _Yes, it is_. We evaluate NLLB (NLLB et al., 2022) as the supervised baseline, finding both ChatGPT and GPT-4 able to outperform this baseline in the 0-shot setting. The average BLEU score for NLLB is \(12.97\) compared to \(21.77\) and \(23.12\) for ChatGPT and GPT-4 under \(0\)-shot settings, respectively. Similar to the commercial systems, the supervised baseline (NLLB) does well on MSA and is on par with ChatGPT and GPT-4 there. However, both ChatGPT and GPT-4 outperform it on dialect translation by a significant margin.

**Is NLLB with dialects as source better than vanilla NLLB?** _Yes, it mostly is when the dialects match_. Our supervised baseline, NLLB, takes the dialect of the source into consideration. For example, both the JOR and PAL dialects in NLLB can be defined as South Levantine, i.e., (_JOR, PAL_)\(\rightarrow\)_South Levantine_. In addition, source dialects like EGY and MOR can be defined in their actual forms, while YEM can be defined as Taizzi. The column _NLLB (Dia)_ in Table 3 provides the BLEU score when the NLLB model treats the input as a particular dialect. We find that when the actual dialect matches the appropriate NLLB source dialect, we acquire better performance. One exception is the case of PAL, where NLLB does poorly compared to MSA.

**Is Bard a good instruction-following model?** _Not always._ We evaluate Bard for our translation using the web interface7. We find that Bard can fail to follow the instructions we prompt it with; we further discuss and describe this in Section 6. Bard often provides the main translation output within double quotes (""), which we extract semi-automatically.8
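A minimal sketch of this kind of quote-based extraction is shown below. The heuristic is ours, shown only for illustration; outputs without a clean quoted span were handled manually.

```python
import re

def extract_translation(bard_output):
    """Pull the quoted translation out of a Bard draft, if one is present."""
    match = re.search(r'"([^"]+)"', bard_output)
    return match.group(1) if match else bard_output.strip()

draft = 'Sure! The translation of the sentence is: "Where are you going today?"'
print(extract_translation(draft))  # -> Where are you going today?
```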
Bard often provides the main translation output within double quotes (""), which we extract semi-automatically. Additionally, Bard provides three different drafts. We report results for each draft independently, as well as the average of all three drafts.

Footnote 7: https://bard.google.com/
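As one way to realize the semi-automatic extraction just mentioned, a short sketch that pulls the double-quoted span out of a Bard response and falls back to the raw text; this is our own approximation of the procedure, not the authors' script.

```python
import re

def extract_translation(response: str) -> str:
    """Return the first double-quoted span if present, else the raw response."""
    match = re.search(r'"([^"]+)"', response)
    return match.group(1) if match else response.strip()

print(extract_translation('Sure! Here is the translation: "Knowledge is light."'))
# -> Knowledge is light.
```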
**Are instruction following models better at dialect translation?**_In most cases, yes._ In order to clearly see performance on dialects, we exclude CA and MSA results and report the average performance of the models on the various dialects in Table 4. We observe that GPT-4 in its 5-shot setting is the best model on dialects. Although commercial systems fare well on CA and MSA, their performance degrades on dialects. For example, the gap between the best-performing commercial system (Google Translate) and the best instruction-tuned model (GPT-4 5-shot) across the various dialects rises to \(4.85\) from \(1.35\) in terms of average BLEU score.

Table 3: BLEU (top block) and COMET (bottom block) scores across Arabic varieties.

BLEU:

| Var | ChatGPT 0-shot | 1-shot | 3-shot | 5-shot | GPT-4 0-shot | GPT-4 5-shot | Bard D1 | Bard D2 | Bard D3 | Bard Avg | NLLB (SB) | NLLB (Dia) | Amazon | MST | GT |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CA | 11.27 | 12.02 | 12.22 | 12.52 | 11.79 | 11.36 | 12.32 | 10.43 | 12.39 | 11.71 | 7.32 | - | 11.35 | 11.96 | **14.30** |
| MSA | 42.85 | 44.11 | 44.29 | 44.81 | 43.18 | 43.66 | 37.23 | 33.23 | 36.18 | 35.55 | 41.34 | - | 46.76 | 47.36 | **66.01** |
| ALG | 14.48 | 16.41 | 17.16 | 17.31 | 18.37 | **17.83** | 15.24 | 11.67 | 12.58 | 13.16 | 7.27 | - | 10.08 | 11.67 | 11.93 |
| EGY | 19.96 | 21.00 | 21.38 | **21.74** | 21.15 | 21.49 | 21.33 | 19.39 | 20.91 | 20.54 | 11.12 | 13.87 | 14.95 | 16.64 | 18.09 |
| JOR | 25.74 | 26.75 | 27.63 | 26.82 | 24.57 | 25.26 | 26.93 | 23.48 | 25.09 | 25.17 | 13.07 | 18.5 | 21.56 | 21.71 | **29.35** |
| MAU | 8.52 | 8.96 | 9.27 | 9.05 | 9.19 | **9.87** | 6.11 | 4.25 | 2.37 | 4.24 | 3.48 | - | 7.21 | 6.89 | 7.67 |
| MOR | 27.15 | 28.19 | 28.86 | 29.80 | 32.90 | **33.32** | 31.59 | 30.84 | 31.25 | 31.23 | 10.45 | 19.47 | 12.76 | 14.25 | 16.94 |
| PAL | 29.47 | 29.37 | 31.62 | 31.56 | **31.97** | 30.48 | 22.57 | 20.59 | 24.25 | 22.47 | 14.98 | 12.56 | 21.75 | 24.23 | 25.78 |
| UAE | 24.20 | 24.61 | 24.55 | 26.17 | 23.86 | **26.91** | 21.93 | 19.61 | 21.29 | 20.94 | 11.27 | - | 16.85 | 19.05 | 19.56 |
| YEM | 14.03 | 15.13 | 16.24 | **16.44** | 14.27 | 16.22 | 9.46 | 6.38 | 5.33 | 7.06 | 9.41 | 12.56 | 14.41 | 14.23 | 13.25 |
| **Avg** | 21.77 | 22.66 | 23.32 | 23.62 | 23.12 | **23.64** | 20.47 | 17.99 | 19.16 | 19.21 | 12.97 | 15.39 | 17.77 | 18.80 | 22.29 |

COMET:

| Var | ChatGPT 0-shot | 1-shot | 3-shot | 5-shot | GPT-4 0-shot | GPT-4 5-shot | Bard D1 | Bard D2 | Bard D3 | Bard Avg | NLLB (SB) | NLLB (Dia) | Amazon | MST | GT |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CA | 70.11 | 70.08 | 70.01 | 70.24 | **71.47** | 70.95 | 68.29 | 67.04 | 68.65 | 67.99 | 58.87 | - | 63.03 | 63.16 | 66.37 |
| MSA | 85.87 | 86.14 | 86.22 | 86.24 | 86.32 | 86.22 | 80.21 | 80.00 | 80.44 | 80.22 | 84.76 | - | 86.15 | 85.70 | **87.23** |
| ALG | 62.69 | 63.77 | 63.98 | 63.85 | 65.06 | **65.52** | 69.50 | 55.62 | 59.72 | 58.75 | 49.98 | - | 54.55 | 56.48 | 55.33 |
| EGY | 72.41 | 73.15 | 74.20 | 73.96 | 74.14 | **74.91** | 71.50 | 68.20 | 71.30 | 70.33 | 61.15 | 63.81 | 64.24 | 65.59 | 68.41 |
| JOR | 74.46 | 75.20 | 75.52 | 75.27 | 76.37 | **76.50** | 74.19 | 70.65 | 72.65 | 72.50 | 60.25 | 65.05 | 67.33 | 70.46 | 71.83 |
| MAU | 58.37 | 58.99 | 60.35 | 60.66 | 59.24 | **62.13** | 52.53 | 46.38 | 50.41 | 49.77 | 48.50 | - | 52.37 | 51.45 | 51.58 |
| MOR | 69.36 | 69.64 | 70.58 | 70.73 | 73.94 | **73.95** | 72.12 | 70.60 | 71.82 | 71.51 | 53.23 | 62.74 | 54.50 | 51.89 | 56.55 |
| PAL | 74.59 | 74.94 | 75.40 | 75.51 | **76.62** | 76.19 | 69.37 | 67.78 | 69.49 | 69.03 | 60.57 | 59.04 | 65.80 | 68.54 | 68.69 |
| UAE | 69.64 | 69.62 | 69.80 | 70.80 | 72.93 | 72.38 | 66.71 | 63.08 | 66.12 | 65.30 | 54.57 | - | 59.40 | 61.74 | 61.57 |
| YEM | 64.48 | 65.41 | 66.09 | 65.88 | 62.47 | **68.77** | 58.34 | 55.35 | 56.8 |  |  |  |  |  |  |

**Do diacritics affect translation?**_Yes, in most cases they do._ Although in most real-world use native speakers do not usually employ diacritics, some Arabic texts (especially those written in CA) do make use of diacritic markers. We were curious about the effect of diacritics on the translation task across the different systems, and so carry out a limited study of this effect. To this end, we collect and manually translate \(50\) new CA sentences that are fully diacritized. The sentences conform to the same selection criteria as those used in the rest of the study, specifically with regard to their length, and originate from books of the first and second centuries AH. We make a copy of this set and remove diacritics, and then independently feed both the diacritized and undiacritized versions to all the systems we evaluate in this work. As shown in Table 5, we find most systems to work better when we remove diacritics. However, we also observe that some systems provide the same output regardless of whether the input is diacritized or not. This prompts us to conduct a quick analysis on a list of \(20\) word pairs of heterophonic homographs, i.e., words with the same spelling that change meaning and pronunciation according to the diacritics. We provide this list in Appendix 12 (Table 14). An example of such a pair is _kataba_ 'he wrote' and _kutub_ 'books'. For this analysis, we perform single-word translation with all the systems to ensure that the intended meaning cannot be retrieved from context, but rather solely from changes in the diacritics. We find that Google Translate and Microsoft Translation provide the same meaning for both words of each pair, while the rest of the systems show different outputs when diacritics change.

**Robustness.** We also run a series of bootstrapping experiments that confirm the robustness of the results we acquire from the different models. We describe these experiments in Appendix 3.2.
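The bootstrapping just mentioned can be sketched as follows: a resampling of test sentences with re-scoring. The resample count and the confidence level are our assumptions; the exact protocol is described in the paper's appendix.

```python
import numpy as np
from sacrebleu.metrics import BLEU

def bootstrap_bleu(hyps, refs, n_resamples=1000, seed=0):
    """Mean corpus BLEU and a 95% interval over sentence-level resamples."""
    rng = np.random.default_rng(seed)
    metric = BLEU()
    n = len(hyps)
    scores = []
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)      # resample sentences with replacement
        scores.append(metric.corpus_score(
            [hyps[i] for i in idx],
            [[refs[i] for i in idx]]).score)
    scores = np.asarray(scores)
    return scores.mean(), np.percentile(scores, [2.5, 97.5])
```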
## 6 Human Analysis of Bard Helpfulness

Our experience working with Bard reveals that the model does not always follow human instructions. For this reason, we decided to carry out a human study to assess Bard's helpfulness. We define _helpfulness_ here simply as the model's ability to follow human instructions. For each variety of Arabic, we task two native speakers of Arabic with familiarity with the dialects to assign one tag from the set {wrong_lang, no_translation, degeneration, content_filtering} to the model responses. We develop this tagset with a bottom-up approach, letting the categories emerge from the data. Although this tagset may not be exhaustive, we find it to reasonably capture the errors we identify in model responsiveness to instructions. Each of the two annotators manually labels each draft, independently, with one tag from our set of helpfulness error tags. The annotators then meet and discuss differences, reaching 100% agreement, which indicates that the categories are clear and independent. Table 6 shows one example from each of the categories.

The most frequent issue with model helpfulness is translating into the wrong target language (wrong_lang), followed by not providing any translation at all (no_translation) (Figure 2). The former is predominantly due to a translation into MSA instead of English, oftentimes prefacing the output with an introductory sentence in Arabic. Interestingly, Bard does not seem to struggle with wrong_lang errors when translating from MSA (and much the same holds when translating from CA). Instead, Bard tends to mistake the translation task for a text generation one, where it generates a couple of paragraphs that start with the input sentence. From Figure 3, it seems that the error rate may be proportional to the resource availability of a given variety (i.e., varieties for which not much data are publicly available tend to suffer from higher error rates). This observation should be couched with caution, since the LLMs we evaluate remain closed, with little known about their pretraining and finetuning datasets and processes. When we look at each of Bard's drafts separately, we find that the first draft shows a higher number of wrong_lang and content_filtering errors. Meanwhile, draft 2 is the most prone to no_translation errors, with these accounting for \(57\)% of the wrong generations it produces (Figure 4 in Appendix 4.3).

**Other behavior.** While Bard has a feature where it occasionally adds sources to support the information it provides, these sources can be unrelated. For example, it can cite links to GitHub repositories attached to political news translations. It also has a tendency to respond to input sentences that are questions the way it would for a Question Answering (QA) task. Sometimes it also produces an opinion about the sentence it translates.

## 8 Limitations

**Single reference translations.** Again, due to the laborious nature of manually translating data from the various dialects and the challenge of finding qualified native speakers to carry out these translations, our evaluation dataset involves only a single reference for each source sentence. It remains desirable to create evaluation datasets with \(3-5\) references per source sentence. We alleviate this challenge by providing results in different metrics, such that the results are not only based on surface-level matching but also on the similarity of the translation pairs. More references would still be better, since different human translators would collectively provide data less prone to human subjectivity or errors.

**Evaluation of multiword expressions.** While we provide translations of full sentences that may involve multiword expressions, including idioms and proverbs, it would be useful to develop evaluation datasets that focus on these types of expressions, as such data could uncover particular model capabilities. For example, a model that is able to translate and explain a proverb can be thought of as somewhat knowledgeable about culture and pragmatic phenomena.
**Evaluation by different lengths.** We provide results on our data regardless of sentence length. In the future, it would be useful to report results in various sentence-length bins, as longer sentences are usually more challenging for MT models. Again, this is alleviated by the fact that we design our datasets to be at least ten words long from the outset.

**Orthography normalization.** Due to the lack of a standardized writing form, Arabic dialects are characterized by important variation in orthography. In this paper, we do not perform normalization on the input sentences before feeding them to the models, since (i) we want our input to reflect the full diversity of orthography in the wild, and (ii) there is currently no normalization tool that covers all the dialects we treat in this work.

Table 6: One example Bard response for each helpfulness error category (wrong_lang, no_translation, degeneration, content_filtering).

## 9 Ethics Statement

**Intended use.** We understand our work will likely inspire further research in the direction of exploring the multilingual capabilities of LLMs, especially newly released ones such as Bard. Our findings both highlight some of the strengths of these models and expose some of their weaknesses and limitations. For example, available LLMs still
#### Acknowledgments We gratefully acknowledge support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-2020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Digital Research Alliance of Canada,9 and UBC ARC-Sockey.10 Footnote 9: [https://allianeccan.ca](https://allianeccan.ca) Footnote 10: [https://arc.ubc.ca/ubc-arc-sockeye](https://arc.ubc.ca/ubc-arc-sockeye)
#### Acknowledgments

We gratefully acknowledge support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-2020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Digital Research Alliance of Canada,9 and UBC ARC-Sockeye.10

Footnote 9: https://alliancecan.ca

Footnote 10: https://arc.ubc.ca/ubc-arc-sockeye

2306.04292
Dear XAI Community, We Need to Talk! Fundamental Misconceptions in Current XAI Research
Despite progress in the field, significant parts of current XAI research are still not on solid conceptual, ethical, or methodological grounds. Unfortunately, these unfounded parts are not on the decline but continue to grow. Many explanation techniques are still proposed without clarifying their purpose. Instead, they are advertised with ever more fancy-looking heatmaps or only seemingly relevant benchmarks. Moreover, explanation techniques are motivated with questionable goals, such as building trust, or rely on strong assumptions about the 'concepts' that deep learning algorithms learn. In this paper, we highlight and discuss these and other misconceptions in current XAI research. We also suggest steps to make XAI a more substantive area of research.
Timo Freiesleben, Gunnar König
2023-06-07T09:46:38Z
http://arxiv.org/abs/2306.04292v1
# Dear XAI Community, We Need to Talk! Fundamental Misconceptions in Current XAI Research

###### Abstract

Despite progress in the field, significant parts of current XAI research are still not on solid conceptual, ethical, or methodological grounds. Unfortunately, these unfounded parts are not on the decline but continue to grow. Many explanation techniques are still proposed without clarifying their purpose. Instead, they are advertised with ever more fancy-looking heatmaps or only seemingly relevant benchmarks. Moreover, explanation techniques are motivated with questionable goals, such as building trust, or rely on strong assumptions about the 'concepts' that deep learning algorithms learn. In this paper, we highlight and discuss these and other misconceptions in current XAI research. We also suggest steps to make XAI a more substantive area of research.

Keywords: XAI, Interpretable Machine Learning

## 1 Introduction

This is an unusual paper from start to end. We don't start the paper with generic examples of great Machine Learning (ML) achievements. The thoughts in this paper are directed at people who are already working on eXplainable Artificial Intelligence (XAI), so we are long past promotional talks. Our goals with this paper are twofold:

1. to highlight misconceptions within parts of the XAI community in past and current research;
2. to provide constructive feedback and steps forward to make XAI a scientific discipline that actually improves ML transparency.

After wrapping our heads around XAI-related topics for a couple of years, we became increasingly frustrated whenever we attended a workshop or conference on the topic. We do not claim that no progress is being made or that no high-quality research is being conducted. However, we are saddened that many computational, intellectual, and financial resources are being poured into projects that, in our view, do not stand on solid grounds:

* proposals for new interpretation techniques that serve no clear purpose
* anecdotal evidence from intuitive-looking heatmaps, or "benchmarks" on seemingly relevant criteria, used as a substitute for a clear motivation
* explanations that mislead humans into trusting ML models without the models being trustworthy

Instead of swallowing our frustration, we decided to channel it into this paper, with the hope of helping researchers avoid projects that might be technically interesting but conceptually unfounded. We believe that such a debate is especially urgent, since funding for XAI research is inexorably high and the community is ever-growing. Without clear purposes and proper conceptual foundations, the XAI boom could produce a bubble in danger of imploding. We would like to see our field become a pillar of ML transparency rather than the ML trust-washing machine.

The perspective we take is more of a philosophical bird's-eye view of XAI research. It is not our style to expose specific papers by pointing out their flaws. We also feel that this is not necessary, because the misconceptions discussed are 'elephants in the room' in our community. Before sharing our thoughts, we would like to point the reader to work that guided our perspective on XAI and that may help to underpin our arguments.

## 2 Related Work

Many papers criticize XAI on various grounds, and we believe many of the criticisms still apply to current XAI. We focus on the critiques that most impacted the community and/or our thoughts.
In his seminal paper, Zachary Lipton argues that XAI lacks a proper problem formulation and that this problem must be tackled to make progress as a field [28]. Instead of a well-defined goal, XAI offers a potpourri of motivations for explainability, such as increasing trust, fairness, or understanding. In summary, he argues that: "When we have solid problem formulations, flaws in methodology can be addressed by articulating new methods. But when the problem formulation itself is flawed, neither algorithms nor experiments are sufficient to address the underlying problem." [28, p.8]

Finale Doshi-Velez and Been Kim highlight the problem of assessing the quality of explanations and comparing different explanation techniques. They describe three potential standards for evaluation: application-, human-, and functionally grounded interpretability; the first two rely on human studies and the third on formal model properties [10]. They posit the intuitive principle that "the claim of the research should match the type of the evaluation." [10, p.9]

Cynthia Rudin provides examples of post-hoc explanations that can mislead the user because they are difficult to interpret [36]. She argues that this issue becomes particularly threatening when the stakes are high and model authorities have a financial interest in model opacity. Rudin and her co-authors point out that: "interpretable models do not necessarily create or enable trust - they could also enable distrust. They simply allow users to decide whether to trust them." [37, p.6] In consequence, they argue in favor of inherently interpretable models.

Our views on XAI have also been strongly shaped by philosophical discussions around explanation and interpretability. Philosophers gave formal accounts of what constitutes an explanatory relationship, namely a statement about the phenomenon to be explained (called the _explanandum_), a statement about a phenomenon that explains the explanandum (called the _explanans_), and an _explanatory link_ between explanans and explanandum [47, 19]. For formalizing the explanatory link, causal accounts dominate, where the explanans is a difference maker with respect to the explanandum [46]. Krishnan rightfully highlights the importance of distinguishing the causal explanatory from the justificatory role of explanations. She notes that the two may often not align in the context of XAI, as we might face explanations that do not justify decisions and justifications that do not explain them [24]. Others have emphasized the different explananda present in XAI: are we interested in explaining the model or the modeled real-world phenomenon [12, 42, 45]? Finally, Erasmus, Brunet, and Fisher argued that many statements may formally explain a phenomenon; however, it is often difficult to interpret these explanations correctly [11].

## 3 Misconceptions in XAI Research

In this section, we highlight the key misconceptions we see in current XAI research and illustrate them in little caricatures. For many of these misconceptions, we are not the first to identify them. However, these misconceptions have persisted over time despite strong and convincing criticism. We see nothing wrong in repeating true things that are still ignored by parts of our community.

#### Misconception 1: "Explanation Methods are Purpose-Free"

Many 'explanation methods' are presented as mathematical constructs without a conceptual or practical justification. Usually, such papers have the following storyline:
1. ML models are black-boxes
2. Explanations are needed because of [trust, transparency, detecting bugs, etc.]
3. Here are some formalisms, theorems, and the implementation
4. Look at the nice [images, text annotations, plots, etc.], don't they look exactly how you would expect them?
5. In this arbitrary benchmark we invented, our method is much better than all the others in 'explaining'.

However, it remains unclear why anyone should call these images or plots explanations in the first place. Worse, it even remains unclear what purpose these 'explanations' might serve and under what conditions they are helpful. We do not claim that explanations can serve only one purpose, but rather that they should serve at least one purpose. Moreover, it should be shown, or at least clearly motivated, how exactly the proposed explanation technique serves this purpose. One may contend here that we do science for science's sake; the purpose is knowledge. However, as long as we do not have a widely accepted definition of explainability or interpretability, a purpose is the only way to connect explainability techniques with the real world. 'Explanation techniques' that are not motivated by any practical purpose should be suspicious to our community. If you cannot think of any context in which your explanation helps potential explainees (i.e., the recipients of explanations), this is a good indication that you should trash the technique.

#### Misconception 2: "One Explanation Technique to Rule Them All"

There is a persistent belief in our community that we only need to find and research the single best explanation technique (e.g., SHAP), choose the best hyperparameters (e.g., the ideal baseline), and then we will always have the best explanations that provide perfect understanding. However, the goals we pursue with explanations are diverse: we may want to audit the model, learn something about the modeled phenomenon, debug models, or provide end-users with the ability to contest the model's decision or act based on it. Depending on the goal, an entirely different technique, with different hyperparameter choices and additional side constraints, may be appropriate.

Figure 1: Misconception that explanation methods are purpose-free

Figure 2: Misconception that there is one true explanation technique

Explanation purposes are generally in conflict. Counterfactual explanations are the ideal example to illustrate these conflicts and the trade-offs we must make [23]. In the original paper by Wachter et al. [44], counterfactuals are presented as explanations that provide understanding, contestability, and recourse. If we think of algorithmic recourse (counterfactuals that guide human actions to reach a desired outcome), the actionability of features is crucial; for example, humans cannot simply become younger to reach the desired outcome. Thus, age is not part of counterfactuals tailored for recourse. Discrimination based on age, on the other hand, might be a good reason to contest a decision. That is why age can surely be part of a counterfactual tailored for contesting. Finally, for the vague purpose of understanding the ML model, counterfactuals might not be the right tool at all, as they only provide extremely limited insight into the model. A toy illustration of this recourse-vs-contestability tension follows below.
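A minimal sketch (our own toy model and numbers, not from the paper) of how an actionability constraint changes the counterfactual that a simple grid search returns:

```python
import numpy as np

def model(x):
    """Toy credit model on (age, income); the decision is positive if score > 0."""
    return 0.1 * x[0] + 0.05 * x[1] - 10.0

def counterfactual(x, actionable):
    """Smallest single-feature change that flips the decision, via grid search."""
    best = None
    for j in actionable:
        for delta in np.linspace(-50.0, 50.0, 2001):
            x_cf = x.copy()
            x_cf[j] += delta
            if model(x_cf) > 0 and (best is None or abs(delta) < abs(best[1])):
                best = (j, delta)
    return best

x = np.array([30.0, 100.0])                   # rejected applicant: score = -2
print(counterfactual(x, actionable=[0, 1]))   # contesting: age may change -> (0, ~+20)
print(counterfactual(x, actionable=[1]))      # recourse: income only -> (1, ~+40)
```

Under the recourse reading, the age dimension is frozen and the suggested change is larger but actionable; under the contesting reading, the age-based counterfactual is precisely the interesting one.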
#### Misconception 3: "Benchmarks do not Need a Ground-Truth"

Benchmarks are meant to be objective comparisons between competitors according to a universally agreed standard. Machine learners love benchmarks. Benchmarks have been the bread and butter of ML research in the last decade and an important pillar for progressing the field. Because of the success of benchmarking in ML, the XAI community figured that benchmarks should be a central part of our field as well. Unfortunately, in XAI we generally lack the central element that makes objective comparisons possible in supervised ML - a ground truth. Without a ground truth, it is hard to come up with metrics that quantify desirable properties and that are widely agreed upon. Accepting the problem of the missing ground truth, there would have been two ways for progress in XAI:

1. abandon the idea of benchmarks in XAI altogether and move toward a more qualitative evaluation of explanations;
2. define benchmarks through the explanation purpose, i.e., how well the explanation serves that purpose, which gives us again some notion of ground truth.

Figure 3: Misconception that we can have benchmarks without ground-truth

Parts of our community, however, have taken less rocky paths: regardless of the explanation purpose, and with little conceptual motivation, they formally define properties that they optimize their explanations for. Other explanation techniques (often designed for completely different applications and optimized for distinct desiderata) are then compared according to their own standards. In this form, benchmarks lose their justification; they become advertisement space rather than an objective standard for comparison.

#### Misconception 4: "We Should Give People Explanations They Find Intuitive"

Many papers in our field use standards to motivate explanations that we find particularly questionable. For instance, explainees are given images or annotations that should convince them that the explanation technique actually highlights the right things. The images and annotations are tailored to look compelling and intuitive, conveying a message like - "You see, the model is actually looking at the parts of the object that you also look at when performing the task; you can trust this." As a consequence, we (over-)fit explanation techniques to human intuition; however, the question is whether these 'explanations' are still faithful to the explained ML model.

Figure 4: Misconception that the goal is to give people explanations they find intuitive

We think that a categorical mistake is made here; XAI should help make the model mechanism more transparent, not compel people into believing the system is good. Explanations provide grounds to decide whether to trust the model; they should not be designed to compel people into trusting it. We should distinguish between an _explanation_ of a decision and a _justification_ of a decision. Justifications are good reasons for a decision; explanations are the actual reasons for a decision [2, 24]. They may align in decisions where the actual reasons can be ethically justified. In XAI, however, they very often diverge. Think of cases where an 'explainer' 'explains' the predictions of the prediction model without any access to it beyond the single prediction. Or when the evaluation standard for explanations is which kinds of explanations people like better. Indeed, it can be argued that people also often provide only justifications for their actions, but do not provide their actual reasons or are often not even aware of them.
However, this is not an argument for why we should accept the same for XAI explanations; instead, we should strive for higher standards: explanations that are faithful to the causal decision-making process [16].

#### Misconception 5: "Current Deep Nets Accidentally Learn Human Concepts"

Big parts of our field share the following, in our opinion unwarranted, presupposition: deep neural nets learn the same concepts as humans. The idea is that early layers learn low-level concepts, such as edges in images or syllables in sound, while layers closer to the output learn high-level concepts, such as the concept of a wheel or the concept of a noun [34]. Concepts are assumed to be learned without explicitly forcing the model to learn them, but only by optimizing the model to classify images or correctly complete sentences. The assumption is that the only way to solve complex tasks is to use exactly the concepts that humans use [5]. Thus, all we need to do is to train the network and then use XAI techniques like activation maximization or network dissection to discover/reveal which nodes in the network stand for which concept, and then - tada - we have a fully transparent model where every part of the model stands for something, and the model basically does logical reasoning again [33]. We agree that this would be fantastic; however, for the following reasons, we are far more pessimistic concerning the conceptual reasoning in neural nets:

* Many regularization techniques, for instance dropout [41], explicitly force the model to represent information in a distributed manner by punishing overreliance on individual neurons.
* Even though research showed that some nodes in the network co-activate in the presence of certain concepts (actually, the percentage co-activation is far less impressive than one would think), the causal role of the concept is not shared [4, 32, 43, 9, 13]. That means that, for instance, cutting or intervening on the neuron in a bird classifier that 'represents' wings does not, or only marginally, change the model's performance/prediction when birds with different wings are presented. Is this really what we mean when we talk about representing concepts?
* One of the reasons why humans have shared concepts is that they need to effectively communicate with other humans about the world [15, 40, 35]. However, effective communication has not been a constraint in the training of ML models. Also, humans do not face one but a variety of different tasks. For simple classifications, abstract concepts are not needed, as there exist shortcuts [14].

Fancy images like those generated by activation maximization techniques [34, 31] should not fool us in this regard: just because the generated images have some wing-like elements does not mean that they represent wings. Not only are the images we get extremely sensitive to the source image on which we perform activation maximization [34], but they are likely to contain other forms and small shapes that we, as humans, blend out. For instance, research on adversarial examples indicates that deep nets use features in their classification that humans do not attend to [21]. It is questionable whether we as humans will ever understand the 'concepts' of ML models [6].

#### Misconception 6: "Every XAI Paper Needs Human Studies"

Many have pointed to the importance of human studies in making progress on XAI [27, 10, 8].
We agree that evaluating the quality of explanations based on their impact on human performance on a particular task (to which the explanations are tailored) is reasonable and solid research. However, when it comes to explaining a specific phenomenon, at least two distinct questions must be addressed [29]:

1. What counts conceptually as an explanation for the phenomenon?
2. Which among the explanations for the phenomenon are good explanations for a specific explainee?

While the latter question requires properly designed human studies, the former does not; instead, it is a philosophical/conceptual question that can be addressed with conceptual analysis and formal mathematical tools.

Figure 5: Misconception that current deep nets accidentally learn human concepts

Why is the conceptual definition of what counts as an explanation important at all? Why can't we go directly to the second step and test explanations in the real world, with real human explainees? In principle we could do that, but in practice the space of possible 'explanations' is unlimited. Conceptualizing what counts as an explanation for a phenomenon builds up the theory needed for an informed search for good explanations. In many cases where human studies are conducted, a more careful conceptual analysis would have been advisable. More generally, not conducting human studies does not mean dismissing explanation evaluation. For instance, a purely formal evaluation of explanation techniques can be justified if human studies have already been conducted for that type of explanation. Also, not all purposes of XAI require conducting human studies. For example, if we want to use XAI to estimate a specific quantity using the model, the speed and accuracy by which this quantity is measured allow us to compare it with other estimators of the same quantity [30].

#### Misconception 7: "XAI Methods can be Wrong"

Many papers have recently shown how saliency-based or model-agnostic explanation techniques like SHAP, LIME, and counterfactuals can be 'tricked' into providing any desired explanation [39, 26, 1, 22]. This has been taken as a major argument against these techniques and has led to claims that the techniques are wrong and to doubts about their reliability [26, 36, 37, 45]. To us, there seem to be misunderstandings concerning the consequences of these lines of research. While we allow for arbitrary model and data complexity, we require that explanations be simple. Therefore, explanations will indeed not be faithful to every aspect of the model. In this sense, they do nothing wrong; they describe the formal aspects they describe. The fact that explanations are not faithful to every model aspect is the motivation for having different kinds of XAI techniques, each illuminating a different aspect while neglecting another. You may be able to fool SHAP, you may be able to fool LIME, but you won't be able to fool all techniques all the time. It is difficult to find the right level of abstraction in a given context: easily interpretable and local explanations like counterfactuals might have too little expressive power, and they can be manipulated without changing much of the overall model behavior; more abstract and global explanations like partial dependence plots may zoom out too far, thereby allowing problematic behavior to hide in the specifics of the model.

Figure 6: Misconception that every XAI paper needs human studies
The fact that small model modifications can mislead explanation techniques is nevertheless important - it shows that the XAI techniques we have, and the explanations they provide, are very hard to interpret. We may need more diverse evidence to draw conclusions based on XAI explanations. Our field should take this as a call for developing XAI techniques on all levels of abstraction, describing all aspects of behavior relevant to real-world purposes.

#### Misconception 8: "Extrapolating to Stay True to the Model"

Most XAI techniques rely on probing the ML model in one way or another: LIME locally samples inputs, predicts them, and fits a linear model; counterfactuals search for close input points with a desired predicted class; permutation feature importance (PFI) permutes the values of a specific feature and measures the resulting drop in performance; activation maximization uses gradient ascent to find an input that maximally triggers a specific unit; integrated gradients approximate the path integral between the 'explained' image and a baseline image. The problem is not THAT the model is probed, but WHERE - namely in areas where it has not seen any data, i.e., in areas where the model has to extrapolate [20]. ML models are notoriously bad at extrapolating to completely unseen instances [17, 3, 18]. In extrapolation regions, models disagree even when fitted to exactly the same data and achieving similarly high performance on a test set. Asking an ML model to extrapolate is like asking a five-year-old kid who hasn't gone to school about her insights into algebraic topology. You might get an answer, but that answer will not really help you.

Figure 7: Misconception that XAI methods can be wrong

Recent literature argues that explanations that rely on extrapolation are true to the model, while those that only probe the model within the data manifold are true to the data [7].3 Clearly, since the model is defined for instances outside of the manifold, probing the model in these areas will give us further insight into the model (for purposes such as debugging or robustness checks) that we would not have gained otherwise. However, we believe that for most XAI purposes, we are interested in the behavior of the model in areas where it is (at least putatively) qualified. As soon as we leave the data manifold, the interpretation of explanation techniques becomes very blurry. We think it is highly problematic for the interpretation of current explanation methods that they rely so strongly on extrapolation. The sketch below makes this concrete for permutation feature importance.

Footnote 3: If we stay within the manifold, the model explanations can even be interpreted in terms of the data-generating mechanism [12].
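A minimal sketch (our own toy example, not from the paper) of why probing in extrapolation regions is treacherous: two models that agree almost everywhere on the data manifold receive very different permutation feature importances, because permuting a feature creates points off the manifold, where the models disagree.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)       # two almost perfectly correlated features
X = np.column_stack([x1, x2])
y = x1 + x2

f_a = lambda X: X[:, 0] + X[:, 1]         # model A uses both features
f_b = lambda X: 2.0 * X[:, 0]             # model B uses only the first feature
print(np.max(np.abs(f_a(X) - f_b(X))))    # tiny: the models agree on the manifold

def pfi(f, X, y, j):
    """Permutation feature importance: MSE increase after permuting column j."""
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # permuted points leave the manifold
    base = np.mean((f(X) - y) ** 2)
    return np.mean((f(Xp) - y) ** 2) - base

for name, f in [("model A", f_a), ("model B", f_b)]:
    print(name, [round(pfi(f, X, y, j), 2) for j in (0, 1)])
# model A ~ [2, 2]; model B ~ [8, 0]: very different 'explanations' of two
# models that are observationally equivalent on the data.
```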
## 4 Steps Forward

We hope that these misconceptions show: XAI is still a pre-paradigmatic discipline [25]. We cannot simply adopt some arbitrary assumptions and move on to paradigmatic scientific problem-solving. We must argue about the right conceptions of what the field is about, the language we should use, and the right evaluation strategies. We know that it is very easy to be critical and very difficult to be constructive. So we want to share at least some thoughts and intuitions about how we think the field should evolve to become a more substantive discipline.

Figure 8: Misconception that model evaluation in extrapolation regions is unproblematic

#### Step 1: Go From Purpose to Benchmark

Explanation techniques should start with a purpose. Again, this does not mean that they can serve only one purpose, but they should show that they serve at least one. A purpose is a goal humans have in mind when they ask for explanations. Once the purpose is fixed, the evaluation of the explanations follows naturally. Your explanation technique should enable debugging? Then the evaluation of the method should be a qualitative study of whether the method suits model developers and helps them debug their models. If your global explanation technique is supposed to infer relevant properties of the data-generating mechanism, then show in a simulation how well and how resource-efficiently your technique approximates these properties. When your local explanation technique is designed to provide recourse options to end-users, then either carefully justify desiderata for recourse conceptually and base your evaluation upon these desiderata, or test the suitability of these recourse options in experiments. The purpose determines the right evaluation metric; the evaluation metric(s) often allow for benchmarking. Different explanation techniques that are designed for the same purpose can be judged by the same evaluation metric(s) and thus benchmarked. One simple example is when two methods estimate the same quantity, i.e., a quantifiable property of the model.

#### Step 2: Be Clear What You Need to Explain and by What

Every explanation comes with an explanation target, the so-called _explanandum_. The explanandum specifies what is to be explained and is determined by the explanation's purpose. Very often, confusion in XAI research arises because it is unclear what the explanandum is in a given context. For instance, confusion about the right sampling technique is often implicit confusion about the right explanandum [45, 12]. XAI techniques may, for instance, aim to explain:

* the model prediction \(\hat{Y}\),
* the predicted target \(Y\), or
* an intermediate model element.

If you are clear about the explanandum, the second big question is by what you want to explain it - the so-called _explanans_. The explanans describes the factor(s) you are pointing to in order to account for the state of the explanandum. There are a variety of explanantia (plural of explanans) in XAI research, such as:

* the model inputs \(\overline{X}\),
* the predictors \(X\),
* the dataset or a subset of it, or
* intermediate model elements.

Finally, be clear on the connection between the explanans and the explanandum. Explanations can be established by pointing to associations between the explanans and the explanandum [38]. Usually, however, the relationship we are interested in is causal; that is, the explanans makes a difference for the explanandum [46]. While causal explanations are more desirable than references to mere associations, they are also more difficult to establish.

#### Step 3: Give Clear Instructions for How to Interpret Explanation Techniques

Interpreting the outputs of XAI techniques is extremely difficult. Rather than letting people figure out how to interpret XAI statements on their own, papers should provide clear guidance on how to do so. We believe that addressing the following questions in new proposals for XAI techniques would contribute to securing good usage:

* What purpose does this XAI technique serve and how should it be applied?
* Under which (model) conditions does the XAI technique enable a clear interpretation?
* How do the hyperparameters of the technique affect the interpretation?
* What is the intuitive meaning of extremes, namely high, close-to-zero, or negative values?
* In what way does the explanation guide actions and decisions?
* When is it better to rely on other explanation techniques, and why?

#### Step 4: XAI Needs Interdisciplinarity AND Expertise

XAI is a highly interdisciplinary field. XAI involves so many aspects that a single field would fail terribly; we need interaction. XAI needs to solve the following key questions, among others:

* **Conceptual:** What are relevant explanation purposes? What is required to establish an explanatory relationship between an explanans and an explanandum? What are general explanation desiderata for a specific purpose? How can explanations be conceptualized? How should explanations be interpreted?
* **Technical:** How can the conceptual definitions be described formally? What can be shown formally about the properties of these explanations? How can the explanations be computed efficiently? How can explanations be implemented accessibly and correctly? How should formalized explanations be interpreted?
* **Psychological:** How should explanations be visualized? What makes a good explanation for a particular explainee? What are context- and person-specific desiderata of explanations? What cognitive biases do people have when interpreting explanations? Is the explanation successful in serving the explanation purpose?
* **Social and Ethical:** Should we provide explanations at all and, if yes, what are the ethical desiderata? What are the risks of XAI in high-stakes decisions? How do explanations affect people's trust and actions? What level of transparency do we need?

Not every paper must involve researchers from each group. However, the questions in the different categories should be seen as closely tied: formal XAI methods without a conceptual foundation should be disregarded; conceptually solid XAI tools that experimentally fail in guiding humans should be modified and fine-tuned; finally, XAI explanations that successfully serve a morally questionable purpose should be dismissed. At the same time, nothing is wrong with XAI research that focuses on a narrow, field-specific question, such as providing a more efficient algorithm or testing a specific XAI method in human experiments concerning its success in finding model flaws. Every field has its expertise, and it is important that conceptual foundations, algorithms, experiments, and ethical evaluations live up to the highest standards of the individual fields. All we want to emphasize is not to run around with blinders on, but to recognize how the questions are intertwined.

## 5 Conclusion

This paper covered key misconceptions in current XAI research. In our opinion, the most important one is the idea of purpose-free explanations. Fixing specific purposes will provide a way to evaluate and benchmark XAI techniques objectively. The explanation purpose will also guide us: how XAI techniques must be constructed, when they should be used, and how they have to be interpreted. Overall, purpose-centered XAI research will help us make ML systems more transparent. Therefore, we hope that future researchers will start thinking more about the purpose of explanations before they make grand proposals for new methods.

## Acknowledgements

This project has been supported by the German Federal Ministry of Education and Research (BMBF) and the Carl Zeiss Foundation (project on "Certification and Foundations of Safe Machine Learning Systems in Healthcare").
2310.02105
Simulations of two-temperature jets in galaxy clusters: II. X-ray property of forward shock
Forward shocks by radio jets, driven into the intracluster medium, are one of the indicators that can be used to evaluate the power of the jet. Meanwhile high-angular-resolution X-ray observations show the Mach numbers of powerful radio jets are smaller compared to that of theoretical and numerical studies, $\mathcal{M_{\rm obs}} < 2$. Our aim is to systematically investigate various factors, such as projection effects and temperature non-equilibration between protons and electrons, that influence the Mach number estimate in a powerful jet. Using two-temperature magnetohydrodynamic simulation data for the Cygnus A radio jets, whose Mach number is approximately 6, we construct mock X-ray maps of simulated jets from various viewing angles. Further, we evaluate the shock Mach number from density/temperature jump using the same method of X-ray observations. Our results demonstrate that measurements from density jump significantly underestimate the Mach numbers, $\mathcal{M} < 2$, around the jet head at a low viewing angle, $\lessapprox 50^{\circ}$. The observed post-shock temperature is strongly reduced by the projection effect, as our jet is in the cluster center where the gas density is high. On the other hand, the temperature jump is almost unity, even if thermal electrons are in instant equilibration with protons. Upon comparison, we find that shock property of our model at viewing angle of $<$ $55^{\circ}$ is in a good agreement with that of Cygnus A observations.
Takumi Ohmura, Mami Machida, Hiroki Akamatsu
2023-10-03T14:49:11Z
http://arxiv.org/abs/2310.02105v1
# Simulations of two-temperature jets in galaxy clusters: II. X-ray property of forward shock

###### Abstract

Context: Forward shocks driven into the intracluster medium by radio jets are one of the indicators that can be used to evaluate the power of the jet. Meanwhile, high-angular-resolution X-ray observations show that the Mach numbers of powerful radio jets are smaller than those of theoretical and numerical studies, \(\mathcal{M}_{\rm obs}<2\). Aims: Our aim is to systematically investigate various factors, such as projection effects and temperature non-equilibration between protons and electrons, that influence the Mach number estimate in a powerful jet. Methods: Using two-temperature magnetohydrodynamic simulation data for the Cygnus A radio jets, whose Mach number is approximately 6, we construct mock X-ray maps of the simulated jets from various viewing angles. Further, we evaluate the shock Mach number from the density/temperature jump using the same method as in the X-ray observations. Results: Our results demonstrate that measurements from the density jump significantly underestimate the Mach numbers, \(\mathcal{M}<2\), around the jet head at a low viewing angle, \(\lessapprox 50^{\circ}\). The observed post-shock temperature is strongly reduced by the projection effect, as our jet is in the cluster center where the gas density is high. On the other hand, the temperature jump is almost unity, even if thermal electrons are in instant equilibration with protons. Upon comparison, we find that the shock properties of our model at viewing angles of \(<55^{\circ}\) are in good agreement with those of the Cygnus A observations. Conclusions: These results illustrate the importance of the projection effect for estimating the Mach number from the surface brightness profile. Furthermore, forward shock Mach numbers could be a useful probe to determine the viewing angles of young, powerful radio jets.

## 1 Introduction

Powerful radio jets are launched from radio-loud active galactic nuclei (AGN) (see review by Blandford et al. 2019). AGN jets can be grouped into two categories at kilo-parsec scales: the Fanaroff & Riley (FR) class I and FR class II sources (Fanaroff & Riley 1974). Typical FR II sources are more powerful than FR I sources, and the jet beam of an FR II source seems to maintain a relativistic velocity until its termination. The kinetic energy of these jets is converted into heating of the intracluster medium (ICM) through shocks, and it is widely accepted that this radio-mode feedback plays a fundamental role in the formation and evolution of galaxies and large-scale structure (e.g., Peterson & Fabian 2006; McNamara & Nulsen 2007; Fabian 2012, and references therein). However, there are many remaining questions about the energetics and dynamical properties of AGN jets.

Theoretical models of the FR II type radio source are well established (Scheuer 1974; Blandford & Rees 1974; Kaiser & Alexander 1997). Begelman & Cioffi (1989) showed that the accumulated plasma in the lobe is highly over-pressured against the ICM, and hence drives a strong forward shock into it. Therefore, a high-density shell can form around the lobe. This is in agreement with hydrodynamic and magnetohydrodynamic (MHD) simulations of jet propagation (e.g., Krause 2003; Gaibler et al. 2009; Hardcastle & Krause 2013; Perucho et al. 2014). In particular, when the jet beam can propagate stably and reach the jet head, the Mach number around the jet head is high (e.g., Ehlert et al. 2018; Perucho et al. 2019).
In high-angular-resolution X-ray observations, forward shocks are observed for both FR I and FR II sources. For example, the discontinuity in the X-ray image due to the forward shock is clearly visible in Hydra A (Nulsen et al. 2005), MS0735.6+7421 (McNamara et al. 2005), and Cygnus A (Wilson et al. 2006). Furthermore, ripples, like weak shocks, are observed around the radio lobes of the Perseus cluster (Fabian et al. 2006) and M87 (Forman et al. 2007). The forward shock information provides important clues for understanding the energetics of radio jets and the nature of the tenuous plasma. Furthermore, forward shocks are expected to be possible cosmic-ray acceleration sites (Fujita et al. 2007; Ito et al. 2011). In fact, Croston et al. (2009) reported the detection of a non-thermal X-ray component from the forward shock associated with Centaurus A. Using deep _Chandra_ X-ray data, Snios et al. (2018) investigated the properties of the forward shocks of Cygnus A, the archetype of Fanaroff-Riley type II radio galaxies (Fanaroff & Riley 1974; Carilli & Barthel 1996). The estimated jet power of Cygnus A is in the range of \(10^{45}-10^{46}\) erg s\({}^{-1}\). Surprisingly, the measured Mach numbers of the forward shocks of Cygnus A are below two, even around the jet head. This is inconsistent with several analytical and numerical studies. Meanwhile, Ineson et al. (2017) performed another estimation of the shock Mach number for a large sample of FR II radio galaxy lobes at redshifts of 0.1 and 0.5. Using non-thermal X-ray emission data, they evaluated the internal pressure of relativistic electrons in the lobe, and derived the Mach number from the pressure ratio between the internal lobe and the external ICM. Their results suggest that the median value of the Mach number is about two. However, this value might be somewhat underestimated, as it does not include the contribution of non-radiating particles in the lobe.

The forward shock of an AGN jet would be an ideal celestial laboratory to examine the electron heating mechanism of collisionless shocks. In a collisionless system, the efficient heating of electrons is not trivial (Schwartz et al. 1988; Matsukiyo 2010; Vink et al. 2015; Guo et al. 2017, 2018; Tran & Sironi 2020). The heavier protons hold most of the bulk kinetic energy, and hence retain most of the thermal energy downstream of the shock. Furthermore, the timescale of proton-electron temperature equilibration is given by

\[t_{\rm ep}=20\ {\rm Myr}\left(\frac{\ln\Lambda}{40}\right)^{-1}\left(\frac{n_{\rm p}}{10^{-2}\,{\rm cm}^{-3}}\right)^{-1}\left(\frac{T_{\rm e}}{10^{8}\,{\rm K}}\right)^{3/2}, \tag{1}\]

which is comparable to the shock propagation timescale, as the ICM is hot and has low density (Takizawa 1998). This is, therefore, expected to be the timescale of temperature equilibration far downstream of the forward shock. In the context of X-ray observations of galaxy clusters, the shock Mach number can be measured independently using the X-ray surface brightness and the spectroscopic temperature (Markevitch & Vikhlinin 2007). Some observations indicate the existence of temperature non-equilibrium in the post-shock region (Markevitch 2010; Hoshino et al. 2010; Akamatsu et al. 2011; Wang et al. 2018).
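For orientation, equation (1) is straightforward to evaluate numerically; a small sketch (the cluster-core values are taken from the model described below):

```python
def t_ep_myr(n_p_cm3, T_e_K, coulomb_log=40.0):
    """Proton-electron equilibration time of Eq. (1), in Myr."""
    return 20.0 * (coulomb_log / 40.0) ** -1 \
                * (n_p_cm3 / 1e-2) ** -1 \
                * (T_e_K / 1e8) ** 1.5

# Core of the model cluster: n0 = 0.05 cm^-3 and kT = 5 keV (T ~ 5.8e7 K).
print(t_ep_myr(0.05, 5.8e7))   # ~1.8 Myr; in the dilute outskirts it is much longer
```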
Both the projection and the viewing angle affect the measured Mach number. Several studies of cluster shocks have examined and discussed these effects (Skillman et al. 2013; Hong et al. 2015; Zhang et al. 2019; Breuer et al. 2020; Wittor et al. 2021); however, for the forward shocks of AGN jets, no quantitative discussion of these effects has been presented so far. Furthermore, when measuring the shock Mach number from an X-ray surface brightness profile, the actual density profiles are obtained by model fitting (e.g., Owers et al. 2009). Hence, we must also address this model dependence.

We studied the dynamical properties of powerful jets propagating within a Cygnus-A-like cluster using the two-temperature MHD simulations presented in Ohmura et al. (2022, hereafter referred to as Paper I). The aim of this study is to systematically investigate various effects, such as projection and temperature non-equilibrium, that influence the estimates of the Mach number from X-ray surface brightness profiles and spectroscopic temperature jumps in powerful radio jets. To achieve this purpose, we first focus on the thermodynamics of the shocked ICM, which is heated by the forward shock. Then, we conduct mock X-ray observations at several viewing angles to investigate the thermal X-ray properties of the forward shock. Our paper is structured as follows: in Section 2, we introduce the model for our MHD simulations and the numerical method of the mock X-ray observations. In Section 3, we report our MHD simulation results for the thermodynamical properties of the shocked ICM and the evolution of the forward shock. Section 4 describes the results of the mock X-ray observations in terms of their dependence on the viewing angle and on the fitting model for the X-ray surface brightness (Section 4.1) and the spectroscopic-like temperature (Section 4.2). The model dependence of the fitting results is discussed using actual X-ray data of Cygnus A in Section 5. Finally, a summary and discussion are given in Section 6.

## 2 Mock X-ray observation

### Numerical model

We presented the results of the two-temperature MHD jets in Paper I, using the CANS+ MHD code (Matsumoto et al. 2019). In this study, we analyze only the model B jet of Paper I. We briefly summarize the model and method employed in our MHD simulations. The simulations were carried out in a Cartesian domain of size \((L_{x},L_{y},L_{z})\in(\pm 32,\pm 32,96)\) kpc. The kinetic power of our jet and the ICM properties are roughly consistent with Cygnus A (Godfrey & Shabala 2013). Important parameters of the jet are listed in Table 1. The density profile of the ICM is given by

\[n(r)=\frac{n_{0}}{\left[1+(r/r_{\rm c})^{2}\right]^{3\beta/2}}, \tag{2}\]

where \(r=\sqrt{x^{2}+y^{2}+z^{2}}\) is the radius, and \(n_{0}\), \(r_{\rm c}\), and \(\beta\) represent the core density, the core radius, and the ratio of the specific energy in the galaxies to the specific thermal energy in the ICM, respectively (King 1962; Smith et al. 2002; Wilson et al. 2006). We set \(\beta=0.5\), \(r_{\rm c}=20\) kpc, and \(n_{0}=0.05\) cm\({}^{-3}\). Furthermore, our atmosphere is initially isothermal with a temperature \(kT_{\rm p}=kT_{\rm e}=5\) keV.
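As a quick numerical check, the \(\beta\)-model profile of equation (2) with the quoted parameters (exponent \(3\beta/2\), the standard form, which we assume here) can be tabulated as follows:

```python
import numpy as np

def n_icm(r_kpc, n0=0.05, r_c=20.0, beta=0.5):
    """Beta-model ICM density of Eq. (2), in cm^-3."""
    return n0 * (1.0 + (r_kpc / r_c) ** 2) ** (-1.5 * beta)

r = np.array([0.0, 20.0, 60.0, 96.0])   # radii within the simulation box, kpc
print(n_icm(r))                         # ~[0.05, 0.0297, 0.0089, 0.0046]
```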
### Mock X-ray observation X-ray observations provide the surface brightness and the spectroscopic temperature obtained by fitting a thermal model to the X-ray spectrum. The X-ray surface brightness is obtained as the integral of the X-ray emissivity, which is the sum of continuum and line components, from an optically thin plasma along the line of sight (LOS): \[{\rm SB}=\int_{\rm LOS}n_{\rm e}^{2}\Lambda(Z,T_{\rm e}){\rm d}V, \tag{3}\] where \(\Lambda(Z,T_{\rm e})\), \(Z\) and \({\rm d}V\) are the cooling curve, metallicity, and the volume element, respectively. The cooling curve \(\Lambda(Z,T_{\rm e})\) for each element in a specified spectral range is calculated as \[\Lambda(Z,T_{\rm e})=\int\varepsilon_{\nu}(Z,T_{\rm e}){\rm d}\nu, \tag{4}\] where \(\varepsilon_{\nu}\) is the specific X-ray emissivity. To construct the cooling curves, we use a spectral model based on the APEC model for a thermal plasma from the AtomDB database, and set a constant metallicity \(Z=0.5Z_{\odot}\), which is suggested by the X-ray observation of Cygnus A (Snios et al. 2018). Figure 1 shows cooling curves in the energy bands 0.5-3.5 keV, 3.5-7 keV, and 0.5-7.0 keV. The cooling curves for 0.5 - 7.0 keV (solid black) and 0.5 - 3.5 keV (solid blue) have a very weak dependence on the electron temperature over the range of typical ICM electron temperatures. Several X-ray observational studies of galaxy clusters therefore assume that the X-ray surface brightness can simply be obtained as follows (e.g., Owers et al. 2009; Zhuravleva et al. 2015; Snios et al. 2018), \[{\rm SB}=\int_{\rm LOS}n_{\rm e}^{2}\Lambda(T_{\rm e}){\rm d}V\approx A_{0} \int_{\rm LOS}n_{\rm e}^{2}{\rm d}V, \tag{5}\] where \(A_{0}\) is a constant. For the spectroscopic temperature \(T_{\rm spec}\), we adopt the spectroscopic-like formula proposed by Mazzotta et al. (2004): \[T_{\rm spec}=\frac{\int_{\rm LOS}WT_{\rm e}{\rm d}V}{\int_{\rm LOS}W{\rm d}V}, \tag{6}\] where \(T_{\rm e}\) is the electron temperature, and \(W=n_{\rm e}^{2}T_{\rm e}^{-3/4}\). Non-thermal components contribute to the X-ray emission, especially in radio lobes. In particular, non-thermal X-ray radiation from the lobes is brighter than the thermal radiation from the ICM in high-\(z\) and/or powerful radio galaxies (Turner & Shabala 2020). This study focuses on a nearby radio galaxy where the X-ray discontinuity is clearly visible. Therefore, we neglect the non-thermal X-ray radiation in this study. We calculate equations (5) and (6) over \(\pm 1\) Mpc along the LOS using snapshot data of our simulation. This integration length is comparable to \(r_{500}\) of Cygnus A (Halbesma et al. 2019). While our numerical model does not cover the whole cluster, we calculate the emissivity and spectroscopic-like temperature in the no-data region using values extrapolated from the \(\beta\)-profile (see equation (2)). 
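On a uniform grid, the projections of equations (5) and (6) reduce to weighted sums along the LOS axis. The following is a minimal sketch assuming hypothetical (nx, ny, nz) arrays aligned with the LOS; the extrapolation beyond the simulation box described above is omitted:

```python
import numpy as np

def mock_xray(n_e, T_e, dl_cm):
    """Approximate SB (eq. 5, up to the constant A_0) and the
    spectroscopic-like temperature (eq. 6) along the last axis.

    n_e [cm^-3] and T_e [K] are (nx, ny, nz) grids; dl_cm is the
    cell size along the LOS [cm].
    """
    em = n_e**2 * dl_cm                  # emission measure per cell
    sb = em.sum(axis=-1)                 # eq. (5), with A_0 omitted
    w = n_e**2 * T_e**(-0.75)            # Mazzotta et al. (2004) weight
    t_spec = (w * T_e).sum(axis=-1) / w.sum(axis=-1)   # eq. (6)
    return sb, t_spec
```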
### Broken Power-law model of surface brightness profiles Density jumps at a discontinuity of the X-ray surface brightness are conventionally determined by fitting with a broken power-law model, hereafter referred to as BknPow (e.g., Owers et al. 2009). In this model, the electron density of the broken power law, which assumes spherical symmetry, is given by \[n_{\rm e}(r)=\begin{cases}n_{0}\left(\frac{r}{r_{\rm sh}}\right)^{-\alpha_{1}}&(r<r_{\rm sh})\\ \frac{n_{0}}{C}\left(\frac{r}{r_{\rm sh}}\right)^{-\alpha_{2}}&(r>r_{\rm sh}),\end{cases} \tag{7}\] where \(n_{0}\) is the density normalization, \(\alpha_{1}\) and \(\alpha_{2}\) are the power-law indices, and \(r_{\rm sh}\) is the shock radius. At the location of the density discontinuity, \(C\) is the shock compression factor, \(C\equiv n_{2}/n_{1}\), where \(n_{1}\) and \(n_{2}\) are the pre- and post-shock densities, respectively. In the case of a forward shock of an AGN jet, the radius of curvature of the shock front is significantly smaller than that of the host cluster. Because the density slope \(\alpha_{2}\) must follow the ICM profile, we set the origin at the center of the cluster. Thus, we slightly modify the BknPow model, referring to it as the ModBkn model, to optimize it for the jet-ICM system as follows: \[n_{\rm e}(r)=\begin{cases}n_{0}\left(\frac{r}{r_{\rm sh}}\right)^{-\alpha_{1}}&(r<r_{\rm sh})\\ \frac{n_{0}}{C}\left(\frac{R}{R_{0}+r_{\rm sh}}\right)^{-\alpha_{2}}&(r>r_{\rm sh}),\end{cases} \tag{8}\] where \(r\), \(R\), and \(R_{0}\) are the projected radius from the spherical center, the projected radius from the AGN, and the projected distance from the AGN to the spherical center, respectively (see Figure 2). Figure 1: Cooling curves in the energy bands: 0.5–3.5 keV (blue), 3.5–7 keV (red), and 0.5–7.0 keV (black). The dashed lines show only the bremsstrahlung emission. We set \(Z=0.5Z_{\odot}\). The shaded region shows the range of gas temperatures in the shocked-ICM of our simulations. \begin{table} \begin{tabular}{c c c} \hline \hline Jet speed & \(v_{\rm jet}\) & 0.3\(c\) \\ Jet gas temperature & \(T_{\rm g,jet}\) & \(10^{10}\) K \\ Jet kinetic power & \(L_{\rm kin}\) & \(5.0\times 10^{45}\) erg s\({}^{-1}\) \\ Jet thermal power & \(L_{\rm th}\) & \(4.4\times 10^{44}\) erg s\({}^{-1}\) \\ Jet radius & \(r_{\rm jet}\) & 1 kpc \\ Jet sonic Mach number & \(\mathcal{M}_{\rm jet}\) & 6.2 \\ Jet magnetic field & \(B_{\rm g,jet}\) & 6.17 \(\mu\)G \\ Jet plasma beta & \(\beta_{\rm jet}\) & 5 \\ \hline ICM temperature & \(T_{\rm ICM}\) & 5 keV \\ Core density & \(n_{0}\) & \(5\times 10^{-2}\) cm\({}^{-3}\) \\ Core radius & \(r_{\rm c}\) & 20 kpc \\ Core parameter & \(\beta\) & 0.5 \\ ICM magnetic field & \(B_{x,\rm ICM}\) & 0.44 \(\mu\)G \\ ICM plasma beta & \(\beta_{\rm gas,ICM}\) & 1000 \\ \hline Numerical domain & \((L_{x},L_{y},L_{z})\in(\pm 32,\pm 32,96)\) kpc \\ Resolution & \(640\times 640\times 960\) (Uniform grids) \\ \hline \hline \end{tabular} \end{table} Table 1: Jet and ICM setup parameters of the MHD simulation Figure 2: Geometry cut along the LOS for ModBkn. This model is very similar to the model proposed by Bourdin et al. (2013), where the authors adopt it for a better fit at shock fronts that are less curved than the cluster. As already mentioned in Section 2.2, X-ray observational studies have assumed that the X-ray brightness is independent of the electron temperature (see equation (5)). Thus, under this assumption, the shock compression factor \(C\) can be obtained by fitting the surface brightness. In addition, the absolute normalization of the X-ray surface brightness is irrelevant to the determination of the shock compression factor. 
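Assuming the reading of equations (7) and (8) given above, the two density models can be sketched as follows (function names are ours):

```python
import numpy as np

def bknpow(r, n0, C, alpha1, alpha2, r_sh):
    """BknPow (eq. 7): r is measured from the curvature center of the
    shock front; the density drops by the factor C across r_sh."""
    return np.where(r < r_sh,
                    n0 * (r / r_sh)**(-alpha1),
                    (n0 / C) * (r / r_sh)**(-alpha2))

def modbkn(r, R, n0, C, alpha1, alpha2, r_sh, R0):
    """ModBkn (eq. 8): inside the shock the profile is centered on the
    shock arc (radius r); outside it follows the cluster profile
    centered on the AGN (projected radius R, shock front at R0 + r_sh)."""
    return np.where(r < r_sh,
                    n0 * (r / r_sh)**(-alpha1),
                    (n0 / C) * (R / (R0 + r_sh))**(-alpha2))
```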
Since our numerical model is aimed at a comparison with the _Chandra_ broadband (0.5 - 7.0 keV) observational data of Cygnus A, we adopt the same approximation (equation (5)) for the fiducial calculation. However, it is necessary to investigate the influence of temperature variations in the ICM across the shock on the measurements. We therefore confirm the validity of this approximation in Section 4.3. The shock Mach number is determined by the Rankine-Hugoniot jump conditions as \[C=\frac{n_{2}}{n_{1}}=\frac{\gamma_{\rm gas}+1}{\gamma_{\rm gas}-1+2/\mathcal{ M}^{2}}, \tag{9}\] where \(n_{1}\) and \(n_{2}\) are the pre- and post-shock densities, respectively. The adiabatic index \(\gamma_{\rm gas}\) is set to \(5/3\) in this study. Fitting was performed with the non-linear least-squares minimization Python package lmfit (Newville et al., 2021). The shock Mach number can also be measured from the spectroscopic-like temperature as \[\frac{T_{2}}{T_{1}}=\frac{[(\gamma-1)\mathcal{M}^{2}+2][2\gamma\mathcal{M}^{2 }-(\gamma-1)]}{(\gamma+1)^{2}\mathcal{M}^{2}}, \tag{10}\] where \(T_{1}\) and \(T_{2}\) are the pre- and post-shock temperatures, respectively. The Mach numbers measured from the density and temperature jumps can differ due to several factors, such as the projection effect and the temperature non-equilibrium between protons and electrons. Some observations compared the results of the two measurements (Akamatsu et al., 2017). Although the above discussion is based on the theory of hydrodynamic shocks, the influence of the magnetic field is expected to be very small for the forward shock. Observations indicate that the plasma beta of the ICM is very high (Govoni & Feretti, 2004). We further confirmed that the plasma beta of the post-shock region is significantly higher than 100 in our MHD simulation. To extract the radial profiles of the (averaged) X-ray surface brightness and the spectroscopic-like temperature, the region is taken to be a partial annulus, whose central angle is 120 degrees, placed around an X-ray discontinuity by hand. The radial profile is computed in increments of 1 kpc in the annulus. We set the curvature of the X-ray discontinuity and of the partial annulus to be almost the same. These procedures made use of SAOImage DS9, developed by the Smithsonian Astrophysical Observatory. 
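The conversions between the fitted jumps and the Mach number follow from inverting equations (9) and (10); a minimal sketch (our function names; scipy is used for the numerical inversion of equation (10)) is:

```python
import numpy as np
from scipy.optimize import brentq

G = 5.0 / 3.0  # adiabatic index adopted in this study

def mach_from_compression(C, g=G):
    """Invert equation (9): Mach number from C = n2/n1 (1 < C < 4)."""
    return np.sqrt(2.0 / ((g + 1.0) / C - (g - 1.0)))

def temperature_jump(M, g=G):
    """T2/T1 of equation (10) for a hydrodynamic shock of Mach M."""
    return ((g - 1.0) * M**2 + 2.0) * (2.0 * g * M**2 - (g - 1.0)) \
        / ((g + 1.0)**2 * M**2)

def mach_from_temperature(ratio, g=G):
    """Invert equation (10) numerically on M in [1, 100]."""
    return brentq(lambda M: temperature_jump(M, g) - ratio, 1.0, 100.0)

print(mach_from_compression(3.0))                    # 3.0
print(mach_from_temperature(temperature_jump(3.0)))  # recovers 3.0
```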
## 3 Results of MHD simulation In this study, we focus on the X-ray properties of the forward shock. From our MHD simulation performed in Paper I, we first summarize the details of the thermodynamical evolution of the shocked-ICM, which emits enhanced thermal X-ray emission. In particular, the temperature non-equilibrium between protons and electrons is important for the thermal emission. Then, we report the time evolution of the Mach number of the forward shock, to be compared with the Mach number measured from the mock X-ray observations in Section 4. ### Thermodynamics of shocked-ICM We elucidate the time evolution of the shocked-ICM, especially its thermodynamic balance. First, we show the number density distribution at \(t=9.94\) Myr in Figure 3a. The forward shock drives into the ICM and has an elliptical shape, which is the usual picture obtained in AGN jet simulations and is consistent with FR II radio sources. Electrons and protons are heated by the forward shock. Notably, protons are at first hotter than electrons in the shocked-ICM, as the shocks primarily heat protons in our simulations. Subsequently, electrons and protons approach temperature equilibrium through Coulomb collisions (see more details in the next paragraph). The shocked-ICM still exhibits different electron and proton temperatures in the region \(z>40\) kpc when the jets reach \(z\sim 90\) kpc. Sound waves are another important heating source of protons in the shocked-ICM. The origin of the sound wave production is the supersonic turbulent motion of the cocoon plasma (see Figure 3b). Because the plasma-\(\beta\) is very high in the shocked-ICM, the fraction of electron heating is almost zero. Thus, sound waves selectively heat protons. A more detailed analysis of the dissipation of sound waves driven by AGN jets is given in previous studies (Fujita & Suzuki, 2005; Bambic & Reynolds, 2019; Wang & Yang, 2022). To analyze the ICM thermodynamics in further detail, we define the shocked-ICM as the grid cells where \(T_{\rm e}<10^{8}\) K and \(n(t=t^{\prime})-n(t=0)>0.05n_{0}\), where \(t^{\prime}\) is the current time. We distinguish between the cocoon and the ICM using the first criterion. From the second criterion, the ICM region is further divided into the shocked-ICM region and the unperturbed ICM region. Figure 4 shows the averaged density of the shocked-ICM (top) and the averaged ratio of proton to electron temperature of the shocked-ICM (bottom) along the z-axis for model B at \(t=2.8,~{}4.2,~{}5.6,~{}8.4\), and \(9.8\) Myr, respectively. Herein, an averaged quantity \(q\) of the shocked-ICM along the z-axis is calculated in the form \[<q(z)>=\frac{\int\int q{\rm d}x{\rm d}y}{\int\int {\rm d}x{\rm d}y} \tag{11}\] for \(T_{\rm e}<10^{8}\) K and \(n(t=t^{\prime})-n(t=0)>0.05n_{0}\). The initial density profile is the \(\beta\)-model, and thus the averaged density profiles of the shocked-ICM along the z-axis have lower values farther from the core (see the top panel of Figure 4). The density has the highest value at the tips of the bow shock. In particular, the shock compression is effective in the early phase. The averaged density of the shocked-ICM likewise decreases with time due to adiabatic expansion. Electrons and protons do not have the same temperature in the shocked-ICM during the simulation time (see the bottom panel of Figure 4). Around the tips of the forward shock, the temperature ratio between protons and electrons is about six for all plots. Meanwhile, electrons and protons reach thermal equilibrium, \(T_{\rm e}=T_{\rm p}\), in the area close to the core due to Coulomb coupling. Notably, the timescale of proton-electron temperature equilibration there is less than 5 Myr (see Equation 1). In Figure 5, we plot the radial profiles of the electron (blue) and proton (red) temperatures at t = 8.4 (top), 9.1 (middle), and 9.8 (bottom) Myr, respectively. All panels plot temperatures along the x-axis at \(z=60\) kpc and \(y=0\) kpc. As mentioned above, protons receive a much larger fraction of the dissipation energy of the shocks and sound waves than electrons. Strong sound waves propagate in the shocked-ICM from the cocoon to the forward shock, as shown in the top panel of Figure 5 at \(x=4\), 5, and 6 kpc. Thus, the proton temperature in the shocked-ICM decreases in the radial direction. Meanwhile, electrons cannot receive heating energy through the sound waves. Therefore, the electron temperature of the shocked-ICM increases in the \(r\) direction, and is maximal at the front of the shock. Note that the sudden increase in electron temperature at the location of the sound waves is due to adiabatic compression, not shock heating. 
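The average of equation (11) over the shocked-ICM selection is a masked reduction on the simulation grid; a sketch with illustrative names follows (N0 is the core density of Table 1):

```python
import numpy as np

N0 = 0.05  # core density [cm^-3], Table 1

def shocked_icm_profile(q, n_now, n_init, T_e):
    """Equation (11): average q over the shocked-ICM in each xy-plane.

    Selection: T_e < 1e8 K and n(t) - n(t=0) > 0.05 * N0.
    All inputs are (nx, ny, nz) arrays; returns a z profile.
    """
    mask = (T_e < 1e8) & ((n_now - n_init) > 0.05 * N0)
    counts = mask.sum(axis=(0, 1))
    sums = np.where(mask, q, 0.0).sum(axis=(0, 1))
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```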
### Evolution of forward shock We show the time evolution of the forward shock Mach number in the xz-plane in Figure 6. A shock-finding algorithm is adopted to determine the shock Mach number, as described in Ryu et al. (2003) and Schaal & Springel (2015). The Mach number of the forward shock around the jet head is 6-7 for \(t<8\) Myr, as that of the injected beam is 6.3 for our model. The high Mach number region is only a small fraction of the forward shock. At the side of the forward shock, the Mach number has a smaller value of about two. After \(t>8\) Myr, the jet deteriorates due to the kink instability (see details in Paper I). Thus, the shape of the forward shock becomes wider, and the Mach number around the jet head is slightly reduced. As a consistency check, we evaluate the Mach number from the jet head velocity \(v_{\rm head}\). The ICM temperature is homogeneously 5 keV, and the Mach number is simply described as \(\mathcal{M}=v_{\rm head}/c_{\rm s}\). During \(t=4-8\) Myr, the jet head velocity \(v_{\rm head}\) is 0.025c-0.03c in our simulation data, from which we derive a Mach number of 6.5-7.8. After \(t>8\) Myr, the jet head velocity is reduced to 0.02c, which leads to \(\mathcal{M}\sim 5.2\). There are several points of tension, but we obtain a result roughly consistent with that estimated from the shock jump condition, as observed in Figure 6. 
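The jet-head consistency check amounts to dividing \(v_{\rm head}\) by the ICM sound speed. The sketch below assumes a mean molecular weight \(\mu=0.6\) (a standard value that is not quoted in the text), which reproduces the Mach numbers above:

```python
import numpy as np

KEV = 1.602e-9      # erg
M_P = 1.6726e-24    # g
C_LIGHT = 2.998e10  # cm/s

def sound_speed(kT_keV=5.0, mu=0.6, g=5.0/3.0):
    """Adiabatic ICM sound speed [cm/s]; mu=0.6 is our assumption."""
    return np.sqrt(g * kT_keV * KEV / (mu * M_P))

for v in (0.02, 0.025, 0.03):              # jet-head velocities / c
    print(v, v * C_LIGHT / sound_speed())  # M ~ 5.2, 6.5, 7.8
```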
## 4 Result of mock X-ray observation The main focus of this section is to understand the impact of the projection and two-temperature effects on the measured Mach number. We present the results of mock X-ray observations of our MHD model with different viewing angles, \(\theta=30^{\circ}\), \(45^{\circ}\), \(52^{\circ}\), \(60^{\circ}\), \(75^{\circ}\), and \(90^{\circ}\), where \(\theta\) is the angle between the LOS and the jet propagation direction. The observed time is set to \(t=9.94\) Myr because the length of the radio lobe is then consistent with the apparent size of Cygnus A. At this time, the total outburst energy of our model is \(\sim 1.6\times 10^{60}\) erg. We evaluate the Mach number in two different ways: by measuring the shock compression ratio from the fitting of the X-ray surface brightness profile, and by measuring the temperature jump from the spectroscopic-like temperature. ### X-ray Surface brightness In Figure 7a, we show the simulated X-ray surface brightness image of an AGN jet at a 60 degree viewing angle with overlaid radio intensity contours (white and red). Adopting a simplified assumption, the radio intensity is computed by integrating the product of the thermal electron energy density and the magnetic energy density along the LOS. The forward shock compresses the thermal gas, which is consequently clearly visible in the X-ray map. Furthermore, we observe the X-ray cavity, i.e., the surface brightness is depressed in the area spatially corresponding to the projected radio lobe, as the jet plasma has low density. Non-thermal X-ray emission, which originates from non-thermal relativistic electrons, is ignored. Thus, the X-ray jet and hotspot are not present on this map. As the MHD jet remains collimated and maintains a supersonic velocity at the observed time, the reverse shock survives at the jet head (see Paper I for details). Hence, we can observe the radio hotspot around the jet head, because the jet kinetic energy is converted into thermal electron and magnetic energy at the reverse shock. The model fitting results, which yield the shock compression parameter of the surface brightness profile across the forward shock, are shown in Figure 7b. The BknPow and ModBkn models fit the jump with shock compression ratios \(C=2.28\pm 0.06\) and \(3.24\pm 0.08\), respectively. The simulated X-ray discontinuity is more diffuse than both best fits. This is due to the smearing effect over the complex structure at the edge of the shock in the sector, as the shape of the forward shock is not a perfect arc. We explore the dependence of the measured Mach number on the viewing angle and show the results in Figure 8. The fitting parameters are listed in Table 2. Herein, we mention once more that the actual shock Mach number of the simulated jet around the jet head is about five at the observed time (Figure 6). At a 90 degree viewing angle, we measure the Mach number correctly using the ModBkn model. However, the BknPow model strongly underestimates the shock compression ratio compared with the ModBkn model. This tendency is observed at all viewing angles. For both models, the observed shock compression ratio depends strongly on the viewing angle, increasing with it. The X-ray forward shock at a low viewing angle spatially corresponds to the side of the forward shock, not the region around the jet head. From the head to the side, the Mach number becomes smaller (Figure 6). Hence, we must account for the effect of the viewing angle when measuring the Mach number. To explain the model dependence of the shock compression ratio, we show the radial profile of the thermal density, derived from the simulation result and from both best fits, in Figure 9. We measure the thermal density profile from the simulation result in the x-z plane (y = 0 kpc) in the sector of Figure A.1. As observed in the inset, the surface brightness profiles of both best fits are very similar. However, the derived thermal density profiles are different. Because the spherical center of the BknPow model does not spatially correspond to the AGN, the index of the ambient profile, \(\alpha_{2}\), for the BknPow model is clearly erroneous. Here, we note that the thermal density profile follows \(n(r)\to r^{-1.5}\) at large distances. Consequently, the BknPow model underestimates the density jump parameter \(C\) and the shock Mach number. ### Spectroscopic-like temperature From X-ray observations, we can measure the shock Mach number using the spectroscopic (observed) temperature jump in Equation (10), not only using the density jump estimated from the surface brightness profile. The shocked-ICM around the jet head is in a two-temperature state, where the electron temperature is lower than the proton temperature at the observed time (see Section 3). Hence, we calculate the spectroscopic-like temperature in two ways: by using the electron temperature in Equation (6) in a straightforward manner, and by using the gas temperature, \(T_{\rm gas}=0.5(T_{\rm p}+T_{\rm e})\), instead of the electron temperature. The latter case is the same as assuming a one-temperature plasma, i.e., temperature equilibrium between electrons and protons. Here, we mention once more that in our simulation, electrons receive 5 % of the energy dissipated at shocks. In Figure 10, we show spectroscopic-like temperature maps for the two- and one-temperature shock cases at a 60 degree viewing angle. The discontinuity of the spectroscopic-like temperature can be observed in both maps. The electron temperature at the forward shock around the jet head is about \(2.0\times 10^{8}\) K for the two-temperature plasma case, and \(1.0\times 10^{9}\) K for the temperature equilibrium case. 
These values are consistent with the shock jump condition (we describe the temperature jump condition for the two-temperature shock in the Appendix of Ohmura et al. (2020)). Meanwhile, the spectroscopic-like (observed) temperature is significantly lower than the electron temperature, due to the projection effect. Because the path length through the shocked region is smaller than that through the unshocked region, the foreground and background ICM, which have a lower temperature than the shocked gas, contribute strongly to the observed temperature. Moreover, we find that the observed temperature for the temperature equilibrium case is slightly higher than that of the two-temperature case. The dependence of the temperature jump on the viewing angle is plotted in Figure 11. We measure the spectroscopic-like temperature in the magenta sectors inside and outside of the forward shock, shown in Figure A.2, to determine the temperature jump. \begin{table} \begin{tabular}{c|c c c c|c c c c c} \hline & \multicolumn{4}{c|}{BknPow} & \multicolumn{5}{c}{ModBkn} \\ \hline \(\theta\) & \(C\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(r_{\rm sh}\) & \(C\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(r_{\rm sh}\) & \(R_{0}\) \\ \hline \hline \(30^{\circ}\) & \(1.32\pm 0.01\) & \(-0.22\pm 0.01\) & \(0.68\pm 0.04\) & \(11.0\pm 0.03\) & \(1.62\pm 0.01\) & \(-0.71\pm 0.01\) & \(1.32\pm 0.01\) & \(10.9\pm 0.01\) & 40.0 \\ \(45^{\circ}\) & \(1.71\pm 0.03\) & \(-1.61\pm 0.06\) & \(0.63\pm 0.11\) & \(11.2\pm 0.02\) & \(2.29\pm 0.07\) & \(-2.25\pm 0.11\) & \(1.32\pm 0.06\) & \(11.3\pm 0.00\) & 55.9 \\ \(52^{\circ}\) & \(2.00\pm 0.07\) & \(-2.29\pm 0.10\) & \(0.58\pm 0.02\) & \(10.9\pm 0.01\) & \(2.76\pm 0.10\) & \(-3.36\pm 0.19\) & \(1.33\pm 0.08\) & \(10.9\pm 0.01\) & 62.9 \\ \(60^{\circ}\) & \(2.28\pm 0.06\) & \(-3.49\pm 0.14\) & \(0.57\pm 0.01\) & \(10.4\pm 0.03\) & \(3.37\pm 0.09\) & \(-5.42\pm 0.27\) & \(1.39\pm 0.05\) & \(10.4\pm 0.05\) & 69.5 \\ \(75^{\circ}\) & \(2.30\pm 0.09\) & \(-2.56\pm 0.19\) & \(0.54\pm 0.02\) & \(9.44\pm 0.02\) & \(3.32\pm 0.14\) & \(-3.31\pm 0.29\) & \(1.38\pm 0.10\) & \(9.44\pm 0.02\) & 79.4 \\ \(90^{\circ}\) & \(2.44\pm 0.13\) & \(-1.85\pm 0.07\) & \(0.47\pm 0.02\) & \(7.71\pm 0.04\) & \(3.82\pm 0.16\) & \(-2.84\pm 0.12\) & \(1.20\pm 0.10\) & \(7.75\pm 0.03\) & 85.0 \\ \hline \end{tabular} \end{table} Table 2: Shock parameters of the simulation data. These parameters are described in Section 2.3. Figure 3: The structure of the simulated jet. (a) Slice of the number density distribution in the x-z plane (\(y=0\) kpc) at \(t=9.94\) Myr. (b) Slice of the gas pressure distribution in the x-z plane (\(y=0\) kpc). Supersonic turbulent motions of the cocoon, which is shocked jet gas, drive sound waves into the shocked-ICM. At first glance, the Mach number measured from the temperature jump is significantly underestimated in all cases. Figure 11 illustrates the same tendency as for the shock compression ratio (Figure 8). Even when the plasma has one temperature, i.e., in the efficient Coulomb collision case, the measured temperature jump is less than a factor of two, even at high viewing angles. Namely, the observed shock Mach number from the temperature jump is below 1.5. It is even more difficult to detect the temperature jump in the case of temperature non-equilibrium. The effect of the viewing angle on the observed temperature is almost the same as discussed in Section 4.1. Nevertheless, there is an additional factor reducing the temperature jump. 
When the viewing angle is low, the projected distance from the AGN to the jet head becomes short. Consequently, the LOS passes through a high density region, which contributes strongly to the observed temperature. ### Validity of the approximation for the thermal emission In this subsection, we discuss the effect of the electron temperature dependence of the X-ray emissivity (see also Figure 1). We made X-ray maps at \(\theta=90^{\circ}\) in three different energy bands (\(0.5-3.5\) keV, \(3.5-7\) keV, and \(0.5-7.0\) keV), and performed model fits of the X-ray surface brightness in the same forward shock region. The model fitting results are shown in Table 3. The ICM and shocked-ICM temperatures are about 5 keV and 10-50 keV, respectively, in the single-fluid case. For the \(3.5-7\) keV band, the cooling curve monotonically increases by a factor of two or three between 5 keV and 50 keV. As a result, the measured shock compression ratio is larger than 4, which is the maximum shock compression in hydrodynamics. In contrast, the electron temperature dependence of the X-ray emissivity in the temperature range of interest is weak for the 0.5 - 3.5 keV and 0.5 - 7.0 keV bands. We can see that the compression ratios observed in these bands differ by about \(\pm 0.2\) from the result obtained using the simple approximation. Note that if we assume the two-temperature plasma, this difference becomes smaller, because the electron temperature of the two-temperature plasma is lower than that of the single-temperature plasma. Therefore, ignoring the temperature dependence of the X-ray emissivity is an acceptable approximation, unless one compares to data in the 3.5 - 7.0 keV band. ## 5 Application for forward shock of Cygnus A First, we present the fitting results of the forward shock of Cygnus A. In the previous section, we showed that the shock Mach number can be determined with good accuracy using the _Chandra_ broadband X-ray surface brightness profile, because the X-ray emissivity has a weak dependence on the electron temperature in this observed range. Furthermore, the ModBkn model evaluates the Mach number more accurately than the BknPow model. Thus, we apply this model to the actual X-ray data of Cygnus A to investigate the model dependence. To perform the investigation, we use archival _Chandra_ observation data in an energy band of 0.5 - 7.0 keV. The data were processed with the latest CALDB using the Chandra tool chandra_repro available in CIAO. Then, we provide a physical interpretation of Cygnus A, achieved from the comparison of the mock X-ray results and the actual observations. Figure 4: Averaged density profile of the shocked-ICM along the z-axis (**top**) and the averaged ratio of proton to electron temperature (**bottom**) at \(t=2.8\), \(4.2\), \(5.6\), \(8.4\), and \(9.8\) Myr. Figure 5: Radial profiles of the electron (blue) and proton (red) temperature for model B at t = 8.4 (**top**), 9.1 (**middle**), and 9.8 (**bottom**) Myr, respectively. All panels are plotted along the x-axis at \(z=60\) kpc and \(y=0\) kpc. Black arrows indicate the location of the sound waves. Figure 6: Contours of the forward shock Mach number in the x-z plane at \(t=2.80\), \(5.60\), \(7.84\), and \(9.94\) Myr, from left to right. For BknPow, the spherical center does not coincide with the AGN, so the assumption that the ambient density profile has its origin at the AGN would not be a good approximation. The fitting values of \(\alpha_{2}\), therefore, would not be consistent with the profile of the entire _Chandra_ field. In contrast, for ModBkn, the shock compression ratios vary within 1.90-2.40, and they are higher than those for BknPow, in all sectors. 
The value of \(\alpha_{2}\) is approximately 1.5, which is consistent with the profile of the whole _Chandra_ field. ModBkn may also provide more accurate fits, as its reduced chi-square values are lower than those of BknPow in all cases. However, we cannot conclude that ModBkn is statistically better. Finally, the measured shock distances \(r_{\rm sh}\) exhibit similar values for the two models. ### Comparison with our jet model and physical interpretation Our model contributes to determining the viewing angle. The viewing angle of Cygnus A is poorly constrained by observations, ranging from 35\({}^{\circ}\) to 80\({}^{\circ}\) (Bartel et al., 1995). From the comparison of the two results (see Tables 2 and 4, and Figure 8), it is reasonable that the viewing angle of Cygnus A roughly ranges within 35\({}^{\circ}\)-55\({}^{\circ}\). Of course, part of our constraint is model dependent. If the actual total outburst energy and power of Cygnus A are significantly higher than those of our model, the range of the viewing angle tends to be lower. Comparing our simulations and the X-ray observation of Cygnus A, the sign of \(\alpha_{1}\), the index of the shock slope, is different. The positive sign of \(\alpha_{1}\) for Cygnus A indicates that the gas density is higher inside the forward shock than at the shock front. One explanation for this difference is that our model ignores non-thermal emission via inverse Compton scattering by relativistic electrons in the lobe. Detailed X-ray analyses (Yaji et al., 2010; de Vries et al., 2018) show that there is a large population of non-thermal X-ray components in the lobe and jets. To investigate the effect of non-thermal components on the measurement of the shock Mach number, we performed the same analysis using the _Chandra_ soft-band data set (0.5-1.2 keV), whose energy band is expected to have only a small contribution from non-thermal radiation. Although the systematic error is large due to the small photon numbers, we observe that \(\alpha_{1}=0.39\pm 0.29\) in sector B, which is slightly smaller than that observed in the energy band of 0.5 - 7 keV. Figure 9: Radial profile of the thermal density. Red dashed and green dotted lines correspond to the best fitting BknPow model and ModBkn model at a 90 degree viewing angle, respectively. A sector used to measure the surface brightness profiles is shown in Figure A.1 at a 90 degree viewing angle. We measure the thermal density profile from the simulation result in the x-z plane (y = 0 kpc) in the sector. The shaded region shows the maximum and minimum thermal density, and the black solid line represents the average value. The BknPow and ModBkn models fit the jumps with shock compression ratios \(C=2.44\pm 0.13\) and \(3.82\pm 0.16\), respectively. The surface brightness profile (black circles) and the best fits of BknPow (red dashed line) and ModBkn (green dotted line) are shown in the inset. Note that the shock compression ratio \(C\) remains relatively unchanged for both energy bands. From the simulation results, \(\alpha_{1}\) is larger when the viewing angle is lower. While this trend may support that Cygnus A has a small viewing angle, the modeling of non-thermal components is needed for further discussion. Next, we discuss the spectroscopic temperature. Snios et al. (2018) reported the spectroscopic temperature jump of Cygnus A. They found that the temperature jump is almost unity, which is significantly smaller than the value predicted from the Mach number measured using the shock compression. 
The spectroscopic-like temperature is significantly affected by the projection effect in the jet-ICM system. These observational results are, both quantitatively and qualitatively, consistent with our two-temperature shock model, regardless of the viewing angle (Figure 11). Unfortunately, if the viewing angle of Cygnus A is lower than 50 degrees, as in the discussion above, it is difficult to decide whether the shocked plasma is in the two-temperature state. Our simulations adopt an isothermal \(\beta\)-model with a 5 keV ICM. However, a temperature gradient exists in the X-ray observations (Snios et al., 2018): the temperature far from the AGN is hotter than that near the AGN. The temperature gradient slightly affects the spectroscopic-like temperature map and the temperature jump at the forward shock. As discussed in Snios et al. (2018), the smearing effect over the complex structure at the edge of the shock can also reduce the measured jump. \begin{table} \begin{tabular}{c|c c c c c|c c c c c} \hline & \multicolumn{5}{c|}{BknPow} & \multicolumn{5}{c}{ModBkn} \\ \hline Sector & \(C\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(r_{\rm sh}\) & \(\chi^{2}\)/d.o.f & \(C\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(r_{\rm sh}\) & \(\chi^{2}\)/d.o.f \\ \hline \hline A & \(1.49\pm 0.05\) & \(0.60\pm 0.07\) & \(0.89\pm 0.01\) & 0.23 & 1.10 & \(2.23\pm 0.08\) & \(0.64\pm 0.07\) & \(1.5\pm 0.02\) & 0.23 & 1.05 \\ B & \(1.73\pm 0.07\) & \(0.27\pm 0.11\) & \(0.93\pm 0.01\) & 0.22 & 1.61 & \(2.49\pm 0.11\) & \(0.48\pm 0.09\) & \(1.6\pm 0.02\) & 0.23 & 1.46 \\ C & \(1.54\pm 0.09\) & \(0.48\pm 0.13\) & \(0.90\pm 0.01\) & 0.23 & 1.71 & \(2.18\pm 0.11\) & \(0.58\pm 0.10\) & \(1.51\pm 0.02\) & 0.25 & 1.24 \\ A+B+C & \(1.44\pm 0.04\) & \(0.60\pm 0.08\) & \(0.90\pm 0.01\) & 0.23 & 1.44 & \(1.90\pm 0.05\) & \(0.82\pm 0.05\) & \(1.49\pm 0.01\) & 0.25 & 1.09 \\ \hline \end{tabular} \end{table} Table 4: Shock parameters of Cygnus A. In ModBkn, the projected radius from the AGN to the center of the sector, \(R_{0}\), is 0.78 arcmin for all sectors. Figure 10: Spectroscopic-like temperature maps in the case of the two-temperature shock (**left**) and the one-temperature shock (**right**) at a 60 degree viewing angle. The magenta annular sector is used to measure the extracted post- and pre-shock temperatures, and the middle arc shows the position of the X-ray discontinuity. ## 6 Summary and discussions We report the thermodynamics of the shocked-ICM and the evolution of the forward shock in our two-temperature MHD simulation of AGN jets. We performed mock X-ray observations to measure the Mach number of the forward shock from the surface brightness and the spectroscopic-like temperature. The Mach number of the forward shock in analytical and numerical models is significantly higher than that of the observations (Hardcastle & Croston, 2020). This study systematically investigates various effects that influence the observed Mach number. Our results attribute this discrepancy to the projection effect; in particular, the shock Mach number measured from the surface brightness profile is significantly underestimated when the viewing angle is low. Our mock X-ray observations indicate that the Mach number measured from the surface brightness profile is very sensitive to the viewing angle, and monotonically increases with it. Because the curvature of the forward shock is significantly higher than that of the cluster, we propose a new fitting model, ModBkn, instead of the traditional model BknPow. We demonstrate that ModBkn provides a higher Mach number than BknPow. 
Further, the result of ModBkn is consistent with the actual shock Mach number of the MHD data at high viewing angles. Spectroscopic-like temperatures are calculated from our MHD data. The projection effect significantly reduces the temperature jump, which is lower than that expected from the Rankine-Hugoniot condition. Even if electrons cannot be heated instantaneously at the shock front, i.e., the plasma is two-temperature in the post-shock region, the detection of the temperature jump is very difficult. We estimate the shock Mach number of Cygnus A using archival _Chandra_ observation data with BknPow and ModBkn. Compared with BknPow, the best fits of ModBkn show the following three results: (1) the shock compression ratio is higher; (2) the slope of the ICM, \(\alpha_{2}\), is consistent with the observed slope; (3) the reduced chi-square values are smaller for all regions, but we cannot conclude that ModBkn is statistically better. X-ray observational studies have assumed that the surface brightness is independent of the electron temperature in the measurement of the shock compression factor. To check the validity of this approximation, we conducted mock X-ray observations in three different energy bands. We find that the approximation is reasonable when X-ray data in the \(0.5-7.0\) keV and \(0.5-3.5\) keV bands are used. From the mock X-ray observation results, the viewing angle of Cygnus A may range within \(35^{\circ}\)-\(55^{\circ}\). In this range, the observed Mach number and the temperature jump are consistent with our analysis of Cygnus A. However, we note that the lower limit of the viewing angle is very rough because our X-ray maps at low viewing angles are small compared with those of the actual observations. Our results indicate that X-ray observations can be used to constrain numerical models of powerful and young radio jets, like those of Cygnus A, in which the jet drives a strong shock. However, the results of this study do not directly contribute to solving the cooling flow problem, as heating by strong forward shocks is a sub-dominant channel for transporting jet energy to the ICM (Heinz, 2003). In this study, we only focus on the forward shocks of young, powerful FR II type jets. However, the ModBkn model can also be adopted for measuring the density jump of shocks and cold fronts that are more or less curved than the cluster. The shocks of the galaxy clusters Abell 2146 (Russell et al., 2010) and Abell 521 (Bourdin et al., 2013) may be good candidates to test our fitting model. Our model has several limitations. Primarily, we do not treat heat conduction. Further, there is a high temperature gradient across the shock front. Thus, the shock becomes diffusive, and a temperature and density precursor can form in the pre-shock region if thermal conduction works efficiently across the shock front (Komarov et al., 2020). This would decrease the measured Mach number. However, whether heat conduction works efficiently depends on the thermal conductivity and the topology of the magnetic fields. Furthermore, if the Braginskii viscosity (Braginskii, 1965) and the turbulent motion of the ICM were incorporated, the results would change. The presence of cosmic rays likewise influences the dynamics. The back-reaction from cosmic rays on the fluid modifies the shock structure, and the shock jump becomes smaller (Drury & Falle, 1986). In a future study, we aim to conduct MHD simulations including these omitted factors. ###### Acknowledgements. 
We thank the anonymous referee for the useful comments that greatly improved the presentation of the paper. This work was supported by JSPS KAKENHI Grant Numbers JP22K14032 (T.O.) and 19K03916 (M.M.). Our numerical computations were carried out on the Cray XC50 at the Center for Computational Astrophysics of the National Astronomical Observatory of Japan. The computation was also carried out using the computer resources provided by the Research Institute for Information Technology, Kyushu University. This work was also supported in part by MEXT as a priority issue (Elucidation of the fundamental laws and evolution of the universe) to be tackled by using the post-K Computer and JICFuS, and by MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Toward a unified view of the universe: from large scale structures to planets). SRON is supported financially by NWO, the Netherlands Organization for Scientific Research.
2309.00671
Dark Energy Survey Year 6 Results: Intra-Cluster Light from Redshift 0.2 to 0.5
Using the full six years of imaging data from the Dark Energy Survey, we study the surface brightness profiles of galaxy cluster central galaxies and intra-cluster light. We apply a ``stacking'' method to over four thousand galaxy clusters identified by the redMaPPer cluster finding algorithm in the redshift range of 0.2 to 0.5. This yields high signal-to-noise radial profile measurements of the central galaxy and intra-cluster light out to 1 Mpc from the cluster center. Using redMaPPer richness as a cluster mass indicator, we find that the intra-cluster light brightness has a strong mass dependence throughout the 0.2 to 0.5 redshift range, and the dependence grows stronger at a larger radius. In terms of redshift evolution, we find some evidence that the central galaxy, as well as the diffuse light within the transition region between the cluster central galaxy and intra-cluster light within 80 kpc from the center, may be growing over time. At larger radii, more than 80 kpc away from the cluster center, we do not find evidence of additional redshift evolution beyond the cluster mass dependence, which is consistent with the findings from the IllustrisTNG hydrodynamic simulation. We speculate that the major driver of intra-cluster light growth, especially at large radii, is associated with cluster mass growth. Finally, we find that the color of the cluster central galaxy and intra-cluster light displays a radial gradient that becomes bluer at a larger radius, which is consistent with a stellar stripping and disruption origin of intra-cluster light as suggested by simulation studies.
Yuanyuan Zhang, Jesse B. Golden-Marx, Ricardo L. C. Ogando, Brian Yanny, Eli S. Rykoff, Sahar Allam, M. Aguena, D. Bacon, S. Bocquet, D. Brooks, A. Carnero Rosell, J. Carretero, T. -Y. Cheng, C. Conselice, M. Costanzi, L. N. da Costa, M. E. S. Pereira, T. M. Davis, S. Desai, H. T. Diehl, P. Doel, I. Ferrero, B. Flaugher, J. Frieman, D. Gruen, R. A. Gruendl, S. R. Hinton, D. L. Hollowood, K. Honscheid, D. J. James, T. Jeltema, K. Kuehn, N. Kuropatkin, O. Lahav, S. Lee, M. Lima, J. Mena-Fernández, R. Miquel, A. Palmese, A. Pieres, A. A. Plazas Malagón, A. K. Romer, E. Sanchez, M. Smith, E. Suchyta, G. Tarle, C. To, D. L. Tucker, N. Weaverdyck
2023-09-01T18:00:00Z
http://arxiv.org/abs/2309.00671v1
# Dark Energy Survey Year 6 Results: Intra-Cluster Light from Redshift 0.2 to 0.5 ###### Abstract Using the full six years of imaging data from the Dark Energy Survey, we study the surface brightness profiles of galaxy cluster central galaxies and intra-cluster light. We apply a "stacking" method to over four thousand galaxy clusters identified by the redMaPPer cluster finding algorithm in the redshift range of 0.2 to 0.5. This yields high signal-to-noise radial profile measurements of the central galaxy and intra-cluster light out to 1 Mpc from the cluster center. Using redMaPPer richness as a cluster mass indicator, we find that the intra-cluster light brightness has a strong mass dependence throughout the 0.2 to 0.5 redshift range, and the dependence grows stronger at a larger radius. In terms of redshift evolution, we find some evidence that the central galaxy, as well as the diffuse light within the transition region between the cluster central galaxy and intra-cluster light within 80 kpc from the center, may be growing over time. At larger radii, more than 80 kpc away from the cluster center, we do not find evidence of additional redshift evolution beyond the cluster mass dependence, which is consistent with the findings from the IllustrisTNG hydrodynamic simulation. We speculate that the major driver of intra-cluster light growth, especially at large radii, is associated with cluster mass growth. Finally, we find that the color of the cluster central galaxy and intra-cluster light displays a radial gradient that becomes bluer at a larger radius, which is consistent with a stellar stripping and disruption origin of intra-cluster light as suggested by simulation studies. keywords: galaxies: evolution - galaxies: clusters: general ## 1 Introduction Galaxy clusters contain a diffuse stellar component of intra-cluster light (ICL). First discovered more than half a century ago (Zwicky, 1951, 1952), ICL is abundant around the cluster central galaxies (CGs) or the brightest cluster galaxies (BCGs), and contains stars dispersed into the intra-cluster space. It has been studied using optical or infrared imaging and spectroscopic observations, which have been reviewed in Abraham et al. (2017); DeMaio (2017); Contini (2021); Montes (2022); Arnaboldi & Gerhard (2022). Because of the ICL's faint brightness and the overall difficulties of studying low surface brightness features (Abraham et al., 2017; Mihos, 2019), ICL has remained a poorly understood subject until recently, when the number of studies jumped with renewed interest due to new data (Montes & Trujillo, 2022), simulations (Shin et al., 2022), and techniques (Marini et al., 2022). Simulation and semi-analytical studies have investigated ICL formation in many different channels (e.g., Sommer-Larsen, 2006; Rudick et al., 2006; Barai et al., 2009; Rudick et al., 2009; Puchwein et al., 2010; Contini et al., 2014; Martel et al., 2012) including galaxy disruption, stellar stripping, merging (Murante et al., 2004, 2007; Contini et al., 2018) and preprocessing (Chun et al., 2022). The ICL's formation is often studied together with the evolution of cluster CGs or even the cluster's overall galaxy distribution due to difficulties in separating the two (e.g., Conroy et al., 2007; Monaco et al., 2006; Cooper et al., 2015; Pillepich et al., 2018; Canas et al., 2020). 
Different channels of ICL formation carry implications for the ICL's observational properties and their redshift evolution in terms of age, color, and metallicity (Harris et al., 2017; Contini et al., 2019), fraction of ICL in the cluster stellar light (Purcell et al., 2007; Murante et al., 2007; Cui et al., 2014; Tang et al., 2018), morphology (Rudick et al., 2006), or scaling relations to cluster mass or mass distribution (Alonso Asensio et al., 2020; Contini & Gu, 2021). For example, Contini et al. (2019) analyzed ICL color and metallicity using semi-analytical models which contain ICL formed through tidal stripping of cluster satellite galaxies as well as through merging relaxation; they found a negative radial color and metallicity gradient. From hydrodynamic simulations, Pillepich et al. (2018) found that ICL stellar mass strongly correlates with the host halo mass, but this correlation appears to evolve little from redshift 1 to 0. In observational studies, the formation and evolution of ICL has been studied using its color (Mackie, 1992; Krick et al., 2006; Krick & Bernstein, 2007; Montes & Trujillo, 2014; DeMaio et al., 2015; Iodice et al., 2017; Morishita et al., 2017; DeMaio et al., 2018; Montes & Trujillo, 2018; Ko & Jee, 2018; Montes et al., 2021; Ragusa et al., 2021; Yoo et al., 2021; Golden-Marx et al., 2023a; Martinez-Lombilla et al., 2023), stellar mass (Krick et al., 2011; Burke et al., 2012; DeMaio et al., 2020; Spavone et al., 2020; Barfrey et al., 2022), and stellar population spectroscopy (e.g., Coccato et al., 2010, 2011; Ventimiglia et al., 2011; Arnaboldi et al., 2012; Longobardi et al., 2015a; Edwards et al., 2016; Barbosa et al., 2016), and is often investigated together with BCG evolution (e.g., Gonzalez et al., 2005; Zhang et al., 2016; Golden-Marx et al., 2022). For example, DeMaio et al. (2020) studied the stellar mass of the BCG and ICL between redshift 0.05 to 1.75 and found its growth rate to be greater than that of the cluster by a factor of two. They also found that the core of the BCG formed early while the BCG outskirts and ICL were built at later times. On the other hand, detailed analysis of local BCG and ICL stellar populations by Edwards et al. (2020) indicates that while the stellar population in the ICL is old, it is still younger (\(\approx 9\) Gyr) than the BCG (\(\approx 13\) Gyr), pointing towards a late and continuous formation of ICL through minor merging. In this work, we continue the observational study of ICL evolution by examining its properties in the redshift range of 0.2 to 0.5. Our work is based on thousands of galaxy clusters and the full six years of observations from the Dark Energy Survey (DES) (Abbott et al., 2021), a wide-field imaging survey (DES Collaboration, 2005) designed to probe cosmic structures in the late Universe (e.g., Abbott et al., 2020, 2022a,b; DES Collaboration et al., 2022). We use a "stacking" method (Zibetti et al., 2004, 2005; Tal & van Dokkum, 2011; Zhang et al., 2019; Sampaio-Santos et al., 2021; Chen et al., 2022; Ahad et al., 2023) with the DES galaxy cluster sample to reduce measurement noise. Our goal is to acquire high signal-to-noise measurements of the ICL surface brightness profile, color, and luminosity and quantify their evolution between redshift 0.2 and 0.5. This paper presents one of the largest ICL redshift evolution studies, based on a cluster sample a few times larger than that in Golden-Marx et al. 
(2023a) which used a Cosmic Microwave Background (CMB) selected cluster sample from the Atacama Cosmology Telescope. One challenge to ICL studies is the difficulty in disentangling ICL from cluster CGs (see discussions in Dolag et al., 2010; Contini et al., 2022). Although stars in the intra-cluster space may have different stellar composition or dispersion dynamics (e.g., Longobardi et al., 2015, 2018a, 2018; Hilker et al., 2018; Gu et al., 2020; Perez-Hernandez et al., 2022) than cluster CGs or BCGs, from imaging data alone it is often difficult to separate the ICL from the low-surface brightness outskirts of those galaxies. Different separation methods have been suggested (Rudick et al., 2011), including analytical decomposition, machine learning algorithms (Marini et al., 2022), surface brightness limits (Presotto et al., 2014), and physical distance apertures to separate those components. In this paper, we follow the practice of Pillepich et al. (2018) who analyzed the CG and ICL as the "diffuse light" of galaxy clusters. We use the phrase diffuse light interchangeably with CG+ICL in this paper. When needed, we use a physical aperture to separate the CG and ICL, with ICL defined as the diffuse light beyond 30 kpc from the CG center (an outer radius limit is defined according to the context), while the CG is defined as the diffuse light component within 30 kpc. The remainder of this paper is organized as follows. In Section 2 we review our data sets and the methods. Section 3 presents our measurements of the diffuse light surface brightness, while Section 4 quantifies the cluster mass and redshift dependence of the diffuse light luminosities. Section 5 discusses observational effects that may impact the interpretation of our results, and Section 6 discusses our results in the context of simulations and other observational studies. Section 7 summarizes our findings. Throughout this paper, we assume a flat \(\Lambda\)CDM cosmology with \(\Omega_{m}=0.3\), and \(h=0.70\). ## 2 Data and Methods ### The redMaPPer Galaxy Cluster Catalog The red sequence Matched-filter Probabilistic Percolation cluster finder algorithm (redMaPPer; Rykoff et al., 2014) has been used by the DES Collaboration to derive galaxy cluster catalogs from the Science Verification data (Rykoff et al., 2016), the Year 1 observations (McClintock et al., 2019), and the Year 1 to Year 3 observations (O'Donnell et al., 2021). redMaPPer is a red-sequence based algorithm that provides excellent cluster richness (\(\lambda\)) and photometric redshift estimates (Rykoff et al., 2014). It also provides a random point catalog that tracks the sky footprint and depth covered by the cluster-finding algorithm. This paper is based on the redMaPPer cluster catalog, version 6.4.22+2, derived from the DES Year 3 Gold data sets (Sevilla-Noarbe et al., 2021). A relevant difference between this catalog and the DES Year 1 version (McClintock et al., 2019) is the much larger sky coverage. As a result, this catalog contains more than 21,000 galaxy clusters with richness above 20, which approximately corresponds to a halo mass threshold of \(10^{14.1}\)M\({}_{\odot}\) (McClintock et al., 2019; Farahi et al., 2019). 
These galaxy clusters are detected from the DES single-object fit (SOF) catalog (Drlica-Wagner et al., 2018; Sevilla-Noarbe et al., 2021) which contains objects detected and deblended by Source Extractor (Bertin & Arnouts, 1996), while the photometry was derived from single-object fitting using the ngmix algorithm on multi-epoch image stamps of each object, with the deblended nearby objects masked on each single-epoch image. For the DES Year 3 data processing campaign, the SOF photometry measurements are preferred in many applications because of the tighter photometry constraints compared to the Source Extractor measurements derived using the coadded images. Of particular importance to this analysis, redMaPPer provides CG (Central Galaxy) candidates for each cluster. Unlike algorithms that aim to select the BCG (Brightest Cluster Galaxy), redMaPPer aims to select a relatively luminous cluster galaxy that is nearest to the cluster's gravitational center, and the goal of this selection is to find the central galaxy of the massive dark matter halo as defined in simulation modeling studies (e.g. De Lucia & Blaizot, 2007; Yang et al., 2008). redMaPPer provides five CG candidates for each cluster. We use the most likely CG candidate; multi-wavelength studies have shown that this candidate is the correct one with an \(\sim 80\%\) frequency (Saro et al., 2015; Zhang et al., 2019; Bleem et al., 2020). For this diffuse light analysis, we apply a few additional selection criteria to the redMaPPer clusters as well as to the redMaPPer random points. (1) Around the cluster center (or a random point), in a circular region with a radius of 0.15 deg, we require at least one DES exposure image in each of the \(g\), \(r\), \(i\), and \(z\) filters. (2) Around the cluster center (or a random point), in a circular region with a radius of 0.15 deg, we require the 10 \(\sigma\) depth magnitude of the DES Auto measurements to be deeper than a redshift dependent "masking" magnitude (see the next section for details). This selection criterion ensures that our diffuse light measurements are comparable between different redshift slices. The cut has minimal effect on the clusters/randoms below redshift 0.4 but excludes a small fraction of the clusters between redshift 0.4 and 0.5 and most of the clusters above redshift 0.5. (3) Around the cluster center (or the random point), in a circular region with a radius of 0.2 deg, we exclude areas containing famous or bright stars (the Yale bright stars or 2MASS stars of \(J<8\)) 1, nearby galaxies including the Large Magellanic Cloud, and globular clusters to reduce scattered light in the images. Footnote 1: This cut requires HEALPix values in the DES foreground map file to be less than 2. After applying these selection criteria, we are left with a sample of over 4000 clusters in the redshift range of 0.2 to 0.5. The number of clusters in each richness/redshift bin is listed in Table 1. We also include clusters in the redshift range of 0.5 to 0.6 in some of the analyses in this paper. However, because of the DES depth limit, we are concerned that our galaxy masking procedure (see Section 2.3) may be incomplete in this redshift range2 and above. Therefore, the measurements of the 0.5 to 0.6 clusters are presented only for illustrative purposes and are not included in our quantification of diffuse light evolution. Footnote 2: The redshift 0.5 to 0.6 clusters will be masked to 22.32 mag in z-band in mag_auto. 
Only \(\sim 1\%\) of the redMaPPer clusters reach 22.3 mag in the DES 10\(\sigma\) z-band depth map continuously in the whole 0.15 deg\({}^{2}\) region around them, and a depth-based cut would eliminate most of the clusters. For comparison, the redshift 0.4 to 0.5 clusters will be masked to 21.9 mag in z-band in mag_auto, while 67% of the redMaPPer clusters reach 21.9 mag in the DES 10\(\sigma\) z-band depth map continuously in the whole 0.15 deg\({}^{2}\) region around them. Note that the DES coadd catalogs are generally over 95% complete down to 23.7 mag in z-band, and the decision of a depth cut is made out of an abundance of caution. We note that a redMaPPer galaxy cluster catalog based on the full six years of DES observations is also internally available to the DES collaboration. However, we opt to use the Year 3 version described here because its richness definition has better consistency with the Year 1 version in McClintock et al. (2019), which provides the richness-mass relation used in our estimations. The redMaPPer catalog based on Year 1 to Year 6 observations goes to higher redshift (\(z\sim 0.9\)) than the Year 3 version in this paper (which is based on Year 1 to Year 3 observations), but both versions have excellent redshift coverage in the 0.2 to 0.5 redshift range studied in this paper. ### DES Object Catalogs and Images In this paper, we use the DES images and catalogs produced by the Dark Energy Survey Data Management (DESDM) project (Sevilla et al., 2011; Morganson et al., 2018). A detailed description of the DESDM pipeline can be found in Abbott et al. (2018). To summarize, the DESDM pipeline takes raw images from the Dark Energy Camera (DECam) (Flaugher et al., 2015), performs instrumental signature removals and corrections (Plazas et al., 2014; Gruen et al., 2015), flat-field corrections (Bernstein et al., 2018), full-focal-plane background subtractions (Bernstein et al., 2017), as well as photometric (Burke et al., 2018) and astrometric (Bernstein et al., 2017) calibrations to produce science-ready single exposure images. Those images are coadded (Bertin et al., 2002) into multi-epoch coadd images, which are used to produce object catalogs and photometry measurements by the Source Extractor software (Bertin and Arnouts, 1996). The science-ready single exposure images are also used as input for photometry measurements such as the ngmix photometry measurements mentioned in the previous section. For this work, we benefit from the full 6 years of DES operations (Diehl et al., 2018), and the DES Data Release 2 (DR2) processing campaign (Abbott et al., 2021) which includes not only more data, but also improved processing since the previous data release. 
Changes and improvements relevant to our analysis include: coadded images based on single-epoch full-focal-plane background subtraction, which do not include the local background subtraction applied to previous versions of DES coadd images (we use the "_nobkg" version of the coadd images, which do not have the local short-scale sky background subtracted, as mentioned in Section 2.3); combining the DES \(r\), \(i\), and \(z\) band images into an averaged detection image for more robust faint-object detection; and, finally, lowering the source detection threshold from 10\(\sigma\) to 5\(\sigma\) to produce more complete object catalogs. The DR2 coadd images have a combined sky coverage of 4913 deg\(^2\) in the DES \(g\), \(r\), \(i\), \(z\), and \(Y\) bands, and the 95% completeness of the coadd catalogs reaches 23.7 magnitude in the DES \(z\)-band, with a 10\(\sigma\) magnitude limit of 23.1.

Table 1: The galaxy cluster sample in this analysis.

| Redshift (\(z\)) Bin | Richness (\(\lambda\)) Bin | Number Counts | Median \(z\) | Mean \(z\) | Median \(\lambda\) | Mean \(\lambda\) | \(R_\lambda\) (Mpc/h) | Masking limit |
|---|---|---|---|---|---|---|---|---|
| 0.2-0.3 | all | 1169 | 0.256 | 0.255 | 28.55 | 33.61 | | 20.67 |
| | 20-30 | 656 | 0.254 | 0.253 | 23.72 | 24.17 | 1.03 | |
| | 30-45 | 326 | 0.259 | 0.256 | 35.51 | 35.97 | 1.23 | |
| | 45-60 | 121 | 0.257 | 0.255 | 50.51 | 51.16 | 1.44 | |
| | 60+ | 66 | 0.274 | 0.263 | 73.55 | 83.58 | 1.80 | |
| 0.3-0.4 | all | 1556 | 0.359 | 0.355 | 27.33 | 32.92 | | 21.38 |
| | 20-30 | 942 | 0.359 | 0.354 | 23.70 | 24.11 | 1.03 | |
| | 30-45 | 399 | 0.358 | 0.355 | 35.44 | 36.01 | 1.23 | |
| | 45-60 | 115 | 0.360 | 0.356 | 52.56 | 52.06 | 1.45 | |
| | 60+ | 100 | 0.361 | 0.355 | 75.25 | 81.57 | 1.78 | |
| 0.4-0.5 | all | 1357 | 0.449 | 0.449 | 27.13 | 32.10 | | 21.87 |
| | 20-30 | 836 | 0.449 | 0.449 | 23.44 | 23.97 | 1.03 | |
| | 30-45 | 349 | 0.451 | 0.449 | 35.14 | 35.90 | 1.23 | |
| | 45-60 | 96 | 0.454 | 0.452 | 51.08 | 51.78 | 1.45 | |
| | 60+ | 76 | 0.440 | 0.445 | 69.83 | 79.17 | 1.76 | |

We use both DES images and catalogs in this paper. Other than the redMaPPer cluster catalog, the coadd object catalogs used in this paper are constructed from DES coadd images using the Source Extractor software to detect and deblend objects. Moreover, we make use of each object's Auto measurements from Source Extractor to determine its masking area, which is based on Kron apertures and magnitudes (Kron, 1980). The images used in this paper include both the science-ready single exposure images and the multi-epoch coadd images. In the next section, we describe how we use these images and catalogs in our workflow.

### The Averaging/Stacking Method

In this paper, we again use the "stacking" method described in Zhang et al. (2019), which has also been adopted in Leung et al. (2020); Sampaio-Santos et al. (2021); Golden-Marx et al. (2023). We present ICL and CG properties averaged over a large cluster sample. The "stacking" method proceeds as follows (a code sketch of steps 2-4 is given at the end of this subsection):

1. Coadd images of each galaxy cluster are downloaded from the DESDM database.
For each image, we mask all objects above a \(z\)-band magnitude limit determined by the cluster's redshift. We exclude the redMaPPer CG from masking to preserve the CG light. The masking magnitude is chosen to be 0.2 \(L_*\), with \(L_*\) being the characteristic luminosity from a cluster red-galaxy luminosity function measurement (Zhang et al., 2019). Assuming a faint-end slope of \(-1\), this masking limit would remove 82% of the light from cluster galaxies.
2. After masking, the radial surface brightness (SB) profile is derived from the masked images as the mean pixel value in radial annuli.
3. Similarly, radial SB profiles are derived for a sample of random points that cover the same sky area as the cluster catalog. In later steps, those random profiles are subtracted from the cluster profiles to eliminate residual backgrounds in the cluster profiles.
4. Using a Jackknife resampling method (see examples of applications in Norberg et al., 2009; Melchior et al., 2017), we divide the cluster samples and the random points into 40 subsamples according to their sky coordinates (using the kmeans_radec package, https://github.com/esheldon/kmeans_radec). For each sky coordinate subsample, we derive the CG+ICL SB profile by averaging the profiles of clusters and randoms, and then subtracting the random profiles from those of the clusters. We then apply the Jackknife resampling method (Efron, 1982) to those sky coordinate subsamples to derive the means and uncertainties of the final CG+ICL measurements.
5. Additional quantities, such as the CG+ICL color and luminosities, are further computed from the CG+ICL SB profiles.

In this paper, we also analyze the radial SB profiles of the cluster total light, including the CG, ICL, and cluster satellite galaxies. Those measurements are derived using the same procedures listed here, but without the object masking described in step 1.

We highlight one difference between this paper and Zhang et al. (2019); Leung et al. (2020); Sampaio-Santos et al. (2021); Golden-Marx et al. (2023). In step 1, the previous analyses processed and coadded single exposure images from DESDM for each cluster. For this work, we make use of the already coadded images from the DESDM database. Those DESDM images are based on the same single exposure images, but are coadded without applying the local background subtraction steps in the SWarp and Source Extractor software. In Section 5.3, we compare the profiles derived from those coadded images with those based on the single exposure images as in Zhang et al. (2019); Leung et al. (2020); Sampaio-Santos et al. (2021); Golden-Marx et al. (2023). The results are highly consistent.

The computational resources needed for this method are not trivial. The masking of a \(0.15\times 0.15\) deg\(^2\) region centered on one galaxy cluster, depending on the masking magnitude, can take a few minutes to hours with a single CPU processor. For this work, the masking and profile measurements of each cluster/random point are performed on the Open Science Grid (https://opensciencegrid.org), a High Throughput Computing Consortium. The processing of tens of thousands of clusters or random locations is distributed to thousands of parallel processes on the Open Science Grid in an "opportunistic" mode. Given the need to test and validate the analyses with different set-ups, we estimate that up to hundreds of thousands of CPU hours have been used in this work.
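A rough numpy sketch of steps 2-4, the annular profile measurement and the jackknife averaging, is given below. The arrays, bin edges, and subsample labels are illustrative stand-ins for the actual pipeline inputs.

```python
import numpy as np

def radial_profile(image, mask, cx, cy, pix_kpc, edges_kpc):
    """Mean unmasked pixel value in radial annuli around (cx, cy)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - cx, y - cy) * pix_kpc          # pixel radii in kpc
    profile = np.empty(len(edges_kpc) - 1)
    for i in range(len(edges_kpc) - 1):
        sel = (r >= edges_kpc[i]) & (r < edges_kpc[i + 1]) & (~mask)
        profile[i] = image[sel].mean()
    return profile

def jackknife_profile(cluster_profs, random_profs, labels_c, labels_r, n_sub=40):
    """Delete-one-subsample jackknife mean and error of (cluster - random)."""
    reps = np.array([cluster_profs[labels_c != k].mean(axis=0)
                     - random_profs[labels_r != k].mean(axis=0)
                     for k in range(n_sub)])
    mean = reps.mean(axis=0)
    err = np.sqrt((n_sub - 1) / n_sub * ((reps - mean) ** 2).sum(axis=0))
    return mean, err
```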
## 3 ICL Surface Brightness

### Richness and Redshift Dependence

Our first analysis concerns the surface brightness (SB) radial profiles of the diffuse light (CG+ICL), as well as the SB of the total cluster stellar content including the rest of the cluster galaxies. The goal of this analysis is to visually examine the shapes of those profiles and their general redshift/richness trends.

The galaxy clusters are split into redshift and richness subsamples, and those surface brightness profiles are presented in Figure 1. We use three redshift bins from 0.2 to 0.5 to analyze the clusters. In each redshift bin, the clusters are further divided into four richness bins, with the richness binning defined in previous DES clustering studies (McClintock et al., 2019) and listed in Table 1. As mentioned in Section 2, we apply the "stacking" procedure to each redshift/richness binned cluster subsample. The residual background for each subsample is derived from random points, but the masking magnitude limit for the random points is adjusted according to the cluster subsample's redshift range. For each redshift bin, we apply distance corrections so that the measurements are shifted to the observer frame of redshift 0.25. Those measurements are presented in Figure 1.

In each of those richness and redshift bins, we measure the diffuse light profiles out to 1 Mpc from the center. Our measurements agree with previous studies showing the radial extension of ICL up to several hundreds of kpc, or even one Mpc, from the cluster center (Krick & Bernstein, 2007; Zibetti et al., 2005; Kluge et al., 2021; Li et al., 2022; Chen et al., 2022).

The upper panels of Figure 1 show the SB profiles of the galaxy clusters first split by redshift and then by richness. In each subpanel, the redshift range of the clusters is fixed and each line represents a different richness range. Those radial SB profiles show a clear richness dependence: richer galaxy clusters are generally brighter in SB, while less rich clusters are fainter. The trends are observed both for the diffuse light and for the total light including cluster satellite galaxies. Moreover, the distinctions between the different richness subsamples are present throughout the three redshift bins, indicating a robust richness dependence across the 0.2 to 0.5 redshift range. Our result supports previous findings that detect strong ICL correlations with cluster mass (e.g., Gonzalez et al., 2005; Montes & Trujillo, 2019; Huang et al., 2020; Sampaio-Santos et al., 2021; Kluge et al., 2021; Huang et al., 2022; Golden-Marx et al., 2023a; Ragusa et al., 2022; Chen et al., 2022). Further, the SB richness dependence in Figure 1 grows more prominent with increasing radius.

In the middle panels of Figure 1, we present the SB profiles first split by richness and then by redshift. In each subpanel, the richness range of the clusters is fixed and each line represents a different redshift range. Interestingly, those redshift-divided profiles appear similar within their uncertainty ranges; fixing the cluster's richness range, we do not observe a consistent trend of the SB profiles being either brighter or fainter at lower redshift.
The lack of a consistent trend does not necessarily indicate that there is no redshift evolution for a richness-fixed sample, but potentially an evolution that is too small to be noticeable in those SB figures. In Section 4, we further quantify the redshift-related trends using luminosity measurements.

Figure 1: Surface brightness of the clusters in richness/redshift ranges. Upper row: clusters in the same redshift range in each panel, with different lines representing different richness subsamples. We show both the surface brightness of the diffuse light (CG+ICL, red hues) and the surface brightness of the total cluster light (blue hues); both profiles display strong richness dependence across the four redshift panels. Middle row: clusters in the same richness range in each panel, with different lines representing different redshift subsamples. We do not observe consistent redshift trends across the richness panels, indicating weak, if any, signs of redshift evolution; we quantify the significance of the redshift/richness trends in the next section. Lower row: surface brightness profiles after the cluster's radius has been scaled by a percolation radius (corresponding to the cluster subsample's average \(R_{200\mathrm{m}}\)). The radial profiles of the diffuse light as well as of the clusters' total stellar content are "self-similar" after radial scaling.

### ICL "Self-Similarity"

We previously noted the remarkable similarity of the ICL SB profiles after scaling by the cluster's radius in Zhang et al. (2019); Sampaio-Santos et al. (2021). Here we investigate this effect with a much larger sample. Given the relatively small richness range of each bin, we use a procedure similar to that described in Sampaio-Santos et al. (2021) to scale the radial profiles: for each richness bin, we rescale the clusters' SB profiles by a single radius determined by the average richness of the bin. Because there are no weak-lensing mass measurements for the galaxy cluster samples studied in this paper, we scale the radial SB profiles by the redMaPPer percolation radius, which is a function of richness, \(R_{200\lambda}=1.95\times(\lambda/100)^{0.45}\,\mathrm{Mpc}/h\). This relation is based on the \(R_{200\mathrm{m}}\)-to-richness relation derived using the DES Year 1 richness-mass relation (McClintock et al., 2019), and is meant to approximate \(R_{200\mathrm{m}}\) from cluster richness. We note that the richnesses are not necessarily consistent between different versions of the redMaPPer catalogs based on different DES data releases (a small difference has been found in preliminary comparisons), so the percolation radii used here are a close but not necessarily accurate estimate of the clusters' average \(R_{200\mathrm{m}}\).

The last row of Figure 1 shows the SB radial profiles after scaling by the percolation radius; this row is meant to be compared to the top row of the same figure (without radial rescaling). These scaled profiles, both of the diffuse light and of the total cluster light, indeed appear much more similar across the richness bins, especially outside a transition radial range around 0.04 \(R_{200\lambda}\). In the central regions, the profiles do not appear similar after rescaling, suggesting that the CG SB profiles cannot be well described by scaled cluster radii. This phenomenon can be explained by an inside-out growth scenario (Oser et al., 2010; van Dokkum et al., 2010), in which CG stellar cores formed early at \(z>2\), while the accretion of the CG outskirts and of the ICL happens later and is more influenced by the galaxy cluster's mass accretion process.
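As a quick consistency check of the percolation-radius relation above, the \(R_\lambda\) column of Table 1 can be reproduced directly from the mean richness of each bin:

```python
# Reproducing the R_lambda column of Table 1 (0.2 < z < 0.3 bins) from
# R_200lambda = 1.95 * (lambda / 100)**0.45 Mpc/h.
for lam in [24.17, 35.97, 51.16, 83.58]:   # mean richness of each bin
    print(f"lambda = {lam:5.2f} -> R_200lambda = "
          f"{1.95 * (lam / 100) ** 0.45:.2f} Mpc/h")
# -> 1.03, 1.23, 1.44, 1.80 Mpc/h, matching Table 1
```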
### Volume-Limited Cluster Sample

In the previous subsection, we showed that when fixing the cluster's richness, clusters in different redshift ranges have similar SB profiles. However, this does not answer the question of how the ICL and CGs evolve with time in a single galaxy cluster, whose richness will also evolve with time; more likely, the richnesses/masses will increase over time because of ongoing merging events. In this subsection, we account for cluster richness evolution by constructing a volume-limited cluster sample in different redshift bins. Specifically, in the highest redshift bin of 0.4 to 0.5, we compute the cosmic volume contained in this redshift bin and select the clusters above a richness of 50. For the lower redshift bins, 0.3 to 0.4 and 0.2 to 0.3, we again compute their respective cosmic volumes and then adjust the richness thresholds for the cluster selections, so that each redshift subsample has the same cluster density given its cosmic volume. For the redshift slice of 0.3 to 0.4, the richness threshold becomes 55, and for the redshift slice of 0.2 to 0.3, the richness threshold becomes 61. Only clusters above those richness thresholds are selected for comparison in this subsection. These selections ensure that the cluster samples have the same spatial densities in each redshift bin. A similar volume-limited selection method is also used in Golden-Marx et al. (2023), with the distinction that Golden-Marx et al. (2023) select clusters based on SZ-computed masses, while this analysis uses optical richness.

The SB profiles of those volume-limited samples are presented in Figure 2. We again do not observe significant differences between the redshift subsamples, as the previous subsection already noted no visible differences between redshift subsamples within fixed richness ranges. In this exercise, we limit the analysis to a high richness threshold, which tends to have lower richness-to-mass scatter (Farahi et al., 2019; Anbajagane et al., 2020) and is less susceptible to potential systematic effects such as line-of-sight projections (Costanzi et al., 2019; Abbott et al., 2020; Wetzell et al., 2022; Wu et al., 2022). For the same reason, we also do not subdivide the clusters according to richness. The consequently smaller sample size lowers the significance of a possible redshift trend. We further quantify the redshift-related trend in Section 4.

Figure 2: Surface brightness of a volume-limited cluster sample. The radial profiles of these redshift subsamples appear to be consistent within the measurement uncertainties, indicating a redshift evolution that is below the detection level. Again, we show both the surface brightness of the diffuse light (CG+ICL, red hues) and the total cluster light (blue hues). We further quantify the redshift evolution of the cluster luminosities in the next section.

### Color Radial Profiles

We derive the \(r-i\) color of the ICL radial profiles using measurements of the DES \(r\) and \(i\) band SB profiles. These color measurements are shown in Figure 3. Given that such measurements require highly significant ICL SB profiles in two bands, we only show color measurements out to a radius slightly beyond 400 kpc. The color profiles are presented in redshift subpanels, with clusters further divided into richness bins. As previously noted in the literature, the color of the diffuse light displays a radial gradient, becoming bluer at larger radius. Interestingly, we also notice a consistent, although not significant, richness trend in those colors, which appear to be redder in richer clusters.
In addition to the diffuse light color profiles, we have acquired the color measurements of the cluster's total stellar content, but these measurements are much more uncertain because of Poissonian noise. We note, however, that the average color of the cluster's total stellar content is generally consistent with the color of the diffuse light.

To further quantify the colors, we fit those measurements as a function of radius:

\[\mathrm{Color}(R)=a\times\log(R)+b \tag{1}\]

Here, \(a\) is the radial slope of the colors, and \(b\) is the intercept of the profile at \(R=1.0\) kpc. The fitted parameters are shown in Figure 4. In each redshift/richness bin, \(a\) is negative, indicating a robust detection of a radial gradient. This is consistent with previous measurements of the ICL radial color gradient (e.g., Zibetti et al., 2005; Chen et al., 2022; DeMaio et al., 2018; Yoo et al., 2021) and with the ICL consisting of more metal-poor and younger stars than the CG (e.g., Edwards et al., 2020). As discussed in Contini et al. (2019), the color radial gradient suggests that the ICL originates from galaxy disruption and stripping: if clusters acquired ICL mainly through merging, the ICL would have a relatively uniform color because of stellar population mixing, whereas the disruption and stripping of cluster satellite galaxies produce a radial gradient because of the radial dependence of those processes.

In addition to the radial gradient, we also detect a possible richness dependence in color, as the intercept (\(b\)) of the fitting results appears redder (higher positive value) in richer clusters. This is possibly related to richer, and thus more massive, clusters containing a higher fraction of red sequence galaxies than less massive clusters (Hansen et al., 2009; Sarron et al., 2018; Radovich et al., 2020; Golden-Marx et al., 2023).

Figure 3: The DES \(r-i\) color profile of the diffuse light in cluster subsamples of different redshift and richness ranges. The diffuse light color is consistent with the average color of the cluster's total stellar content in the center. In addition, a radial gradient can be seen in all of the redshift and richness ranges, such that the diffuse light becomes bluer at larger radii. This is consistent with previous studies finding that the ICL consists of more metal-poor and younger stars (Edwards et al., 2020) and suggests an ICL origin from galaxy disruption and tidal stripping (Contini et al., 2019).

Figure 4: We fit the color profiles in Figure 3 to a linear model with radius, \(\mathrm{Color}(R)=a\times\log(R)+b\), to further quantify those profiles. The radial slope parameter \(a\) (upper panel) is negative for all of the richness and redshift bins, which is a robust detection of the radial gradient of the diffuse light's color profile. The \(b\) parameter (lower panel), which is the diffuse light's color at \(R=1\) kpc, appears to be slightly larger for richer clusters, possibly reflecting a more massive and redder satellite population in richer clusters.

## 4 ICL Luminosity

### Richness and Redshift Dependence

To further quantify the ICL's richness and redshift dependence, we examine the luminosities of the diffuse light and of the cluster's total stellar content. Those luminosities are derived by integrating the SB profile over radial ranges as follows:

\[L(r<R,z=0.25)=\int_{0}^{R}S(r,z=0.25)\,2\pi r\,\mathrm{d}r \tag{2}\]

Here, \(S(r,z=0.25)\) is the SB measurement presented in the previous section, which has been distance-corrected as if observed at redshift 0.25. Thus, \(L(r<R,z=0.25)\) is the luminosity enclosed within radius \(R\), expressed as an apparent magnitude in the observer frame of redshift 0.25. We choose this pivot redshift because it is close to the median redshift of the lowest redshift subsample. Figure 5 shows the radial profiles of the integrated luminosity as a function of the enclosing radius \(R\).
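As a sketch of how the aperture integral of Eq. (2) can be evaluated numerically, the enclosed flux is a trapezoidal integration of the stacked profile; the arrays here are placeholders for the measured profiles.

```python
import numpy as np

def enclosed_flux(r_kpc, sb, R):
    """Integrate 2*pi*r*S(r) dr from the innermost bin out to radius R."""
    sel = r_kpc <= R
    return np.trapz(2 * np.pi * r_kpc[sel] * sb[sel], r_kpc[sel])

# With the DES zeropoint convention (see Section 5.3), the corresponding
# apparent magnitude would be 30 - 2.5 * log10(enclosed_flux(...)).
```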
There are two significant trends in those luminosity profiles. First, richer and thus more massive clusters contain more diffuse and total light; this richness dependence is observed across the four redshift ranges. Second, at small radii, within \(\sim 50\) kpc, the diffuse light contributes the bulk of the cluster's total stellar light. Outside \(\sim 50\) kpc, the cluster's total stellar light increases significantly because of the contribution from cluster satellite galaxies; as a result, the diffuse light grows less significantly with radius than the total stellar content. We also show the luminosity radial profiles as a function of radius scaled by \(R_{200\lambda}\). The richness/mass dependence of the luminosity becomes even more pronounced in those scaled-radius plots.

Figure 5: Integrated luminosity (K-corrected to the apparent magnitude in the observer frame of \(z=0.25\)) as a function of radius (upper panels), or of radius scaled by \(R_{200\lambda}\) (lower panels). This figure illustrates the luminosity measurements used in our analyses and shows that the diffuse luminosity (red lines) and the total cluster stellar luminosity (blue lines) increase as the radial range increases. See Section 4 for discussion of the trends manifested in this figure. Each panel shows a different redshift range; the last redshift panel (redshift 0.5 to 0.6) is coded in gray, indicating that this redshift slice may be less reliable because of potentially incomplete masking, and it is not used in the quantitative analyses.

Figure 6: Luminosities (distance-corrected to the apparent magnitude in the observer frame of \(z=0.25\)) enclosed within four radial bins (0 to 30 kpc, 30 to 80 kpc, 80 to 300 kpc, and 300 to 600 kpc). We examine how the luminosities change with redshift and richness. The luminosities of the total cluster stellar content (blue lines) and of the cluster diffuse light (CG+ICL, red lines) increase significantly as the cluster's richness increases. However, those luminosities do not appear to change significantly with redshift, except in the lowest richness range and within 30 kpc. See Section 4 for quantitative analyses.

We further investigate how these luminosities change with redshift and radial aperture. For clusters in a fixed richness range, we derive their luminosities enclosed within 30 kpc and in the 30 to 80, 80 to 300, and 300 to 600 kpc annuli.
The innermost 30 kpc radial bin is chosen to match the CG size, while the second radial bin, out to 80 kpc (we have experimented with 50 kpc, 75 kpc, and 100 kpc, and found 80 kpc to be the most representative in terms of the redshift trend), is chosen to probe the CG-to-ICL transition range. Finally, the 80 to 300 and 300 to 600 kpc annuli are chosen to probe the extended components of the ICL. Those luminosity measurements in apertures/annuli are shown in Figure 6.

Interestingly, for the lowest richness sample (the clusters in the richness range of 20 to 30), we notice that both the diffuse light and the total light appear to get brighter towards lower redshift, indicating redshift evolution. In some of the higher richness bins, the luminosities of both the cluster's total and diffuse light appear to be getting brighter, or remain unchanged, towards lower redshift within 30 kpc. However, outside of 30 kpc, there is no sign of consistent redshift evolution.

To further quantify the richness dependence and redshift evolution in those different apertures, we fit the measurements to the following relation:

\[L(R_{0}<r<R_{1})=a\times\log_{10}\frac{\lambda_{0}}{33}+b\times\log_{10}\frac{1+z_{0}}{1.25}+c. \tag{3}\]

In this relation, the total amount of light (\(L(R_{0}<r<R_{1})\), in units of magnitudes) contained within an aperture or annulus is fitted with a linear relation to the logarithmic values of the cluster subsample's average richness and redshift. The richness and redshift dependences are described by the parameters \(a\) and \(b\) respectively, and a non-zero value would indicate a detection of dependence. The pivot richness is chosen as 33, which is the mean richness of the sample, while the pivot redshift is chosen as 0.25, which is close to the median redshift of the lowest redshift subsample. Thus, the intercept of the relation, \(c\), can be interpreted as the apparent magnitude of a richness 33, redshift 0.25 cluster. The fitting of the parameters \(a\), \(b\), and \(c\) is performed with a Markov Chain Monte Carlo (MCMC) method, with the likelihood constructed from the \(\chi^{2}\) values between the measurements and the relation (using the uncertainties of the measurements as the weighting). Table 2 shows the derived posterior values of the \(a\), \(b\), and \(c\) parameters.
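A minimal sketch of this MCMC fit with the emcee package is shown below; the data vectors are synthetic placeholders for the binned aperture magnitudes, and the chi-squared likelihood follows the description above.

```python
import numpy as np
import emcee

def model(theta, lam, z):
    a, b, c = theta
    return a * np.log10(lam / 33.0) + b * np.log10((1 + z) / 1.25) + c

def log_prob(theta, lam, z, mag, err):
    # chi-squared likelihood weighted by the measurement uncertainties
    return -0.5 * np.sum(((mag - model(theta, lam, z)) / err) ** 2)

# synthetic stand-ins for the richness/redshift-binned measurements
rng = np.random.default_rng(0)
lam = np.array([24.2, 36.0, 51.2, 83.6] * 3)
z = np.repeat([0.25, 0.35, 0.45], 4)
err = np.full(lam.size, 0.05)
mag = model([-1.0, 0.5, 18.0], lam, z) + rng.normal(0, 0.05, lam.size)

nwalkers, ndim = 32, 3
p0 = np.array([-1.0, 0.5, 18.0]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(lam, z, mag, err))
sampler.run_mcmc(p0, 3000)
a, b, c = np.median(sampler.get_chain(discard=500, flat=True), axis=0)
```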
The fitted results confirm the aforementioned observations. First, the values of \(a\) deviate from 0 at very significant levels, between \(\sim 7\) and 16\(\sigma\), in all of the analyzed apertures for both the diffuse and the total stellar content. This result confirms the significant richness, and thus cluster mass, dependence of the diffuse and total stellar content. Further, the value of \(a\) is increasingly negative at large radii, suggesting a stronger richness (and thus mass) dependence for the diffuse light as well as for the cluster's total stellar content. Also consistent with the observations above, for the diffuse light, the value of the parameter \(b\), which quantifies redshift evolution, is consistent with 0 outside 30 kpc. This indicates that, at fixed cluster richness, the amount of diffuse light is not obviously increasing (or decreasing) towards lower redshift; cluster mass may be the main driver of diffuse light evolution in the large radial bins. However, within 30 kpc, the amount of diffuse light appears to be increasing towards lower redshift, suggesting that the amount of stellar light associated with the CG is building up over time.

According to the fitting results, the luminosity of the CG becomes brighter by 0.113 magnitude (calculated from \(b\times\log_{10}(1.45/1.25)\)) from redshift 0.45 to 0.25 within 30 kpc. This brightening corresponds to a flux increase of 11%. The \(b\) value of the diffuse light between 30 and 80 kpc is consistent with 0, but when we adjust the 80 kpc outer bound between 50 and 100 kpc, we find that the \(b\) value is larger in apertures closer to 30 kpc, indicating possible additional redshift evolution closer to the CG. For the cluster's total light, \(b\) is positive within 30 kpc, also indicating some growth with time (brightening towards lower redshift). Because the CG is the dominant component of the cluster's total light within 30 kpc, the growth in this radial range mostly reflects CG growth. In annuli outside 30 kpc, \(b\) is generally consistent with 0, indicating no evidence of significant redshift evolution.

### Volume-Limited Cluster Sample

We further investigate the luminosity evolution of the volume-limited cluster sample discussed in Section 3.3, which helps answer the question of ICL growth when tracking the same cluster population's evolution over time. As before, we calculate the luminosities enclosed within radial bins for both the cluster diffuse light and the total light, and show how they change with redshift in Figure 7. A sign of redshift evolution can be seen within 80 kpc; we do not find evidence of redshift evolution in the remaining radial bins.

To quantify the redshift evolution of this cluster sample, we fit the measurements to the following relation:

\[L(z)=L_{0}+a\times\log_{10}(1+z) \tag{4}\]

In this relation, \(a\) quantifies the redshift evolution of the cluster sample. A positive value would indicate a brightening luminosity over time (towards lower redshift), while a negative value would indicate the opposite. The fitting is performed with the curve_fit function of Scipy, and the derived values and uncertainties of \(a\) are noted in Figure 7.

For both the diffuse light and the cluster total light, we detect positive values of \(a\) above the \(1\sigma\) significance level within 30 kpc, in the CG, as well as in the CG-to-ICL transition region between 30 and 80 kpc. In both radial ranges, the redshift evolution, indicated by the positive value of \(a\), appears to be larger for the diffuse light than for the cluster total light. This result, together with the results from the richness/redshift subsamples, indicates that the increase in stellar content with time associated with the diffuse light is partially driven by the deposition of new material onto the CG and its outskirts. For the larger radial bins, we again find \(a\) to be consistent with 0, and thus do not find evidence of redshift evolution. It is possible, though, that the redshift evolution in those radial bins falls below our measurement limit, as the uncertainties on \(a\) are large.
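A sketch of this redshift-evolution fit with Scipy's curve_fit, using illustrative numbers in place of the measured aperture magnitudes, could look like the following:

```python
import numpy as np
from scipy.optimize import curve_fit

def lum_model(z, L0, a):
    # Eq. (4): apparent magnitude vs. redshift; a > 0 means the sample
    # is brighter (smaller magnitude) toward lower redshift
    return L0 + a * np.log10(1 + z)

z = np.array([0.25, 0.35, 0.45])          # subsample mean redshifts
mag = np.array([17.95, 18.02, 18.08])     # illustrative aperture magnitudes
err = np.array([0.03, 0.03, 0.04])

popt, pcov = curve_fit(lum_model, z, mag, sigma=err, absolute_sigma=True)
print(f"a = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f}")
```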
### Diffuse Fraction

The fraction of the CG and ICL in the cluster's total stellar content is an important quantity (e.g., Gonzalez et al., 2013; Behroozi et al., 2013; Ragusa et al., 2022; Joo & Jee, 2023). The build-up of the ICL, the CG, and the cluster total light is not necessarily synchronized over time: for example, the ICL and cluster total light may have gone through more, or less, significant growth in the recent era compared to the CG, in which case one would observe a change over time in the CG/ICL to cluster total light ratio.

Based on the luminosity measurements, we quantify the fraction of the CG and ICL in the total cluster stellar content with the following equation:

\[\mathrm{Ratio_{Diffuse}}(R_{0}<r<R_{1})=\frac{\mathrm{Lum_{diffuse}}(R_{0}<r<R_{1})}{\mathrm{Lum_{total}}(R_{0}<r<R_{1})} \tag{5}\]

In this equation, \(\mathrm{Lum_{total}}(R_{0}<r<R_{1})\), enclosed between the radii \(R_{0}\) and \(R_{1}\), is the luminosity of the cluster's total stellar content derived from its surface brightness, and \(\mathrm{Lum_{diffuse}}(R_{0}<r<R_{1})\) is the luminosity of the diffuse light derived from its surface brightness. We refer to the ratio of these measurements as the diffuse fraction.

Figure 8 shows those fractions derived for clusters in different redshift/richness ranges. Other than in the 0.021 to 0.056 \(R_{200\lambda}\) radial range, the diffuse fraction appears to stay unchanged with richness and redshift, indicating a similar richness and redshift dependence for the diffuse light and the cluster's total stellar content. However, the diffuse fraction does appear to decrease at large radius: it is close to 40% in the 30 to 80 kpc range, but decreases to \(\sim 20\%\) in the 300 to 600 kpc range. In the 0.021 to 0.056 \(R_{200\lambda}\) radial range, the diffuse fraction does appear to decrease with richness, but this is likely because of the scaling of \(R_{200\lambda}\) with richness: a richer cluster has a higher \(R_{200\lambda}\) value, so a 0.021 \(R_{200\lambda}\) cut excludes more of the BCG outskirt.

In addition to those trends, Figure 8 highlights the importance of selecting a radial and cluster mass/richness range when studying diffuse fractions. The fractions drop with increasing radius, but also depend on whether the measurements are made in physical radii or in radial units scaled by the cluster radius. Given the discrepancies among literature reports of diffuse fractions, fair comparisons will need to be made between cluster samples of comparable masses over similar radial scales.

Figure 7: Luminosity (distance-corrected to the apparent magnitude in the observer frame of \(z=0.25\)) as a function of redshift in the volume-limited cluster sample. Again, we analyze the brightness enclosed within four radial bins (0 to 30 kpc, 30 to 80 kpc, 80 to 300 kpc, and 300 to 600 kpc, top to bottom panels) and examine how it changes with redshift. The luminosities of the total cluster stellar content (blue lines) and of the cluster diffuse light (CG+ICL, red lines) both show some signs of becoming brighter over time within 30 kpc and between 30 and 80 kpc; those trends indicate growth in the CG and in the CG-to-ICL transition region. Section 4 includes quantitative analyses of those trends.

Figure 8: Diffuse fractions of the total cluster stellar content, calculated within physical radii (right column) and radii scaled by \(R_{200\lambda}\) (left column). We do not observe consistent redshift- or richness-dependent trends in the measurements (except in the 0.021 to 0.056 \(R_{200\lambda}\) bin). However, the diffuse fractions appear to drop at larger radii. See Section 4.3 for more detailed discussion.
## 5 Systematic Effects and Tests

### PSF Effect

Because the point spread function (PSF) is known to have extended wings (Moffat, 1969; King, 1971; Racine, 1996; Bernstein, 2007), it can contribute to the extended low surface brightness features of galaxies or galaxy clusters. The radial scales we probe in this paper are significantly larger than the PSF FWHM of the DES images, so we expect minimal PSF contributions to the ICL detection (see the discussion in Zhang et al. 2019c). On the other hand, those contributions may change with redshift, given the change of the angular distance scale with redshift. Thus, we perform image simulations to probe the possible effect of the PSF on the results presented in this paper.

To do so, we convolve a PSF model with an analytical diffuse light profile model and examine the differences before and after PSF convolution. We generate mock 2D images of diffuse light using an analytical model, setting the angular scale of the diffuse light profile at four redshifts, 0.25, 0.35, 0.45, and 0.55 (but without adjusting the surface brightness level, as we are only looking at before-and-after convolution differences). These 2D images are then convolved with a 2D PSF image model. Both the analytical models and the PSF models are based on the DES Year 1 \(r\)-band measurements in Zhang et al. (2019c), as the PSF models have similar large-radius behavior outside 2 arcseconds. We then derive the SB measurements in radial bins before and after PSF convolution.

The results are shown in Figure 9. The top panel shows the flux changes of the profiles before and after PSF convolution at the different redshifts. PSF convolution flattens the central regions of those profiles, limited by the pixel scale of the images (0.263 arcsec per pixel). The middle panel of Figure 9 shows the relative changes in the profiles before and after convolution. Outside of 10 kpc, PSF convolution has a minor effect on the SB measurements, which change by less than 10%, although the change depends on redshift. Outside of 100 kpc, PSF effects appear to be negligible for all four redshifts (less than 1% at 100 kpc for \(z=0.55\)). As for the integrated (within radius) brightness measurements, the PSF effect appears to be negligible when integrating to 20 kpc, affecting less than 5% of the flux measurement, or around 2% when integrating to 30 kpc. Within 10 to 20 kpc, the PSF may affect the CG flux measurements by up to 12%, depending on the redshift. Within 10 kpc, the integrated luminosity needs to be interpreted carefully because of the PSF effect.

Figure 9: Testing the effect of the PSF on SB and luminosity measurements. Upper panel: diffuse light SB models are convolved with a PSF model at different redshifts; the PSF flattens the SB distribution in the center. Middle panel: relative change in SB after the diffuse light profile models are convolved with a PSF model; the changes are most significant within 10 kpc. Lower panel: relative changes in luminosity (derived by integrating the SB profiles radially as described in Section 4) after PSF convolution; the integrated luminosity is most affected within 20 kpc.

We conclude that the PSF effect alone cannot account for the redshift evolution in the diffuse light luminosity measurement within 30 kpc, which shows a change of \(\sim 0.2\) mag, or \(\sim 20\%\) in flux, from redshift 0.45 to 0.25 (Section 4). With a carefully designed CG aperture (30 kpc in this analysis), our luminosity redshift evolution results should be minimally affected by the PSF.
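The following is a schematic version of this test, convolving a toy diffuse-light image with a toy Moffat PSF and comparing radial bins before and after; the profile and PSF parameters are illustrative stand-ins, not the measured DES models.

```python
import numpy as np
from scipy.signal import fftconvolve

pix = 0.263                                  # DES pixel scale, arcsec/pixel
y, x = np.indices((401, 401))
r = np.hypot(x - 200, y - 200) * pix         # radius of each pixel in arcsec

image = (1 + (r / 10.0) ** 2) ** -1.3        # toy diffuse-light model
psf = (1 + (r / 0.8) ** 2) ** -2.5           # toy Moffat PSF
psf /= psf.sum()                             # normalize to unit flux

blurred = fftconvolve(image, psf, mode="same")

# relative change of the mean profile in radial bins after convolution
edges = np.linspace(0.0, 40.0, 11)           # bin edges in arcsec
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (r >= lo) & (r < hi)
    print(f"{lo:4.1f}-{hi:4.1f} arcsec: "
          f"{blurred[sel].mean() / image[sel].mean():.3f}")
```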
### Masking Magnitude Limit

The masking magnitude limit used in this work varies with redshift. This may affect the results of this paper if cluster galaxies below the masking limit contribute a noticeable amount of light to the diffuse light measurements. We acknowledge this issue as a limitation of our analysis, as we do not explicitly account for the contributions from fainter cluster member galaxies below the masking limit.

We test how much our results may have been affected by these magnitude limits. In this test, we redo the measurements of the diffuse light using a masking magnitude that is fainter by 0.7526 mag (i.e., masking to 0.1 \(L_*\) of the cluster luminosity function) and compare the results to the fiducial analyses presented earlier. We have not applied this deeper magnitude limit in our fiducial analysis because of the increasingly incomplete galaxy detection associated with it, which would render the results in the redshift 0.4 to 0.5 bin less reliable. Nevertheless, we show the SB measurements with this deeper masking magnitude and the comparison to the fiducial analysis in Figure 10. Indeed, using a deeper masking limit notably reduces the surface brightness measurements of the diffuse light throughout the redshift and richness bins. Outside of 100 kpc, the reduction in flux consistently reaches a \(\sim 10\%\) level, although there are significant fluctuations as indicated by the uncertainties. Given that satellite galaxies 2.5 to 5 times brighter than the ICL in this radial range (Section 4) are excluded, a reduction of 10% in flux means that the deeper masking limit is only removing a further 2% to 4% of the faint cluster satellite galaxy contribution. A deeper masking magnitude is unlikely to significantly further reduce the ICL brightness unless there is a noticeable upturn in the cluster galaxy luminosity function at the faint end (e.g., Lan et al., 2016).

Figure 10: Diffuse light profiles derived when using a deeper masking magnitude limit (upper panels) and the relative differences from the fiducial measurements presented in previous sections (lower panels). The surface brightness measurements outside 100 kpc can be lowered by 10% when using a deeper masking magnitude, which removes more contamination from faint cluster satellite galaxies.

Other than the masking limit and the PSF effect, there are additional effects that influence our results. One issue related to masking is that the masking aperture does not enclose all of the light from cluster satellite galaxies: a galaxy's light can reach tens or even hundreds of kpc. In Zhang et al. (2019), we found that the masking aperture only affects diffuse light measurements at the percent level. In addition, in this analysis, we have enlarged the masking radius from 2.5 to 3.5 Kron radii, which further reduces the effect. Moreover, the cluster galaxy luminosity function may evolve with redshift; however, recent literature studies find that the redshift evolution of the cluster galaxy luminosity function is at most very mild (e.g., Hansen et al., 2009; Sarron et al., 2018; Zhang et al., 2019; Puddu et al., 2021).

### Sky Background

Accurate diffuse light measurements require accurate evaluation and removal of the sky background in optical images. As in Zhang et al. (2019), the images we use have had a sky background removed that is estimated over the whole field of view (FOV) of DECam, approximately 3 deg\(^2\), using a PCA method (Bernstein et al., 2017). Given that one galaxy cluster, even at redshift 0.2, covers only a very small area of the DECam FOV, the sky background estimation is not sensitive to the presence of galaxy clusters, thus avoiding the background over-estimation issue that often plagues ICL measurements.
Zhang et al. (2019) tested the DECam FOV PCA background evaluations for ICL measurements, and showed that the PCA sky estimates at the cluster centers and at a large cluster radius (1.36 arcmin from the cluster center) are highly consistent.

After removing the full-FOV sky background level, the images still possess a residual background. Since we average the measurements of several hundreds, and sometimes several thousands, of clusters, we estimate a residual background for those averaged measurements by processing "sky randoms" that track the area coverage of the cluster sample. A surface brightness profile of the sky randoms is acquired using the same procedure as the cluster measurements, and those "random" profiles are subtracted from the "raw" cluster measurements to acquire the final cluster-related measurements. The top panel of Figure 11 illustrates the procedure: because of the residual background, the "raw" cluster measurements still have an SB level of \(\sim 2\) in flux per kpc\(^2\) at large radii (\(\sim 2\) Mpc), but this residual is also present in the "random" measurements. After subtracting the randoms, the final cluster measurements fluctuate around 0 at very large radii (\(\sim 2\) Mpc).

Note that in Figure 11 we show the measurements in DES "flux" units (\(10^{-12}\) of a maggy). The "flux" used here is a linear measure of an object's brightness, as opposed to the logarithmic "magnitude" unit, with the relation \(\mathrm{mag}=30-2.5\times\log_{10}(\mathrm{flux})\).
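For reference, a quick conversion of the flux levels quoted in this subsection into magnitudes under this convention:

```python
import numpy as np

def flux_to_mag(flux):
    # DES convention used in Figures 11 and 12: mag = 30 - 2.5 * log10(flux)
    return 30.0 - 2.5 * np.log10(flux)

print(flux_to_mag(2.0))    # ~29.2: the raw residual-background level
print(flux_to_mag(0.2))    # ~31.7: the coadd vs. single-epoch raw offset below
```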
We note the importance of using random catalogs that faithfully trace the sky coverage of the redMaPPer cluster catalog. The raw profile measurements of the randoms, shown in Figure 12, are sensitive to the selection of the random catalog (and thus of the redMaPPer cluster catalog). These two catalogs are selected to avoid sky regions that contain bright foreground galaxies and stars, requiring at least 0.2 deg of separation from their centers. If we tighten the distance cuts to 0.3 deg or 0.4 deg, the randoms' profile values become lower, indicating different "residual" background levels.

Figure 12: Surface brightness of the randoms when using different distance cuts to bright objects in the sky. Our analysis requires the bright objects identified in a DES masking file to be 0.2 deg away from the cluster center. Using stricter cuts lowers the surface brightness measurements of the randoms because of reduced contamination from the bright objects.

Finally, a crucial difference between this paper and Zhang et al. (2019c) is that we directly use the coadded images from the Dark Energy Survey, which are based on coadding single epoch images after the PCA sky background subtraction. The DES coadd images (the "no-bkg" coadd images in the DES data release, which do not have the local background subtracted) follow the procedure in Zhang et al. (2019c) to better preserve low surface brightness features. To test that the DES coadds are indeed suitable for detecting intra-cluster light, we separately process the redshift 0.2 to 0.35 clusters by coadding single epoch images using the same procedures as in Zhang et al. (2019c) and compare the measurements to the DES coadd-based measurements. Their differences are shown in Figure 11. The raw SB measurements of the clusters and the randoms from the two sets of images are offset at a flux level of 0.2, corresponding to a surface brightness level of 31.7 mag/kpc\(^2\). Those raw measurement differences between the two coadding procedures are likely caused by pixel weighting differences. After the random subtraction, the measurements agree at a surface flux level of 0.015, which means that the two methods are consistent down to a surface brightness level of 40.5 mag/kpc\(^2\), and thus highly consistent.

Figure 11: Upper panel: diffuse profiles derived from the DES special coadded images (red lines, Y6Coadd, the fiducial results in this paper) vs. those derived from single epoch images as in Zhang et al. (2019) (Y6SE, blue lines). Lower panel: differences between these profiles. The two approaches yield consistent surface brightness measurements to an accuracy level of over 30 mag/kpc\(^2\) in terms of the raw diffuse light and random profile measurements. After random profile subtraction, the differences vanish at a surface brightness level of 40.5 mag/kpc\(^2\).

## 6 Discussion on Redshift Evolution

### Comparison to Simulation

To gain theoretical insight into the evolution of the ICL, we turn to the IllustrisTNG simulation suite (Nelson et al., 2019; Pillepich et al., 2018; Springel et al., 2018; Nelson et al., 2018; Naiman et al., 2018; Marinacci et al., 2018) to examine how the diffuse stellar components of galaxy clusters change with cluster mass and redshift; this has already been a subject of investigation in Pillepich et al. (2018). Our analysis is based on the TNG300-1 simulation, which has the largest volume (a box roughly 300 Mpc on a side) in the IllustrisTNG suite, as well as the highest resolution among the 300 Mpc volume series. The TNG300-1 simulation contains 263 dark matter halos above the mass threshold of \(6\times 10^{13}\,\mathrm{M_{\odot}}/h\) at redshift 0.27. It therefore has an advantage over the smaller-volume simulations (for example, the TNG 100 Mpc and 50 Mpc series), which contain much smaller samples of cluster-sized dark matter halos despite their higher resolutions.

We select the redshift snapshots at 0.27, 0.42, and 0.58 for this analysis, to represent the redshift range studied in this paper. In each redshift snapshot, we select dark matter halos with \(M_{200\mathrm{m}}\) above \(6\times 10^{13}\,\mathrm{M_{\odot}}/h\) as "galaxy clusters". After cutting dark matter halos that are too close to the simulation box boundaries (within 20 cMpc/h, where "c" denotes comoving distance), we are left with 205, 155, and 115 dark matter halos, respectively, in the three redshift snapshots. Those dark matter halos are referred to as galaxy clusters in the rest of this analysis.

For each simulated cluster, centered on its weighted mass center, we select the diffuse stellar particles contained within 3D distance apertures and compute their total stellar masses. Those stellar masses are shown together with the host halo masses for each redshift snapshot in Figure 13. In addition to the diffuse stellar component, we also include the dark matter halo's total stellar content (subhalo + diffuse) within those radial apertures for comparison.
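A schematic of this aperture measurement, assuming the stellar particle positions and masses of a halo have already been loaded (e.g., with the illustris_python package) and converted to physical units, is:

```python
import numpy as np

def shell_stellar_mass(star_pos_kpc, star_mass, center_kpc, r_lo, r_hi):
    """Total stellar mass in the 3D shell r_lo <= r < r_hi around center."""
    d = np.linalg.norm(star_pos_kpc - center_kpc, axis=1)
    sel = (d >= r_lo) & (d < r_hi)
    return star_mass[sel].sum()

# the same radial bins as in the observational analysis
bins_kpc = [(0, 30), (30, 80), (80, 300), (300, 600)]
```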
In this simulation, both the diffuse and total stellar components of galaxy clusters steadily increase as the galaxy cluster mass increases, and this mass dependence grows steeper in larger radial apertures. On the other hand, examining the mean of those stellar masses (\(M_*\)) as a function of halo mass (\(M_{200\mathrm{m}}\)), there do not appear to be tangible differences between the redshift snapshots, indicating no redshift evolution.

Figure 13: The stellar mass of the diffuse light and of the cluster's total stellar content in the IllustrisTNG 300-1 simulation, as a function of the host halo's mass. The different lines indicate the running means in different redshift snapshots. These stellar mass-halo mass relations do not seem to vary with redshift. However, the relations depend on the apertures used to calculate the stellar masses, and are steeper over large radial ranges.

To further quantify the mass dependence and redshift evolution in the simulation, we fit the halos' stellar masses \(M_*\), halo masses \(M_{200\mathrm{m}}\), and redshifts \(z_0\) to the following stellar mass-halo mass relation:

\[\log_{10}M_{*}=a\times(\log_{10}M_{200\mathrm{m}}-14.0)+b\times\log_{10}\frac{1+z_{0}}{1.25}+c. \tag{6}\]

This relation is similar to the one adopted in Section 4, with the richness dependence replaced by a halo-mass dependence. The fitted constraints on the relation are listed in Table 3. In the relation, the parameter \(a\) quantifies the mass dependence of the stellar masses: its value is positive in all of the radial bins and becomes even more positive at larger radii, in agreement with the steeper mass dependence seen in Figure 13. The parameter \(b\) quantifies the redshift evolution: its value is consistent with 0 in all of the bins, indicating a non-detection of redshift evolution.

Overall, our results confirm the findings of Pillepich et al. (2018) (as well as of Golden-Marx et al. 2023), which did not find redshift evolution in the stellar mass to halo mass relation of galaxy clusters, in either the diffuse component or the subhalo component, although both stellar mass components scale strongly with halo mass. We note that over small radial ranges, the properties of halo central galaxies or massive galaxies in the simulation do not always match observations (e.g., Pillepich et al., 2018; DeMaio et al., 2020; Li et al., 2019; Cannarozzo et al., 2023). Nevertheless, those simulation results qualitatively agree with our measurements in the large radial ranges outside of 80 kpc.
Table 3: Constraints on the parameters of the simulation stellar mass to halo mass relation, \(\log_{10}M_{*}=a\times(\log_{10}M_{200\mathrm{m}}-14.0)+b\times\log_{10}\frac{1+z_{0}}{1.25}+c\).

| Radial bin, component | \(a\) | \(b\) | \(c\) |
|---|---|---|---|
| \(r\leq 30\) kpc, Total | \(0.51\pm 0.20\) | \(-0.08\pm 1.22\) | \(11.17\pm 0.07\) |
| \(r\leq 30\) kpc, Diffuse | \(0.51\pm 0.21\) | \(-0.11\pm 1.21\) | \(11.16\pm 0.07\) |
| \(30\leq r\leq 80\) kpc, Total | \(0.63\pm 0.21\) | \(0.17\pm 1.22\) | \(11.14\pm 0.07\) |
| \(30\leq r\leq 80\) kpc, Diffuse | \(0.67\pm 0.21\) | \(0.03\pm 1.21\) | \(11.08\pm 0.07\) |
| \(80\leq r\leq 300\) kpc, Total | \(0.81\pm 0.20\) | \(0.36\pm 1.22\) | \(11.45\pm 0.07\) |
| \(80\leq r\leq 300\) kpc, Diffuse | \(0.89\pm 0.21\) | \(0.44\pm 1.19\) | \(11.20\pm 0.07\) |
| \(300\leq r\leq 600\) kpc, Total | \(0.99\pm 0.21\) | \(0.53\pm 1.21\) | \(11.34\pm 0.07\) |
| \(300\leq r\leq 600\) kpc, Diffuse | \(1.11\pm 0.21\) | \(0.65\pm 1.22\) | \(10.93\pm 0.07\) |

### Comparison to Literature

Perhaps the most surprising result of this paper is the relative lack of ICL evolution at radii larger than 80 kpc. Many analyses characterizing CG and ICL growth, including work by co-authors of this paper, have predicted significant growth of the ICL (e.g., Behroozi et al., 2013; Zhang et al., 2016; Contini et al., 2018; Golden-Marx et al., 2022) as a mechanism to explain the relatively slow CG growth observed below redshift 1.5 (e.g., Stott et al., 2010; Lidman et al., 2012; Lin et al., 2013). However, we do find signs of diffuse light redshift evolution in the CG, as well as in the CG-to-ICL transition region within 80 kpc.

Prior to this analysis, few works have analyzed large samples of ICL profiles over a broad redshift range to directly quantify their redshift evolution. One of the most comparable literature studies to our work is presented in DeMaio et al. (2020), which analyzed 42 clusters in the redshift range of 0.05 to 1.75. DeMaio et al. (2020) measured the BCG and ICL growth out to about 100 kpc from the cluster center, and found that the stellar masses of the BCG and ICL increase more rapidly than the cluster's total mass from redshift 1.5 to the present; they conclude that BCG+ICL growth is not solely driven by cluster mass growth. In this analysis, we indeed observe that the CG and ICL luminosity increases mildly within 30 kpc.

There are some differences between our work and that of DeMaio et al. (2020). DeMaio et al. (2020) find that the ICL grows by a factor of \(1.08\pm 0.21\) from redshift 1.55 to 0.4 when examining the 10 kpc to 100 kpc range, while in this paper we find evidence of ICL growth of 11% within 30 kpc from redshift 0.45 to 0.25. Our results are derived for clusters over a time span of roughly 1.7 Gyr (redshift 0.45 to 0.25), whereas the ICL growth observed in DeMaio et al. (2020) occurs over an extended period of 4.97 Gyr, from redshift 1.55 to 0.4. Interpolating from their measurements, the ICL measured between 10 kpc and 100 kpc can grow by 37% in 1.7 Gyr, significantly higher than the 11% measured in our work within 30 kpc, where we see the most growth. On the other hand, DeMaio et al. (2020) note a slow-down in BCG and ICL growth after redshift 0.4: there is no change in the diffuse light stellar mass (between 10 and 100 kpc) to halo mass relation from redshift 0.4 to 0.1. Furthermore, in our work, we do not find signs of ICL growth outside 80 kpc.

Golden-Marx et al. (2023a) studied ICL growth from redshift 0.8 to 0.2 and also did not find much evidence for ICL growth. Golden-Marx et al. (2023a) define the ICL with a large radial aperture, between 50 and 300 kpc, and their results are based on the same imaging data set and processing method as this paper. In both Golden-Marx et al. (2023a) and this work, we are limited by the PSF resolution (as discussed in Section 5) in probing smaller radial ranges such as 10 to 30 kpc.
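As a quick check of the interpolation above (treating the quoted factor of 1.08 as fractional growth accrued linearly over the 4.97 Gyr span) and of our own 11% figure:

```python
print(1.08 * (1.7 / 4.97))       # ~0.37: 37% growth scaled to 1.7 Gyr
print(10 ** (0.113 / 2.5) - 1)   # ~0.11: a 0.113 mag brightening = 11% in flux
```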
Combining the findings of Golden-Marx et al. (2023a), DeMaio et al. (2020), and this work, we speculate that the CG, as well as the region within 100 kpc of the CG, rather than the ICL at very large cluster radii, holds the key to explaining CG and ICL growth; however, the growth may not be very noticeable below redshift 0.45.

Another comparable analysis is that of Furnell et al. (2021), who studied ICL growth over the redshift range of 0.1 to 0.5 using 18 X-ray selected clusters with Hyper Suprime-Cam Subaru Strategic Program observations. Using a radial aperture of \(R_{500}\) and a surface brightness limit of 25 mag/arcsec\(^2\), Furnell et al. (2021) find that the ICL fraction increases by a factor of 2-4 over the 0.1 to 0.5 redshift range, with no obvious mass dependence. However, given that the ICL definition in Furnell et al. (2021) is based on a surface brightness limit, a radial aperture of \(R_{500}\) that scales with cluster mass, as well as a "divot" correction due to background subtraction in the images, it is possible that the ICL definitions in their analysis and ours are not directly comparable.

## 7 Summary and Prospects

In this paper, we present measurements of the CG and ICL radial profiles using the full 6 years of DES data. The major findings from those measurements can be summarized as follows.

(1) The diffuse light (CG+ICL) extends to 1 Mpc over the redshift range of 0.2 to 0.5 investigated in this analysis. Prior to this analysis, multiple studies had already detected ICL in the several-hundred-kpc to Mpc radial range, including both "stacking"-based analyses like this paper (e.g., Zibetti et al., 2005; Chen et al., 2022) and deep imaging studies of individual galaxy clusters (e.g., Krick & Bernstein, 2007; Kluge et al., 2021; Golden-Marx et al., 2022). Our finding again showcases the wide radial reach of the ICL, and there may be much more to learn from the radial properties of the ICL.

(2) We find that the diffuse light surface brightness and luminosity depend strongly on richness, a galaxy cluster mass proxy. This dependence is stronger at radii outside of 50 kpc from the cluster center. The richness, and thus cluster mass, dependence appears to be the major factor behind the differences between diffuse light observations in different subsamples, as their radial profiles scale well with the cluster's radius (\(R_{200\lambda}\)) and their fractions of the cluster's total stellar luminosity appear to be richness-independent. These results agree with previous studies that find a strong mass correlation with ICL luminosity or stellar mass, or a possible correlation between the cluster mass distribution and the ICL surface brightness (e.g., Montes & Trujillo, 2019; Huang et al., 2020; Sampaio-Santos et al., 2021; Kluge et al., 2021; Ragusa et al., 2022). Perhaps most interesting of all for cluster cosmology studies, this finding again suggests the potential of the ICL as a cluster mass proxy (Golden-Marx et al., 2023a) or as an aid to cluster finding algorithms (Huang et al., 2022). Cosmology studies based on galaxy cluster abundance measurements have long emphasized the importance of developing accurate and precise cluster mass proxies (i.e., galaxy cluster observables that scale well with mass), because a mass proxy with low scatter relative to the cluster's true mass can significantly reduce the requirement for follow-up observations, and thus reduce the derived uncertainties on cosmological parameters such as \(\Omega_{m}\) and \(\sigma_{8}\) (Rozo et al., 2010).
Further, the precision of those cosmology studies also depends on having an accurate mass proxy that is not affected by the cluster's large-scale structure environment (Wu et al., 2022). It will be particularly interesting to incorporate diffuse light quantities in developing cluster mass proxies or cluster finding algorithms (Huang et al., 2022). (3) Perhaps somewhat surprisingly, we find that the diffuse light at large cluster radii (outside of 80 kpc from the cluster center) does not evolve much with redshift in the 0.2 to 0.5 range. Closer to the cluster center, within 80 kpc, we have found some evidence that the diffuse light's luminosity increases with time (towards lower redshift). We speculate that ICL build-up may be more pronounced closer to the CG, while at large radii, on the scale of hundreds of kpc, ICL build-up is more in tune with the cluster mass build-up, which also explains the stronger cluster mass dependence at large radii. In the context of CG and ICL co-evolution studies, many (including the authors of this paper) have speculated that the ICL grows more rapidly than the BCGs below redshift 1. Given that the ICL and CG are often vaguely defined in those studies, our findings suggest that ICL growth happens at a much smaller radius (i.e., in the CG or at the CG outskirts) than we previously expected. On the other hand, our finding of little redshift evolution at large cluster radius is in excellent agreement with the hydrodynamic simulation study of IllustrisTNG (Pillepich et al., 2018), which finds little redshift evolution in diffuse light stellar mass once the cluster's halo mass is fixed. (4) We have measured additional properties of the ICL: the diffuse light color profile has a radial gradient, becoming bluer at larger radii and in less rich/massive clusters. In addition, the diffuse light SB profiles appear to be "self-similar" after scaling by the cluster radius, and the ICL fraction of the total cluster stellar light appears to drop at larger radii. Moving forward, there are multiple follow-up opportunities from our measurements. For example, in this paper, we have only studied the average properties of galaxy clusters in richness-redshift subsamples using a "stacking" method. As demonstrated in Golden-Marx et al. (2023a), it is possible to acquire diffuse light measurements of individual galaxy clusters, especially within the 300 kpc radial range. This would allow us to study how diffuse light properties may change with cluster ellipticity, dynamical state, or CG properties. It may also be interesting to compare the diffuse light to other galaxy cluster measurements, such as their weak lensing signals, as done in Sampaio-Santos et al. (2021). That said, there are also limitations in this study, especially related to the masking depth as discussed in Section 5. The redshift evolution results reported here are limited by the masking depth of cluster galaxies detected by DES. Faint or undetected cluster galaxies below the masking magnitude limit would have blended into our diffuse light measurements. In this analysis, we use the luminosity function and a test with a deeper magnitude limit to argue that the contribution from those faint galaxies does not affect our redshift evolution conclusions. However, this masking issue can be largely avoided by using a much deeper photometric catalog to more thoroughly mask the contribution of cluster galaxies. Future cosmic surveys like the Legacy Survey of Space and Time (LSST) from the Vera C.
Rubin Observatory will be able to provide such a photometric catalog. On a different note, those future surveys will also provide many more photons, and a much larger cluster sample for this "stacking" (averaging) method, significantly improving the accuracy of diffuse light measurements. Meanwhile, space-based cosmic survey programs like Euclid and the Nancy Grace Roman Telescope can provide imaging data that are less affected by sky background. We look forward to using those data in the coming years. ## Data Availability and Acknowledgements This paper is dedicated to the memory of the pioneering Egyptian/American astronomer, Sahar Allam, a woman whose wisdom, bravery, care, sensitivity, and sense of humor have guided and supported us in the past decade and during the most difficult times. We miss you dearly. Our analyses are performed with a few software packages, including Astropy (Astropy Collaboration et al., 2013, 2018, 2022), Numpy (Harris et al., 2020), Scipy (Virtanen et al., 2020), and Emcee (Foreman-Mackey et al., 2013). The data underlying this article were accessed from the DES and IllustrisTNG databases. The derived data generated in this research will be shared on reasonable request to the corresponding author. The IllustrisTNG simulations were undertaken with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Projects GCS-ILLU and GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS), as well as on the machines of the Max Planck Computing and Data Facility (MPCDF) in Garching, Germany. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF's NOIRLab, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, Texas A&M University, and the OzDES Membership Consortium. Based in part on observations at Cerro Tololo Inter-American Observatory at NSF's NOIRLab (NOIRLab Prop. ID 2012B-0001; PI: J. Frieman), which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. The DES data management system is supported by the National Science Foundation under Grant Numbers AST-1138766 and AST-1536171. The DES participants from Spanish institutions are partially supported by MICINN under grants ESP2017-89838, PGC2018-094773, PGC2018-102021, SEV-2016-0588, SEV-2016-0597, and MDM-2015-0509, some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. Research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013) including ERC grant agreements 240672, 291329, and 306478. We acknowledge support from the Brazilian Instituto Nacional de Ciencia e Tecnologia (INCT) do e-Universo (CNPq grant 465376/2014-2). This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
2308.08104
ConservationBots: Autonomous Aerial Robot for Fast Robust Wildlife Tracking in Complex Terrains
Today, the most widespread, widely applicable technology for gathering data relies on experienced scientists armed with handheld radio telemetry equipment to locate low-power radio transmitters attached to wildlife from the ground. Although aerial robots can transform labor-intensive conservation tasks, the realization of autonomous systems for tackling task complexities under real-world conditions remains a challenge. We developed ConservationBots-small aerial robots for tracking multiple, dynamic, radio-tagged wildlife. The aerial robot achieves robust localization performance and fast task completion times -- significant for energy-limited aerial systems while avoiding close encounters with potential, counter-productive disturbances to wildlife. Our approach overcomes the technical and practical problems posed by combining a lightweight sensor with new concepts: i) planning to determine both trajectory and measurement actions guided by an information-theoretic objective, which allows the robot to strategically select near-instantaneous range-only measurements to achieve faster localization, and time-consuming sensor rotation actions to acquire bearing measurements and achieve robust tracking performance; ii) a bearing detector more robust to noise and iii) a tracking algorithm formulation robust to missed and false detections experienced in real-world conditions. We conducted extensive studies: simulations built upon complex signal propagation over high-resolution elevation data on diverse geographical terrains; field testing; studies with wombats (Lasiorhinus latifrons; nocturnal, vulnerable species dwelling in underground warrens) and tracking comparisons with a highly experienced biologist to validate the effectiveness of our aerial robot and demonstrate the significant advantages over the manual method.
Fei Chen, Hoa Van Nguyen, David A. Taggart, Katrina Falkner, S. Hamid Rezatofighi, Damith C. Ranasinghe
2023-08-16T02:24:26Z
http://arxiv.org/abs/2308.08104v3
# ConservationBots: Autonomous Aerial Robot for Fast Robust Wildlife Tracking in Complex Terrains ###### Abstract Radio tagging and tracking are fundamental to understanding the movements and habitats of wildlife in their natural environments. Today, the most widespread, _widely applicable_ technology for gathering data relies on experienced scientists armed with handheld radio telemetry equipment to locate low-power radio transmitters attached to wildlife from the ground. Although aerial robots can transform labor-intensive conservation tasks, the realization of autonomous systems for tackling task complexities under real-world conditions remains a challenge. We developed _ConservationBots_--small aerial robots for tracking multiple, dynamic, radio-tagged wildlife. The aerial robot achieves robust localization performance and fast task completion times--significant for energy-limited aerial systems while avoiding close encounters with potential, counter-productive disturbances to wildlife. Our approach overcomes the technical and practical problems posed by combining a lightweight sensor with new concepts: i) planning to determine both trajectory and measurement actions guided by an information-theoretic objective, which allows the robot to strategically select near-instantaneous range-only measurements to achieve faster localization, and time-consuming sensor rotation actions to acquire bearing measurements and achieve robust tracking performance; ii) a bearing detector more robust to noise; and iii) a tracking algorithm formulation robust to missed and false detections experienced in real-world conditions. We conducted extensive studies: simulations built upon complex signal propagation over high-resolution elevation data on diverse geographical terrains; field testing; studies with wombats (_Lasiorhinus latifrons_; nocturnal, vulnerable species dwelling in underground warrens) and tracking comparisons with a highly experienced biologist to validate the effectiveness of our aerial robot and demonstrate the significant advantages over the manual method. aerial robotics, autonomous UAV, radio-collared animals, remote sensing, VHF telemetry, position tracking ## 1 Introduction Understanding movements, activities, and patterns in animal behaviors is essential for biodiversity conservation, natural resource management, and precision agriculture. Today, field scientists employ multiple techniques, such as vision-based sensors (Christiansen et al., 2014; Gonzalez et al., 2016; Olivares-Mendez et al., 2015; Selby et al., 2011), Global Positioning System (GPS) tags, or Very High Frequency (VHF) tags (Cochran and Lord Jr, 1963; Kenward, 2000) to study animal behaviors, movements, and activity. Despite the advances in technology, VHF radio telemetry or radio-tracking is still the most important tool employed to study the movement of animals in their natural environments: VHF tags are smaller, lighter, suited to nearly all mammal and bird species, and operate for longer than their GPS-based counterparts. Consequently, they remain a popular and cost-effective technique for field studies (Bridge et al., 2011; Wikelski et al., 2007). However, the traditional method of radio-tracking typically requires researchers to trek long distances in the field, armed with cumbersome radio receivers with hand-held antennas and battery packs to manually home in on radio signals emitted from radio-tagged or collared animals.
The precious spatial data acquired through radio-tracking comes at a significant cost to researchers in terms of manpower, time, and funding. The problem is often compounded by other challenges, such as low animal recapture rates, equipment failures, and the inability to track animals that move into inaccessible terrain or underground burrows. Developments in low-cost unmanned aerial vehicles (UAVs) with the capacity to carry payloads, such as radio receivers and antennas (Anderson, 2017), are a potential solution. Because of the advantages offered by the ease of deployment and _high mobility_, UAVs have the potential to automate and scale up manual tasks to significantly reduce the time, labor, and cost of employing traditional tracking approaches. Early achievements in _autonomous systems_ for wildlife tracking have demonstrated robotic platforms for the task (O. M. Cliff et al., 2018; Nguyen, Chesser, et al., 2019; Tokekar et al., 2010; Vander Hook et al., 2014). The approaches localize VHF radio-tagged animals using either the Received Signal Strength Indicator (RSSI) or Angle of Arrival (AoA) of radio signals emitted from radio tags, where the robot's trajectory planning algorithm endows autonomy to improve the localization accuracy (O. Cliff et al., 2015; O. M. Cliff et al., 2018) or reduce the tracking error (Nguyen, Chesser, et al., 2019). Despite the recent advancements, the realization of an autonomous UAV capable of dealing with the technical and practical complexities of the problem remains a challenge. For example: * The RSSI-based approach--capable of rapid measurements and benefiting from a simple receiver that requires a single directional antenna--only demonstrates superior performance across mostly flat terrains or when the radio propagation model is accurately known (Nguyen, Chesser, et al., 2019). But building and employing an accurate radio propagation model requires access to terrain information and dealing with variables that can change dynamically, such as changes in signal attenuation due to the appearance of trees and the impact of moisture conditions on signal propagation. * The AoA approach--although more robust in unfamiliar or complex environments--requires a larger, bulkier antenna and long measurement acquisition times: 45 seconds per measurement in (O. Cliff et al., 2015). * Practically, autonomous systems need to operate under the limited battery power, on-board processing, flight times, and payload capabilities of UAVs (ideally, an aerial robot should fly, track and locate animals, and return to base without needing an intervening battery change away from the base). * Multipath propagation of VHF radio signals in complex terrains complicates automatic localization algorithms and the detection of the often weak, simply modulated signals from VHF radio collars. * Potential disturbances to animals caused by the operation of the UAV (Headland et al., 2021; Hodgson and Koh, 2016; Scobie and Hugenholtz, 2016) are counterproductive to the task and require mitigation strategies. Our work formulates a planning problem for a new _hybrid_ approach that exploits simple, fast RSSI measurement acquisitions and selectively uses the slower, more robust AoA measurements: the aerial robot is given the autonomy to plan not only its trajectory but also its measurement actions to track and locate multiple, mobile, radio-tagged wildlife simultaneously.
The robot we have developed is fast, robust, and scalable to simultaneously track and localize multiple radio-tagged wildlife while planned trajectories _minimize_ disturbances to wildlife. We summarize our main contributions below: 1. We propose planning not only trajectories but also the measurement method. The _planning_ algorithm formulation determines the most informative trajectory and measurement acquisition actions to reduce the time needed to track and locate multiple, dynamic, radio-collared wildlife in various terrain conditions. The planner allows the robot to i) exploit the simplicity and rapidity of RSSI measurement acquisitions for range-only tracking; and ii) select the time-consuming bearing measurement method when range-only measurement uncertainty is high. Importantly, to avoid close approaches to wildlife and minimize potential disturbances, trajectory planning is constrained by a probabilistic void region. 2. Our state estimation algorithm for tracking is robust across variable environmental and terrain conditions as well as VHF signal propagation artifacts to localize radio-tagged wildlife in various outdoor environments in 3D settings without detailed terrain information. To achieve this robustness: i) we integrate an imprecise likelihood function to account for complex radio propagation effects such as signal diffraction and vegetation attenuation, removing the need for a precise RSSI measurement model. This method expands the versatility of the estimation algorithm, allowing RSSI measurements to be used for tracking when an accurate model of measurements in a given terrain is difficult to build; ii) we propose a bearing detector based on the rotation AoA measurement method to generate more robust AoA measurements under noisy conditions; and iii) compared to prior approaches, we have _explicitly_ considered practical challenges in _tracking_ radio-tagged animals, such as missed measurements, object birth and death, and poor signal-to-noise ratios impacting signal detection probabilities, and formulated a Bernoulli Bayesian filter (BF) for the tracking task. 3. To examine the performance of our system, we used extensive Monte-Carlo simulations modeling complex VHF signal propagation over 3D terrains with different levels of complexity, together with extensive field experiments. Our field testing included over \(30\) missions, a pilot study with radio-tagged southern hairy-nosed wombats (_Lasiorhinus latifrons_), a nocturnal, burrowing species that uses underground warrens and is, therefore, challenging to track with manual methods, as well as autonomous-system _versus_ expert-human-tracker field trials to demonstrate the efficacy, performance, and versatility of our system. ## 2 Related Work The problem of using UAVs to localize radio sources has been studied recently in the literature. More broadly, existing studies can be categorized based on the measurement methods employed: those using RSSI-based methods or those using AoA-based methods. The design of the system and algorithms is primarily a function of the measurement method employed; hence, it is useful to consider previous methods from this perspective. We also provide a brief review of related works in multi-object tracking methods since our work focuses on tracking multiple animals. ### 2.1 AoA-based systems A widely used method for measuring the AoA of an RF signal is through the use of a phased array antenna.
While the method can measure AoA with high accuracy and minimal measurement time, it requires specialized hardware and sophisticated signal processing algorithms. Therefore, such a sensor payload is difficult to mount and employ on a UAV platform due to weight, size, and processing power constraints. Consequently, an alternative approach has emerged using a directional antenna, which is rotated to determine the direction of the signal source to detect AoA. Graefenstein et al., 2009, an early study, demonstrated a ground-based robot system that used a rotating directional antenna to determine the AoA and locate the source of a wireless node. Venkateswaran et al., 2013 later developed an RF source-seeking system with a single-wing rotating micro aerial vehicle. By fitting a directional antenna to its wing and exploiting the natural rotation of the vehicle, the system can quickly estimate the AoA of an RF source at each rotation. Figure 1: A ConservationBot in flight, right after take-off to track and locate Southern Hairy-nosed wombats. Inset: Southern Hairy-nosed wombat _Lasiorhinus latifrons_ released into their habitat after being tagged with a VHF radio collar. Early efforts to demonstrate the rotating AoA-based methods for radio-tagged animal localization were reported in Tokekar et al., 2010, O. Cliff et al., 2015, VonEhr et al., 2016, O. M. Cliff et al., 2018, and Torabi et al., 2018. The studies developed a multi-rotor UAV system with a path-planning algorithm to direct the UAV to collect AoA measurements. AoA methods are robust against the multi-path effects of radio propagation, but the measurement time required (45 seconds per measurement in O. Cliff et al., 2015) is significantly longer than that of RSSI-based approaches with near-instantaneous measurements. The impact of longer measurement times is practical--it limits the maximum search area, increases the flight time needed to complete a task, and reduces the ability to locate mobile animals. For example, while undertaking a slow rotation can increase the accuracy of the acquired AoA measurement for static wildlife, it is counterproductive if objects are mobile. An alternative, a pseudo-bearing approach (Dressel & Kochenderfer, 2018), sought to address the limitations of rotational AoA methods by incorporating an additional omni-directional antenna along with a directional antenna, albeit for operation at a much higher frequency, 2.4 GHz, than VHF. As a result, the methods can perform measurement updates more quickly and improve localization time. However, this approach requires a more complex radio receiver with multiple antennas; consequently, it increases the weight of the sensor payload on a UAV, especially in VHF signal tracking scenarios necessitating antennas with large physical dimensions, while trajectory planning with such an approach also remains to be demonstrated in practice. ### 2.2 RSSI-based systems In contrast, RSSI-based systems utilize signal strength to estimate the distance between the radio transmitter and the receiver. This approach only requires a simple and lightweight receiver and antenna. The use of RSSI-based measurements on board a UAV to locate radio-tagged wildlife was demonstrated by (Korner et al., 2010), where a fixed-wing UAV equipped with a directional antenna was used to locate a fixed-location radio tag. Then, a system based on a multirotor UAV with an omnidirectional antenna was presented in Santos et al., 2014.
The approach employed a receiver to capture measurements of the radio signal's signal-to-noise ratio and estimated the radio tag's position _offline_. Nguyen, Chesser, et al., 2019 and Nguyen et al., 2020 demonstrated an RSSI-based aerial robot with online path planning for RSSI measurements and a particle filter-based estimation method, using a customized, lightweight directional antenna for localizing multiple _mobile_ objects on relatively flat terrains. Hui et al., 2021 took a similar approach using a small UAV with a dipole antenna to collect RSSI measurements along a fixed trajectory, where the positions of radio-collared animals were determined _offline_ based on a signal propagation model. In contrast to AoA-based methods for the task, an RSSI-based system is more efficient due to its simpler receiver and faster measurement acquisition times. However, the key limitation is sensitivity to environmental effects impacting radio signal propagation, such as signal diffraction, scattering, and vegetation attenuation. This is because the propagation characteristics of the radio signal need to be accurately modeled, which is difficult in practice. Topographical variations, vegetation coverage, or weather can result in unpredictable attenuation of radio signals and thus limit the scenarios in which RSSI-based methods can be reliably applied. As a consequence, RSSI-based methods are mostly used in less complex environments where radio propagation is easily modeled and predictable. ### 2.3 Multi-object tracking The primary problem in multi-object tracking is to estimate the states of multiple objects when the associations between measurements and objects are unknown. Traditional methods, including the joint probabilistic data association (JPDA) filter (Bar-Shalom, 1987) and the multiple hypothesis tracking (MHT) filter (Blackman, 1986), explicitly associate measurements and objects. More recent approaches based on random set statistics (Mahler, 2007b) have led to methods such as the probability hypothesis density (PHD) filter (Mahler, 2003), the cardinalized PHD (CPHD) filter (Mahler, 2007a), the multi-object multi-Bernoulli filter (Mahler, 2007b; B.-T. Vo et al., 2009), the generalized labeled multi-Bernoulli (GLMB) filter (B.-T. Vo & Vo, 2013), and the labeled multi-Bernoulli (LMB) filter (Reuter et al., 2014). However, in our problem, individual radio-collared wildlife can be uniquely identified by the frequency of their radio-collar signals. Therefore, we do not need to solve the complex data association problem. The model for measurements (RSSI or AoA) is non-linear; therefore, a filter suitable for non-linear systems, such as a particle filter (Gordon et al., 1993), was used in prior robotic systems for the task. In contrast, we consider a particle implementation of a Bernoulli filter (Mahler, 2007b) formulation, not only to account for a non-linear system but also to explicitly model practical signal propagation effects, such as missed detections and false detections, in the formulation. This makes the method of estimating the location of wildlife more robust than a plain particle filter. ### 2.4 Summary RSSI-based and AoA-based methods are commonly used for estimating the location of radio sources. When radio propagation can be accurately modeled, RSSI-based methods provide significant advantages over AoA-based approaches, given their simplistic receiver design and low measurement time.
However, in a complex environment, the AoA approach is a more robust method due to its invariance to various environmental variables and the difficulty of building accurate propagation models for complex terrains to support RSSI-based methods. We present an approach that combines the advantages of these two measurement approaches: an aerial robot system that takes advantage of both methods while minimizing their limitations. Importantly, existing systems, irrespective of the measurement method, used an online estimator to determine the location of objects. The estimator, based on Bayesian estimation theory, requires accurate noise models and sensor measurement models to determine the probability distribution of objects; these estimation methods include particle filters (Nguyen, Chesser, et al., 2019)(Korner et al., 2010), grid filters (O. Cliff et al., 2015)(O. M. Cliff et al., 2018), and Kalman filters (Jensen & Chen, 2013). Notably, these estimators cannot handle practical challenges in tracking, such as missed detections and false detections; explicitly modeling these real-world conditions may lead to better estimation accuracy. ## 3 Proposed Planning for Tracking and Localization Problem Formulation We consider the problem of controlling a UAV equipped with a simple sensor system--a directional antenna and a digital signal processing module--for autonomously localizing multiple radio-tagged wildlife while maintaining a safe distance from the wildlife of interest to prevent potential disturbances. Performing tracking (estimating the positions of individual radio-tagged wildlife over time) in real-time necessitates an online estimation method, and performing the task autonomously necessitates a dynamic planning method for robot navigation. Figure 2: An overview of the proposed Bayesian-POMDP theoretical framework to realize an autonomous aerial vehicle for fast, robust tracking of multiple wildlife in complex terrains. Here, the UAV state is denoted by \(\mathbf{u}_{t}\), and the belief densities of the set of objects (in our case wildlife) are denoted by \(\mathbf{X}_{t}\). Briefly: i) the proposed _compensated AoA measurement detector_ employs RSSI measurements \(\mathbf{Z}_{\mathbf{R},t}\) during an AoA measurement action \(\mathbf{a}_{t}\in\mathbb{A}_{\text{AoA}}\) to generate an AoA measurement \(\mathbf{Z}_{\mathbf{A},t}\); ii) Bernoulli filters utilize RSSI measurements \(\mathbf{Z}_{\mathbf{R},t}\) along with the imprecise RSSI model and AoA measurements \(\mathbf{Z}_{\mathbf{A},t}\) along with the AoA model at time \(t\) to achieve robust estimations of object states (e.g., the position of each wildlife); and iii) the _new_ measurement (AoA, RSSI) and trajectory planning formulation using a POMDP generates control actions \(\mathbf{a}_{t}\) while ensuring the UAV maintains a safe distance from the wildlife of interest by generating void constrained trajectories.
Further, in contrast to previous approaches, we consider dynamically planning both the trajectory and the signal measurement method using a POMDP (partially observable Markov decision process) formulation to allow the autonomous selection of the most informative measurement method: i) simple and fast RSSI measurements; or ii) slower but more robust AoA measurements. Figure 2 provides an overview of our proposed planning for tracking and localization approach built upon a joint Bayesian-POMDP theoretical framework, which includes: i) an AoA measurement model and an RSSI measurement imprecise model for increased tracking and localization robustness against the impacts of varying terrain and environmental conditions; ii) compensated rotation AoA measurement method to generate AoA measurements with higher accuracy under noisy conditions; iii) Bernoulli Filter employing both RSSI and AoA measurements to produce estimated object states (tracks), even under low measurement detection probabilities, experienced in _practical_ system deployment settings; iv) measurement and trajectory planning to select both the best trajectory and measurement method for faster and more robust localization under different terrain conditions while _minimizing disturbances_ to the wildlife of interest. In the following sections, we detail our formulation of real-time planning for tracking and localizing wildlife described in Figure 2. ### State Estimation and Measurement Models This section presents our online tracking and localizing formulation under the theoretical framework of a Bernoulli Bayesian filter to formulate a robust method of estimating the locations of wildlife using the proposed AoA measurement detection method, the associated measurement model, and the imprecise RSSI measurement model. Prior to proceeding further, we introduce the following conventions for notation consistency: standard letters (_e.g._\(x,~{}X\)) for scalar values, lowercase bold letters (_e.g._\(\mathbf{x}\)) for vector values (_e.g._ single-object states); bold capital letters (_e.g._\(\mathbf{X}\)) for set values (_e.g._ multi-object states); blackboard letters (_e.g._\(\mathbb{X}\)) for state spaces. #### 3.1.1 Radio Signal Model of a VHF Wildlife Collar It is first useful to understand the nature of the signal source since the measures of this signal will need to be employed to estimate the location of each wildlife. Each radio tag employed for studying wildlife emits an on-off-keying signal at a unique frequency \(f\) with an unknown time offset \(\tau\in\mathbb{R}_{0}^{+}\) as illustrated in Figure 3. Now, let's denote the state of the UAV (observer) as \(\mathbf{u}=[\mathbf{u}_{I}^{T},\theta^{(u)}]^{T}\in\mathbb{R}^{3}\times[0,2\pi)\), including its position \(\mathbf{u}_{I}=[u_{x},u_{y},u_{z}]^{T}\) and heading angle \(\theta^{(u)}\) and the state of each radio-tagged wildlife as \(\mathbf{x}=[p_{x},p_{y},p_{z}]^{T}\in\mathbb{X}\subseteq\mathbb{R}^{3}\), where \((\cdot)^{T}\) is the matrix transpose. Then, the noiseless signal \(\chi(t)\) at time \(t\) from an object with state \(\mathbf{x}\) received by a directional antenna mounted on a UAV with state \(\mathbf{u}\) in the far field region can be modeled as (Nguyen, Rezatofighi, et al., 2019): \[\chi(t)=\gamma(\mathbf{x},\mathbf{u})\cos[2\pi ft+\Phi_{t}]\operatorname{rect} (D,T,t-\tau) \tag{1}\] where [MISSING_PAGE_POST] Figure 3: On-off-keying signal with pulse width \(D\), period \(T\), time offset \(\tau\) and amplitude \(\gamma\). 
#### 3.1.2 Bernoulli Filter To infer the unknown state \(\mathbf{x}\in\mathbb{X}\) (3D coordinates of wildlife) given noisy measurements \(z\in\mathbb{Z}\)--AoA and RSSI measurements extracted from the received signals--we consider a Bernoulli filter, also known as the JoTT (joint object detection and tracking) filter (Mahler, 2007b)(B. T. Vo, 2008)(B. T. Vo et al., 2012). Recall each object--radio-collared wildlife--emits a signal at a unique frequency; hence a unique object's state \(\mathbf{x}\) can be estimated from the measurements and tracked independently. Thus, we do not need to solve the complex data association problems typical of a multi-object tracking setting and can estimate the state of each object using an independent Bernoulli filter formulation. The Bernoulli filter is an exact Bayesian filter based on random finite set (RFS) theory (Mahler, 2007b). Notably, the filter is capable of handling practical signal detection artifacts such as missed and false detections and dealing with the reality of animals with VHF radio collars wandering in or out of the sensor's detection range in a unified framework. The Bernoulli RFS \(\mathbf{X}\) can either have at most one element with probability \(r\), distributed over the state space \(\mathbb{X}\) according to the probability density function (PDF) \(s(\mathbf{x})\), or be empty with probability \(1-r\): \[\Psi(\mathbf{X})=\begin{cases}1-r,&\text{if }\mathbf{X}=\emptyset\\ r\cdot s(\mathbf{x}),&\text{if }\mathbf{X}=\{\mathbf{x}\}\\ 0&\text{if }|\mathbf{X}|\geq 2\end{cases} \tag{4}\] where \(|\mathbf{X}|\) denotes the cardinality of \(\mathbf{X}\). Given the measurement set \(\mathbf{Z}_{t}=\{z_{t}^{(1)},\ldots,z_{t}^{(|\mathbf{Z}_{t}|)}\}\) at time \(t\), the posterior distribution \(\Psi(\mathbf{X}_{t}|\mathbf{Z}_{1:t})\) is propagated from time \(t-1\) to time \(t\) in two steps, the _prediction_ step and the _update_ step. Notably, the Bernoulli RFS (4) is entirely described by its existence probability \(r\) and single-object PDF \(s(\mathbf{x})\). Therefore, the prediction and update steps of (4) only need to propagate \(r\) and \(s(\mathbf{x})\).
The prediction step for the Bernoulli filter is \[\begin{split} r_{t|t-1}&=r_{b}\cdot(1-r_{t-1|t-1})+r _{s}\cdot r_{t-1|t-1}\\ s_{t|t-1}(\mathbf{x})&=\frac{r_{b}\cdot(1-r_{t-1|t-1}) \cdot b_{t|t-1}(\mathbf{x})+r_{s}\cdot r_{t-1|t-1}\cdot\int q_{t|t-1}( \mathbf{x}|\mathbf{x}^{\prime})\cdot s_{t-1|t-1}(\mathbf{x}^{\prime})d\mathbf{x}^{\prime}}{r_{b}\cdot(1-r_{t-1|t-1})+r_{s}\cdot r_{t-1|t-1}}\end{split} \tag{5}\] where \(r_{b}\) is the probability of object birth and \(b_{t|t-1}(\mathbf{x})\) is the spatial distribution of predicted object birth; these two parameters model objects entering or leaving the search space. In our context, wildlife may disappear from the search space, such as by going underground, or appear suddenly from burrows, as is the case with the species we investigate in our field trials. \(r_{s}\) is the probability of object survival, and \(q_{t|t-1}(\mathbf{x}|\mathbf{x}^{\prime})\) is the object transitional density, which describes the object's dynamics. The update step for the Bernoulli filter is \[\begin{split} r_{t|t}&=\frac{1-\Delta_{t}}{1-\Delta_{ t}\cdot r_{t|t-1}}\cdot r_{t|t-1}\\ s_{t|t}(\mathbf{x})&=\frac{1-P_{D}(\mathbf{x})+P_{D} (\mathbf{x})\sum_{\mathbf{z}\in\mathbf{Z}_{t}}\frac{L_{t}(\mathbf{z}| \mathbf{x})}{\lambda c(\mathbf{z})}}{1-\Delta_{t}}\,s_{t|t-1}(\mathbf{x})\end{split} \tag{6}\] where \[\Delta_{t}=\int P_{D}(\mathbf{x})s_{t|t-1}(\mathbf{x})d\mathbf{x}-\sum_{\mathbf{z }\in\mathbf{Z}_{t}}\frac{\int P_{D}(\mathbf{x})L_{t}(\mathbf{z}|\mathbf{x})s_{t|t-1}(\mathbf{x} )d\mathbf{x}}{\lambda c(\mathbf{z})} \tag{7}\] with \(L_{t}(\mathbf{z}|\mathbf{x})\) being the measurement likelihood function and \(\lambda\) being the expected number of false measurements with PDF \(c(\mathbf{z})\). \(P_{D}(\mathbf{x})\) is the probability of detection given state \(\mathbf{x}\). For a more detailed derivation and implementations of the Bernoulli filter, we refer the reader to (Mahler, 2007b)(Ristic et al., 2013). The filter update step (6) requires the likelihood of measurements \(L_{t}(\mathbf{z}|\mathbf{x})\) to obtain the posterior distribution. We derive the measurement likelihood for two types of measurements: i) RSSI and ii) AoA. Recall we consider an imprecise measurement model for RSSI; we describe its formulation next, followed by our proposed robust AoA detector formulation and the AoA measurement model for filter updates.
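As a concrete reference, the following is a minimal particle (sequential Monte Carlo) sketch of the prediction step (5) and update step (6)-(7). The function and argument names are ours, and the transition, birth, detection-probability, likelihood, and clutter models are supplied by the caller; this is an illustrative sketch, not the flight implementation.

```python
import numpy as np

def bernoulli_predict(r, parts, w, r_b, r_s, transition, birth, n_birth):
    """Prediction (5): surviving particles pass through the transition density
    with total weight r_s*r; birth particles from b(x) carry weight r_b*(1-r)."""
    r_pred = r_b * (1.0 - r) + r_s * r
    surv_parts = transition(parts)                 # samples from q(x|x')
    born_parts = birth(n_birth)                    # samples from b(x)
    w_all = np.concatenate([r_s * r * w,
                            np.full(n_birth, r_b * (1.0 - r) / n_birth)])
    parts_all = np.concatenate([surv_parts, born_parts])
    return r_pred, parts_all, w_all / w_all.sum()

def bernoulli_update(r, parts, w, Z, lik, p_d, lam, clutter):
    """Update (6)-(7) for a measurement set Z (possibly empty)."""
    pd = p_d(parts)                                # P_D(x_i) per particle
    delta = np.sum(pd * w)
    meas = np.zeros(len(parts))
    for z in Z:
        L = lik(z, parts)                          # L_t(z | x_i) per particle
        meas += L / (lam * clutter(z))
        delta -= np.sum(pd * L * w) / (lam * clutter(z))
    r_new = (1.0 - delta) * r / (1.0 - delta * r)  # first line of (6)
    w_new = w * (1.0 - pd + pd * meas)             # second line of (6)
    return r_new, w_new / w_new.sum()
```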
#### 3.1.3 Imprecise RSSI measurement model Given a signal of the form (1), the primary measurement that can be obtained is the RSSI measurement, which is completely characterized by \(\gamma(\mathbf{x},\mathbf{u})\) and defined as the signal's root-mean-square (RMS) power. Suppose the receiver gain \(G_{r}=1\); then an RSSI measurement can be expressed as: \[\begin{split} z_{R}=h_{R}(\mathbf{x},\mathbf{u})& =10\log_{10}\big{(}(\gamma(\mathbf{x},\mathbf{u})/\sqrt{2})^{2} \big{)}\\ &=\tilde{\Gamma}(d_{0})-10n\log_{10}(d(\mathbf{x},\mathbf{u})/d_ {0})+\tilde{G}_{a}(\zeta(\mathbf{x},\mathbf{u}))\end{split} \tag{8}\] where \(\tilde{\Gamma}(d_{0})=10\log_{10}((\frac{\Gamma(d_{0})}{\sqrt{2}})^{2})\), and \(\tilde{G}_{a}(\zeta(\mathbf{x},\mathbf{u}))=10\log_{10}(G_{a}(\zeta(\mathbf{x },\mathbf{u}))^{2})\). In a non-urban environment, the received radio signal is usually corrupted by environmental noise and can be modeled as \[z_{R}=h_{R}(\mathbf{x},\mathbf{u})+w_{R} \tag{9}\] where \(w_{R}\sim\mathcal{N}(\cdot;0,\sigma_{R}^{2})\) is Gaussian white measurement noise with variance \(\sigma_{R}^{2}\). Equation (9) yields the RSSI likelihood function: \[\mathcal{L}_{R}(z_{R};\mathbf{x},\mathbf{u})=\mathcal{N}(z_{R};h_{R}(\mathbf{ x},\mathbf{u}),\sigma_{R}^{2}) \tag{10}\] where \(\mathcal{N}(\cdot;\mu,\sigma^{2})\) is the Gaussian probability density function with mean \(\mu\) and variance \(\sigma^{2}\). The path-loss model in (9) is accurate when the receiver is within direct line-of-sight of the transmitter and other forms of loss are negligible. However, in complex terrain conditions, other forms of loss, introduced by multi-path propagation, diffraction, scattering, shadowing, or attenuation due to vegetation, are not negligible, which makes such a model generally inadequate. An illustration of vegetation and terrain loss variations over an example terrain, gathered from digital elevation map data from (Australia-Geoscience, 2022), is presented in Figure 4. Without detailed terrain and site information (such as vegetation), it is generally difficult or impractical to construct an accurate measurement model, especially under complex terrain conditions such as hills, mountains, and varying vegetation conditions. In order to handle the practical constraints of using RSSI measurements, we consider incorporating a model uncertainty or _imprecision_ to remove the need for an accurate measurement model. We introduce an additional error term \(\mu_{S}(\mathbf{x},\mathbf{u})\), which can represent any practical propagation complexities that cause the RSSI measurements to deviate from the simple model in (9). Now, we express the RSSI measurement model that incorporates various measurement model uncertainties: \[z_{R}=h_{R}(\mathbf{x},\mathbf{u})+\mu_{S}(\mathbf{x},\mathbf{u})+w_{R}. \tag{11}\] The term \(\mu_{S}(\mathbf{x},\mathbf{u})\) can be considered as an unknown parameter of the measurement function, and (11) can be rewritten as: \[h(\mathbf{x},\mathbf{u};\mu)=h_{R}(\mathbf{x},\mathbf{u})+\mu_{S}(\mathbf{x },\mathbf{u}) \tag{12}\] where we refer to \(\mu\in[\mu_{min},\mu_{max}]\) as the _(RSSI) model imprecision_, and \(\mu_{min}=\min(\mu_{S}(\mathbf{x},\mathbf{u}))\) and \(\mu_{max}=\max(\mu_{S}(\mathbf{x},\mathbf{u}))\), \(\forall\mathbf{x}\in\mathbb{X},\mathbf{u}\in\mathbb{U}\), are the lower and upper bounds of the model imprecision, respectively. Although it is usually impractical to know \(\mu_{S}(\mathbf{x},\mathbf{u})\) precisely, it is relatively easy to estimate its lower and upper bounds \(\mu_{min}\) and \(\mu_{max}\). Due to the presence of the unknown parameter \(\mu\), \(h:\mathbb{X}\rightarrow\mathbb{Z}\) from (12) is not a function, since a point in \(\mathbb{X}\) now maps to infinitely many points in \(\mathbb{Z}\). To find the likelihood function for the imprecise measurement model in (11), the measurement set \(h(\mathbf{x},\mathbf{u};\mu)+w_{R}\) can be represented by a random closed set \(\mathcal{Z}\subseteq\mathbb{Z}\) (Mahler, 2007b). Then the generalized likelihood function characterized by the imprecise measurement function \(h(\mathbf{x},\mathbf{u};\mu)\), which accounts for the model imprecision \(\mu\), is defined as: \[\tilde{L}_{R}(z;\mathbf{x},\mathbf{u})=Pr(z\in\mathcal{Z})=Pr(z\in h( \mathbf{x},\mathbf{u};\mu)+w_{R}) \tag{13}\] where \(Pr(\cdot)\) denotes the probability of an event. In the following sections, we will refer to (13) as the imprecise likelihood function since \(\mu\) represents the model imprecision.
When \(w_{R}\) is zero-mean white Gaussian noise, (13) can be solved analytically (Ristic, 2011): \[\tilde{L}_{R}(z;\mathbf{x},\mathbf{u})=\int_{\underline{h}}^{\overline{h}}\mathcal{N }(h;z,\sigma_{R}^{2})dh=\mathcal{C}(z;\underline{h},\sigma_{R}^{2})-\mathcal{ C}(z;\overline{h},\sigma_{R}^{2}) \tag{14}\] where \(\mathcal{C}(z;\mu,\sigma^{2})=\int_{-\infty}^{z}\mathcal{N}(\zeta;\mu,\sigma^ {2})d\zeta\) is the Gaussian cumulative distribution function (CDF), \(\underline{h}=\min_{\mu}(h(\mathbf{x},\mathbf{u};\mu))\) and \(\overline{h}=\max_{\mu}(h(\mathbf{x},\mathbf{u};\mu))\). We can understand the consequence of using an imprecise likelihood as having the effect of broadening the posterior PDF and imparting a higher variance compared to using the precise likelihood where the model imprecision \(\mu\) is known (Ristic, 2011). The imprecise likelihood enables us to expand the application scenarios of RSSI measurements for object tracking and localization by simply providing upper and lower bounds on the model imprecision \(\mu\). In addition to the RSSI measurement, we also consider obtaining 2-D (azimuth) angle-of-arrival measurements, made possible by planning a rotation maneuver of the UAV, as these are robust to the variations in RSSI measurements caused by complex terrain conditions. However, AoA measurements are still derived from the RSSI measurement receiver, and we propose a robust detection method and describe the AoA measurement model in the following section. Figure 4: (a) An example complex terrain from (Australia-Geoscience, 2022) with a fixed receiver (RX) marked by a red circle; (b) Simulated environment-dependent signal strength attenuation resulting _only_ from terrain loss and vegetation loss, without the attenuation component \((d_{0}/d(\mathbf{x},\mathbf{u}))^{n}\) over distance, with a transmitter placed at each coordinate point on the terrain map. For transmitter locations without significant blockage from terrain conditions (located at \(Y<1000\,\mathrm{m}\)), the signal attenuation is significantly less than for those blocked by the terrain (most locations where \(Y>1000\,\mathrm{m}\)). Notably, the simulated terrain loss is formulated in (33) and the vegetation loss in (32) in Section 4.1.
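Since (14) reduces to a difference of Gaussian CDFs, it is straightforward to evaluate. The following minimal sketch (function and argument names are ours) computes the imprecise likelihood given the nominal model output \(h_{R}(\mathbf{x},\mathbf{u})\) and the imprecision bounds.

```python
import numpy as np
from scipy.stats import norm

def imprecise_rssi_likelihood(z, h_nominal, mu_min, mu_max, sigma_r):
    """Generalized likelihood (14): difference of Gaussian CDFs evaluated at
    the lower/upper bounds of the imprecise model h_R + [mu_min, mu_max]."""
    h_lo = h_nominal + mu_min   # underline-h
    h_hi = h_nominal + mu_max   # overline-h
    return norm.cdf(z, loc=h_lo, scale=sigma_r) - norm.cdf(z, loc=h_hi, scale=sigma_r)
```

Note that `h_nominal` may be a NumPy array of model outputs for a set of particles, in which case the likelihood of measurement `z` is evaluated for all particles at once.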
#### 3.1.4 Compensated AoA detector and measurement model Since the UAV system we consider is highly maneuverable and is only equipped with a lightweight payload of a directional antenna and a simple receiver that provides RSSI measurements, we adopted the antenna rotation approach. The planning algorithm considers a measurement action involving the rotation of the antenna-equipped UAV to obtain the AoA of a radio signal. However, a problem arises under weak signals from distant radio tags. We describe the problem, our proposed compensation method to build a robust AoA detector, and the associated measurement model below. **Correlation coefficient based detector.** The UAV performs one full rotation and uses the correlation between the collected RSSI measurements and the antenna gain pattern to determine the AoA. More specifically, after collecting \(k\) detected RSSI measurements \(\mathbf{z}_{R,t_{1}:t_{k}}=[z_{R,t_{1}},\dots,z_{R,t_{k}}]^{T}\) with associated detected object states \(\mathbf{x}_{t_{1}:t_{k}}=[\mathbf{x}_{t_{1}},\dots,\mathbf{x}_{t_{k}}]\) and UAV states \(\mathbf{u}_{t_{1}:t_{k}}=[\mathbf{u}_{t_{1}},\dots,\mathbf{u}_{t_{k}}]\) at times \([t_{1},\dots,t_{k}]^{T}\), the rotation AoA measurement is given by: \[z_{A_{1}}=h_{A1}(\mathbf{x}_{t_{1}:t_{k}},\mathbf{u}_{t_{1}:t_{k}})=\operatorname {argmax}_{\alpha}\rho(\mathbf{z}_{R,t_{1}:t_{k}},\tilde{G}_{a}(\boldsymbol{\theta}_{t_{1}:t_{k}}^{(u)}+\alpha)) \tag{15}\] where the Pearson correlation coefficient \(\rho(\mathbf{x},\mathbf{y})\triangleq\operatorname{cov}(\mathbf{x},\mathbf{ y})/(\sigma_{\mathbf{x}}\sigma_{\mathbf{y}})\), and \(\boldsymbol{\theta}_{t_{1}:t_{k}}^{(u)}=[\theta_{t_{1}}^{(u)},\dots,\theta_{t_{k}}^{ (u)}]^{T}\) are the UAV headings extracted from the UAV states \(\mathbf{u}_{t_{1}:t_{k}}\) (recall that \(\mathbf{u}=[\mathbf{u}_{I}^{T},\theta^{(u)}]^{T}\)). While the correlation coefficient approach is sufficient when the receiver can detect the majority of radio pulses, its performance deteriorates as the strength of the detected signal reduces. This can significantly impact the ability to localize distant objects or objects equipped with radio tags configured with low transmit power. Figure 5 illustrates a scenario from field testing the AoA measurement method where the receiver is only able to detect a fraction of the radio pulses emitted by the radio tag during a full rotation action when the distance between the UAV and the radio source is increased. The measurement sequence for distant objects correlates more strongly to the back lobe of the antenna gain than the front (main) lobe under a low number of detections; consequently, the correlation coefficient approach leads to an AoA measurement with approximately \(180^{\circ}\) error in these instances. **Cross correlation-based detector.** In a scenario where the signal strength is weak, we observe that the receiver is more likely to detect the signals when the main lobe (front) of the antenna is directed at the signal source. Therefore, instead of using the Pearson correlation coefficient, cross-correlation can be used to prioritize matching the strongest RSSI measurements to the front (main) lobe of the antenna: Figure 5: Illustrations of the correlation coefficient based AoA detector performance with RSSI measurements collected at different distances in field tests. (a) For a close object, the majority of the signals emitted can be detected, and a correct AoA can be detected from the distinct peak in the correlation coefficient plot; (b) For a distant object, not all signals emitted are detected. In this scenario, measurements correlate more strongly to the back lobe of the antenna gain than the (main) front lobe. Consequently, an incorrect AoA is detected; we can observe an approximately \(180^{\circ}\) error.
While (16) can be used individually to generate AoA measurements with fewer outliers, from our observation in practice, when the radio tag's signal strength is strong, the cross-correlation method (16) produces AoA measurements with higher variance than the correlation coefficient-based AoA detector. Hence, the sole use of a cross-correlation-based detector is detrimental to localizing radio tags approaching the UAV receiver. To overcome this issue, we introduce an approach to correct outlier AoA measurements. **Compensated AoA detector.** Based on the observations we discussed, we propose taking advantage of both AoA detection approaches to construct a more robust detector. We propose employing the cross-correlation (16) method alongside the correlation coefficient method (15) to mitigate the AoA measurement ambiguity resulting from a correlation coefficient detector. We propose exploiting the deviation between both AoA detectors to correct the correlation coefficient AoA detector measurements. When an outlier AoA measurement is produced by the correlation coefficient AoA detector, the cross-correlation AoA detector's measurement will be significantly different. Notably, the correlation coefficient AoA detector can generate AoA measurement with \(180^{\circ}\) error. Now, the compensated AoA measurement based on a decision threshold, \(z_{A_{Th}}\), can be expressed as: \[z_{A}=\begin{cases}z_{A_{1}}&\text{if }|z_{A_{1}}-z_{A_{2}}|<z_{A_{Th}}\\ z_{A_{1}}-\pi\ \mathrm{rad}&\text{otherwise}\end{cases} \tag{17}\] While we present the formulation of the method here, further discussion of experimental results from field tests to demonstrate the effectiveness of the proposed approach is presented in Section 6.3. **AoA measurement model.** The distribution of \(z_{A}\) is complex but can be approximated by a Gaussian distribution in practice (O. M. Cliff et al., 2018; Torabi et al., 2018) while assuming the object remains stationary during the rotation. Then: \[z_{A}\approx h_{A}(\mathbf{x}_{t_{k}},\mathbf{u}_{t_{k}})+w_{A} \tag{18}\] where\(w_{A}\sim\mathcal{N}(\cdot;0,\sigma_{A}^{2})\) is a zero mean Gaussian noise with variance \(\sigma_{A}^{2}\) and \[h_{A}(\mathbf{x},\mathbf{u})=\mathrm{arctan2}(\frac{u_{x}-p_{x}}{u_{y}-p_{y}}) \tag{19}\] Then the AoA likelihood is: \[L_{A}(z_{A};\mathbf{x},\mathbf{u})=\mathcal{N}(z_{A};h_{A}(\mathbf{x}, \mathbf{u}),\sigma_{A}^{2}). \tag{20}\] ### Joint Measurement and Trajectory Planning Method In our proposed approach, the UAV not only needs to determine the trajectory action to navigate in the search environment but the measurement method to reduce the position uncertainty of wildlife to achieve a robust and rapid method of tracking and locating wildlife. The problem of automatically determining the best action can be formulated and solved efficiently under a POMDP framework. Formally, a POMDP is defined by tuple \((\mathbb{X},\mathbb{A},\mathbb{Z},\phi,\mathcal{R},\mathcal{L})\). \(\mathbb{X},\mathbb{A},\mathbb{Z}\) are state space, action space, and observation space respectively. \(q(\mathbf{x}^{\prime}|\mathbf{x},\mathbf{a})\) is the state transition function given action \(a\) and current state \(\mathbf{x}\), \(\mathcal{R}\) is the reward function which characterizes the objective of the planner, and \(\mathcal{L}(\mathbf{Z}|\mathbf{x},\mathbf{a})\) is the observation likelihood function, where \(\mathbf{Z}\in\mathbb{Z},\mathbf{x}\in\mathbb{X},a\in\mathbb{A}\). 
### 3.2 Joint Measurement and Trajectory Planning Method In our proposed approach, the UAV not only needs to determine the trajectory action to navigate the search environment but also the measurement method to reduce the position uncertainty of wildlife, in order to achieve a robust and rapid method of tracking and locating wildlife. The problem of automatically determining the best action can be formulated and solved efficiently under a POMDP framework. Formally, a POMDP is defined by the tuple \((\mathbb{X},\mathbb{A},\mathbb{Z},q,\mathcal{R},\mathcal{L})\), where \(\mathbb{X},\mathbb{A},\mathbb{Z}\) are the state space, action space, and observation space, respectively; \(q(\mathbf{x}^{\prime}|\mathbf{x},\mathbf{a})\) is the state transition function given action \(\mathbf{a}\) and current state \(\mathbf{x}\); \(\mathcal{R}\) is the reward function, which characterizes the objective of the planner; and \(\mathcal{L}(\mathbf{Z}|\mathbf{x},\mathbf{a})\) is the observation likelihood function, where \(\mathbf{Z}\in\mathbb{Z},\mathbf{x}\in\mathbb{X},\mathbf{a}\in\mathbb{A}\). Considering the resource limitations and the need for an online planner for the tracking task, we consider a computationally tractable POMDP formulation. Consequently, we employ a myopic planning strategy where the goal is to determine an optimal control action using a discrete action space at each planning iteration. Under a myopic planning strategy, the computational complexity is reduced by selecting one control action at a single planning iteration, as opposed to considering multiple control actions over multiple future planning iterations. The optimal control action for a myopic planner, \(\mathbf{a}_{t}^{*}\), is defined by maximizing the expected reward function \(\mathcal{R}_{t+H}(\cdot)\) over the action space: \[\mathbf{a}_{t}^{*}=\underset{\mathbf{a}\in\mathbb{A}}{\mathrm{argmax}}\ \mathbb{E}\left[\mathcal{R}_{t+H}(\mathbf{a})\right] \tag{21}\] The following subsections describe the essential elements of our POMDP-based planner and the considerations that enable real-time planning decisions in the context of a UAV with limited onboard computing resources. #### 3.2.1 Information-based rewards In a POMDP framework, the reward function can be categorized as: i) task-based rewards (Gostar et al., 2014); or ii) information-based rewards (Kreucher et al., 2003). A task-based reward is only applicable when the objective can be explicitly formulated. For object localization scenarios, the information-based reward is preferable to a task-based reward since the primary objective is reducing the position uncertainty of objects of interest, and information-based rewards prioritize gathering information; hence, they have a strong relationship with the objective of improving the localization accuracy of objects (Beard et al., 2017; Hoffmann et al., 2006). Importantly, in our problem formulation, information-based rewards provide a means to evaluate the quality of the measurement type to aid the planner in deciding between taking RSSI or AoA measurements. Consequently, we considered three information-based reward formulations, since a theoretical basis for determining the most effective formulation for our planning-for-tracking problem does not exist. Given the predicted belief density of an object \(\boldsymbol{\Psi}_{t+H|t}=\Psi_{t+H}(\cdot|\mathbf{Z}_{1:t})\) and the future updated posterior belief density \(\boldsymbol{\Psi}_{t+H|t+H}=\Psi_{t+H}(\cdot|\mathbf{Z}_{1:t},\mathbf{Z}_{t+1: t+H}(\mathbf{a}))\) at time \(t+H\), where \(\mathbf{Z}_{t+1:t+H}(\mathbf{a})\) is the hypothesized measurement set if action \(\mathbf{a}\) were executed, the three information-based rewards we considered are defined as follows: 1. Renyi divergence (Renyi, 1961) (Ristic & Vo, 2010) \[\mathcal{R}^{(\text{Renyi})}_{t+H}(\mathbf{a})=\frac{1} {\alpha-1}\log\int\boldsymbol{\Psi}^{\alpha}_{t+H|t}\cdot\boldsymbol{\Psi}^{ 1-\alpha}_{t+H|t+H}\delta\mathbf{X},\] (22) where the parameter \(\alpha\geq 0\) determines the effect of the tails of the two distributions on the reward. 2. Shannon entropy (Shannon, 1948) \[\mathcal{R}^{(\text{Shannon})}_{t+H}(\mathbf{a})=\mathcal{H}(\boldsymbol{\Psi }_{t+H|t})-\mathcal{H}(\boldsymbol{\Psi}_{t+H|t+H}),\] (23) where \(\mathcal{H}(\Psi(\mathbf{X}))=-\int\Psi(\mathbf{X})\log\Psi(\mathbf{X})\delta \mathbf{X}\).
3. Cauchy-Schwarz Divergence (Hoang et al., 2015)
\[\mathcal{R}^{(\text{CS})}_{t+H}(\mathbf{a})=-\log\left[\frac{\langle\boldsymbol{\Psi}_{t+H|t},\boldsymbol{\Psi}_{t+H|t+H}\rangle}{\sqrt{\langle\boldsymbol{\Psi}_{t+H|t},\boldsymbol{\Psi}_{t+H|t}\rangle\langle\boldsymbol{\Psi}_{t+H|t+H},\boldsymbol{\Psi}_{t+H|t+H}\rangle}}\right].\] (24)

#### 3.2.2 Measurement and trajectory planning control actions

One of the major strengths of employing a quad-copter UAV is the high maneuverability offered for traversing 3D space. Consequently, we have significant flexibility in designing the control action space. In our task, the available control actions should allow the UAV to explore the search area and collect RSSI or AoA measurements. To measure RSSI, the UAV does not require any special maneuvers; allowing a change in heading to change the navigation path is sufficient. However, to measure AoA, a full rotation action must be executed by the UAV. Therefore, our action space for the planner can be decomposed into two sub-spaces, illustrated in Figure 6 and described below (a code sketch of the resulting discrete action space follows this section):

\[\mathbb{A}=\mathbb{A}_{\text{RSSI}}\cup\mathbb{A}_{\text{AoA}} \tag{25}\]

Recall that we consider a discrete action space to reduce the computational demands on the planner. For the RSSI action space \(\mathbb{A}_{\text{RSSI}}\), we define \(n_{\xi}\) discrete headings \(\xi\), uniformly distributed across \(0^{\circ}\) to \(360^{\circ}\) (with reference to geographic north), traversed at a fixed altitude for \(T_{P}\) seconds. For the AoA action space \(\mathbb{A}_{\text{AoA}}\), we augment the RSSI actions to include a full rotation action. Therefore, the AoA actions include two modes: i) traveling along \(n_{\xi}\) discrete, uniformly distributed headings at a fixed altitude for \(T_{R1}\) seconds, followed by ii) a full rotation maneuver of duration \(T_{R2}\) seconds. The action space we define intentionally limits the UAV to a constant altitude, with two benefits: i) it extends the flight time of the UAV, as changing flight altitude consumes a substantial amount of energy; and ii) it simplifies the planning procedure. Further, allowing a change in altitude necessitates an obstacle avoidance component to ensure the safe operation of the UAV, which increases computational demands. By limiting the control actions to a 2D plane, we can ensure the UAV will not collide with obstacles as long as its initial altitude is higher than the tallest obstacle in the search area. Notably, when evaluating the best control action between an RSSI action and an AoA action, their rewards \(\mathcal{R}_{t+T_{P}}(\mathbf{a}\in\mathbb{A}_{\text{RSSI}})\) and \(\mathcal{R}_{t+T_{R1}+T_{R2}}(\mathbf{a}\in\mathbb{A}_{\text{AoA}})\) cannot be directly compared unless both rewards are evaluated at the same horizon, _i.e._, \(t+T_{P}=t+T_{R1}+T_{R2}\). Therefore, we constrain the action space such that \(T_{P}=T_{R1}+T_{R2}\), so that RSSI actions and AoA actions take the same amount of time to execute. Notably, when the UAV performs an AoA action, the receiver is still capable of measuring RSSI during the first traversal phase of the action. Hence, to avoid loss of useful information, the UAV also uses these RSSI measurements, in addition to the AoA measurement generated at the end of executing the action, to update the densities of objects.

Figure 6: Illustration of an RSSI action in which the UAV travels along heading \(\xi\) and an AoA action in which the UAV travels along heading \(\xi\) followed by performing a full rotation maneuver.
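The sketch below enumerates the discrete action space (25) with matched durations \(T_{P}=T_{R1}+T_{R2}\). The class and function names are hypothetical; the defaults \(T_{P}=30\,\mathrm{s}\) and \(T_{R2}=20\,\mathrm{s}\) mirror the simulation settings of Section 4.3, while eight headings is purely our assumption, since \(n_{\xi}\) is not fixed here.

```python
from dataclasses import dataclass
import numpy as np

@dataclass(frozen=True)
class Action:
    kind: str        # "RSSI" or "AoA"
    heading: float   # rad, measured from geographic north
    travel_s: float  # traversal duration at fixed altitude
    rotate_s: float  # full-rotation duration (0 for RSSI actions)

def build_action_space(n_headings: int = 8, t_p: float = 30.0,
                       t_r2: float = 20.0) -> list:
    """A = A_RSSI ∪ A_AoA as in (25). Both action types last T_P seconds
    so their rewards are comparable at the same horizon (T_P = T_R1 + T_R2)."""
    headings = [2 * np.pi * k / n_headings for k in range(n_headings)]
    rssi_actions = [Action("RSSI", xi, t_p, 0.0) for xi in headings]
    aoa_actions = [Action("AoA", xi, t_p - t_r2, t_r2) for xi in headings]
    return rssi_actions + aoa_actions
```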
#### 3.2.3 Void constrained trajectory planning

To minimize the disturbance of UAV operations to wildlife, we incorporated a void constraint into our planner. The void constraint provides a probabilistic approach to maintaining distance to an object without knowing the exact state of the object. Consider a region \(S\subset\mathbb{X}\) and a Bernoulli density \(\Psi=(r,q(\cdot))\) on \(\mathbb{X}\), where \(q(\cdot)\) is approximated by a set of weighted particles: \(q(\mathbf{x})\approx\sum_{i=1}^{N_{s}}\omega^{(i)}\delta(\mathbf{x}-\mathbf{x}^{(i)})\). The void probability function can be expressed as (Beard et al., 2017):

\[B_{\Psi}(S)\approx(1-r)+r\cdot\big{(}1-\sum_{i=1}^{N_{s}}\omega^{(i)}\mathbf{1}_{S}(\mathbf{x}^{(i)})\big{)} \tag{26}\]

where \(\mathbf{1}_{S}(\cdot)\) is the indicator function of the region \(S\), equal to \(1\) if \(\mathbf{x}^{(i)}\in S\) and \(0\) otherwise. We can interpret (26) as the probability that an object with belief density \(\Psi\) is outside the region \(S\). While the void region can be an arbitrary shape, we use a cylindrical void region with radius \(\iota_{\min}\), where \(V(\mathbf{u},\iota_{\min})\) denotes the void region given UAV state \(\mathbf{u}\), given by:

\[V(\mathbf{u},\iota_{\min})=\bigg{\{}\mathbf{x}\in\mathbb{X}\bigg{|}\sqrt{(p_{x}-u_{x})^{2}+(p_{y}-u_{y})^{2}}<\iota_{\min}\bigg{\}}, \tag{27}\]

Now, the planning constraint can be expressed as:

\[\min\{B_{\Psi_{t}}(V(\mathbf{u}_{t},\iota_{\min}))\}>P_{v\,\min} \tag{28}\]

where \(P_{v\,\min}\in[0,1]\) is a user-defined void probability threshold. Figure 7 illustrates the resulting void region and the application of the constraint defined in (28) with two object densities as examples.

Figure 7: Illustration of a cylindrical void region \(V\) with two object densities. The void constraint may be violated for object A if the probability of the non-overlapping region (red shaded area) between object A's belief density and the void region is less than the desired bound set by \(P_{v\min}\).

Importantly, a cylindrical void region is a natural choice for two reasons: i) when the radio signal is transmitted from a radio tag directly below the antenna, the orientation of the antenna onboard the UAV and the resulting lower gain can increase the missed detection rate. A cylindrical void region can potentially eliminate this scenario since a minimum horizontal distance can be maintained between the UAV and radio tags; and ii) the UAV maintains a constant altitude during a mission to conserve limited onboard battery power. Provided the flight altitude is high enough, it is not necessary to consider a minimum vertical separation distance.
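The void probability (26) with the cylindrical region (27) reduces to a weighted particle count inside the cylinder. A minimal sketch follows; the function names are ours, and the defaults \(\iota_{\min}=50\,\mathrm{m}\) and \(P_{v\,\min}=0.95\) are the simulation values quoted in Section 4.3.

```python
import numpy as np

def void_probability(r, weights, particles, uav_xy, iota_min):
    """Void probability B_Psi(V) of (26) for the cylindrical region (27).

    r         : existence probability of the Bernoulli density Psi = (r, q(.))
    weights   : particle weights w^(i) approximating q(.)
    particles : array of particle states; columns 0 and 1 hold (p_x, p_y)
    uav_xy    : (u_x, u_y) position of the UAV
    iota_min  : radius of the cylindrical void region, in metres
    """
    d = np.linalg.norm(particles[:, :2] - np.asarray(uav_xy), axis=1)
    inside = np.sum(weights[d < iota_min])   # sum of w^(i) * 1_S(x^(i)) terms
    return (1.0 - r) + r * (1.0 - inside)

def void_constraint_satisfied(r, weights, particles, uav_xy,
                              iota_min=50.0, p_v_min=0.95):
    """Planning constraint (28): B_Psi(V) must exceed P_v_min."""
    return void_probability(r, weights, particles, uav_xy, iota_min) > p_v_min
```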
#### 3.2.4 Implementation considerations for a real-time system

We considered a myopic planning formulation with a discrete action space to manage the complexity of the planning problem. To reduce the computational demands and realize a real-time planner without sacrificing the system's localization performance, we considered the two following approaches.

**Planning for the closest unlocalized object.** At every planning iteration, given a set of unlocalized objects' densities \(\mathbf{\Psi}=\{\Psi_{1}(\mathbf{X}),\ldots,\Psi_{n}(\mathbf{X})\}\), instead of solving for the optimal action that maximizes the total reward over all densities \(\mathbf{\Psi}\), we maximize the expected reward for the belief density \(\Psi_{c}(\mathbf{X})\) closest to the UAV, as adopted in (Nguyen, Chesser, et al., 2019), where

\[\Psi_{c}(\mathbf{X})=\operatorname*{argmin}_{\Psi(\mathbf{X})\in\mathbf{\Psi}}d(\mathbf{\tilde{x}},\mathbf{u}) \tag{29}\]

with \(\mathbf{\tilde{x}}\) being the mean of the object state given PDF \(\Psi(\mathbf{X})\). Once an object meets the condition to be considered localized, the object's belief density will be excluded from the next planning iteration, thereby reducing the number of densities the planner needs to process over time. Planning for the closest unlocalized object has the benefit of reducing the computational complexity of calculating the reward: limiting the number of planning densities makes the planner focus on computing actions that best minimize localization uncertainty for the closest object and reduces the number of densities that need to be considered by the planner from \(n\) to one. Interestingly, conservation biologists in the field also follow an identical strategy to locate multiple animals; they employ a handheld receiver system to home in on the closest perceived wildlife based on their determination of signal strength from audible _beeps_ from the receiver.

**Predicted ideal measurement set (PIMS).** In general, Monte Carlo integration is used to evaluate the expected reward in (21). This process requires drawing \(M\) measurements \(\mathbf{Z}_{t+1:t+H}^{(i)}(\mathbf{a}),i=1,\ldots,M\), obtained by sampling the belief density \(\boldsymbol{\Psi}_{t+H|t}\) and then generating a simulated measurement according to the measurement model. The estimated expected reward is then given by:

\[\mathbb{E}[\mathcal{R}_{t+H}(\mathbf{a})]\approx\frac{1}{M}\sum_{i=1}^{M}\mathcal{R}_{t+H}^{(i)}(\mathbf{a}) \tag{30}\]

As the number of samples \(M\) increases, the estimated reward converges to the true expectation. However, this method is computationally intensive, so instead we adopted the _predicted ideal measurement set_ (PIMS) approach (Mahler, 2004) to compute the reward, where only one instance (\(M=1\)) of the future measurement set is generated under an ideal condition. The future measurement \(\mathbf{Z}_{t+1:t+H}(\mathbf{a})\) is computed by: _i)_ computing the expected state of the belief density \(\boldsymbol{\Psi}_{t+H|t}\); and _ii)_ generating the expected measurement following the measurement function (9) or (19) in the absence of measurement noise, false measurements, and missed detections (\(P_{D}=1\)). Therefore, the estimated reward using PIMS is

\[\mathbb{E}[\mathcal{R}_{t+H}(\mathbf{a})]\approx\mathcal{R}_{t+H}^{(\text{PIMS})}(\mathbf{a}) \tag{31}\]

Following the above implementation approach, the measurement and trajectory planning algorithm we developed is succinctly summarized in Algorithm 1.
```
input : Set of unlocalized objects' belief densities Ψ_t, UAV state u_t,
        action execution time T_P, reward function R(·), void radius ι_min,
        void probability threshold P_v_min, RSSI measurement action space
        A_RSSI, AoA measurement action space A_AoA
output: action a

1  Ψ_t(X) ← ClosestObject(Ψ_t)          // find the closest belief density, see (29)
2  foreach a^(k) ∈ {A_RSSI, A_AoA} do
3      for i = 1 : T_P do
4          X̄_{t+i} ← E[Ψ_{t+i-1}(X)]                // compute the expected object state
5          Z_{t+i} ← h_{a^(k)}(X̄_{t+i}, u_{t+i})    // generate simulated measurements using PIMS
6          Ψ_{t+i}(X) ← BernoulliFiltering(Ψ_{t+i-1}(X), Z_{t+i})  // filtering, see (5) and (6)
7          if checkVoidConstraint(Ψ_{t+i}(X), u_{t+i}, ι_min, P_v_min) then
8              R^(k) ← 0
9              break                     // void constraint violated; evaluate next action
10     R^(k) ← R^(PIMS)_{t+T_P}(a^(k))  // compute the expected reward, see (31) with H = T_P
11 k ← argmax(R)                        // select the action with the highest expected reward
return: a^(k)
```
**Algorithm 1** Information theoretic measurement and trajectory planner (Meta-Pilot)

## 4 Simulation Experiments

To reduce the development time and risks associated with evaluating concepts on a physical UAV system, and to investigate a wide range of settings, we employed simulation-based experiments to evaluate our proposed approach and answer the following questions:

* **Robustness under various terrain conditions.** Our approach aims to provide a robust, fast tracking and localization method under various terrain conditions. How does our proposed joint measurement and trajectory planning for tracking algorithm perform compared to existing approaches under different terrain conditions?
* **Impact of information-based reward functions.** We investigated three different information-based reward function formulations. Does the choice of information-based reward function provide a performance advantage?
* **Effectiveness and impact of void constrained trajectories.** We employ a void constraint to maintain a safe distance between the UAV and our target wildlife. However, the void-constrained trajectories could impact the duration of a mission to localize wildlife. Is the approach effective, and what is the impact on localization performance?
* **Robustness under practical signal detection limitations.** In the field, missed detections can occur and negatively impact performance. Hence, how does our approach perform under different measurement detection probabilities compared to prior state estimation methods employed for localizing wildlife?

First, we elaborate on the complex VHF signal propagation model necessary to generate signals impacted by terrain conditions in our simulations, in Section 4.1. Then, we describe the comparison approaches in Section 4.2, the simulation settings in Section 4.3, and the results from the simulations in Section 4.4.

### Complex VHF Signal Propagation Model

One of the key properties of our proposed formulation is the ability to relax the requirement for an accurate radio propagation model by incorporating an imprecise likelihood.
To validate our approach in simulation settings, we employ a radio propagation model that captures the effects of: _i)_ vegetation; and _ii)_ terrain variations for generating the radio tag signals. We illustrate both types of signal propagation loss and the parameters related to the model in Figure 8, and briefly describe the formulation of the model used for generating signals below.

**Vegetation loss.** We used the International Telecommunication Union (ITU) vegetation loss model to capture the effect of vegetation on VHF radio signals (ITU-R, 2021b). The loss due to vegetation (assumed to be pine woodlands in our model) is defined as:

\[h_{v}=0.25f^{0.39}L_{v}^{0.25}\varphi^{0.05} \tag{32}\]

where \(f\) is the signal frequency in \(\mathrm{MHz}\), \(L_{v}\) is the vegetation depth in meters, and \(\varphi\) is the elevation angle in degrees. We use \(f=150\,\mathrm{MHz}\) in our simulations.

**Terrain shadowing and diffraction.** To model diffraction and shadowing effects from terrain conditions, we adopted the ITU terrain model, where the additional loss is given by (ITU-R, 2021a):

\[h_{d}=-20\cdot\frac{h}{F_{1}}+10\,\mathrm{dB} \tag{33}\]

where \(h\) is the height difference between the most significant path blockage and the path trajectory, and \(F_{1}\) is the radius of the first Fresnel zone, given by (ITU-R, 2021a):

\[F_{1}=17.3\cdot\sqrt{\frac{d_{1}d_{2}}{f\cdot d}}\;\mathrm{m}, \tag{34}\]

where \(d\) is the distance between the signal transmitter and receiver in \(\mathrm{km}\), \(d_{1},d_{2}\) are the distances from the transmitter and receiver to the blockage in \(\mathrm{km}\), and \(f\) is the signal frequency in \(\mathrm{GHz}\).

Overall, by subtracting (32) and (33) from the ideal RSSI measurement model (8), the propagation model that considers the complexities imposed by vegetation- and terrain-related losses can be described as:

\[h(\mathbf{x},\mathbf{u})=\tilde{\Gamma}(d_{0})-\underbrace{10n\log_{10}(d(\mathbf{x},\mathbf{u})/d_{0})}_{\text{distance loss}}+\tilde{G}_{a}(\zeta(\mathbf{x},\mathbf{u}))-\underbrace{h_{v}}_{\text{vegetation loss}}-\underbrace{h_{d}}_{\text{terrain loss}} \tag{35}\]

An illustration of the impact of vegetation and terrain loss alone, over physical terrain obtained from (Australia-Geoscience, 2022), is presented in Figure 4 in Section 3.1.3.

Figure 8: Illustration of the simulated VHF signal propagation model and related parameters. Radio signal strength at receiver RX is influenced by the line-of-sight distance \(d\), distance from the transmitter to the most significant path blockage \(d_{1}\), distance from the receiver to the blockage \(d_{2}\), height difference \(h\) between the blockage and the path trajectory, vegetation depth \(L_{v}\), and elevation angle \(\varphi\).
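For reference, a minimal Python sketch of the signal generation chain (32)-(35) is given below. The function names are ours, and callers must supply the antenna gain \(\tilde{G}_{a}(\zeta)\) and the blockage geometry computed from the DEM; this is an illustration of the formulas above, not the simulator itself.

```python
import numpy as np

def vegetation_loss_db(f_mhz, veg_depth_m, elev_deg):
    """ITU vegetation loss (32): h_v = 0.25 f^0.39 L_v^0.25 phi^0.05."""
    return 0.25 * f_mhz**0.39 * veg_depth_m**0.25 * elev_deg**0.05

def terrain_loss_db(h_m, d1_km, d2_km, d_km, f_ghz):
    """ITU terrain loss (33) using the first Fresnel zone radius (34)."""
    f1_m = 17.3 * np.sqrt(d1_km * d2_km / (f_ghz * d_km))  # Fresnel radius, m
    return -20.0 * h_m / f1_m + 10.0

def simulated_rssi_dbm(gamma_d0_dbm, n, d_m, d0_m, gain_db,
                       veg_loss_db=0.0, terr_loss_db=0.0):
    """Complex propagation model (35): log-distance path loss plus antenna
    gain, minus vegetation (32) and terrain (33) losses."""
    return (gamma_d0_dbm - 10.0 * n * np.log10(d_m / d0_m)
            + gain_db - veg_loss_db - terr_loss_db)
```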
### Comparison Approaches

We consider previous planning and measurement methods employed for tracking and locating wildlife to understand and evaluate the effectiveness of our proposed measurement and trajectory planning formulation for the problem. We employed the Bernoulli filter formulation we proposed as the state estimator for all methods. This decision benefits all other methods because the filter formulation is inherently capable of dealing with practical issues such as missed detections. Importantly, employing the Bernoulli filter formulation for all comparison methods ensures that differences in performance result from the trajectory and measurement planning approach itself, more clearly demonstrating the performance advantages gained from our proposed measurement and trajectory planning approach (abbreviated as _Meta-Pilot_). Further, we use mobile radio tags to better capture wandering wildlife. Given that the objective of void-constrained trajectories is to reduce disturbances, we employ void-constrained trajectory planning for _all_ comparison methods. In the following, we describe the previous approaches and our specific improvements to facilitate a more useful comparison, especially in challenging settings.

* **RSSI-only approach.** We adopt the RSSI-only approach described in (Nguyen, Chesser, et al., 2019), where the UAV receives RSSI measurements from each radio tag and the planner considers RSSI-only future measurements. The study in (Nguyen, Chesser, et al., 2019) used a two-ray model to describe the propagation effect on RSSI over mostly flat terrains and was therefore not expected to perform well in more complex terrains. As a result, we _introduced our imprecise RSSI measurement model_ to improve the robustness of the original method in both the filtering and planning algorithms. The RSSI-only approach with our proposed imprecise model is referred to as _Imp-RSSI_ in the following sections.
* **AoA-only approach.** The AoA-only approach uses rotation actions to acquire bearing measurements (O. Cliff et al., 2015; Hood & Barooah, 2011; Torabi et al., 2018; Venkateswaran et al., 2013; VonEhr et al., 2016). Instead of the AoA detector methods in prior work, we employed the _improved_ detector proposed in Section 3.1.4 to generate AoA measurements. Further, for this approach, we propose using \(20\,\mathrm{s}\) to complete an AoA measurement, instead of the \(45\,\mathrm{s}\) described in prior work (O. Cliff et al., 2015) using _AoA only measurements_. Consequently, we keep the AoA measurement acquisition action and detector identical to those used in our Meta-Pilot in this setting; hence, we can expect any performance improvement to relate to our proposed measurement and trajectory planning approach rather than the detector improvements. We refer to this approach as _cAoA(20 s)_ to highlight the use of the proposed compensated AoA detector and the time duration for the measurement action.
* **AoA-with-RSSI-update approach.** The method described in (O. M. Cliff et al., 2018) sought to combine the benefits reported in (Nguyen, Chesser, et al., 2019) with an AoA method. Here, rotation-correlation-based AoA measurements are used for object state estimation and trajectory planning, but an RSSI measurement update is also performed after generating each AoA measurement. The method in (O. M. Cliff et al., 2018) takes \(45\,\mathrm{s}\) to acquire a single AoA measurement and uses the log-path-loss measurement model given in (9) for the RSSI measurement update. Given the problems we have outlined in using RSSI models in complex terrains, we used a higher measurement noise in the log-path-loss measurement model for hilly and mountain terrain to attempt to manage the RSSI model mismatch in complex terrains and ensure the comparison method remains competitive. We evaluate two variants: _AoA-RSSI(20 s)_--using \(20\,\mathrm{s}\) for an AoA measurement--and _AoA-RSSI(45 s)_--using \(45\,\mathrm{s}\) for an AoA measurement--to compare against the potential advantages of the longer measurement duration selected in (O. M. Cliff et al., 2018).
### Simulation setup

We describe the experimental settings and parameters employed in our extensive simulation-based study below. Notably, we employed Digital Elevation Model (DEM) data from (Australia-Geoscience, 2022) with \(1\,\mathrm{m}\) resolution to create real-world terrain conditions for our experiments. We consider three terrain conditions with increasing signal propagation complexity:

* **Flat Terrain.** The data is obtained from Parkes, New South Wales (NSW), where elevation changes from \(233\,\mathrm{m}\) to \(239\,\mathrm{m}\). This terrain is representative of a simple environment for localization since the area has small elevation variations, and an accurate measurement model can be easily obtained for state estimation.
* **Hilly Terrain.** The hilly terrain is in Flinders Chase National Park, South Australia (SA), where the elevation changes from \(40\,\mathrm{m}\) to \(77\,\mathrm{m}\). The hilly terrain is more challenging than the flat terrain, given the higher elevation variation.
* **Mountain Terrain.** The mountain terrain is in Rugby, NSW, where elevation changes from \(595\,\mathrm{m}\) to \(704\,\mathrm{m}\). The mountain terrain is the most challenging, given the large elevation variance and terrain obstructions.

**Settings.** In each terrain, a UAV is tasked with localizing \(N=20\) mobile objects within a \(2000\,\mathrm{m}\times 2000\,\mathrm{m}\) area. Each radio tag object generates an RSSI measurement every \(1\,\mathrm{s}\). The initial state of the UAV is \(\mathbf{u}_{1}=[1,1,80+h_{0},\pi/4]^{T}\), where \(h_{0}\) is the elevation of the terrain at the UAV's initial position. The UAV has a maximum velocity of \(v_{\mathrm{max}}=10\,\mathrm{m}/\mathrm{s}\) and a rotation angular velocity of \(\pi/3\,\mathrm{rad}/\mathrm{s}\). A cylindrical void region with radius \(\iota_{\mathrm{min}}=50\,\mathrm{m}\) and void probability threshold \(P_{v\,\mathrm{min}}=0.95\) is used to constrain the UAV's trajectories. For the RSSI measurement model, \(\tilde{\Gamma}(d_{0})=40\,\mathrm{dBm}\), \(n=4\), and \(\sigma_{R}=4\,\mathrm{dB}\) are used. For the bearing measurement model, \(\sigma_{A}=0.095\,\mathrm{rad}\) is chosen. The rotation time to collect RSSI measurements to generate a bearing measurement is set to \(20\,\mathrm{s}\), except when evaluating the AoA-RSSI(45 s) method of (O. M. Cliff et al., 2018). The measurement and trajectory planner evaluates actions every \(T_{P}=30\,\mathrm{s}\). For the Renyi divergence reward, \(\alpha=0.1\) is selected based on the study in (Nguyen, Chesser, et al., 2019). The Bernoulli filter implementation, in all of the methods, uses birth probability \(r_{b}=1\times 10^{-5}\) and expected number of clutter returns \(\lambda=0.05\), with clutter densities \(c_{\text{RSSI}}(z)=\mathcal{U}[-120,0]\,\mathrm{dBm}\) and \(c_{\text{AoA}}(z)=\mathcal{U}[0,2\pi]\,\mathrm{rad}\) for the RSSI and AoA measurement updates, respectively--here, \(\mathcal{U}[a,b]\) is the continuous uniform distribution on the interval \([a,b]\).

**Detection Probability.** To simulate the limited sensitivity of the radio receiver in practice, a sensitivity threshold \(h_{Th}=-120\,\mathrm{dBm}\) is implemented such that any simulated radio signal received with signal strength less than the threshold is discarded. Due to the effect of limited receiver sensitivity, the detection probability can vary as the states of the UAV and radio tags change.
Therefore, given that the RSSI measurement model has Gaussian noise (9), we can express the detection probability \(P_{D}(\cdot)\) of the radio signal as the following equation for use in the Bernoulli update step expressed in (6):

\[P_{D}(\mathbf{x},\mathbf{u})=\int_{h_{Th}}^{\infty}\mathcal{N}(z_{R};h(\mathbf{x},\mathbf{u}),\sigma_{R}^{2})dz_{R}=1-\mathcal{C}(h_{Th};h(\mathbf{x},\mathbf{u}),\sigma_{R}^{2}), \tag{36}\]

here, recall that \(\mathcal{C}(\cdot;\cdot,\cdot)\) is the Gaussian CDF mentioned in (14).

**Mobile radio tag objects.** Radio tag objects to track and locate were randomly placed in the testing environment with an elevation of \(0.2\,\mathrm{m}\) above ground. All objects were placed under vegetation with depth \(L_{v}=1\,\mathrm{m}\) to generate complex VHF signal propagation artifacts and create challenging conditions for the proposed measurement and trajectory planner, which operates without an accurate measurement model and uses an imprecise model instead. We modeled the object dynamics using a wandering model (Nguyen, Rezatofighi, et al., 2019) with transitional density given by:

\[q_{t|t-1}(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\mathbf{F}\mathbf{x}_{t-1},\boldsymbol{\Sigma}) \tag{37}\]

where \(\mathbf{F}=\mathbf{I}_{3}\), with \(\mathbf{I}_{3}\) being the \(3\times 3\) identity matrix, and \(\boldsymbol{\Sigma}=\mathrm{diag}([2.5,2.5,0.0025]^{T})\,\mathrm{m}^{2}\). We assume the radio tag carried by each object emits an on-off-keying pulse signal with a unique frequency, as illustrated in Figure 3. Therefore, each object can be uniquely identified by estimating its signal frequency, which significantly reduces computation-intensive data association procedures. Recall that, to reduce the computational complexity of planning, the planning algorithm only selects optimal actions to reduce the estimation uncertainty associated with the object with the smallest Euclidean distance to the UAV. Once an object is considered localized, it will no longer be considered by the path-planning algorithm when controlling measurement and trajectory-planning actions. An object is considered localized when its estimation uncertainty has reduced to a sufficiently small level; for this, we required the determinant of its estimated covariance on the \(x\)-\(y\) axes, \(N_{\text{th}}\), to be less than or equal to \(2\times 10^{4}\,\mathrm{m}^{4}\) in our simulations. We only use the \(x\)-\(y\) axes to determine the termination condition for tracking because, generally, elevation resolution is not as important, and to provide a fair comparison with the AoA method, which is not able to obtain elevation measurements.

**Performance measures.** For each simulation, \(100\) Monte-Carlo runs were performed. Given the key objectives of the UAV task, we used the following metrics to evaluate and compare the performance of each method.

* _Estimation error._ We are interested in the accuracy of estimating the location of wildlife. Hence, we use the mean error of the \(N\) objects, given by \(\frac{1}{N}\sum_{i=1}^{N}\sqrt{(x_{truth}^{(i)}-x_{est}^{(i)})^{2}+(y_{truth}^{(i)}-y_{est}^{(i)})^{2}}\);
* _Localization time._ We want to minimize the flight time to locate radio-tagged wildlife, since a UAV has limited onboard battery capacity and returning the UAV to the home base to change batteries for a new mission is undesirable. Therefore, we measure the total time the UAV spends in the air to locate all objects.
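To tie the simulation setup together, the sketch below evaluates the detection probability (36) and propagates an object through the wandering model (37). The parameter defaults are the simulation values above; the function names are ours.

```python
import numpy as np
from scipy.stats import norm

def detection_probability(rssi_mean_dbm, sigma_r_db=4.0, h_th_dbm=-120.0):
    """P_D(x, u) of (36): probability that a Gaussian RSSI measurement with
    mean h(x, u) from (35) exceeds the receiver sensitivity h_Th."""
    # norm.sf is the survival function 1 - CDF, i.e. 1 - C(h_Th; h, sigma_R^2).
    return norm.sf(h_th_dbm, loc=rssi_mean_dbm, scale=sigma_r_db)

def step_wandering(x, rng):
    """One step of the wandering dynamics (37): x_t ~ N(x_{t-1}, Sigma)
    with F = I_3 and Sigma = diag([2.5, 2.5, 0.0025]) m^2."""
    std = np.sqrt(np.array([2.5, 2.5, 0.0025]))
    return x + std * rng.standard_normal(3)

# Example: a tag whose simulated mean RSSI at the receiver is -115 dBm.
# detection_probability(-115.0) evaluates to about 0.89.
```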
### Simulation Experiments and Results

**Robustness under various terrain conditions.** These experiments aim to quantify advantages in terms of localization time and accuracy under different environments, based on the information-based reward functions under investigation. We examined the performance of our proposed method in three distinct terrains: _i)_ flat; _ii)_ hilly; and _iii)_ mountain terrains. Figure 9 shows the Monte-Carlo simulation results in flat, hilly, and mountain terrains. Our proposed method, Meta-Pilot, is able to locate all \(20\) radio tags consistently and faster--see localization times--than all AoA-based methods--cAoA(20 s), AoA-RSSI(20 s), AoA-RSSI(45 s)--across all terrain conditions while maintaining low localization errors. Further, given the action choices the planner is able to make over the relatively flat terrain, where _Imp-RSSI_ methods are expected to perform well, our proposed method was able to acquire the locations of all \(20\) radio tags as fast as the RSSI-based method whilst being significantly faster than AoA-based methods and without compromising localization accuracy. As the terrain complexity increases (hilly to mountain terrains), our method is able to select AoA measurement actions when necessary and achieve faster localization times than Imp-RSSI and all AoA-based methods without sacrificing localization accuracy. In the mountain terrain, our planning-for-tracking approach performed better than all of the AoA-based methods--the localization error is the same or better whilst the localization time is the least. Notably, in the mountain terrain, the Imp-RSSI method is inherently unable to meet the tight termination condition set at \(2\times 10^{4}\,\mathrm{m}^{4}\) due to the complexity of the terrain. Therefore, a higher (more relaxed) localization termination condition of \(2\times 10^{6}\,\mathrm{m}^{4}\) is used for the Imp-RSSI method instead. As a result, we can observe that the localization error of this method is significantly larger than that of the other methods.

**Impact of information-based reward functions.** In the series of experimental results described in Figure 9, all three reward functions resulted in approximately similar localization times and errors across all of the measurement and planning methods and all of the environments. However, Renyi divergence indicates a slight advantage in localization time. Hence, Renyi divergence was selected as the reward function to employ in field experiments.

Figure 9: Comparing performance with mobile radio tags under various terrain conditions _and_ measurement and planning methods with different information-based reward functions under investigation. Performance is compared between different methods in terms of localization time and estimation error in flat, hilly, and mountain terrains over 100 MC runs, where all of the planning methods use void-constrained trajectories. The time required for each AoA measurement is marked in brackets. (Note: the Imp-RSSI\({}^{*}\) method in mountain terrain uses a localization termination condition of \(2\times 10^{6}\,\mathrm{m}^{4}\), since the tighter uncertainty bound employed in other methods prevents localization of objects under less informative RSSI measurements.)
**Robustness under practical signal detection limitations.** In this experiment, we investigated the performance of our proposed approach under different measurement detection probabilities to understand the impact of our formulation in addressing the practical problem experienced by signal detectors--missed detections. The parameters used in this experiment are those used for the hilly terrain, and Renyi divergence was used as the reward function. Figure 10 shows the Monte-Carlo-based comparison of localization error between the Bernoulli filter and the particle filter (employed in prior work) implementations as the detection probability \(P_{D}\) varies from \(0.7\) to \(0.99\). As \(P_{D}\) decreases, the particle filter implementation suffers from low measurement detections, leading to high localization errors. In contrast, as expected, the Bernoulli filter formulation, with its explicit consideration of measurement detection probabilities, is able to maintain a consistent localization accuracy under different detection probabilities.

**Effectiveness and impact of void constrained trajectories.** The void probability functional constraint applied to planning aims to minimize disturbances to wildlife by distancing UAV trajectories from the wildlife of interest. This experiment investigates the performance impact of void constraints in terms of localization time and error. Figure 11 illustrates \(100\) trajectories for the UAV (generated over \(100\) MC trials) in a task to localize \(20\) mobile radio tags over the flat terrain. With the void constraint applied, as shown in Figure 11(b), the flight path changes noticeably as the UAV attempts to maintain a safe distance to each radio tag. Since the radio tags are mobile, only the initial ground truth positions and their paths are marked in the figure; hence, the mobility results in some trajectories appearing to violate the void constraint. Notably, without the void constraint, the planner is expected to select the shortest path over the flat terrain and to select RSSI-based measurement actions over AoA measurement actions, resulting in the UAV moving directly towards each radio tag.

Figure 11: Void Impact Experiments With Mobile Radio Tags--UAV trajectory heatmap over 100 MC trials. (a) Without void constraint; and (b) with void constraint on the flat terrain. Green circles mark the initial ground truth location of mobile radio tags, while green paths denote the traversal paths of the radio tags.

Figure 10: Detection Probability Impact Experiments--Comparison between the Bernoulli filter (BF) formulation, which explicitly considers missed detections, and the particle filter (PF) under varying detection probability \(P_{D}\) for the tracking and localization task over the hilly terrain with \(100\) MC trials. The BF formulation is able to maintain a low localization error across different detection probabilities.

Figure 12 shows performance comparisons with and without void constraints on flat, hilly, and mountain terrains. In these terrains, with void-constrained trajectories, the localization times with the different reward functions all increase, by \(10\%\) to \(80\%\), as the terrain becomes more complex. The increase in localization time is an expected result of both the void-constrained trajectories and the less informative RSSI measurement model in the mountain terrain. Due to the void constraint, the UAV needs to maintain a safe distance to radio tags and therefore spends extra time navigating around each tag.
In addition, in the mountain terrain, the RSSI measurement model is less effective in improving state estimation, and its use conflicts with the need to maintain a minimum distance under void-constrained planning, where the planner prevents the UAV from approaching the target to improve estimates. Consequently, the UAV is more reliant on time-consuming AoA measurements, and this results in increased mission times for the localization tasks in more complex terrains under void-constrained planning. In contrast, over the flat terrain, the RSSI measurement imprecision is relatively low, and the measurements are more useful in improving state estimates even in the presence of the trajectory constraints imposed by the void constraint, which prevents the UAV from approaching a radio tag to obtain more informative RSSI measurements. In terms of localization error, we observe that void-constrained trajectories lead to performance comparable with planning without void constraints. Interestingly, the results in the mountain terrain show that, with void-constrained trajectory and measurement planning, the increased mission time to locate all radio tags has led to slight improvements in median localization accuracy.

Figure 12: Void Impact Experiments--Comparison with and without void constrained trajectories over \(100\) MC trials in flat, hilly, and mountain terrain.

## 5 ConservationBot Prototype

An overview of the prototype system--_ConservationBot_--we built is shown in Figure 13. We employed a commercial directional H-type VHF antenna (Telonics RA-2AK) with \(4\) dBd gain and a \(10\,\mathrm{dB}\) front-to-back gain ratio, together with a Software Defined Radio (SDR), to construct the receiver. Given the significant advances in software-defined radios, their small form factor, and low weight (giving us the ability to reduce the payload of the sensor subsystem), we employed an SDR in our receiver. We selected the USRP B200mini-i because it is lightweight and has an adequately large \(70\,\mathrm{MHz}-6000\,\mathrm{MHz}\) receiving frequency range, high sensitivity, and a large bandwidth for simultaneously detecting multiple radio tags. We employed a DJI Manifold 2-C companion computer to implement the digital signal processing blocks of the receiver's detector module as well as the planning for tracking and localization algorithms.

The signal processing components of our receiver system are shown in Figure 13(a) and are implemented using GNURadio. The SDR is configured with a sample rate of \(3\,\mathrm{MS}\,\mathrm{s}^{-1}\). We implemented matched filters to detect and measure the RSSI value of each radio tag. The digitized RF data from the SDR are first channelized and down-sampled into a series of sub-channels with \(80\,\mathrm{kHz}\) bandwidth. In each channel, the data is further decimated into a \(5\,\mathrm{kHz}\) bandwidth signal to further improve the SNR (signal-to-noise ratio) and reduce the computational complexity needed in later processing stages. The data is then passed through a matched filter, and the RSSI value is identified by using a peak detector to generate the measurement \(\mathbf{Z}_{\mathrm{R}}\).

## 6 Field Experiments

We describe our extensive experimental regime to validate our approach and evaluate the performance of our aerial robot in the field.
Our aims were to:

* Evaluate the detection range of the software-defined receiver architecture and hardware to understand the scanning range, and evaluate the effectiveness of the proposed compensated AoA detector (Sections 6.1 and 6.2);
* Conduct field experiments to evaluate and compare performance between the proposed measurement and trajectory planning for tracking method and prior approaches (Sections 6.3.1 and 6.3.2), and illustrate the effectiveness of our approach (Section 6.3.3);
* Conduct field experiments to demonstrate the significant advantage provided by our aerial field robot over the manual methods employed for wildlife tracking (Section 6.3.4), and evaluate the proposed aerial field robot using Southern Hairy-Nosed Wombats as a model species (Section 6.4).

We used Lotek VHF wildlife radio collars in our field experiments. The radio collar is designed for \(18\) months of continuous operation and, as a result of limited on-board battery power, its output power is limited to \(200\,\mu\mathrm{W}-500\,\mu\mathrm{W}\). It transmits an \(18\,\mathrm{ms}\) pulse every \(1\,\mathrm{s}\), as illustrated in Figure 3.

### Software defined radio receiver detection range

To understand the maximum detection range possible with our receiver architecture and hardware components, we performed multiple flights at a fixed \(50\,\mathrm{m}\) altitude and measured the RSSI and SNR values of two radio collars placed at \(0\,\mathrm{m}\) and \(0.5\,\mathrm{m}\) above the ground. The \(0.5\,\mathrm{m}\) height was chosen to represent the typically expected antenna height for above-ground wombats, while the \(0\,\mathrm{m}\) height represents a more challenging scenario where wombats are at the entrance of their warrens and is also representative of smaller wildlife closer to ground level. During the flights, the heading of the UAV and antenna was fixed and directed such that the antenna's maximum gain pointed towards the radio collars. Importantly, the detection range demonstrates the scanning area possible for the receiver for a typical low-power, long-life VHF radio collar--such as the one we used in our experiments--even without the ability of the platform to travel and cover a larger territory. The detection distance is determined by the maximum distance between the receiver and radio collar at which the received signal's SNR reaches \(15\,\mathrm{dB}\). Here, we used a conservative SNR level to yield minimal false alarms and a high detection probability. Consequently, in practice, a significantly longer detection range can be achieved and successfully employed, given the capability of the Bernoulli filter formulation to accommodate false alarms and missed detections in real-world settings.

Figure 13: (a) A system overview. The UAV state, control actions, and RSSI measurements are denoted by \(\mathbf{u}_{t}\), \(\mathbf{a}_{t}\), and \(\mathbf{Z}_{\mathrm{R}}\), respectively; (b) Lotek VHF Wildlife Radio Collar used in field experiments; (c) ConservationBot.

Figure 14(a) shows the measured gain pattern \(\tilde{G_{a}}(\cdot)\) of the antenna used in our system; the deviations from an ideal pattern in free space are expected, as the gain of the antenna is modified once mounted onto the UAV. Hence, the measured pattern is used in the measurement models we employ. Figure 14(b) shows the resulting detection range measurements. For the radio collar placed \(0.5\,\mathrm{m}\) above ground, the signal can be reliably detected up to \(2000\,\mathrm{m}\).
As the height of the radio collar decreases, the system's detection range is reduced, as expected; however, even when the collar is placed directly on the ground, the range consistently exceeds \(1000\,\mathrm{m}\).

Figure 14: (a) The measured gain pattern of the antenna used by the receiver; (b) Detection range experiment: SNR at varying distances for radio collars placed at \(0.5\,\mathrm{m}\) and \(0\,\mathrm{m}\) above the ground, measured at or above \(15\,\mathrm{dB}\) SNR.

Figure 15: AoA measurement statistics at different distances. (a) Percentage of RSSI measurements collected during each AoA measurement; the shaded area shows one standard deviation. Each data point was built using AoA measurements of size \(10\). (b) AoA measurement error using the correlation coefficient (15); errors close to \(180^{\circ}\) can be observed when the radio collar is above \(1\,\mathrm{km}\). (c) AoA measurement error using cross-correlation (16); no significant outlier is observed, but the error overall has higher variance than (b) when the radio collar distance is less than \(1\,\mathrm{km}\). (d) AoA measurement error using (17).

### Compensated AoA detector evaluation and measurement model parameter estimation

AoA measurement errors can result from the accuracy of the UAV heading at the time of each RSSI detection, since each AoA measurement requires the UAV to perform a full rotation, and from weak radio collar signals; for instance, at longer distances, we can expect the number of RSSI detections to reduce and potentially increase the AoA errors. To validate the effectiveness of our proposed compensated AoA measurement method and determine the AoA measurement noise variance, we collected 10 AoA measurements with a stationary radio collar at distances from \(45\,\mathrm{m}\) to \(1509\,\mathrm{m}\) while flying the UAV at a fixed altitude of \(50\,\mathrm{m}\). Figure 15 illustrates the AoA measurement errors and the percentage of detections. As shown in Figure 15(b), at distances larger than \(1000\,\mathrm{m}\), the correlation coefficient-based AoA measurement calculation produced outlier measurements with significant errors, while the cross-correlation method shown in Figure 15(c) generates AoA measurements with relatively large variance but less sensitivity to the detection rate. The results in Figure 15(d) demonstrate our proposed compensated AoA measurement method described in (17); we observe the errors to be reasonably consistent across varying distances with small variances; notably, with the exception of one outlier at \(45\,\mathrm{m}\), the majority of errors are within \(10^{\circ}\). Although the variation of AoA measurement noise can be modeled as a function of the correlation coefficient (O. Cliff et al., 2015) or distance, we opt for a fixed-variance Gaussian noise model, since the variation of AoA error with increasing distance is observed to be minimal with the compensated detector and detailed modeling would likely yield only marginal improvements.

### Field Experiments

We conducted our field experiments to evaluate our prototype ConservationBot implementation over a more challenging, hilly terrain. The field experiments were conducted in the Inman Valley, approximately \(10\,\mathrm{km}\) from Victor Harbor, SA, Australia; the area in which the field experiments were conducted was \(40.86\,\mathrm{ha}\) in size.
The terrain at this location is hilly and vegetated with remnant eucalypt forest to a height of approximately \(12\,\mathrm{m}\), interspersed with thick shrubs to a height of approximately \(1\,\mathrm{m}\), making it difficult for humans to traverse and an ideal environment to test our system. Figure 16 shows the contour map of the field test area.

#### 6.3.1 Localization Performance

The first set of field trials was designed to evaluate the localization performance of our proposed Meta-Pilot method. For comparison, we selected the Imp-RSSI and cAoA(20 s) measurement methods _without_ measurement planning. Here, the Imp-RSSI method is able to account for uncertainty in the measurement model resulting from the hilly terrain, and we can expect this approach to provide lower localization times due to its fast measurement acquisition; the cAoA(20 s) method with compensated AoA measurements minimizes outliers, and we expect it to provide improved localization performance because AoA measurements are less affected by terrain conditions. (Please see Section 4.2 for detailed descriptions of each method.)

We selected a dispersed but stationary set of radio collars (placed at fixed locations). This setting not only ensures the safety of the personnel involved but also allows us to design a consistent and repeatable experimental setting for conducting multiple missions to compare different approaches. A radio collar or radio-collared wildlife was considered localized if its location uncertainty, evaluated by the determinant of its estimated covariance \(N_{th}\), is sufficiently small; we employed \(N_{th}\leq 1\times 10^{5}\,\mathrm{m}^{4}\) and an imprecision range of \([-16,9]\,\mathrm{dB}\). The ConservationBot was tasked to take off to \(50\,\mathrm{m}\) above the launch position and execute the measurement and trajectory planning for tracking algorithm to localize all radio-tagged objects.

Figure 16: Contour map of the hilly terrain field experiment site, Inman Valley, South Australia.

Table 1 summarizes the localization time and error results. We found that our proposed approach, Meta-Pilot, provided the best set of localization and total mission duration results (lowest mean error with the smallest standard deviation and shortest mean localization time with the smallest standard deviation). As expected, confirming our results in the simulation study for hilly terrain, Meta-Pilot significantly outperformed the cAoA(20 s) method in terms of localization time in field trials, where Meta-Pilot required only one-third of the time on average to successfully locate the four stationary radio collars with better mean localization accuracy. We found that the mean localization time of Imp-RSSI is significantly better than that of the cAoA(20 s) method. Notably, we observed a similar result for the flat and hilly terrains in our simulation study and confirmed that the proposed imprecise RSSI models were able to account for the measurement model uncertainty and improve localization time and accuracy results (see Figure 9). However, the ability to flexibly employ AoA measurement planning actions enabled Meta-Pilot to achieve much better localization accuracy (lowest mean error and smallest standard deviation, demonstrating consistent performance) compared to Imp-RSSI.
#### 6.3.2 Tracking and localizing mobile objects

In this set of field experiments, we employed the same settings used in Section 6.3.1, with the exception of having two of the four VHF radio collars mobile, to validate the capability of ConservationBots to track and locate mobile radio collars. Here, two VHF radio collars were carried by human volunteers tasked with performing a wandering motion from their starting locations at approximately \(1\,\mathrm{m}/\mathrm{s}\sim 2\,\mathrm{m}/\mathrm{s}\). The trajectory of the mobile objects was captured using a phone-based GPS data logger and later compared to the reported object locations to obtain the localization error. Table 1 summarizes the results for localizing mobile objects. The results demonstrate that our proposed Meta-Pilot can track and locate mobile and stationary objects. Compared to the results for localizing only stationary objects, we observed a decrease in accuracy and a slightly larger variation in localization times, as the moving radio collars can impact the time needed to reduce the uncertainty associated with estimated positions to a desirable level whilst also planning for void-constrained trajectories.

#### 6.3.3 Minimizing Disturbances with Void Constrained Trajectories

We present two missions as examples to illustrate the progression of the tracking and localization task and the manner in which the void-constrained trajectories are able to maintain a safe distance from the VHF radio collars of interest. Figure 17(a) depicts the evolution of belief densities for each radio collar over time. From these snapshots, we can observe a typical trajectory and behavior resulting from the void constraints. At time \(t=80\,\mathrm{s}\), the UAV is focusing on locating collar \(2\). Shortly after collar \(2\) is located, the UAV proceeds to navigate toward the next closest collar (collar \(1\)). We can observe that during this process, the majority of collar \(2\)'s belief density (represented by orange particles) remains outside the void region of the UAV (green dashed circle). After time \(t=150\,\mathrm{s}\), the UAV finishes locating collar \(1\), subsequently heads toward collar \(3\) while locating collar \(4\) in the process, and returns to the home-base location after all collars have been found at time \(t=252\,\mathrm{s}\). Here the effect of the void constraint becomes more prominent, as illustrated by the UAV navigating around collar \(1\).

\begin{table}
\begin{tabular}{l l c c c}
\hline\hline
Method & Setting & Trials & Error \(\pm 1\sigma\) (m) & Time \(\pm 1\sigma\) (s) \\
\hline
Measurement and Trajectory Planner (**Meta-Pilot**) & Stationary radio collars & 8 & \(\mathbf{34.5\pm 8.5}\) & \(\mathbf{231\pm 23}\) \\
Imprecise RSSI only (Imp-RSSI) & Stationary radio collars & 8 & \(43.3\pm 11.2\) & \(245\pm 30\) \\
Compensated AoA only (cAoA(20 s)) & Stationary radio collars & 8 & \(40.0\pm 12.7\) & \(745\pm 123\) \\
\hline
Measurement and Trajectory Planner (**Meta-Pilot**) & Mobile \& Stationary & 8 & \(45.1\pm 17.5\) & \(230\pm 83\) \\
\hline\hline
\end{tabular}
\end{table}
Table 1: Comparison of localization performance for stationary and mobile objects in a _hilly_ environment where our ConservationBot was configured with: i) our proposed imprecise RSSI model alone; ii) the compensated AoA detector alone, as described in Section 4.2; and iii) our proposed measurement and trajectory planner with the imprecise RSSI model and compensated AoA detector.
Figure 17(b) shows another instance of intermediate belief densities for the task of locating two stationary (collars \(1\)-\(2\)) and two mobile (collars \(3\)-\(4\)) VHF radio collars. A planning strategy similar to that illustrated in Figure 17(a) can also be observed here. At time \(t=85\,\mathrm{s}\), the UAV moves towards the collar determined to be the closest (collar \(2\)). However, due to the void constraint, the UAV is unable to move too close to collar \(2\) and instead plans a trajectory around collar \(2\), as seen at time \(t=120\,\mathrm{s}\). Then, after both collars \(1\) and \(2\) are located, the UAV traverses towards collar \(3\) and completes locating all of the collars at time \(t=190\,\mathrm{s}\). These results from field experiments demonstrate that our proposed planning method is able to locate all stationary or mobile radio collars while maintaining a safe distance.

Figure 17: Instances of intermediate belief density representing the estimated location of radio collars for two scenarios selected from our field trials: (a) localizing \(4\) stationary VHF radio collars; (b) localizing \(2\) stationary collars (1 and 2) and \(2\) mobile collars (3 and 4). The effect of void-constrained trajectories can be observed as the UAV navigates around (unlocalized) radio collars to maintain a safe distance. Here we can observe the convergence of belief densities of all radio collar location estimates and the operation of the planner generating trajectories to maintain the void constraint.

#### 6.3.4 Comparisons with a Human Expert

In order to demonstrate the benefits of the autonomous method of tracking wildlife we developed with our ConservationBot, compared to traditional wildlife tracking methods (involving a field scientist trekking, often through difficult terrain, carrying bulky radio-telemetry equipment), we invited an expert conservation biologist with over 20 years of wildlife tracking experience to compete with our robotic platform. To ensure comparable settings, \(4\) stationary radio collars were used in this experiment. The human expert was given the list of radio collar frequencies at the start but had no prior knowledge of the positions of the radio collars. The human expert and the ConservationBot set off from the same starting position. The localization time for each radio collar and the traveled path of the human expert are shown in Figure 18. As expected, the results show a significant difference in search and localization time between the manual method and the ConservationBot, demonstrating the effectiveness of the ConservationBot as a field robot for the task. The human expert first localized collar \(2\) after \(13\) minutes of search time (notably, within this time lapse, the ConservationBot had located all 4 radio collars and returned to the home base). Due to the terrain and vegetation coverage impacts on VHF signal propagation, collar \(3\) was selected by the human expert as the next collar to localize, although collar \(1\) was closer. Completing the localization task took \(37\) minutes; this was significantly longer than the \(4\) minutes required by our ConservationBot. Notably, the equipment used by the human tracker was superior to that employed on the ConservationBot; specifically, a three-element Yagi antenna with a higher gain and front-to-back ratio (compared to the lower-gain two-element model used by the ConservationBot), along with a more sensitive radio receiver--an Australis 26K radio receiver--was used by the human expert.
### Field trials with Southern Hairy-Nosed Wombats

We participated in a field experiment where our ConservationBot was deployed to localize radio-tagged wombats in a conservation project. The trials were performed near Swan Reach, SA, Australia. A total of \(6\) southern hairy-nosed wombats (Lasiorhinus latifrons) were captured, radio-tagged, and released prior to the trials. The terrain at this location comprises remnant mallee vegetation interspersed with native grasslands and eucalyptus trees, and is thus representative of flat terrain, with less than \(5\,\mathrm{m}\) elevation variation across the site, as shown in Figure 20. Trials were conducted during the daytime to comply with university health and safety regulations, legal and risk implications, as well as regulations and procedures governing the testing of autonomous aerial vehicles. Southern hairy-nosed wombats, a nocturnal species, are usually less active and located in warrens underground (Taggart et al., 2020) during daylight hours. While this behavior meant that the wombats were mostly stationary, the radio signal would be greatly attenuated by the ground, resulting in a significant reduction in the maximum detection range of signals. These attributes provided a very challenging setting for a field trial.

The UAV was launched to a fixed altitude of \(50\,\mathrm{m}\) and tasked to localize all of the detectable radio-tagged wombats. Subsequently, manual tracking was undertaken to determine the ground truth of each wombat's location and compare it to the location reported by our system to determine the reported accuracy. Table 2 presents a quantitative summary of the results of our field experiments. Five missions were carried out to localize the two radio collars found to be detectable from wombats dwelling underground. With the exception of the last mission, we were able to localize both wombats within an average of \(252\,\mathrm{s}\) with a mean localization error of \(40\,\mathrm{m}\). In mission 5, after \(200\,\mathrm{s}\), the signal from wombat \(2\) could no longer be detected before it was localized. We suspect that the wombat moved deeper underground, resulting in further signal attenuation. The intermediate belief densities (particle distributions) from mission 1 are illustrated in Figure 21 and demonstrate the effectiveness of the information-theoretic planning objective in reducing the uncertainty of the estimated location of wombats.

Figure 18: Results from conducting the tracking task with a human expert with over 20 years of field experience using the manual method with \(4\) stationary VHF radio collars in a _hilly_ environment. Radio collars are found in the order \(2,3,4,1\) by the expert human tracker. Completing the localization task took \(37\) minutes for the human expert and only \(4\) minutes for our ConservationBot.

Figure 19: (a) Photograph showing a southern hairy-nosed wombat captured and released after radio tagging into the habitat at Swan Reach in Australia. (b) Main figure: current (dark color) and extrapolated (light color) distribution of hairy-nosed wombats--based on Figure 1 in (Swinbourne et al., 2016). Inset: map of the terrain at Swan Reach in Australia. The ground truth location of tagged wombats is marked by '\(\times\)'.
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{3}{c}{Error (m)} & Time (s) \\ \cline{2-5} & Wombat 1 & Wombat 2 & Mean & Total \\ \hline Mission 1 & 48.4 & 8.0 & 28.2 & 203 \\ Mission 2 & 43.4 & 62.2 & 52.8 & 291 \\ Mission 3 & 49.4 & 25.7 & 37.5 & 231 \\ Mission 4 & 31.5 & 52.6 & 42.1 & 281 \\ Mission 5\({}^{\dagger}\) & 35.1 & 84.9 & - & 200 \\ \hline Mean\({}^{\ddagger}\) & 43.2 & 37.1 & 43.3 & 252 \\ \hline \hline \end{tabular} \({}^{\dagger}\) Mission did not locate wombat 2 due to loss of detections from wombat 2's VHF radio collar tag, potentially as a result of the wombat moving deeper underground during the trial. \({}^{\ddagger}\) Excluding mission 5. \end{table} Table 2: Results from field trials where the ConservationBot was deployed to track and locate underground radio-tagged southern hairy-nosed wombats in a conservation project.

Figure 20: Contour map of wombat habitat terrain in Swan Reach, South Australia.

### Ethics and Regulatory Compliance

This study was conducted under the University of Adelaide Animal Ethics permit number S-2018-112a. All of the flights were undertaken with Civil Aviation Safety Authority (CASA, Australia) approvals and followed the safety protocols mandated by The University of Adelaide; as such, our experiments were designed around the University of Adelaide and CASA regulations governing the conduct of UAV research. The two pilots conducting and supervising the experiments held a Remote Pilot License (RePL).

## 7 Lessons Learned

This study describes, for the first time, the development and optimization of an aerial robotic system for the tracking and monitoring of wildlife across a variety of vegetation and terrain conditions. We present innovative solutions that address the practical and technical issues impacting radio tag detection (speed, accuracy, and reliability) and evaluate the proposed algorithms, their integration, and their operation in field conditions. In this section, we reflect upon our observations and the lessons learned during our extensive field experiments, as well as potential future work.

Given the significant improvements to the detection range of the software-defined VHF receiver, the maximum search area of our system is primarily limited by its flight time. For a given UAV platform, flight time depends on the weight of the payload. The total mass of the sensing and computing payload is \(550\,\mathrm{g}\), of which the antenna alone contributes more than \(45\%\). To reduce the mass of the payload and increase flight times, a customized directional antenna design with lighter materials can be investigated in further research.

Our software systems onboard the UAV employed the existing \(915\,\mathrm{MHz}\) telemetry channel between the UAV and the ground station both to provide data to the localization and planning algorithms executed on the ground control station and to transmit control actions back to the UAV. The choice of \(915\,\mathrm{MHz}\) provided a superior range compared to the \(2.4\,\mathrm{GHz}\) wireless link used in (Nguyen, Chesser, et al., 2019) and removed the need for an additional transceiver onboard the UAV. We found the exploitation of the telemetry channel to be convenient: it allowed us to fully monitor UAV operations and meet regulatory compliance requirements, with the added benefit of being able to use open-source hardware and software to support the development of the robotic platform.
However, we observed a packet drop rate of around 10%, predominantly due to the limited quality of the communication channel. Executing the algorithms onboard the companion computer addressed this problem and offered better reliability and ease of use, but for safety reasons we could not employ this mode of operation during the testing phase.

The software-defined radio receiver design facilitates the simultaneous detection of multiple radio collar signals at different frequencies whilst allowing the receiver to be realized in a small form factor with lightweight hardware. The software-programmable hardware simplifies on-the-fly reconfiguration of parameters such as the receiver detection threshold and the radio tag frequencies.

Figure 21: Intermediate distributions of belief density representing the estimated location of the underground VHF radio-collared southern hairy-nosed wombats. The UAV first moves toward wombat 1 (\(t=50\) to \(t=90\)), then moves around wombat 1 due to the void-constrained trajectory planner and navigates to wombat 2 after wombat 1 is localized (\(t=203\)). The \(\square/\triangle\) denote the true wombat positions (determined by manual tracking methods) and the positions estimated by the ConservationBot, respectively; the green dashed line denotes the void region employed, and the solid black line denotes the trajectory of the UAV.

Despite the advances made compared to previous software-defined receiver designs (Nguyen, Chesser, et al., 2019) to increase the scanning range, tracking underground radio-collared wildlife was a challenging proposition. For our detector, a detection threshold of \(-70\,\mathrm{dBm}\) is used to minimize false detections (false alarms); this setting achieved a scanning range of over \(2\,\mathrm{km}\). However, as shown in Section 6.4, detecting the very weak signals from underground VHF radio emitters, which are significantly attenuated by the soil, was challenging in this setting. Reducing the detector threshold could increase the probability of weak signals being detected, but it would also increase the probability of receiving false alarms. Notably, our current detector implementation only reports up to one peak detection per radio tag transmit period; whenever a false alarm is reported, the true signal (if present) will be suppressed, which also effectively reduces the detection probability. To allow a lower threshold that further improves the detection range, the detector architecture can be modified to report all signal detection peaks (from the peak detection stage in our detector), as sketched below. This would allow us to fully utilize the Bernoulli filter's ability to handle object state estimation in the presence of multiple false alarms in addition to missed detections and, therefore, increase the capability of our estimation algorithm to function under increased false alarms. Consequently, the operating range of our ConservationBots would effectively increase and, more importantly, enable tracking and localization in the presence of weak VHF radio collar tag signals, such as those from underground-dwelling animals.
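As a concrete illustration of this proposed modification, the following sketch reports every candidate peak above a (lowered) threshold instead of only the strongest one per transmit period, leaving the arbitration between true detections and false alarms to the Bernoulli filter. The function and parameter names are illustrative assumptions, not our flight code.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_all_pulses(matched_filter_output, threshold_dbm=-70.0, min_separation=100):
    """Return (sample index, peak power in dBm) for every candidate pulse peak
    above the threshold, rather than the single strongest peak per period."""
    power_dbm = 10.0 * np.log10(np.maximum(matched_filter_output, 1e-12))
    peaks, props = find_peaks(power_dbm, height=threshold_dbm, distance=min_separation)
    return list(zip(peaks.tolist(), props["peak_heights"].tolist()))
```

Feeding all such peaks to the Bernoulli filter as a measurement set (detections plus clutter) is exactly the multi-detection setting the filter is designed to handle.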
## 8 Conclusion

We have validated the capability of our proposed approach to rapidly localize multiple mobile objects in different environments through extensive simulation-based experiments and field experiments with a prototype robotic platform, the ConservationBot. Further, we have shown that our approach, which utilizes both RSSI and AoA measurements and performs joint measurement and trajectory planning to locate radio-collared wildlife, delivers consistently fast, robust performance that is superior to traditional RSSI-only or AoA-based approaches, even when the proposed imprecise RSSI model formulation and compensated AoA detector are employed with those previous approaches. Importantly, the ability to plan for measurements allows the robot to benefit from robust AoA measurements without the impact of the increased time needed to complete a mission. Although the imprecise likelihood method is not a perfect replacement for a precise and correct likelihood function, it greatly simplifies the difficult task of modeling and building a likelihood model that is often complex and significantly impacted by environmental conditions. Our field experiments confirm that autonomous aerial robots capable of fast, robust tracking of multiple wildlife can provide benefits over the labor-intensive manual task of gathering precise information from wildlife for their conservation and management.
2303.05752
Deep Learning for Predicting Metastasis on Melanoma WSIs
Northern Europe has the second highest mortality rate of melanoma globally. In 2020, the mortality rate of melanoma rose to 1.9 per 100 000 inhabitants. Melanoma prognosis is based on a pathologist's subjective visual analysis of the patient's tumor. This methodology is heavily time-consuming, and the prognosis variability among experts is notable, drastically jeopardizing its reproducibility. Thus, the need for faster and more reproducible methods arises. Machine learning has paved its way into digital pathology, but so far, most contributions are on localization, segmentation, and diagnostics, with little emphasis on prognostics. This paper presents a convolutional neural network (CNN) method based on VGG16 to predict melanoma prognosis as the presence of metastasis within five years. Patches are extracted from regions of interest from Whole Slide Images (WSIs) at different magnification levels and used in model training and validation. Results indicate that utilizing WSI patches at a 20x magnification level gives the best performance, with an F1 score of 0.7667 and an AUC of 0.81.
Christopher Andreassen, Saul Fuster, Helga Hardardottir, Emiel A. M. Janssen, Kjersti Engan
2023-03-10T07:40:09Z
http://arxiv.org/abs/2303.05752v1
# Deep Learning for Predicting Metastasis on Melanoma WSIs

###### Abstract

Northern Europe has the second highest mortality rate of melanoma globally. In 2020, the mortality rate of melanoma rose to 1.9 per 100 000 inhabitants. Melanoma prognosis is based on a pathologist's subjective visual analysis of the patient's tumor. This methodology is heavily time-consuming, and the prognosis variability among experts is notable, drastically jeopardizing its reproducibility. Thus, the need for faster and more reproducible methods arises. Machine learning has paved its way into digital pathology, but so far, most contributions are on localization, segmentation, and diagnostics, with little emphasis on prognostics. This paper presents a convolutional neural network (CNN) method based on VGG16 to predict melanoma prognosis as the presence of metastasis within five years. Patches are extracted from regions of interest from Whole Slide Images (WSIs) at different magnification levels and used in model training and validation. Results indicate that utilizing WSI patches at a 20x magnification level gives the best performance, with an F1 score of 0.7667 and an AUC of 0.81.

Christopher Andreassen\({}^{1}\), Saul Fuster\({}^{1}\), Helga Hardardottir\({}^{2,3}\), Emiel A.M. Janssen\({}^{2,3}\), Kjersti Engan\({}^{1}\)

\({}^{1}\) Dept. of Electrical Engineering and Computer Science, University of Stavanger, 4021 Stavanger, Norway \({}^{2}\) Dept. of Pathology, Stavanger University Hospital, 4011 Stavanger, Norway \({}^{3}\) Dept. of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, 4021 Stavanger, Norway

Cancer prognosis, melanoma, deep learning, histopathology, convolutional neural network

Footnote 1: These authors contributed equally.

## 1 Introduction

The incidence rate of melanoma in Norway is among the highest in the world, steadily increasing yearly from 1961 to 2020. During the 2016–2020 quinquennium, the incidence rate increased by approximately 11% for both sexes [1]. Carefully analyzing cancer tumors to extract clinically relevant diagnostic and prognostic information is important for treatment management and can be labor-intensive for pathologists. In general, the number of biopsies arriving at a hospital's pathology department and requiring analysis by pathologists is increasing yearly [2]. Pathological prognosis of melanoma is based on the eighth edition of the American Joint Committee on Cancer (AJCC) tumor, node and metastasis (TNM) staging system [3]. The histopathological prognostic factors used in this system are subject to interobserver variability; tumor thickness and ulceration are the most important prognostic factors [4]. A study on interobserver variability of histopathological factors found concordance between general pathologists and pathologists with expertise to be good overall; however, it showed that variability in tumor thickness and ulceration resulted in the reclassification of 15.5% of thin melanomas [5]. This high level of uncertainty can only be resolved by following the development of the tumor over time. Combined with the increased incidence rate of melanoma, it is necessary to implement faster and more consistent methods for melanoma prognosis assessment. Computational pathology (CPATH) systems can be used for fast and reproducible analysis and can reduce pathologists' workload, for example as decision support tools [6]. Machine learning (ML) has grown in popularity due to the increase in available data and processing capabilities.
Most published works in CPATH are in the field of diagnostics: detecting tumor areas [7] and grading tumors [8], among others. Bhattacharjee et al. [9] proposed a method using two CNNs and ensemble learning for classifying prostate tumors as benign or malignant. Wang et al. [10] proposed a method that classified regions in breast tumors utilizing four pre-trained deep neural networks to increase the robustness of the method. In a similar fashion, multi-scale models have also been successfully used in computer-aided diagnosis systems in order to combine global and local patterns extracted at different magnification levels [8, 11]. CPATH for prognostics is more challenging, as it attempts to predict future events. Although certain factors lead to worse outcomes, the relationship between such factors and the patient's outcomes is not causal [12]. Moreover, there is not necessarily a single region in the WSI that serves as an indicator for a reliable prognosis. However, there are some works in the literature estimating prognostic values from WSIs. Prognosis identification has been done in screening programs, such as breast cancer screening and cervical screening [13, 14]. Dlamini et al. [15] claim that utilizing ML to analyze WSIs can help pathologists find a likely prognosis of malignant tumors, leading to reproducible and faster evaluation of the tumors. Although ML methods for predicting the prognosis of melanoma from dermoscopic images do exist [16], no prognostic methods for melanoma that utilize histological WSIs were found in the existing literature.

In this work, we present a convolutional neural network (CNN) method based on VGG16 for predicting the prognosis of melanoma from histopathological WSIs. This method exploits a multi-scale multi-input CNN backbone that aggregates the image features extracted from patches at different magnification levels. We compare the performance of models that combine from one up to three magnification levels, along with different patch extraction configurations. We test our models on a private cohort of melanoma WSIs.

## 2 Data Material

The data material in this work consists of 52 WSIs from 52 patients, all with verified malignant melanoma and 5 years of follow-up, diagnosed at Stavanger University Hospital between 2008 and 2012. The data is balanced in terms of outcome, as 26 of the patients are considered to have a good prognosis and the remaining 26 a bad prognosis. The prognostic outcome was determined based on the presence of a local or distant metastasis, or no metastasis, within five years. The WSIs were stained with Haematoxylin and Eosin (H&E) stain, scanned at 40x magnification with a Hamamatsu Nanozoomer s60 scanner, and stored in NDPI format. In all 52 WSIs, the lesion was annotated manually by a pathologist (HH). The annotation protocol was to annotate the lesions on one of the sections for all patients. Some areas of normal epithelial tissue and other typical structures were also annotated, but not necessarily in all images. All annotations were done roughly, meaning they are not very detailed on the edges; rough annotations made it possible to annotate the entire dataset within a reasonable time and work effort.

## 3 Methods

### Notation

Let \(I_{WSI}^{x}\) correspond to a WSI at magnification level \(x\). \(I_{WSI}^{40}\) are very large gigapixel images, and it is not feasible to process an entire WSI at once. As such, all CPATH systems resort to patching or tiling of the image, or of the region of interest in the image, before further processing.
Let

\[\mathcal{T}:\left.I_{WSI}^{x}\right|_{R}\rightarrow\{I_{p}^{x};\,p=1,\ldots,N_{R}\} \tag{1}\]

represent the process of tiling a region defined by \(R\) of the image \(I_{WSI}^{x}\) into a set of patches. A parameterized patch extraction algorithm proposed by Wetteland et al. [17] is used in this paper; the algorithm extracts coordinates at one magnification that represent patches in a WSI. First, 20x magnification is used to define valid patches inside the region \(R\). Then, the center pixel of each patch is projected to the other magnifications to maintain the same physical midpoint for all views. This way, we ensure that the view is centered regardless of the magnification level of choice. The patch size remains the same for all magnifications; therefore, decreasing the magnification level will widen the field of view, as illustrated in Figure 1. A variable \(D\) refers to a dataset in the form of start coordinates. The datasets are defined by the variables shown in Table 1, with the format \(D_{t/vk}^{m}\).

\begin{table} \begin{tabular}{|l|l|l|} \hline Variable & Values & Description \\ \hline \hline \(m\) & \(10,20,40\) & Magnification level(s) of extracted start coordinates. \\ \hline \(t/v\) & \(t,v\) & \(t\) indicates the training dataset and \(v\) the validation dataset. \\ \hline \(k\) & \(k\in\mathbb{N}\) & Iteration number during cross-validation. \\ \hline \end{tabular} \end{table} Table 1: Variables used to describe a dataset \(D_{t/vk}^{m}\).

Figure 1: Overview of a multi-scale model for predicting melanoma prognosis. Patches from the defined lesion areas are extracted at different magnification levels and fed into independent CNN backbones \(\Phi_{x}\). The extracted features from all CNNs are later concatenated and fed into a classifier \(C\). For mono-scale models, a single backbone is used and the output feature embedding is directly fed into \(C\).

### Preprocessing

Several preprocessing steps were applied to all 52 WSIs. Region of interest masks were extracted from the areas annotated by the pathologist. Then, tissue masks were generated in HSV format to locate the blue and magenta color range corresponding to H&E-stained tissue areas; the hue range was set to 100–180. Overlapping areas of the tissue and region of interest masks resulted in a lesion mask, a segmented lesion area without the background noise located outside of the tissue mask. Closing and opening morphological operations were then applied to the lesion masks to fill small holes and remove small regions, respectively, giving the final region \(R\) used in further processing; see Eq. 1. Some examples of segmented lesions are displayed in Figure 2.
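For illustration, a minimal sketch of this masking pipeline using OpenCV is given below. The hue bounds follow the 100–180 range stated above; the saturation/value lower bounds and the structuring-element size are assumptions chosen for the example, not values from our protocol.

```python
import cv2
import numpy as np

def lesion_mask(wsi_rgb: np.ndarray, annotation_mask: np.ndarray, kernel: int = 15) -> np.ndarray:
    """Tissue mask from the H&E hue range, intersected with the pathologist's
    annotation, then cleaned with morphological closing and opening."""
    hsv = cv2.cvtColor(wsi_rgb, cv2.COLOR_RGB2HSV)
    tissue = cv2.inRange(hsv, (100, 20, 20), (180, 255, 255))  # hue in [100, 180]
    mask = cv2.bitwise_and(tissue, annotation_mask)            # lesion = tissue AND annotation
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel, kernel))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, se)         # fill small holes
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se)          # remove small regions
    return mask
```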
### Patch based prediction

The proposed method for patch-based prediction is illustrated in Figure 1. Pathologists examine malignant tissue at multiple levels to assess the prognosis of a patient. Mimicking this, multiple magnification levels were used to predict the prognosis of melanoma. This was done using a single-input CNN backbone for the mono-scale (MONO) case, using patches extracted at a particular magnification level, or a multi-scale multi-input CNN backbone, ranging from di-scale (DI) to tri-scale (TRI), inspired by Wetteland et al. [8]. Let \(f_{p}^{x}=\Phi_{x}(I_{p}^{x})\subset\mathbb{R}^{512}\) represent the feature embedding of patch \(p\) at magnification level \(x\in\{10,20,40\}\). At mono-scale, the feature embedding is further passed through a patch-based classifier, \(C\), giving a binary prediction: \(y_{p}^{x}=C_{x}(f_{p}^{x})=C_{x}(\Phi_{x}(I_{p}^{x}))\), where \(y_{p}=1\) corresponds to bad prognosis, and vice versa. For the multi-scale approach, a total feature vector is found by concatenating the feature embeddings from the used magnification levels per patch, \(\bar{f}_{p}=[{f_{p}^{x_{1}}}^{T}\,{f_{p}^{x_{2}}}^{T}\cdots]^{T}\), resulting in a feature vector of size \(m\times 512\), where \(m\) is the number of scales. The classifiers \(C_{x}\) and \(C_{x_{1},x_{2},\cdots}\) are all fully connected networks consisting of two dense layers of 4096 neurons each and a third dense layer as a binary output: \(y_{p}^{x_{1},x_{2},\cdots}=C_{x_{1},x_{2},\cdots}(\bar{f}_{p})\). The size of the input layer varies with the number of scales, while the output layer remains at two neurons for the good and bad prognosis predictions. Patch-based prediction is straightforward, but problematic in the sense that no real truth data exist at the patch level: we only know truth data at the patient level, so for patch-based prediction we let all patches inherit the truth label of the patient.

### Patient based prediction

We propose to find a patient prediction \(Y\) from all the patch predictions \(y_{p}\) within the region \(R\) of a WSI. We do not know whether scattered information in the lesion is a stronger or weaker indication of bad prognosis compared to localized information; thus, we propose a simple model counting the fraction of bad patch predictions for patient P, \(Y=\frac{1}{N_{R}}\sum_{p\in R}y_{p}^{x}\), with \(Y>T\rightarrow\) bad prognosis, where \(N_{R}\) is the number of patches within the region \(R\), and \(T\) is a threshold that we estimate from a receiver operating characteristic (ROC) curve.
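A minimal sketch of this aggregation and thresholding step is shown below, assuming scikit-learn. The paper selects \(T\) at the best trade-off between sensitivity and specificity on the training set; using Youden's J statistic for that trade-off is our assumption.

```python
import numpy as np
from sklearn.metrics import roc_curve

def patient_score(patch_preds) -> float:
    """Y = (1/N_R) * sum of the binary patch predictions y_p within region R."""
    return float(np.mean(patch_preds))

def pick_threshold(train_labels, train_scores) -> float:
    """Threshold T from the training ROC curve; Youden's J (tpr - fpr) is one
    common choice for the sensitivity/specificity trade-off (assumed here)."""
    fpr, tpr, thresholds = roc_curve(train_labels, train_scores)
    return float(thresholds[np.argmax(tpr - fpr)])

def predict_patient(patch_preds, T: float) -> int:
    """A patient is predicted as bad prognosis (1) when Y > T."""
    return int(patient_score(patch_preds) > T)
```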
## 4 Experiments

Experiments were done at mono-scale for the patch-based prediction, and at both mono- and multi-scale for the patient-based prediction. Training and validation datasets were created on a patient basis using stratified 5-fold cross-validation, with up to 250 patches per WSI. The feature extraction layers of the networks \(\Phi_{x}\) were frozen, leaving only the classifier with trainable parameters. The CNN backbones used in this work are VGG16 networks pre-trained on ImageNet [18]. The number of trainable parameters of the classifiers \(C_{x}\) was kept low to prevent overfitting. The classifiers were trained for up to 20 epochs, until the validation loss converged. A stochastic gradient descent optimizer with 0.9 momentum was used during training, while the learning rate varied between the mono- and multi-scale models. Random resized crops and random horizontal flips were used for augmentation in the training set, and the dropout rate was set to 50%. Early stopping was used if the validation loss did not converge over time. For the patient-based prediction, the threshold \(T\) was found from ROC curves based on the best trade-off between sensitivity and specificity on the training set, and tested on the validation set using cross-validation. This experiment was done with a learning rate of 0.0001 for MONO and 0.001 for DI and TRI. Cross-validation was done using patches from magnification levels (\(m\)) \(20\), \(20-40\) and \(10-20-40\), for the MONO, DI and TRI model architectures, respectively. We have not found any relevant work in the literature on predicting prognostics from melanoma WSIs to compare with.

Figure 2: Examples of tissue masks and annotated masks showing lesions in the WSIs. The area shaded with green is outside the tissue mask, and the area shaded with orange is outside the annotated mask. The area inside both masks contains the lesion.

## 5 Results & Discussion

Results from the patch-based experiments are shown in Table 2. Patch-based prognosis prediction shows promising results for models MONO\({}_{20x}\) and MONO\({}_{40x}\): the models can differentiate between patches from WSIs with good and bad prognoses. We observe that the models tend to present higher sensitivity than specificity. MONO\({}_{20x}\) has the highest F\({}_{1}\) score and relatively high sensitivity and specificity. This may be due to magnification level \(20\) reaching a compromise between gathering enough contextual information from neighbouring tissue and maintaining a certain level of detail of local cellular patterns. Another reason for the high sensitivity rates is the distribution of genuine bad prognosis patches. While WSIs labeled with a good prognosis are populated by tissue presenting characteristic features of positive outcomes, the same does not hold true for bad prognosis: in bad prognosis WSIs there may be patches presenting positive as well as negative prognostic features, and as a result, the number of actual bad prognosis patches is generally underrepresented. Moreover, which of those patches present actual bad prognosis features is unknown, which is reflected in the obtained metrics.

Results from the patient-based experiments are shown in Table 3 for the MONO, DI and TRI models, respectively. As in the patch-based experiment, we observe that the models tend to have higher sensitivity than specificity. This pattern holds true for all iterations, regardless of the model's architecture. DI\({}_{20,40x}\) is the least sensitive, followed by MONO\({}_{20x}\) and TRI\({}_{10,20,40x}\) in descending order. The aforementioned class imbalance is also reflected in the threshold, which requires only a minority of patches to predict a global bad prognosis label. The ROC curves and AUC for these experiments are plotted in Figure 3, where MONO\({}_{20x}\) obtained the highest AUC. Patient-based prognosis prediction performs well for all model architectures. Our best performing model, MONO\({}_{20x}\), shows an AUC, F1 score, and accuracy of 0.81, 0.7667, and 0.7255, respectively. Although pathologists can consult clinical data and the entire tissue block when predicting melanoma prognosis, MONO\({}_{20x}\) has shown good performance using a single WSI per patient. An AUC of 0.81 is promising when taking into account the reported interobserver variability among expert pathologists for evaluating prognostic parameters [5]. The reason the multi-scale models perform worse might be the larger number of trainable parameters combined with a small training set.

## 6 Conclusion

In this paper, we show that CNNs can be used to predict the prognosis of melanoma, as a proof of concept with a small dataset. A multi-scale multi-input backbone was implemented to leverage information conveyed at different magnification levels, but we found that mono-scale 20x magnification gave the most promising results, with an F1 score of 0.7667 and an AUC of 0.81; 20x seems to provide a good trade-off between local and global information. In future work, larger datasets should be tested, both to explore whether that can make multi-scale models perform better and to verify the encouraging overall prognostic prediction results.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Architecture & Validation dataset & Sensitivity & Specificity & F\({}_{1}\) score \\ \hline \hline \multirow{3}{*}{MONO} & \(D_{v}^{10}\) & 0.6916 & 0.5576 & 0.6699 \\ \cline{2-5} & \(D_{v}^{20}\) & 0.7928 & **0.5824** & **0.7392** \\ \cline{2-5} & \(D_{v}^{40}\) & **0.8205** & 0.4552 & 0.7197 \\ \hline \end{tabular} \end{table} Table 2: Results from patch-based prediction using the MONO model (one fold).

Figure 3: ROC curves and AUC from patient-based prediction. From left to right, the plots correspond to the MONO\({}_{20x}\), DI\({}_{20,40x}\) and TRI\({}_{10,20,40x}\) model architectures.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & Threshold & Sensitivity & Specificity & F\({}_{1}\) score & Accuracy \\ \hline \hline MONO\({}_{20x}\) & 0.3720 & **0.8846** & **0.5600** & **0.7667** & **0.7255** \\ DI\({}_{20,40x}\) & 0.4720 & **0.8846** & 0.5385 & 0.7541 & 0.7115 \\ TRI\({}_{10,20,40x}\) & 0.3240 & 0.8462 & 0.5385 & 0.7333 & 0.6923 \\ \hline \end{tabular} \end{table} Table 3: Results from patient-based prediction. Metrics reflect the mean values of models trained using cross-validation.

## 7 Compliance with Ethical Standards

This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Regional Ethics Committee (No: 2019/747/RekVest). The authors have no relevant financial or non-financial interests to disclose.

## 8 Acknowledgements

This research has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement 860627 (CLARIFY) and from "Pathology services in the Western Norway Health Region - a centre for applied digitization", a strategic investment from the Western Norway Health Authority.
2310.13008
LoBaSS: Gauging Learnability in Supervised Fine-tuning Data
Supervised Fine-Tuning (SFT) serves as a crucial phase in aligning Large Language Models (LLMs) to specific task prerequisites. The selection of fine-tuning data profoundly influences the model's performance, whose principle is traditionally grounded in data quality and distribution. In this paper, we introduce a new dimension in SFT data selection: learnability. This new dimension is motivated by the intuition that SFT unlocks capabilities acquired by an LLM during the pretraining phase. Given that different pretrained models have disparate capabilities, the SFT data appropriate for one may not suit another. Thus, we introduce the term learnability to define the suitability of data for effective learning by the model. We present the Loss Based SFT Data Selection (LoBaSS) method, utilizing data learnability as the principal criterion for the selection of SFT data. This method provides a nuanced approach, allowing the alignment of data selection with inherent model capabilities, ensuring optimal compatibility and learning efficiency. In experimental comparisons involving 7B and 13B models, our LoBaSS method is able to surpass full-data fine-tuning with merely 6% of the total training data. When employing 16.7% of the data, LoBaSS harmonizes the model's capabilities across conversational and mathematical domains, proving its efficacy and adaptability.
Haotian Zhou, Tingkai Liu, Qianli Ma, Jianbo Yuan, Pengfei Liu, Yang You, Hongxia Yang
2023-10-16T07:26:24Z
http://arxiv.org/abs/2310.13008v1
# LoBaSS: Gauging Learnability in Supervised Fine-tuning Data

###### Abstract

Supervised Fine-Tuning (SFT) serves as a crucial phase in aligning Large Language Models (LLMs) to specific task prerequisites. The selection of fine-tuning data profoundly influences the model's performance, whose principle is traditionally grounded in data quality and distribution. In this paper, we introduce a new dimension in SFT data selection: learnability. This new dimension is motivated by the intuition that SFT unlocks capabilities acquired by an LLM during the pretraining phase. Given that different pretrained models have disparate capabilities, the SFT data appropriate for one may not suit another. Thus, we introduce the term "learnability" to define the suitability of data for effective learning by the model. We present the **Loss Based** SFT Data Selection (LoBaSS) method, utilizing data learnability as the principal criterion for the selection of SFT data. This method provides a nuanced approach, allowing the alignment of data selection with inherent model capabilities, ensuring optimal compatibility and learning efficiency. In experimental comparisons involving 7B and 13B models, our LoBaSS method is able to achieve comparable results with 800 data points (1.5%) and surpass full-data fine-tuning with merely 6% of the total training data. When employing 16.7% of the data, LoBaSS harmonizes the model's capabilities across conversational and mathematical domains, proving its efficacy and adaptability.

## 1 Introduction

Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023; Ouyang et al., 2022) have sparked a revolution in the field of Natural Language Processing (NLP), with far-reaching impacts in domains such as law (Cui et al., 2023), medicine (Singhal et al., 2022) and finance (Wu et al., 2023). A critical step in aligning LLMs to human preferences is Supervised Fine-tuning (SFT), which enables pretrained models to exhibit strong instruction-following capabilities (Chung et al., 2022; Ouyang et al., 2022; Touvron et al., 2023; Wang et al., 2022; Zheng et al., 2023). While the selection of training data is important for all stages of LLM training, it is particularly important for the SFT stage, where a few thousand carefully curated examples enable the fine-tuned model to demonstrate remarkable performance (Zhou et al., 2023).

In general, there have been two primary approaches to obtaining fine-tuning data: 1) distilling data from powerful teacher models (Taori et al., 2023; Xu et al., 2023), and 2) using manually annotated data (Zhou et al., 2023). In determining what constitutes good fine-tuning data, a common consensus is that valuable data is of high quality and diversity (Ji et al., 2023; Zhou et al., 2023; Chen et al., 2023a;b). In particular, it is commonly assumed that the quality of the data ensures that the fine-tuned model learns accurate and reliable information, while its diversity helps the model generalize better to a wide range of tasks and scenarios. In practice, for example, AlpaGasus (Chen et al., 2023b) contends that low-quality data within the dataset is detrimental and utilizes GPT-4 (OpenAI, 2023) to assess data quality and select higher-quality data. Humpback (Li et al., 2023) also employs a powerful language model for data selection, while concurrently conducting iterative rounds of fine-tuning and filtering. On the other hand, works such as (Chen et al., 2023a) rely on sampling from clusters of prompt embeddings to preserve the data distribution.
However, despite the progress made in previous works on SFT data selection, we argue that these methods do not take into account the model's intrinsic capabilities or what data will be best suited for a given model. As argued in the "Superficial Alignment Hypothesis" (Zhou et al., 2023), the fine-tuning process unlocks capabilities of pretrained LLMs, which implies that the selection of data for fine-tuning should be tightly coupled to the model of choice. In this work, we introduce a new dimension for constructing fine-tuning datasets by proposing the criterion of data _learnability_, where we assert that data with high learnability should meet the following three constraints: **i) Data lacking informative content for the model should be avoided. ii) Data that is excessively demanding for the model should be avoided. iii) Data that can be learned more effectively by the model during the fine-tuning process is preferable.**

To fulfill these constraints, we put forward our Loss Based SFT Data Selection method (LoBaSS). This method calculates the learnability scores of data points by measuring their loss with respect to both the pretrained model and a fine-tuned reference model, and subsequently selects the data points with the highest scores. In order to evaluate the effectiveness of our proposed approach, we conduct experiments using the 7B and 13B LLaMA models (Touvron et al., 2023) on the open-source Alpaca dataset (Taori et al., 2023). We select Self-Instruct (Wang et al., 2022), Vicuna (Zheng et al., 2023), Koala (Geng et al., 2023), OpenAssistant (Kopf et al., 2023), and Helpful Base (Bai et al., 2022) as test datasets. We employ both manual comparison and GPT-4 as the referee for evaluation (Zheng et al., 2023). The comparison includes models fine-tuned with the full dataset and models fine-tuned with data filtered using ChatGPT. Figure 1 illustrates our experimental results, showing that our fine-tuned models, using only 6.15% of the data, significantly outperform both models fine-tuned with the full dataset and those fine-tuned with data filtered using ChatGPT. At the same time, we conduct data mixing experiments to validate the effectiveness of our approach in scenarios involving data blending: we achieve a balance between mathematical and general conversation capabilities using 16.7% of the full dataset.

To summarize, our contributions are as follows:

1. Differing from quality and distribution, we propose a novel perspective of evaluating fine-tuning data based on **learnability**, introducing a **quantifiable** metric for the selection of SFT data.
2. We propose the LoBaSS method, which leverages data learnability as the starting point and employs a local model for **secure, efficient**, and **high-quality** data selection.
3. In experiments with the 7B and 13B models, we surpassed the performance of the full dataset using **6%** of the data. With **16.7%** of the full dataset, we achieved a balance in the model's capabilities in both the conversational and mathematical domains.

Figure 1: **Our method outperforms full-data fine-tuning and the ChatGPT filtering method.** The comparison presents the performance of models fine-tuned with data selected using our method (3,000 items), with the full Alpaca dataset, and with data filtered using ChatGPT (9,229 items). In this context, "G" indicates GPT-4 as the judge, and "H" indicates human judgment.
## 2 Related Work

**Supervised Fine-tuning.** In the current alignment process of Large Language Models (LLMs), Supervised Fine-tuning (SFT) plays a pivotal role. This step aims to fine-tune the LLM using a small amount of data to enable it to follow human user commands. Self-Instruct (Wang et al., 2022) generates a significant volume of data for SFT using seed prompts and teacher models. This approach has led to the development of numerous models trained via distillation from powerful models (e.g., GPT-4), such as Alpaca (Taori et al., 2023) and WizardLM (Xu et al., 2023). Apart from distillation from strong models, human-generated data also serves as a high-quality source of SFT data. InstructGPT (Ouyang et al., 2022), for instance, utilizes manually annotated data as a source for SFT in the Reinforcement Learning from Human Feedback (RLHF) method. Vicuna (Zheng et al., 2023), on the other hand, leverages user interaction data in the form of the ShareGPT dataset.

**Data for Supervised Fine-tuning.** In the context of SFT, the excellence of the data stands as the most pivotal concern, as it directly determines the performance of the fine-tuned model. It is widely acknowledged that the quality of an SFT dataset hinges on two key aspects. Firstly, the distribution of the data should ideally be uniform and aligned with the requirements of the intended usage scenarios; works such as (Xie et al., 2023; Ji et al., 2023; Chen et al., 2023a) focus on the data distribution to enhance training efficiency. Secondly, data quality is generally deemed more critical than quantity during the SFT process; LIMA (Zhou et al., 2023), for example, suggests that the effectiveness of SFT with a small set of high-quality data significantly surpasses that of large-scale datasets. In this paper, we introduce a novel perspective on assessing data quality, emphasizing the learnability of the data by the model. This implies that the data should align with the model's current capabilities and offer the potential for greater improvements in performance.

**Data Selection.** Past methods such as DoReMi (Xie et al., 2023), DRO (Oren et al., 2019), RHO (Mindermann et al., 2022), and DSIR (Xie et al., 2023) have primarily focused on data selection during pre-training; RHO also uses a loss-based method. However, in the context of SFT, the data distribution differs significantly from that of pre-training. Additionally, SFT's objective, which is to follow human instructions, is closely tied to model capabilities. Recent SFT data selection approaches, like AlpaGasus (Chen et al., 2023b), employ ChatGPT to assess data quality, which carries the risk of data leakage and considers only the inherent quality of the data. Humpback (Li et al., 2023) utilizes complex backtranslation processes, whereas our method is comparatively straightforward and efficient. Instruction Mining (Cao et al., 2023) employs multiple indicators for data selection, while our approach leverages a reference model to highlight data learnability.

## 3 Method

### Overview

Besides data distribution and data quality, we argue that a data point's learnability is a key factor influencing its value for a given model. Therefore, we propose the LoBaSS method, which starts from the perspective of data learnability to explore what kind of data can be learned better in the SFT process, thereby guiding the construction of datasets for the SFT stage, reducing the training cost of SFT, and improving its effectiveness.
The main procedure can be divided into two steps: first, the pretrained model (which we refer to here as the initial model) is fine-tuned on the full data to obtain the reference model; then, the reference model and the initial model are used to calculate the reference loss and the initial loss of each data point, and these two losses are combined into a score for each data point. Subsequently, the scores are sorted, and the top-ranked data points are taken as the selected dataset. Figure 2 shows an overview of the method.

### Initialization

**Full Dataset.** The target of LoBaSS is to select an efficient subset from a large SFT dataset. We need an SFT dataset, which we refer to as the full dataset \(\mathbf{D}_{\text{full}}\). Each data point in the full dataset is formatted as \(\{x_{i},y_{i}\}\), where \(x_{i}\) represents the prompt and \(y_{i}\) represents the response to this prompt. The prompt is composed of a consistent prompt template and an instruction.

**Initial Model.** LoBaSS does not use online model services; it uses only local models. We need a local pretrained language model, which we refer to as the initial model \(\mathbf{M}_{\text{ini}}\). In contrast, the language model obtained by supervised fine-tuning the pretrained model on the full dataset \(\mathbf{D}_{\text{full}}\) is referred to as the reference model \(\mathbf{M}_{\text{ref}}\).

### Selection Function

Previous work on SFT data screening mainly focused on the quality and the distribution of the data, without considering the specificity of data to models. We argue that the difficulty and value of learning the same data differ between models; whether a specific model can learn certain data well is what we defined as the **learnability** of the data in the previous section. What kind of data has good learnability? We give three constraints: **i) Data lacking informative content for the model should be avoided**, **ii) Data that is excessively demanding for the model should be avoided**, and **iii) Data that can be learned more effectively by the model during the fine-tuning process is preferable**. We denote the SFT loss of the fine-tuned model \(M_{\text{ref}}\) on a data point \((x_{i},y_{i})\), computed with a given loss function, as \(L_{\text{ref}}(x_{i},y_{i})\), and the loss of the pre-trained model \(M_{\text{ini}}\) on this data point as \(L_{\text{ini}}(x_{i},y_{i})\). The loss function we use in practice is the length-normalized cross-entropy loss shown in Equation 1. Next, we discuss these three constraints in detail to arrive at a formula that satisfies all of them.

\[L(x,y):=\frac{\sum_{y^{i}\in y}-\log p(y^{i}|x)}{\text{Len}(y)} \tag{1}\]

**Constraint 1. Data lacking informative content for the model should be avoided.** When a task can already be effectively performed by a pre-trained model, there is no need to fine-tune the model extensively on this task, and thus such data holds limited value for model fine-tuning. This type of data lacks informative content for the model, resulting in marginal performance improvements during the fine-tuning process. Therefore, the introduction of such data should be avoided during fine-tuning. We measure the informativeness of a data point \((x_{i},y_{i})\) by its loss value, determining whether it provides any additional information to the model. If a data point exhibits both low \(L_{\text{ini}}(x_{i},y_{i})\) and low \(L_{\text{ref}}(x_{i},y_{i})\), it suggests that \((x_{i},y_{i})\) lacks informative content for the pre-trained model. To adhere to Constraint 1, such data should be screened out.

Figure 2: Overview of the LoBaSS method. We start from a pretrained model (e.g., LLaMA) and a mixed SFT dataset (e.g., Alpaca). **Reference Model Training**: the initial model is fine-tuned with the full dataset to obtain the reference model. **Data Selection**: the losses of both the reference model and the initial model are used to compute the score of each data point, and the scores are then ranked for selection.
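As a concrete sketch of the per-example loss in Equation 1, the snippet below computes the length-normalized cross-entropy of a response given its prompt with a Hugging Face causal language model; this is our illustration of the formula, not the authors' released code, and it assumes the prompt's tokenization is a prefix of the full sequence's tokenization.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def sft_loss(model, tokenizer, prompt: str, response: str) -> float:
    """Length-normalized cross-entropy of the response tokens given the prompt (Eq. 1)."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits                        # (1, T, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)  # position t predicts token t+1
    targets = full_ids[:, 1:]
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (1, T-1)
    n_prompt = prompt_ids.shape[1]
    return nll[:, n_prompt - 1:].mean().item()             # average over response tokens only

# Usage (model name is a placeholder): run once with M_ini and once with M_ref
# to obtain L_ini(x_i, y_i) and L_ref(x_i, y_i) for every data point.
```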
**Constraint 2. Data that is excessively demanding for the model should be avoided.** When a task is challenging both for a pre-trained model and for the model after fine-tuning, it is excessively demanding for the model, meaning that the task's difficulty surpasses the model's capability. When a piece of data is incomprehensible or overly challenging for the model, introducing such data during fine-tuning is detrimental. Therefore, we should also avoid introducing such data during the fine-tuning process. We similarly measure a data point's difficulty for the model by examining its loss. If the \(L_{\text{ini}}(x_{i},y_{i})\) and \(L_{\text{ref}}(x_{i},y_{i})\) of a data point \((x_{i},y_{i})\) are both high, it indicates that \((x_{i},y_{i})\) is excessively demanding for the model. To adhere to Constraint 2, such data should also be screened out.

**Constraint 3. Data that can be learned more effectively by the model during the fine-tuning process is preferable.** When a task is challenging for a pre-trained model but the model can complete the task after fine-tuning, we consider that the data has been efficiently learned by the model. When a data point has been efficiently learned by the model, it indicates that this data point holds meaningful learning significance during the fine-tuning process. We can observe whether a data point is effectively learned by the model by comparing its loss before and after fine-tuning. Specifically, for a data point \((x_{i},y_{i})\), if \(L_{\text{ini}}(x_{i},y_{i})\) is high, it indicates that the pretrained model cannot perform well on this task; if \(L_{\text{ref}}(x_{i},y_{i})\) is low, it indicates that after fine-tuning, the model has learned its information well and can complete the task. To adhere to Constraint 3, such data points should be selected and retained.

Considering the three constraints described above, we need to remove data points \((x_{i},y_{i})\) with both \(L_{\text{ref}}(x_{i},y_{i})\) and \(L_{\text{ini}}(x_{i},y_{i})\) too small or too large, and retain data points with a large \(L_{\text{ini}}(x_{i},y_{i})\) and a small \(L_{\text{ref}}(x_{i},y_{i})\). These are the three principles we use to select data. Based on these principles, we propose Equation 2 to score different data points.

\[S(x_{i},y_{i})=L_{\text{ini}}(x_{i},y_{i})-L_{\text{ref}}(x_{i},y_{i}) \tag{2}\]

Then, we need to select the top-\(k\) scoring data points, which can be expressed as Equation 3. When \(S(x_{i},y_{i})\) is large, it means that the difference between \(L_{\text{ini}}(x_{i},y_{i})\) and \(L_{\text{ref}}(x_{i},y_{i})\) is large.
Let us verify whether this equation meets the three constraints proposed: if \(L_{\text{ini}}(x_{i},y_{i})\) is small and \(L_{\text{ref}}(x_{i},y_{i})\) is small, then \(S(x_{i},y_{i})\) will be small, and \((x_{i},y_{i})\) will be screened out, meeting Constraint 1; if \(L_{\text{ini}}(x_{i},y_{i})\) is large and \(L_{\text{ref}}(x_{i},y_{i})\) is large, then \(S(x_{i},y_{i})\) will also be small and \((x_{i},y_{i})\) will be screened out, meeting Constraint 2; if \(L_{\text{ini}}(x_{i},y_{i})\) is large and \(L_{\text{ref}}(x_{i},y_{i})\) is small, then \(S(x_{i},y_{i})\) will be relatively large and \((x_{i},y_{i})\) will be selected, meeting Constraint 3. Therefore, Equation 3 meets the three constraints we proposed.

\[D_{\text{select}}=\text{topk}_{(x_{i},y_{i})\in D_{\text{full}}}\left(L_{\text{ini}}\left(x_{i},y_{i}\right)-L_{\text{ref}}\left(x_{i},y_{i}\right)\right) \tag{3}\]

### Normalization

The equation proposed in the previous subsection has a potential problem: if the \(L_{\text{ini}}(x_{i},y_{i})\) corresponding to \((x_{i},y_{i})\) is particularly large and \(L_{\text{ref}}(x_{i},y_{i})\) is also large, it is still possible for \(S(x_{i},y_{i})\) to be large, yet such data clearly does not meet our expectations. We observed the presence of this issue in our experiments, which is elaborated upon in detail in Appendix A.1. To solve this problem, we introduce a normalization term into the formula for the score. This raises the question of choosing \(L_{\text{ref}}\) or \(L_{\text{ini}}\) as the normalization term; in fact, this choice does not change the order of the ranking, as we prove in Appendix A.2. In this paper, we choose \(L_{\text{ini}}\) as the normalization term, so we obtain Equation 4 for calculating the score and Equation 5 for data selection.

\[S_{\text{norm}}(x_{i},y_{i})=\frac{L_{\text{ini}}(x_{i},y_{i})-L_{\text{ref}}(x_{i},y_{i})}{L_{\text{ini}}(x_{i},y_{i})} \tag{4}\]

\[D_{\text{select}_{\text{norm}}}=\text{topk}_{(x_{i},y_{i})\in D_{\text{full}}}\left(\frac{L_{\text{ini}}\left(x_{i},y_{i}\right)-L_{\text{ref}}\left(x_{i},y_{i}\right)}{L_{\text{ini}}(x_{i},y_{i})}\right) \tag{5}\]
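Putting the pieces together, a minimal sketch of the selection step (Equations 4 and 5) might look as follows, assuming the per-example losses have already been computed as above; `k` would be set to the desired subset size (e.g., roughly 6% of the full dataset).

```python
import numpy as np

def lobass_select(loss_ini, loss_ref, k: int):
    """Return the indices of the k data points with the highest normalized
    LoBaSS scores, S_norm = (L_ini - L_ref) / L_ini."""
    loss_ini = np.asarray(loss_ini, dtype=float)
    loss_ref = np.asarray(loss_ref, dtype=float)
    scores = (loss_ini - loss_ref) / loss_ini
    return np.argsort(-scores)[:k]

# selected = lobass_select(L_ini_all, L_ref_all, k=3000)
# D_selected = [full_dataset[i] for i in selected]
```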
## 4 Experiments

### Experimental Setup

**SFT Dataset.** To explore the effectiveness of this method on both high-quality and low-quality data, we conduct experiments using high-quality and low-quality datasets respectively. The prompts of the two datasets are identical and both come from the Alpaca dataset. The responses of the high-quality dataset are generated by GPT-4, and we call this dataset **Alpaca-4**, while the responses of the low-quality dataset are generated by GPT-3.5-Turbo, and we call it **Alpaca-3.5**. Both datasets contain 52,002 English examples and conform to our definition of the full dataset in the previous section.

**Backbones and Baselines.** To explore the scalability of this method, we select the 7B and 13B LLaMA (Touvron et al., 2023) models as our backbones. We choose Text-Davinci-003 (Ouyang et al., 2022) as our baseline model. This model, based on GPT-3, has been trained using the RLHF technique; it has undergone several stages, including initial training with manually labeled data for SFT and subsequent fine-tuning using the Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm. Text-Davinci-003 exhibits strong adherence to user instructions and demonstrates proficient performance.

We select the methods of random sampling and ChatGPT-based data filtering (Chen et al., 2023b) as our baseline approaches for comparison, thus demonstrating the effectiveness and superiority of our method. Using ChatGPT for data filtering is a widely adopted method for supervised fine-tuning (SFT) data selection: ChatGPT assigns a quality score to each data point in the dataset, an integer ranging from 1 to 5, and the data points with high quality scores are then retained to create the selected dataset.

**Test Dataset.** We use a mixed dataset as our test set, including the HH-RLHF, Koala, Self-Instruct, Open Assistant, and Vicuna datasets. The test set consists of 800 prompts covering multiple aspects of daily use, such as generation, math, and coding, and can test the model's ability to follow instructions.

**Evaluation Method.** We use two evaluation methods in this paper. The first is the FastChat (Zheng et al., 2023) method, used to compare the models trained with the full data against the models trained with the filtered data to obtain relative results. The second is the AlpacaEval (Li et al., 2023b) method, used to compare the models trained with the filtered data against a fixed baseline model (e.g., Text-Davinci-003) to obtain absolute results. We employ GPT-4 as the judge, and we also employ human annotators for judging.

### Scaling Analysis

To verify the effectiveness of this method on high-quality and low-quality datasets, we selected Alpaca-3.5 and Alpaca-4 for experimentation; at the same time, to verify the effectiveness of this method on models of different sizes, we selected 7B and 13B models for experimentation. From the experimental results, it can be observed that the LoBaSS method achieves superior results compared to fine-tuning with the full dataset, even when using only around 6% of the data, whether on a high-quality or a low-quality dataset. We also conduct a detailed analysis of the selected data in Appendix A.3.

**Our Method vs. ChatGPT Selection.** We compare the performance of models fine-tuned on data selected by our method with models fine-tuned on ChatGPT-selected data, to assess the effectiveness of our approach in comparison to ChatGPT-based selection. Figure 1 illustrates the results of this experiment. Since the quality scores generated by the ChatGPT selection method are discrete integers, we are only able to select a specific number of data points for training. In this experiment, we choose 9,229 data points as a subset and fine-tune the backbones on this dataset to establish the baseline. Similarly, we use our method to select 3,000 data points as a subset and fine-tune the backbones on this dataset. Another baseline involves fine-tuning the backbones with the entire dataset. The training parameters in the experiment are kept identical. We use GPT-4 and human evaluators as referees. During GPT-4 assessments, we perform position swapping to eliminate positional bias. In human evaluations, we do not disclose which model generated each response, and the placement order is randomized to eliminate potential biases. Due to the high cost of human evaluation, the test dataset for this comparison was randomly selected by taking 20 questions from each of the five datasets mentioned earlier, resulting in a mixed test set of 100 questions in total.
The experimental results indicate that, on the test dataset, models fine-tuned with data selected using our method consistently outperform both models fine-tuned with data filtered by ChatGPT and models fine-tuned with the full dataset, whether evaluated through human ratings or GPT-4-based judging.

**Alpaca-3.5-Selected vs. Alpaca-3.5-52k.** We compared models trained with various sizes of datasets filtered by our method against models trained with the full dataset, using the same hyperparameters and the Vicuna dataset as the test set. To observe the phenomenon more clearly, we define the Win Score \(:=\frac{N_{\text{win}}-N_{\text{lose}}}{N_{\text{total}}}+1\). From the experimental results in Figure 3, we can see that our method achieves good results on both 7B and 13B models. With as few as 3,000 data points (**5.77%**), it achieves better results than those obtained with the full dataset (52K). We can also observe that with the improved normalization method, the data selected by our method is significantly better in quality, and model performance improves markedly. With our method, less than 6% of the data is required to obtain model performance exceeding that of a model trained with the full dataset, indicating that in the SFT process much of the data in the dataset does not contribute significantly to model fine-tuning, or even harms model performance. Starting from the learnability of the data, we removed data that does not contribute significantly to fine-tuning, or is even harmful, through data selection, thereby improving the efficiency and performance of model training.

Figure 3: **Models fine-tuned with data selected by LoBaSS surpass the full dataset on Alpaca-3.5.** This figure shows the win score comparison between models trained with different sizes of datasets and the full dataset, as well as the improvement brought by the normalization method. We select the model fine-tuned on the full dataset as the baseline.

**Alpaca-4-Selected vs. Text-Davinci-003.** In the Alpaca-3.5 experiment, our method achieved good results, but due to the quality issues of the Alpaca-3.5 dataset, the difficulty of dataset filtering was relatively low. To further validate the versatility of our method, we conducted experiments on the higher-quality Alpaca-4 dataset using 7B and 13B models. We compare the models fine-tuned with various sizes of datasets filtered by our method against Text-Davinci-003, using a mixed dataset of 800 data points as the test set and GPT-4 as the judge, with a temperature setting of 0. Since the advantage of the normalization method was already verified in the Alpaca-3.5 experiment, we adopted the normalization method in all experiments on Alpaca-4. From the experimental results in Figure 4, we can see that our data selection method still achieves significant results on the high-quality Alpaca-4 dataset. With as few as 800 data points, results similar to those of full-dataset fine-tuning can be achieved. At 3,200 (6.15%) data points, results higher than those achieved with full-dataset fine-tuning are obtained under both the 7B and 13B experimental conditions. Through comparison with randomly selected data, we can show that the improvement in model performance is not due to the decrease in the number of data points, but rather that our data selection method effectively selects data that is more learnable and valuable for fine-tuning.
By increasing the size of the selected dataset further, we also observe that the fine-tuned models reach performance similar to that of the full-dataset model at around 3,000 (6%) data points. After 10,000 (20%) data points, the performance of the fine-tuned models begins to decline, indicating that the fine-tuning performance of the models has saturated within this range. Since the patterns of the 7B and 13B models are generally consistent, we believe that the saturation phenomenon occurs due to the limited number of highly learnable data points in the dataset rather than the saturation of the model capacity. Figure 4: **Models fine-tuned with data selected by LoBaSS surpass the full dataset on Alpaca-4.** This figure shows the win rate comparison between models fine-tuned with different sizes of selected datasets and Text-Davinci-003, as well as a comparison with randomly selected data using the same hyperparameters to demonstrate the significant improvement over random data selection. To facilitate comparison, we used a logarithmic horizontal axis. We choose the model fine-tuned on the full dataset as the baseline. ### Data Mixing As introduced in the previous subsection, starting from the perspective of data learnability, our data selection method is capable of effectively compressing LLM fine-tuning data to approximately 6% of the original volume, while achieving results similar to or even better than fine-tuning on the full dataset. In practical applications, one highly meaningful use case for this method is data mixing. One significant challenge when fine-tuning large language models is the issue of data mixing. Currently, there exists an imbalance in the quantity of data available from different domains. For instance, there is a plethora of data for general question answering, while acquiring data for mathematical domains can be considerably more challenging. This data imbalance results in an imbalance in the fine-tuned model's capabilities, making data balance a critical factor in fine-tuning large language models. Our method can be employed for data compression, enabling the reduction of large-scale datasets to smaller ones, which can then be mixed with smaller datasets to balance the multifaceted capabilities of the model. We conduct experiments using the Alpaca-4 dataset to represent easily accessible general question-answering data and the GSM8K dataset to represent challenging-to-obtain math and reasoning data. We select Alpaca-4 subsets of varying sizes using our method and combine them with the full GSM8K dataset (7K) to create a blended dataset for fine-tuning the model. Referring to Figure 5, we can see that when we combine the GSM8K training set with a carefully selected subset from Alpaca-4 and fine-tune the model using this smaller dataset, it effectively balances the model's ability in general tasks and mathematical reasoning. To be more precise, when we add 3,200 filtered data points from Alpaca-4 into the mix, we achieve performance on the GSM8K dataset that is similar to fine-tuning solely on GSM8K data, all the while maintaining nearly the same level of general task performance. This represents a notable improvement compared to fine-tuning with the entire Alpaca-4 dataset. Via our data selection method, we achieve a performance of 98.6% in general tasks and 124.0% on the GSM8K dataset while using only 17.2% of the training data compared to the full dataset. Figure 5: **Our method significantly enhances the effectiveness of data mixing.** The horizontal axis represents the quantity of selected Alpaca-4 data, plotted on a logarithmic scale, while the vertical axis represents GSM8K accuracy and the Win Rate compared to Text-Davinci-003, respectively. In this case, we use the 7B model.
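The mixing recipe above amounts to concatenating the full scarce-domain set with a LoBaSS-selected slice of the abundant-domain set. The following is a minimal sketch under that reading; the helper names are ours, and `learnability_score` stands in for the loss-based score computed from the backbone and fine-tuned models.

```python
import random

def mix_datasets(abundant, scarce, learnability_score, k=3200, seed=0):
    """Blend the full scarce-domain data (e.g. the ~7K GSM8K training set)
    with the k most learnable abundant-domain examples (e.g. Alpaca-4)."""
    selected = sorted(abundant, key=learnability_score, reverse=True)[:k]
    blended = selected + list(scarce)
    random.Random(seed).shuffle(blended)  # avoid ordering effects during SFT
    return blended
```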
## 5 Limitation and Discussion One limitation of our work is that while we introduce learnability as a new dimension for measuring SFT data excellence, we focus primarily on data selection. We do not apply this perspective to the generation and augmentation of SFT data, limiting its potential to enhance model performance. We plan to incorporate the perspective of learnability into the generation and augmentation of data for SFT in the future. This approach will involve creating data that aligns with different model capabilities, further enhancing the effectiveness of fine-tuning. Furthermore, we do not conduct a specific analysis of how different model capabilities influence the model's selection preferences; specifically, whether a model's stronger performance in a specific domain directly corresponds to LoBaSS-selected data being more inclined toward that domain. We have some preliminary analysis in Appendix A.3, and we plan to delve deeper into this issue in future work. Another limitation is that in our exploration of data blending and capability balance, we have not specifically investigated what proportion of data blending would yield better results in terms of capability balance. This will be an important research direction for our near-term work. ## 6 Conclusion We first introduced learnability as a new perspective to measure the excellence of SFT data, beyond data distribution and quality. We proposed three constraints to define data learnability, and based on these constraints, we introduced a loss-based data selection method for SFT data selection. In our approach, we use the loss of both the backbone and fine-tuned models to calculate the learnability score, and subsequently select the data with the highest scores. Experimental results on the Alpaca dataset demonstrated that fine-tuning with only around 6% of the data can outperform using the full dataset and is also superior to the method of data filtering using GPT-4. Our study offers a novel and effective perspective on how to construct and select datasets for SFT, thereby expanding the understanding of LLM fine-tuning. ## 7 Acknowledgement Yang You's research group is being sponsored by NUS startup grant (Presidential Young Professorship), Singapore MOE Tier-1 grant, ByteDance grant, ARCTIC grant, SMI grant, Alibaba grant, and Google grant for TPU usage.
2306.07943
Typical Lipschitz maps on rectifiable metric spaces
This article studies typical 1-Lipschitz images of $n$-rectifiable metric spaces $E$ into $\mathbb{R}^m$ for $m\geq n$. For example, if $E\subset \mathbb{R}^k$, we show that the Jacobian of such a typical 1-Lipschitz map equals 1 $\mathcal{H}^n$-almost everywhere and, if $m>n$, preserves the Hausdorff measure of $E$. In general, we provide sufficient conditions, in terms of the tangent norms of $E$, for when a typical 1-Lipschitz map preserves the Hausdorff measure of $E$, up to some constant multiple. Almost optimal results for strongly $n$-rectifiable metric spaces are obtained. On the other hand, for any norm $|\cdot|$ on $\mathbb{R}^m$, we show that, in the space of 1-Lipschitz functions from $([-1,1]^n,|\cdot|_\infty)$ to $(\mathbb{R}^m,|\cdot|)$, the $\mathcal{H}^n$-measure of a typical image is not bounded below by any $\Delta>0$.
David Bate, Jakub Takáč
2023-06-13T17:41:01Z
http://arxiv.org/abs/2306.07943v2
# Typical Lipschitz maps on rectifiable metric spaces ###### Abstract This article studies typical \(1\)-Lipschitz images of \(n\)-rectifiable metric spaces \(E\) into \(\mathbb{R}^{m}\) for \(m\geq n\). For example, if \(E\subset\mathbb{R}^{k}\), we show that the Jacobian of such a typical \(1\)-Lipschitz map equals \(1\) \(\mathcal{H}^{n}\)-almost everywhere and, if \(m>n\), preserves the Hausdorff measure of \(E\). In general, we provide sufficient conditions, in terms of the tangent norms of \(E\), for when a typical \(1\)-Lipschitz map preserves the Hausdorff measure of \(E\), up to some constant multiple. Almost optimal results for strongly \(n\)-rectifiable metric spaces are obtained. On the other hand, for any norm \(|\cdot|\) on \(\mathbb{R}^{m}\), we show that, in the space of \(1\)-Lipschitz functions from \(([-1,1]^{n},|\cdot|_{\infty})\) to \((\mathbb{R}^{m},|\cdot|)\), the \(\mathcal{H}^{n}\)-measure of a typical image is not bounded below by any \(\Delta>0\). ## 1 Introduction Recall that an \(\mathcal{H}^{n}\)-measurable subset \(E\subset X\) of a (complete) metric space is \(n\)-rectifiable if there exist countably many Lipschitz \(f_{i}\colon A_{i}\subset\mathbb{R}^{n}\to X\) such that \[\mathcal{H}^{n}\left(E\setminus\bigcup_{i\in\mathbb{N}}f_{i}(A_{i})\right)=0. \tag{1.1}\] Here and throughout this article, \(\mathcal{H}^{n}\) denotes the \(n\)-dimensional Hausdorff measure on \(X\). Rectifiable subsets of a metric space were studied by Ambrosio [1], Kirchheim [10] and Ambrosio-Kirchheim [3]. In particular, [3] gives a description of a rectifiable set \(E\subset X\) in terms of weak* tangent spaces, after isometrically embedding \(E\) into a dual space of a separable space such as \(\ell^{\infty}\). Area and coarea formulas are also obtained in terms of the weak* tangent structure. An \(\mathcal{H}^{n}\)-measurable set \(S\subset X\) is called \(n\)-purely unrectifiable if it intersects every \(n\)-rectifiable set \(E\) in an \(\mathcal{H}^{n}\)-null set. The following characterisation of rectifiability in metric spaces has been obtained by the first named author in [4] in terms of non-linear Lipschitz projections on \(X\). We denote by \(\operatorname{Lip}_{1}(X,\mathbb{R}^{m})\) the set of all bounded \(1\)-Lipschitz functions \(f\colon X\to\mathbb{R}^{m}\) equipped with the supremum distance, a complete metric space. Recall that a _typical_ element of \(\operatorname{Lip}_{1}(X,\mathbb{R}^{m})\) satisfies some property, if the set of the elements satisfying said property is residual (that is, it contains a countable intersection of open dense sets). Since residual sets are closed under countable intersections and are dense, they form a suitable notion of "large" sets. **Theorem** ([4]).: _Let \(X\) be a complete metric space._ * _If_ \(S\subset X\) _is_ \(n\)_-purely unrectifiable,_ \(\mathcal{H}^{n}(S)<\infty\) _and_ \[\liminf_{r\to 0}\frac{\mathcal{H}^{n}(B(x,r)\cap S)}{r^{n}}>0\quad\text{for $\mathcal{H}^{n}$-a.e. $x\in S$,}\] _then a typical_ \(f\in\operatorname{Lip}_{1}(X,\mathbb{R}^{m})\) _satisfies_ \(\mathcal{H}^{n}(f(S))=0\)_._ * _If_ \(E\subset X\) _is_ \(n\)_-rectifiable,_ \(\mathcal{H}^{n}(E)>0\) _and_ \(m\geq n\)_, a typical_ \(f\in\operatorname{Lip}_{1}(X,\mathbb{R}^{m})\) _satisfies_ \(\mathcal{H}^{n}(f(E))>0\)_._ This should be viewed as an analogue of the Besicovitch-Federer projection theorem [11, Theorem 18.1]. In this article we give a finer description of rectifiable subsets of a metric space.
Namely, we answer the question of under what conditions it is possible to ensure that a typical \(f\in\operatorname{Lip}_{1}(X,\mathbb{R}^{m})\) satisfies \(\mathcal{H}^{n}(f(E))\geq\Delta\) for some \(\Delta=\Delta(X,E)>0\). The answer depends on the local geometry of \(E\) and in particular its tangent spaces. To illustrate our results, we first mention that when the ambient metric space is Euclidean, the strongest possible result holds. **Theorem 1.1**.: _Suppose \(E\subset\mathbb{R}^{k}\) is \(n\)-rectifiable and \(m\geq n\). Then the set of functions \(f\in\operatorname{Lip}_{1}(\mathbb{R}^{k},\mathbb{R}^{m})\) satisfying_ \[\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(E)\] _is residual. Moreover, if \(m>n\), the set of functions \(f\in\operatorname{Lip}_{1}(\mathbb{R}^{k},\mathbb{R}^{m})\) satisfying_ \[\mathcal{H}^{n}(f(E))=\mathcal{H}^{n}(E)\] _is residual._ Here \(J_{E}f\) denotes the Jacobian of \(f\) with respect to the rectifiable set \(E\) (see (2.3)). In other words, for a typical \(f\in\operatorname{Lip}_{1}(\mathbb{R}^{k},\mathbb{R}^{m})\), the (approximate) tangential Frechet differential \(f^{\prime}(x)\) is a linear isometry for \(\mathcal{H}^{n}\)-a.e. \(x\in E\). Via the area formula (see Theorem 2.7), the second statement asserts that a typical \(f\) does not lose measure by overlapping, provided \(m>n\). This is false in the case \(m=n\); for example, if \(n=m=1\), then the measures of the images of any sequence of functions converging to a constant function must converge to \(0\). The result of Theorem 1.1 is new even if one assumes \(E\) to be the unit \(n\)-dimensional cube in \(\mathbb{R}^{n}\). In particular, we see that a typical element of \(\operatorname{Lip}_{1}(\mathbb{R}^{k},\mathbb{R}^{m})\) preserves the measure of a given \(n\)-rectifiable set, whilst destroying the measure of a given \(n\)-purely unrectifiable set. On the other hand, this result may fail in the strongest possible way whenever the ambient space is not Euclidean. In what follows, we work with general norms on \(\mathbb{R}^{n}\) and these shall be denoted by \(|\cdot|_{a}\), \(|\cdot|_{b}\) and similar, without the letters \(a\), \(b\) having any separate meaning. Using an abuse of notation we will also denote by \(|\cdot|_{2}\) the Euclidean norm. Recall that a point \(u\) in a convex subset \(K\) of a vector space \(X\) is an extremal point if, for any \(v\in X\), \(u+v\in K\) and \(u-v\in K\) imply \(v=0\). **Theorem 1.2**.: _Suppose \(n\in\mathbb{N}\) and let \(|\cdot|_{a}\) be any norm on \(\mathbb{R}^{n}\) such that the unit sphere of \(|\cdot|_{a}\) contains a non-extremal point of the unit ball of \(|\cdot|_{a}\). Let \(X=([-1,1]^{n},|\cdot|_{a})\) and, for \(m\geq n\), let \(|\cdot|_{b}\) be an arbitrary norm on \(\mathbb{R}^{m}\). The set_ \[\{f\in\operatorname{Lip}_{1}(X,(\mathbb{R}^{m},|\cdot|_{b})):\mathcal{H}^{n}(f(X))>\Delta\}\] _is residual in \(\operatorname{Lip}_{1}(X,(\mathbb{R}^{m},|\cdot|_{b}))\) if and only if \(\Delta=0\)._ A particular example of \(|\cdot|_{a}\) with a non-extremal point in the boundary is the maximum norm. In general, for \(m\geq n\), we provide sufficient conditions on a pair of normed spaces \((\mathbb{R}^{n},|\cdot|_{a})\) and \((\mathbb{R}^{m},|\cdot|_{b})\) for when it is possible to find a \(\lambda>0\) such that a typical \(f\in\operatorname{Lip}_{1}((\mathbb{R}^{n},|\cdot|_{a}),(\mathbb{R}^{m},|\cdot|_{b}))\) preserves the measure of any rectifiable \(E\subset\mathbb{R}^{n}\) up to a multiplicative factor of \(\lambda\).
Indeed, in Definition 5.4 we introduce the notion of a \(\lambda\)_-inflating pair_ of normed spaces, for \(\lambda>0\). Intuitively, this holds whenever any linear map \(A\colon(\mathbb{R}^{n},|\cdot|_{a})\to(\mathbb{R}^{m},|\cdot|_{b})\) of operator norm at most \(1\) and full rank can be _inflated_ in a linear way so that the operator norm of the resulting inflated map is still at most \(1\), but the volume (Jacobian) of the inflated map is at least \(\lambda\) and, moreover, the "inflation" in question does not shrink in any direction. This can be viewed as a geometric condition relating the unit ball of the \(|\cdot|_{a}\to|\cdot|_{b}\) operator norm in \(\mathbb{R}^{n\times m}\) to level sets of the Jacobian functional. In Theorem 5.10, we show that a typical \(f\in\operatorname{Lip}_{1}((\mathbb{R}^{n},|\cdot|_{a}),(\mathbb{R}^{m},|\cdot|_{b}))\) preserves the Hausdorff measure of a given rectifiable set by a factor of \(\lambda\), whenever \((\mathbb{R}^{n},|\cdot|_{a})\) and \((\mathbb{R}^{m},|\cdot|_{b})\) are \(\lambda\)-inflating. Theorem 5.10 can be extended to a rectifiable subset \(E\) of a metric space as follows by considering the (equivalence classes of) _approximate tangent norms_ \(T(E,\cdot)\) of \(E\) (see Definition 2.4). For a fixed normed space \((\mathbb{R}^{m},|\cdot|_{b})\), we write \(\mathcal{N}^{b}_{\inf(\lambda)}(n)\) for the set of equivalence classes of norms \(|\cdot|_{a}\) on \(\mathbb{R}^{n}\) for which \((\mathbb{R}^{n},|\cdot|_{a})\) and \((\mathbb{R}^{m},|\cdot|_{b})\) are \((\operatorname{vol}(|\cdot|_{a})\lambda)\)-inflating (see Definition 7.1 and formula (2.2)). **Theorem 1.3**.: _Suppose that \(n,m\in\mathbb{N}\), \(n\leq m\), \(X\) is a complete metric space and \(E\subset X\) an \(n\)-rectifiable subset. Suppose \(|\cdot|_{b}\) is a norm on \(\mathbb{R}^{m}\). Let \(\lambda>0\) and assume that for \(\mathcal{H}^{n}\)-a.e. \(x\in E\), one has_ \[T(E,x)\in\mathcal{N}^{b}_{\inf(\lambda)}(n).\] _Then for each \(\varepsilon>0\), there is a set \(\widetilde{E}\subset E\) with \(\mathcal{H}^{n}(E\setminus\widetilde{E})<\varepsilon\) and such that the set_ \[\{f\in\operatorname{Lip}_{1}(\widetilde{E},(\mathbb{R}^{m},|\cdot|_{b})):\int_{\widetilde{E}}J_{\widetilde{E}}f\;\mathrm{d}\mathcal{H}^{n}\geq\lambda\mathcal{H}^{n}(\widetilde{E})\}\] _is residual in \(\operatorname{Lip}_{1}(\widetilde{E},(\mathbb{R}^{m},|\cdot|_{b}))\). Moreover, if \(m>n\), then the set_ \[\{f\in\operatorname{Lip}_{1}(\widetilde{E},(\mathbb{R}^{m},|\cdot|_{b})):\mathcal{H}^{n}(f(\widetilde{E}))\geq\lambda\mathcal{H}^{n}(\widetilde{E})\}\] _is residual in \(\operatorname{Lip}_{1}(\widetilde{E},(\mathbb{R}^{m},|\cdot|_{b}))\)._ The simplest example of an \(n\)-rectifiable metric space whose \(\mathcal{H}^{n}\)-a.e. approximate tangent lies in \(\mathcal{N}^{b}_{\inf(\lambda)}(n)\) is obtained simply via choosing any representative \(|\cdot|_{a}\) of any equivalence class \([|\cdot|_{a}]\in\mathcal{N}^{b}_{\inf(\lambda)}(n)\) (provided the set is non-empty) and letting \(E\) be an \(\mathcal{H}^{n}\)-measurable subset of \((\mathbb{R}^{n},|\cdot|_{a})\). In fact, if \(X=(\mathbb{R}^{n},|\cdot|_{a})\), we are able to extend the relevant functions onto the whole space and may take \(\widetilde{E}=E\), see Theorem 5.10. Since any pair of Euclidean norms is \(1\)-inflating (see Example 5.7), Theorem 1.1 follows from Theorem 5.10. In fact, this observation allows us to prove results in the spirit of Theorem 1.1 for strongly \(n\)-rectifiable subsets of a metric space.
A set \(E\subset X\) is _strongly \(n\)-rectifiable_ if, for any \(\varepsilon>0\), we may find functions \(f_{i}\) as in (1.1) that are \((1+\varepsilon)\)-biLipschitz (see Definition 2.10). In Lemma 2.11, we will show that this is equivalent, for \(n\)-rectifiable sets \(E\), to the condition that \(T(E,x)\) contains the Euclidean norm for \(\mathcal{H}^{n}\)-a.e. \(x\in E\). (See also Remark 2.12 for the case that \(E\) is not assumed to be \(n\)-rectifiable.) An achievement of recent analysis on metric spaces is that any RCD metric space satisfies this condition [12, 6, 2, 9, 8], see Remark 7.9. **Theorem 1.4**.: _Suppose \(n\in\mathbb{N}\) and let \(E\) be an \(n\)-rectifiable subspace of a complete metric space \(X\). Denote by \(|\cdot|_{2}\) the Euclidean norm on \(\mathbb{R}^{n}\) and let \(k\in\mathbb{N}\), \(k\leq n\) and_ \[E^{*}=\{x\in E:T(E,x)=[|\cdot|_{2}]\}.\] _Then, for any \(k\)-rectifiable subset \(K\) of \(E^{*}\) we have the following. To each \(\varepsilon>0\), there is a set \(\widetilde{K}\subset K\) with \(\mathcal{H}^{k}(K\setminus\widetilde{K})<\varepsilon\) such that for every \(m\geq k\) a typical \(f\in\operatorname{Lip}_{1}(\widetilde{K},\mathbb{R}^{m})\) satisfies \(J_{\widetilde{K}}f=1\) \(\mathcal{H}^{k}\)-a.e. in \(\widetilde{K}\). Moreover, for any \(m>k\), the set_ \[\{f\in\operatorname{Lip}_{1}(\widetilde{K},\mathbb{R}^{m}):\mathcal{H}^{k}(f(\widetilde{K}))=\mathcal{H}^{k}(\widetilde{K})\}\] _is residual in \(\operatorname{Lip}_{1}(\widetilde{K},\mathbb{R}^{m})\)._ Note that, if \(E\) is strongly \(n\)-rectifiable, \(\mathcal{H}^{n}(E\setminus E^{*})=0\) and so, in the case \(k=n\), Theorem 1.4 holds for any positive measure subset of \(E\). Note that our most general results do not apply to the entire rectifiable set \(E\). The principal difficulty of obtaining results on the whole of \(E\) lies in the lack of a useful Lipschitz extension result. Recall that in a general metric space an \(L\)-Lipschitz function into \(\mathbb{R}^{m}\) may be extended to any larger domain as a \((\sqrt{m}L)\)-Lipschitz function, and this constant is sharp [11, 7.2 Theorem], [7, 2.10.44]. Consequently, when \(m=1\), we do obtain residuality results for \(1\)-rectifiable metric spaces, see Theorem 7.6. Within the study of these objects, we naturally arrive at the question of whether a typical function \(f\in\operatorname{Lip}_{1}(X,\mathbb{R}^{m})\) satisfies \(f_{\#}\mathcal{H}^{n}_{|E}\ll\mathcal{H}^{n}\). Here \(f_{\#}\mathcal{H}^{n}_{|E}\) denotes the pushforward of \(\mathcal{H}^{n}_{|E}\), which is the restriction of \(\mathcal{H}^{n}_{X}\) onto \(E\). It is an interesting question whether the set of these functions is residual in \(\operatorname{Lip}_{1}(X,\mathbb{R}^{m})\). It follows from Theorem 1.1 and the area formula that, in the Euclidean case, this is true. Unfortunately, in general we cannot answer this. We are, however, able to obtain residuality in the strong space. That is, we are able to show that a typical element of \(\operatorname{Lip}_{1}^{\operatorname{str}}(X,\mathbb{R}^{m})\) satisfies \(f_{\#}\mathcal{H}^{n}_{|E}\ll\mathcal{H}^{n}\) (Corollary 4.7), for \[\operatorname{Lip}_{1}^{\operatorname{str}}(X,\mathbb{R}^{m})=(\operatorname{Lip}_{1}(X,\mathbb{R}^{m}),\|\cdot\|_{\ell^{\infty}}+\operatorname{Lip}(\cdot)).\] The paper is structured as follows. Section 2 contains preliminaries needed for the rest of the text.
In order to prove our residuality statements, we will show that, under various conditions, a set of the form \[\{f\in\operatorname{Lip}_{1}(X,\mathbb{R}^{m}):\mathcal{H}^{n}(f(E))>\lambda\mathcal{H}^{n}(E)\} \tag{1.2}\] is open and dense. The openness statements, which hold in any metric space, are contained in Section 3. Openness of the sets in (1.2) is equivalent to lower semi-continuity of the "area" functional \[f\mapsto\mathcal{H}^{n}(f(E))\] and the theorems are stated in this form. We also prove lower semi-continuity results for the "area formula" functional \[f\mapsto\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}.\] The density statements are harder, and most require additional hypotheses. Section 4 concentrates on the few that actually hold in any metric space. In particular, we show that the set of functions which essentially do not overlap on a fixed \(n\)-rectifiable set \(E\) is dense even in the stronger space \(\operatorname{Lip}_{1}^{\operatorname{str}}(X,\mathbb{R}^{m})\) (see Corollary 4.5). Such functions satisfy the stronger area formula \[\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(f(E)).\] The rest of the paper deals with proving or disproving the density of the sets in (1.2). Sections 5 and 6 give sufficient and necessary conditions, respectively, for the case \(E\subset(\mathbb{R}^{n},|\cdot|_{a})\) for some norm \(|\cdot|_{a}\) on \(\mathbb{R}^{n}\). In Section 5 we show that a \(\lambda\)-inflating pair of norms is sufficient to deduce the density of (1.2), see Theorem 5.10. In particular, this proves Theorem 1.1. On the other hand, in Section 6 we give necessary conditions, in terms of extremal points of the unit ball, for (1.2) to be dense, see Theorem 6.5. This section contains the proof of Theorem 1.2. Finally, Section 7 provides residuality results for \(n\)-rectifiable metric spaces with suitable tangent spaces. It is there that we prove Theorems 1.3 and 1.4 through a combination of theory developed in preceding sections and a modified version of Kirchheim's decomposition result. ### Acknowledgements J.T. is supported by the Warwick Mathematics Institute Centre for Doctoral Training. Both D.B. and J.T. are supported by the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 948021). ## 2 Preliminaries ### Spaces of Lipschitz functions Let \((X,d)=(X,d_{X})\) be a metric space. For \(x\in X\) and \(r>0\), we shall denote by \(B_{X}(x,r)=\{z\in X:d_{X}(x,z)\leq r\}\) the closed ball of radius \(r\) in \(X\). Open balls will be denoted by \(B^{\circ}_{X}(x,r)=\{z\in X:d(x,z)<r\}\). Given a set \(S\subset X\) and \(r\geq 0\), we denote by \(B_{X}(S,r)=\{z\in X:\operatorname{dist}(z,S)\leq r\}\) its \(r\)-neighbourhood. For \(r=0\) this coincides with the topological closure and we shall use the notation \(\overline{S}=B_{X}(S,0)\). Let \((Y,d_{Y})\) be another metric space. Recall that a function \(f\colon X\to Y\) is called \(L\)-Lipschitz for some \(L\in[0,\infty)\) if \[d_{Y}(f(x),f(y))\leq Ld_{X}(x,y)\quad\text{for all $x,y\in X$.}\] The least such \(L\) is called the Lipschitz constant of \(f\) and is denoted by \(\operatorname{Lip}(f)\) or, if we need to be more specific, \(\operatorname{Lip}_{X\to Y}(f)\). A function \(f\colon X\to Y\) is called Lipschitz if \(\operatorname{Lip}(f)<\infty\). A function \(f\colon X\to Y\) is called biLipschitz if it is Lipschitz, injective and the inverse \(f^{-1}\colon f(X)\to X\) is Lipschitz.
In this case, if both \(f\) and \(f^{-1}\) are \(L\)-Lipschitz, we say that \(f\) is \(L\)-biLipschitz. The set of all _bounded_ Lipschitz functions from \(X\) to \(Y\) will be denoted by \(\operatorname{Lip}(X,Y)\). Given a fixed \(L\in[0,\infty)\), we denote \[\operatorname{Lip}_{L}(X,Y)=\{f\in\operatorname{Lip}(X,Y):\operatorname{Lip}(f)\leq L\}.\] Given a set \(\Gamma\) and a normed space \((Y,\|\cdot\|_{Y})\), we denote \(\|\varphi\|_{\infty}=\|\varphi\|_{\ell^{\infty}(\Gamma,Y)}=\sup_{\gamma\in\Gamma}\|\varphi(\gamma)\|_{Y}\) for any \(\varphi\colon\Gamma\to Y\). In the case \(Y\) is a normed space as above, we consider the sets \(\operatorname{Lip}(X,Y)\) and \(\operatorname{Lip}_{L}(X,Y)\) to be equipped with metrics induced by the supremum norm \(\|\cdot\|_{\ell^{\infty}(X,Y)}\). With these metrics, the space \(\operatorname{Lip}(X,Y)\) is a normed linear space, which need not be complete. However, if \(Y\) is complete, then the space \(\operatorname{Lip}_{L}(X,Y)\) is complete for any \(L\in[0,\infty)\). Occasionally, it will be useful to consider this space equipped with a stronger norm denoted by \[\operatorname{Lip}^{\operatorname{str}}(X,Y)=(\operatorname{Lip}(X,Y),\|\cdot\|_{\ell^{\infty}(X,Y)}+\operatorname{Lip}(\cdot)).\] The symbol \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,Y)\) denotes the space \(\operatorname{Lip}_{L}(X,Y)\) equipped with the metric inherited from \(\operatorname{Lip}^{\operatorname{str}}(X,Y)\). If \(Y\) is complete, then both \(\operatorname{Lip}^{\operatorname{str}}(X,Y)\) and \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,Y)\) are complete. We shall routinely use the following two classical Lipschitz extension results. Firstly, Kirszbraun's theorem (see e.g. [7, 2.10.43]) asserts that if \(H_{1}\) and \(H_{2}\) are Hilbert spaces, \(S\subset H_{1}\) and \(f\colon S\to H_{2}\) is \(L\)-Lipschitz, then \(f\) admits an extension \(f\colon H_{1}\to H_{2}\) which is \(L\)-Lipschitz. Secondly, we recall McShane's extension theorem [11, 7.2 Theorem] asserting that given any metric space \(X\), \(S\subset X\) and \(f\colon S\to\mathbb{R}\) an \(L\)-Lipschitz function, there is an \(L\)-Lipschitz extension \(\widetilde{f}\colon X\to\mathbb{R}\). In particular, if \(f\) is bounded on its domain and, say, \(C=\sup_{x\in S}|f(x)|\), then the Lipschitz extension can also be assumed to be bounded by \(C\). Indeed, the function \(f_{0}\colon X\to\mathbb{R}\) given by \[f_{0}=\widetilde{f}\chi_{\{|\widetilde{f}|\leq C\}}+C\chi_{\{\widetilde{f}>C\}}-C\chi_{\{\widetilde{f}<-C\}}\] is easily observed to be bounded by \(C\), an extension of \(f\) and \(L\)-Lipschitz. Also note that by extending coordinate-wise, if \(f\colon S\to\mathbb{R}^{m}\) is \(L\)-Lipschitz, then there is a \((\sqrt{m}L)\)-Lipschitz extension \(f\colon X\to\mathbb{R}^{m}\). If \(f\) is bounded on its domain with, say, \(\sup_{x\in S}|f(x)|_{2}=C\), then the extension can also be assumed to be bounded by \(\sqrt{m}C\), i.e. \(\sup_{x\in X}|f(x)|_{2}\leq\sqrt{m}C\). Here \(|\cdot|_{2}\) denotes the Euclidean norm on \(\mathbb{R}^{m}\). Moreover, if \(f\colon S\to\mathbb{R}^{m}_{\infty}\) is \(L\)-Lipschitz, then there is an \(L\)-Lipschitz extension \(f\colon X\to\mathbb{R}^{m}_{\infty}\). Here \(\mathbb{R}^{m}_{\infty}\) stands for \(\mathbb{R}^{m}\) equipped with the maximum norm.
The result remains true after replacing \(\mathbb{R}^{m}_{\infty}\) with the Banach space \[\ell^{\infty}(\Gamma)=(\{\varphi\colon\Gamma\to\mathbb{R}:\sup_{\gamma\in\Gamma}|\varphi(\gamma)|<\infty\},\|\cdot\|_{\ell^{\infty}(\Gamma,\mathbb{R})}),\] for any set \(\Gamma\). If \(X\) is a topological space and \(H\subset X\), we call \(H\) _residual_ if \(H\) contains an intersection of countably many dense open sets. Baire's theorem asserts that if \(X\) is a complete metric space, then all of its residual subsets are dense in \(X\). This means that the family of residual subsets of \(X\) is closed under countable intersections and supersets, and contains only dense sets. Therefore it is a suitable notion of "large" sets. We say that a typical element of \(X\) satisfies some property (P), if the set of its elements satisfying the property (P) is residual in \(X\). Recall that if \(Y\) is a Banach space and \(X\) is a metric space, the spaces \(\operatorname{Lip}_{L}(X,Y)\), \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,Y)\) and \(\operatorname{Lip}^{\operatorname{str}}(X,Y)\) are all complete, so residual subsets of these spaces are dense. It should also be stated that if \(H\subset\operatorname{Lip}_{L}(X,Y)\) is residual in \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,Y)\), then it must be dense in \(\operatorname{Lip}_{L}(X,Y)\). However, it need not be residual in \(\operatorname{Lip}_{L}(X,Y)\). In particular, the family of sets which are residual in \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,Y)\) forms a "reasonable" notion of large sets in \(\operatorname{Lip}_{L}(X,Y)\), as the family is again closed under countable intersections and supersets, and contains only dense sets. As a nice illustrative example, it can be easily checked that \(\{f\in\operatorname{Lip}_{L}(X,Y):\operatorname{Lip}(f)=L\}\) is residual in \(\operatorname{Lip}_{L}(X,Y)\). ### Norms on finite dimensional spaces Given \(n\in\mathbb{N}\), a norm on \(\mathbb{R}^{n}\) will generally be denoted by symbols such as \(|\cdot|_{a}\), \(|\cdot|_{b}\) etc. Note that the letters \(a\), \(b\) on their own do not have any meaning. Using an abuse of notation, we will also denote by \(|\cdot|_{2}\) the Euclidean norm and by \(|\cdot|_{\infty}\) the supremum norm. If the particular space (dimension) needs to be specified, we write \(|\cdot|_{\mathbb{R}^{n}_{a}}\) instead. The ball of a norm \(|\cdot|_{a}\) of radius \(r>0\) centred at \(x\) is denoted by \(B_{a}(x,r)\). We write \(B_{a}=B_{a}(0,1)\). In particular, the unit Euclidean ball will be denoted by \(B_{2}\) or \(B_{\mathbb{R}^{n}_{2}}\) if the dimension is relevant. The symbol \(\|\cdot\|\) is reserved for operator norms and norms on infinite dimensional spaces. If \(n,m\in\mathbb{N}\), we denote by \(\mathcal{L}(\mathbb{R}^{n},\mathbb{R}^{m})=\mathbb{R}^{n\times m}\) the space of linear operators from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{m}\). If \(|\cdot|_{a}\), \(|\cdot|_{b}\) are norms on \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\), we denote by \(\|\cdot\|_{a\to b}\) or \(\|\cdot\|_{\mathbb{R}^{n}_{a}\to\mathbb{R}^{m}_{b}}\) the operator norm induced by \(|\cdot|_{a}\) and \(|\cdot|_{b}\), i.e. \[\|A\|_{a\to b}=\sup_{x\in B_{a}}|A(x)|_{b}\quad\text{for }A\in\mathbb{R}^{n\times m}.\] The symbol \(B_{a\to b}\) or, if more clarity is needed, the symbol \(B_{\mathbb{R}^{n}_{a}\to\mathbb{R}^{m}_{b}}\) will be used to denote the unit ball of \(\|\cdot\|_{a\to b}\) in the space of linear operators \(\mathcal{L}(\mathbb{R}^{n}_{a},\mathbb{R}^{m}_{b})\).
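As a simple worked example of the notation just introduced (the computation is elementary), consider the identity map viewed from the maximum norm to the Euclidean norm. Then \[\|\operatorname{id}\|_{\mathbb{R}^{n}_{\infty}\to\mathbb{R}^{n}_{2}}=\sup_{x\in B_{\infty}}|x|_{2}=|(1,\ldots,1)|_{2}=\sqrt{n},\] so \(\frac{1}{\sqrt{n}}\operatorname{id}\in B_{\mathbb{R}^{n}_{\infty}\to\mathbb{R}^{n}_{2}}\) while \(\operatorname{id}\) itself lies outside the unit ball for \(n\geq 2\). This is the same \(\sqrt{m}\)-type loss that appears in the coordinate-wise McShane extension discussed above: the coordinate-wise extension is \(L\)-Lipschitz into \(\mathbb{R}^{m}_{\infty}\), and composing with the identity into \(\mathbb{R}^{m}_{2}\) costs a factor of \(\sqrt{m}\).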
Recall the following sufficient condition for a function to be Lipschitz in convex subsets of normed spaces. **Lemma 2.1**.: _Suppose \(K\subset\mathbb{R}^{n}\) is a convex set and assume \(|\cdot|_{a}\) and \(|\cdot|_{b}\) are norms on \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\) respectively. If \(f\in\operatorname{Lip}(K,\mathbb{R}^{m})\) and there is some \(L\in[0,\infty)\) such that for \(\mathcal{H}^{n}\)-a.e. \(x\in K\), the Frechet differential \(f^{\prime}(x)\in\mathcal{L}(\mathbb{R}^{n},\mathbb{R}^{m})\) exists and satisfies_ \[\|f^{\prime}(x)\|_{a\to b}\leq L,\] _then \(f\in\operatorname{Lip}_{L}(K_{a},\mathbb{R}^{m}_{b})\)._ Proof.: Suppose \(x,y\in K\) satisfy \(|x-y|_{a}=1\). Let \(\varphi\colon[0,1]\to\mathbb{R}^{m}\) be given by \(\varphi(t)=f(ty+(1-t)x)\). Then, by the fundamental theorem of calculus, \[f(y)-f(x)=\int_{0}^{1}\varphi^{\prime}(t)\;\mathrm{d}t,\] and so, since \(|y-x|_{a}=1\), \[|f(y)-f(x)|_{b}\leq\int_{0}^{1}|\varphi^{\prime}(t)|_{b}\;\mathrm{d}t=\int_{0}^{1}|f^{\prime}(ty+(1-t)x)(y-x)|_{b}\;\mathrm{d}t\leq\int_{0}^{1}\|f^{\prime}(ty+(1-t)x)\|_{a\to b}\;\mathrm{d}t\leq L.\] (For \(\mathcal{H}^{n}\)-a.e. pair \(x\), \(y\), the differential exists at \(\mathcal{H}^{1}\)-a.e. point of the segment by Fubini's theorem; the remaining pairs follow by continuity of \(f\).) The general case follows by a simple scaling argument. If \(|\cdot|_{a}\) is a norm on \(\mathbb{R}^{n}\) and \(W\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) is an invertible linear map, we denote by \(|\cdot|_{W(a)}\) the norm on \(\mathbb{R}^{n}\) given by \[|x|_{W(a)}=|W^{-1}x|_{a}.\] Observe that with this notation, one has \(W(B_{a})=B_{W(a)}\). Finally, for a fixed \(n\in\mathbb{N}\) we define an equivalence relation \(\sim\) on the set of all norms on \(\mathbb{R}^{n}\). This relation is given by \(|\cdot|_{a_{1}}\sim|\cdot|_{a_{2}}\) if and only if there is an invertible linear map \(A\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) such that \(A(B_{a_{1}})=B_{a_{2}}\). Classes of equivalence will be denoted in the standard way by \[[|\cdot|_{a}]=\{|\cdot|_{a^{\prime}}:|\cdot|_{a^{\prime}}\sim|\cdot|_{a}\}.\] Note that \(|\cdot|_{a_{1}}\sim|\cdot|_{a_{2}}\) if and only if \(\mathbb{R}^{n}_{a_{1}}\) is isometrically isomorphic to \(\mathbb{R}^{n}_{a_{2}}\). **Definition 2.2**.: Let \(X\) be a vector space and \(K\subset X\) a convex set. A point \(u\in K\) is called an _extremal_ point of \(K\) if for any \(v\in X\) \[u+v\in K\quad\text{and}\quad u-v\in K,\quad\text{implies}\quad v=0.\] Suppose \(X\) is equipped with a norm \(|\cdot|\). Then the unit ball \(B=\{x\in X:|x|\leq 1\}\) is a convex set. If \(X\) is a finite-dimensional Banach space, then \(B\) has an extremal point. If \(x\) is an extremal point of \(B\), then \(x\in\partial B\). Suppose now that \(X\) is a Banach space and \(x\in\partial B\). It follows from the Hahn-Banach theorem that there exists a _supporting hyperplane_ of \(B\) containing \(x\), i.e. by definition, there is \(x^{*}\in X^{*}\) such that \(x^{*}(x)=1\) and \(B\subset\{y\in X:x^{*}(y)\leq 1\}\). We say that \(x\) is _strongly extremal_ if there exists \(x^{*}\in X^{*}\) such that for \(y\in B\), \(x^{*}(y)=1\) if and only if \(y=x\), and \(B\subset\{y\in X:x^{*}(y)\leq 1\}\). If \(X\) is finite-dimensional, then there always exists a strongly extremal point \(x\in\partial B\). Indeed, as \(\partial B\) is compact, find \(x\in\partial B\) which maximizes the Euclidean distance from \(0\). Then consider the tangent (affine) hyperplane \(T\) to the Euclidean ball of the corresponding radius at \(x\). This is a supporting hyperplane of \(B\) and \(y\in B\cap T\) if and only if \(y=x\). Whence \(x\) is strongly extremal. It is easily verified that a strongly extremal point is also an extremal point.
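To illustrate these notions (and, in particular, the hypothesis of Theorem 1.2), consider the unit ball \(B_{\infty}\) of the maximum norm on \(\mathbb{R}^{2}\). The point \(u=(1,0)\) lies on the unit sphere but is not extremal, since for \(v=(0,1)\) one has \[u+v=(1,1)\in B_{\infty}\quad\text{and}\quad u-v=(1,-1)\in B_{\infty},\quad\text{yet}\quad v\neq 0.\] By contrast, the corner \(w=(1,1)\) is strongly extremal: the functional \(x^{*}(y)=\frac{1}{2}(y_{1}+y_{2})\) satisfies \(x^{*}\leq 1\) on \(B_{\infty}\), with equality if and only if \(y=w\).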
Suppose \(n\in\mathbb{N}\) and \(K\subset\mathbb{R}^{n}\) is a convex set. An affine hyperplane \(T\subset\mathbb{R}^{n}\) (i.e. an affine subspace of dimension \(\dim T=n-1\)), is called an _affine tangent_ to \(K\) at \(x\) if \(T\) is a supporting hyperplane of \(K\) and \(x\in T\) (this is just a change of name in the finite dimensional case). Suppose \(n,m\in\mathbb{N}\), \(n\leq m\). We define the functional \(\operatorname{vol}\colon\mathbb{R}^{n\times m}\to[0,\infty)\) by \[\operatorname{vol}A=\sqrt{\det A^{T}A}.\] Recall that, for \(n\)-dimensional Hausdorff measure (see Section 2.3 below) one has \[\mathcal{H}^{n}(A(E))=\operatorname{vol}(A)\mathcal{H}^{n}(E)\quad\text{for any $\mathcal{H}^{n}$-measurable set $E\subset\mathbb{R}^{n}$}. \tag{2.1}\] We extend this definition to norms. For a norm \(|\cdot|_{a}\) on \(\mathbb{R}^{n}\), we let \[\operatorname{vol}|\cdot|_{a}=\frac{2^{n}}{\mathcal{H}^{n}(B_{a})}. \tag{2.2}\] Recalling the fact that one always has \(\mathcal{H}^{n}_{a}(B_{a})=2^{n}\) together with Haar's theorem, it follows that \[\mathcal{H}^{n}_{a}(E)=\operatorname{vol}(|\cdot|_{a})\mathcal{H}^{n}(E)\quad\text{for any $\mathcal{H}^{n}$-measurable set $E\subset\mathbb{R}^{n}$}.\] ### Rectifiable metric spaces Suppose \(X\) is a metric space. For each \(s\in(0,\infty)\), \(\delta>0\) and \(E\subset X\) we define the quantity \[\mathcal{H}^{s}_{\delta}(E)=\inf\{\sum_{i\in\mathbb{N}}(\operatorname{diam}E_{i})^{s}:E\subset\bigcup_{i\in\mathbb{N}}E_{i},\;\operatorname{diam}E_{i}\leq\delta\}.\] The \(s\)-dimensional Hausdorff measure of \(E\) is the quantity \[\mathcal{H}^{s}(E)=\sup_{\delta>0}\mathcal{H}^{s}_{\delta}(E).\] This constitutes an outer measure, which can be restricted to a Borel measure. If the underlying metric space needs to be specified, we use notation such as \(\mathcal{H}^{s}_{X}\) and similar. Note that if \(X\) is complete and \(E\subset X\) is \(\mathcal{H}^{n}\)-measurable with \(\sigma\)-finite \(\mathcal{H}^{n}\)-measure, then \(\mathcal{H}^{n}_{|E}\) is inner regular by compact sets. Indeed, it is not necessary to assume separability of \(X\) as \(\overline{E}\) is separable and complete. If \(X=\mathbb{R}^{n}\) and \(|\cdot|_{a}\) is a norm on \(\mathbb{R}^{n}\), then for \(k\leq n\), we shall use the conventions \[\mathcal{H}^{k}=\mathcal{H}^{k}_{|\cdot|_{2}}\quad\text{and}\quad\mathcal{H}^{k}_{a}=\mathcal{H}^{k}_{|\cdot|_{a}}.\] Note that in the above situation one always has [10, Lemma 6 (i)] \[\mathcal{H}^{n}_{a}(B_{a})=2^{n}.\] **Definition 2.3**.: Let \(X\) be a complete metric space. An \(\mathcal{H}^{n}\)-measurable set \(E\subset X\) is \(n\)_-rectifiable_ if there exist a countable number of sets \(F_{i}\subset\mathbb{R}^{n}\) and Lipschitz maps \(f_{i}\colon F_{i}\to X\) such that \[\mathcal{H}^{n}(E\setminus\bigcup_{i}f_{i}(F_{i}))=0.\] If one has \(S\subset X\) such that \(\mathcal{H}^{n}(S\cap E)=0\) for any \(n\)-rectifiable set \(E\subset X\), then \(S\) is called \(n\)_-purely unrectifiable_. Given a metric space \(X\) and its subset \(E\), we say that \(x\in X\) is an \(\mathcal{H}^{n}\)-density point of \(E\) if \[\lim_{r\to 0}\frac{1}{(2r)^{n}}\mathcal{H}^{n}(E\cap B(x,r))=1.\] If \(E\) is \(n\)-rectifiable, then \(\mathcal{H}^{n}\)-a.e. point of \(E\) is a density point of \(E\) [10, Theorem 9]. Suppose \(F\subset\mathbb{R}^{n}\) and \(f\colon F\to\mathbb{R}^{m}\) for some \(m,n\in\mathbb{N}\).
Given \(u\in F\) an \(\mathcal{H}^{n}\)-density point of \(F\), we say that a linear map \(f^{\prime}(u)\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) is the _(approximate) Frechet differential of \(f\) at \(u\)_ if \[\lim_{v\in F,v\to u}\frac{f(u)-f(v)-f^{\prime}(u)(u-v)}{|u-v|_{2}}=0.\] It is a consequence of Rademacher's differentiation theorem [7, 3.1.6] combined with Kirszbraun's extension theorem [7, 2.10.43] that at \(\mathcal{H}^{n}\)-a.e. \(u\in F\), the Frechet differential of \(f\) exists uniquely. Suppose \(|\cdot|_{a}\) is a norm on \(\mathbb{R}^{n}\) and \(|\cdot|_{b}\) is a norm on \(\mathbb{R}^{m}\). It follows from the definition that if \(f\colon F_{a}\to\mathbb{R}^{m}_{b}\) is \(L\)-Lipschitz for some \(L\in[0,\infty)\), then \[\|f^{\prime}(u)\|_{a\to b}\leq L\] for every \(u\in F\) for which \(f^{\prime}(u)\) exists. Now let \(X\) be a metric space and \(f\colon F\to X\). A semi-norm \(s(\cdot)\) on \(\mathbb{R}^{n}\) is called a _metric differential_ of \(f\) at \(u\in F\) if one has \[\lim_{v\in F,v\to u}\frac{d(f(v),f(u))-s(v-u)}{|v-u|_{2}}=0.\] It is a classical result of Kirchheim [10, Theorem 2] that if \(f\) is Lipschitz, then for \(\mathcal{H}^{n}\)-a.e. \(u\in F\) the metric differential \(s\) of \(f\) at \(u\) exists uniquely. In that case, we shall denote \(|f^{\prime}|(u)=s\). Note that if \(X\) is a Euclidean space of dimension \(m\), we denote by \(f^{\prime}(u)\) the classical (approximate) Frechet differential of \(f\) at \(u\in F\). In this case, according to the conventions above, for any \(w\in\mathbb{R}^{n}\) one has \(|f^{\prime}(u)(w)|_{\mathbb{R}^{m}_{2}}=|f^{\prime}|(u)(w)\), provided the left hand side is defined. **Definition 2.4** ([10], Definition 10).: Let \(n\in\mathbb{N}\), let \(E\) be a metric space and let \(x\in E\). A norm \(|\cdot|_{a}\) on \(\mathbb{R}^{n}\) is called an _approximate tangent norm to \(E\) at \(x\)_, if there is a set \(\widetilde{E}\subset E\) such that \(x\) is an \(\mathcal{H}^{n}\)-density point of \(\widetilde{E}\) and to each \(r>0\), there is a set \(F_{r}\subset\mathbb{R}^{n}\) and a map \(I_{r}\colon(F_{r},|\cdot|_{a})\to\widetilde{E}\cap B(x,r)\), which is a biLipschitz bijection satisfying \[\lim_{r\to 0}\max\{\operatorname{Lip}(I_{r}),\operatorname{Lip}(I_{r}^{-1})\}=1.\] From [10, Theorem 9], it immediately follows that if \(X\) is a complete metric space and \(E\) is an \(n\)-rectifiable subset with \(\mathcal{H}^{n}(E)<\infty\), then \(E\) admits an approximate tangent norm at \(\mathcal{H}^{n}\)-a.e. point of \(E\). Moreover, the approximate tangent norm is unique, up to linear isometry, at \(\mathcal{H}^{n}\)-a.e. point of \(E\). Finally, it also follows from the proof of [10, Theorem 9] that if \(F\subset\mathbb{R}^{n}\), \(f\colon F\to E\) and \(u\in F\) is such that \(|f^{\prime}|(u)\) is a norm, then \(|f^{\prime}|(u)\) is a tangent norm to \(E\) at \(f(u)\) and it is unique up to linear isometry. We write \[T(E,x)=[|\cdot|_{a}],\] provided \(|\cdot|_{a}\) is an approximate tangent norm to \(E\) at \(x\in E\), which is unique up to a linear isometry. Note that \(T(E,x)\) is defined for \(\mathcal{H}^{n}\)-a.e. \(x\in E\). **Remark 2.5**.: By [3, Proposition 5.8], tangent norms agree with the tangent spaces of Ambrosio and Kirchheim. A notion of a tangent metric measure space was recently introduced in [5] that is applicable to our setting. One easily verifies that, for \(n\)-rectifiable \(E\subset X\) and \(\mathcal{H}^{n}\)-a.e.
\(x\in E\), \(|\cdot|_{a}\) is an approximate tangent norm to \(E\) at \(x\) if and only if \((\mathbb{R}^{n},|\cdot|_{a},0)\) is a tangent metric measure space of \((E,x)\) in the sense of [5]. What follows is a refined version of [10, Lemma 4]. **Lemma 2.6**.: _Let \(X\) be a metric space, \(F\subset\mathbb{R}^{n}\) be \(\mathcal{H}^{n}\)-measurable with \(\mathcal{H}^{n}(F)<\infty\) and \(f\colon F\to X\) Lipschitz. For each \(\varepsilon>0\) there is a compact set \(K\subset f(F)\) with_ \[\mathcal{H}^{n}(f(F)\setminus K)<\varepsilon,\] _possessing the following property. For each \(\theta>0\) there is a finite collection of sets \(G_{i}\subset K\), \(i=1,\ldots,i_{0}\) such that the \(G_{i}\) are pairwise disjoint open subsets of \(K\) and_ * \(K=\bigcup_{i=1}^{i_{0}}G_{i}\)_,_ * _to each_ \(i\)_, there is some_ \(x_{i}\in K\)_,_ \(F_{i}\subset\mathbb{R}^{n}\) _and_ \(|\cdot|\in T(K,x_{i})\) _such that_ \(G_{i}\) _is_ \((1+\theta)\)_-biLipschitz to_ \((F_{i},|\cdot|)\)_._ Proof.: First fix \(\theta>0\). For any \(\varepsilon>0\), the existence of a \(K\subset f(F)\) satisfying \(\mathcal{H}^{n}(f(F)\setminus K)<\varepsilon\), (i) for compact \(G_{i}\) and (ii) for arbitrary norms \(|\cdot|\) on \(\mathbb{R}^{n}\) follows from [10, Lemma 4] together with the inner regularity of \(\mathcal{H}^{n}\) on \(\mathbb{R}^{n}\). It is evident from the proof of [10, Lemma 4] that one may in fact take each \(|\cdot|\in T(K,x_{i})\). To obtain relatively open \(G_{i}\), for each \(j\in\mathbb{N}\) apply the established statement for \(\varepsilon_{j}=2^{-j}\varepsilon\) and \(\theta_{j}=1/j\) to obtain \(i_{j}\) many pairwise disjoint compact sets \(G_{i}^{j}\). Setting \[K=\bigcap_{j\in\mathbb{N}}\bigcup_{i=1}^{i_{j}}G_{i}^{j}\] completes the proof, since each \(G_{i}^{j}\cap K\) is relatively open. Suppose \(X\) is a complete metric space and \(E\subset X\) is \(n\)-rectifiable. Let \(x\in E\) and suppose there are sets \(\widetilde{E}\subset E\), \(\widetilde{F}\subset\mathbb{R}^{n}\) and a biLipschitz map \(I\colon\widetilde{F}\to\widetilde{E}\) such that \(x\) is a density point of \(\widetilde{E}\). Suppose the metric differential \(|I^{\prime}|(I^{-1}(x))\) exists and is a norm. Suppose \(f\colon X\to\mathbb{R}^{m}\) is a Lipschitz map such that \((f\circ I)^{\prime}(I^{-1}(x))\) exists. Then we define the _Jacobian of \(f\) at \(x\) with respect to \(E\)_ as \[J_{E}f(x)=\frac{\operatorname{vol}\big((f\circ I)^{\prime}(I^{-1}(x))\big)}{\operatorname{vol}\big(|I^{\prime}|(I^{-1}(x))\big)}. \tag{2.3}\] This notion is independent of the particular choice of \(I\) and \(\widetilde{E}\) and is easily shown to agree with the Jacobian of Ambrosio and Kirchheim, defined via isometric embeddings into separable dual spaces, \(\mathcal{H}^{n}\)-a.e. in \(E\) (see [3, (8.4)]). In particular we obtain the following metric version of the area formula. **Theorem 2.7** ([3], Theorem 8.2).: _For any metric space \(X\), \(n\)-rectifiable \(E\subset X\) and Lipschitz \(f\colon E\to\mathbb{R}^{m}\),_ \[\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}=\int_{f(E)}\#f^{-1}(u)\;\mathrm{d}\mathcal{H}^{n}(u). \tag{2.4}\] We remark also that this notion agrees with the classical notion of the Jacobian of a function \(\mathcal{H}^{n}\)-a.e. In particular, if \(E\subset\mathbb{R}^{n}\), one has \(J_{E}f(x)=\operatorname{vol}f^{\prime}(x)\) if the right-hand side is well defined. Notice that the "charts" \(I\) above can be obtained from the definition of a tangent norm to \(E\) at a given point \(x\), provided a tangent exists.
If the tangent is also unique up to linear isometry, \(J_{E}f(x)\) depends only on \(f\) and \(T(E,x)\). With this definition of the metric Jacobian, we are able to easily obtain the following decomposition lemma. **Lemma 2.8**.: _Suppose \(X\) is a complete metric space and \(E\) an \(n\)-rectifiable subset. If \(m\geq n\) and \(f\colon E\to\mathbb{R}^{m}\) is Lipschitz, then there exists a countable number of compact sets_ \[E_{i}\subset\{x\in E:J_{E}f(x)>0\},\] _such that_ \[\mathcal{H}^{n}(\{x\in E:J_{E}f(x)>0\}\setminus\bigcup_{i}E_{i})=0\] _and \(f\) is injective on each \(E_{i}\). In particular, the formula_ \[\sum_{i}\mathcal{H}^{n}(f(E_{i}))=\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n} \tag{2.5}\] _holds._ Proof.: The first part of the assertion follows from the combination of Lemma 2.6, the definition of \(J_{E}f\) and the Euclidean result [7, 3.2.2. Lemma]. The "in particular" part then follows immediately from the area formula (2.4). We shall often work with the introduced notions on subsets of \(E\), therefore we require the following statement. **Lemma 2.9**.: _Let \(X\) be a complete metric space and \(E\) an \(n\)-rectifiable subset with \(\mathcal{H}^{n}(E)<\infty\). Suppose \(\widetilde{E}\subset E\) is any \(\mathcal{H}^{n}\)-measurable set. Then_ \[T(E,x)=T(\widetilde{E},x)\quad\text{and}\quad J_{E}f(x)=J_{\widetilde{E}}f(x)\quad\text{for $\mathcal{H}^{n}$-a.e. $x\in\widetilde{E}$ and every $f\in\operatorname{Lip}(E,\mathbb{R}^{m})$}.\] Proof.: By [5, Lemma 2.3], for \(\mathcal{H}^{n}\)-a.e. \(x\in\widetilde{E}\), we have the density estimate \[\limsup_{r\to 0_{+}}\frac{\mathcal{H}^{n}(B(x,r)\cap(E\setminus\widetilde{E}))}{(2r)^{n}}=0.\] From this, the statement about tangents follows easily. The statement about the Jacobians then follows from the statement about tangents (together with uniqueness of tangents \(\mathcal{H}^{n}\)-a.e.). Finally we turn our attention to strongly \(n\)-rectifiable metric spaces. **Definition 2.10**.: Let \(X\) be a complete metric space. An \(\mathcal{H}^{n}\)-measurable set \(E\subset X\) is _strongly \(n\)-rectifiable_ if, for any \(\varepsilon>0\), there exist a countable number of sets \(F_{i}\subset\mathbb{R}^{n}\) and \((1+\varepsilon)\)-biLipschitz maps \(f_{i}\colon(F_{i},|\cdot|_{2})\to E\) such that \[\mathcal{H}^{n}(E\setminus\bigcup_{i}f_{i}(F_{i}))=0. \tag{2.6}\] More generally, given a norm \(|\cdot|_{a}\) on \(\mathbb{R}^{n}\), an \(\mathcal{H}^{n}\)-measurable set \(E\subset X\) is _strongly \(|\cdot|_{a}\)-rectifiable_ if, for any \(\varepsilon>0\), there exist a countable number of sets \(F_{i}\subset\mathbb{R}^{n}\) and \((1+\varepsilon)\)-biLipschitz maps \(f_{i}\colon(F_{i},|\cdot|_{a})\to X\) such that (2.6) holds. **Lemma 2.11**.: _Let \(X\) be a complete metric space, let \(E\subset X\) be \(n\)-rectifiable and \(|\cdot|_{a}\) a norm on \(\mathbb{R}^{n}\). Then \(E\) is strongly \(|\cdot|_{a}\)-rectifiable if and only if, for \(\mathcal{H}^{n}\)-a.e. \(x\in E\), \(T(E,x)=[|\cdot|_{a}]\)._ Proof.: First suppose that \(E\) is strongly \(|\cdot|_{a}\)-rectifiable. Fix \(\varepsilon>0\) and, for \(F\subset\mathbb{R}^{n}\), let \(f\colon(F,|\cdot|_{a})\to E\) be \((1+\varepsilon)\)-biLipschitz. Then for \(\mathcal{H}^{n}\)-a.e. \(u\in F\), \(T(E,f(u))=[|f^{\prime}|(u)]\). However, since \(f\) is \((1+\varepsilon)\)-biLipschitz, \[\frac{|v|_{a}}{1+\varepsilon}\leq|f^{\prime}|(u)(v)\leq(1+\varepsilon)|v|_{a}\] for all \(v\in\mathbb{R}^{n}\). Since, for any \(\varepsilon>0\) and \(\mathcal{H}^{n}\)-a.e.
\(x\in E\), we may find \(f\), \(F\) and \(u\) as above with \(x=f(u)\), we have \(|\cdot|_{a}\in T(E,x)\) and so, by uniqueness, \(T(E,x)=[|\cdot|_{a}]\) for \(\mathcal{H}^{n}\)-a.e. \(x\in E\). Conversely, if \(E\) is \(n\)-rectifiable and \(T(E,x)=[|\cdot|_{a}]\) for \(\mathcal{H}^{n}\)-a.e. \(x\in E\), then Lemma 2.6 implies that \(E\) is strongly \(|\cdot|_{a}\)-rectifiable. **Remark 2.12**.: A much stronger result is obtained from [5]. Indeed, suppose that \(E\subset X\) satisfies \(\mathcal{H}^{n}(E)<\infty\) and has positive lower \(n\)-dimensional Hausdorff density at \(\mathcal{H}^{n}\)-a.e. point. Then \(E\) is strongly \(|\cdot|_{a}\)-rectifiable whenever, at \(\mathcal{H}^{n}\)-a.e. \(x\in E\), \(E\) has a unique "weak Gromov-Hausdorff tangent" that equals \((\mathbb{R}^{n},|\cdot|_{a})\). In fact, it suffices that, for \(\mathcal{H}^{n}\)-a.e. \(x\in E\), all such tangents are \(K_{x}\)-biLipschitz images of \(\mathbb{R}^{n}\) and that at least one tangent at \(x\) equals \((\mathbb{R}^{n},|\cdot|_{a})\). Conversely, if \(E\) is strongly \(|\cdot|_{a}\)-rectifiable, then the weak Gromov-Hausdorff tangents uniquely equal \((\mathbb{R}^{n},|\cdot|_{a})\) \(\mathcal{H}^{n}\)-a.e., since they agree with \(T(E,\cdot)\). ## 3 Lower semi-continuity of some area related functionals The goal of this section is to study the openness part of the residuality result, i.e. to study lower semi-continuity of the "area" functional given by \[f\mapsto\mathcal{H}^{n}(f(E))\] and the "area formula" functional given by \[f\mapsto\int_{E}J_{E}f\ \mathrm{d}\mathcal{H}^{n}\] in the relevant settings. This follows, to some degree, the approach from [4]. Mainly we use a modified version of [4, Lemma 7.3]. We structure the section into two subsections. In the first, we study the local behaviour of both of the aforementioned functionals; in the second, we study global behaviour of the area functional and use lower semi-continuity thereof to obtain lower semi-continuity of the area formula functional. For the entirety of this section, we let \(m,n\in\mathbb{N}\) with \(n\leq m\) and denote by \(B(x,r)\) the Euclidean ball in \(\mathbb{R}^{n}\) centred at \(x\) of radius \(r\). We equip the spaces \(\mathbb{R}^{n}\), \(\mathbb{R}^{m}\) with the Euclidean norms. ### Local behaviour of area and area formula Firstly, we shall need a result for continuous functions based on Brouwer's fixed point theorem. The following lemma is a modified version of [4, Lemma 7.3], and its proof follows from that of [4, Lemma 7.3]. **Lemma 3.1**.: _Let \(\varepsilon>0\) and \(F\colon B(0,\varepsilon)\to B(0,\varepsilon)\) be a continuous function. Let \(\eta\in(\frac{1}{2^{n}},1)\) and suppose_ \[|F(y)-y|<\varepsilon(1-\sqrt[n]{\eta})\quad\text{for all }y\in\partial B(0,\varepsilon).\] _Then \(F(B(0,\varepsilon))\supset B(0,\sqrt[n]{\eta}\varepsilon)\)._ **Theorem 3.2** (Local lower semi-continuity of area).: _Let \(\Omega\subset\mathbb{R}^{n}\) be an open set, \(f\colon\Omega\to\mathbb{R}^{m}\) a continuous function, \(x\in\Omega\) and assume \(f^{\prime}(x)\) exists. Let \(\eta\in(0,1)\). Then there are \(\delta>0\) and \(r_{0}>0\) such that for all \(r\leq r_{0}\) and any continuous function \(g\colon B(x,r)\to\mathbb{R}^{m}\) with_ \[\|g-f\|_{\infty}\leq\delta r,\] _it holds that_ \[\mathcal{H}^{n}(g(B(x,r)))\geq\eta\operatorname{vol}f^{\prime}(x)\mathcal{H}^{n}(B(x,r)). \tag{3.1}\] Proof.: As Hausdorff measures are invariant under translations, we may assume \(x=0\) and \(f(0)=0\).
Let us denote \(v=\operatorname{vol}f^{\prime}(x)\). If \(v=0\), the statement is trivial, so we can assume \(v>0\) which is equivalent to stating that \(A=f^{\prime}(x)\) is of full rank. Thus, the map \(A\colon\mathbb{R}^{n}\to Y=A(\mathbb{R}^{n})\) is a linear invertible map. As \(f(0)=0\), we have, by the definition of a Frechet derivative, \[\lim_{y\to 0}\frac{|f(y)-A(y)|}{|y|}=0. \tag{3.2}\] Let us denote by \(P\colon\mathbb{R}^{m}\to Y\) the orthogonal projection onto \(Y\). Observe the following properties of \(P\): (P1) \(P\circ A=A\), (P2) \(P\) is \(1\)-Lipschitz, (P3) for any \(u\in\mathbb{R}^{m}\) and \(w\in Y\), it holds that \(|Pu-w|\leq|u-w|\). Let \(\|A^{-1}\|\) be the operator norm of \(A^{-1}\colon Y\to\mathbb{R}^{n}\). By virtue of (3.2), there is an \(r_{0}\) such that for all \(r\leq r_{0}\) it holds that \[|f(y)-A(y)|\leq\frac{1}{\|A^{-1}\|}\frac{1}{2}(1-\sqrt[n]{\eta})r\quad\text{for all }y\in B(0,r).\] This, by (P3) and by applying \(A^{-1}\) to the left-hand side, yields \[|A^{-1}Pf(y)-y|\leq\frac{1}{2}(1-\sqrt[n]{\eta})r\quad\text{for all }y\in B(0,r). \tag{3.3}\] Let \(\delta=\frac{1}{\|A^{-1}\|}\frac{1}{2}(1-\sqrt[n]{\eta})\). By the property (P2) above, if a function \(g\colon B(0,r)\to\mathbb{R}^{m}\) satisfies \[\|g-f\|_{\infty}\leq\delta r,\] then \[|Pg(y)-Pf(y)|\leq\delta r\quad\text{for all }y\in B(0,r).\] Therefore, for such \(y\), we have \[|A^{-1}Pg(y)-A^{-1}Pf(y)|\leq\|A^{-1}\|\delta r=\frac{1}{2}(1-\sqrt[n]{\eta})r,\] which in combination with (3.3) gives \[|A^{-1}Pg(y)-y|\leq(1-\sqrt[n]{\eta})r\quad\text{for all }y\in B(0,r). \tag{3.4}\] To use Lemma 3.1 we require \(A^{-1}Pg(B(0,r))\) to be a subset of \(B(0,r)\). To this end, let \[\sigma\colon B(0,(1+\sqrt[n]{\eta})r)\to B(0,r)\] be the radial projection onto \(B(0,r)\). More precisely, for an element \(y\in B(0,(1+\sqrt[n]{\eta})r)\), we let \(\sigma(y)\) be the unique \(u\in B(0,r)\) minimizing the distance \(|u-y|\). We observe that \(\sigma\) has properties analogous to those of \(P\), namely (S1) \(\sigma\) is the identity on \(B(0,r)\), (S2) \(\sigma\) is \(1\)-Lipschitz, (S3) for any \(y\in B(0,r)\) and \(z\in B(0,(1+\sqrt[n]{\eta})r)\), it holds that \(|\sigma(z)-y|\leq|z-y|\). From the estimate (3.4), we infer that \(A^{-1}Pg(B(0,r))\subset B(0,(1+\sqrt[n]{\eta})r)\) and so we may define \(G=\sigma\circ A^{-1}\circ P\circ g\). By the property (S3) and the inequality (3.4) we obtain \[|G(y)-y|\leq(1-\sqrt[n]{\eta})r\quad\text{for all }y\in B(0,r).\] Finally, if \(g\) is continuous then so is \(G\) and hence Lemma 3.1 gives \[G(B(0,r))\supset B(0,\sqrt[n]{\eta}r).\] Applying \(\mathcal{H}^{n}\) to both sides, we obtain \[\mathcal{H}^{n}(\sigma A^{-1}Pg(B(0,r)))\geq\eta\mathcal{H}^{n}(B(0,r)).\] From (S2), the last equation implies \[\mathcal{H}^{n}(A^{-1}Pg(B(0,r)))\geq\eta\mathcal{H}^{n}(B(0,r)).\] From (2.1) and from the fact that \(\operatorname{vol}A^{-1}=\frac{1}{v}\), the last equation implies \[\mathcal{H}^{n}(Pg(B(0,r)))\geq\eta v\mathcal{H}^{n}(B(0,r)).\] Finally, by (P2), we obtain (3.1). **Corollary 3.3**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be an open set and let \(f\colon\overline{\Omega}\to\mathbb{R}^{m}\) be a Lipschitz map. Assume \(x\in\Omega\) is a density point of \(\operatorname{vol}f^{\prime}\) with \(\operatorname{vol}f^{\prime}(x)>0\) and let \(\eta\in(0,1)\)._
_Then there is \(r_{0}>0\) and \(\delta>0\) such that for every \(r\leq r_{0}\), if \(g\in C(B(x,r),\mathbb{R}^{m})\) satisfies_ \[\|g-f\|_{\infty}\leq\delta r,\] _then_ \[\mathcal{H}^{n}(g(B(x,r)))\geq\eta\mathcal{H}^{n}(f(B(x,r))).\] Proof.: From Theorem 3.2 we can find \(r_{0}>0\) and \(\delta>0\) such that for \(g\in C(B(x,r),\mathbb{R}^{m})\) with \(\|g-f\|_{\infty}\leq\delta r\) it holds that \[\mathcal{H}^{n}(g(B(x,r)))\geq\sqrt{\eta}\operatorname{vol}f^{\prime}(x)\mathcal{H}^{n}(B(x,r)). \tag{3.5}\] From the fact that \(x\) is a density point of \(\operatorname{vol}f^{\prime}\) it follows that we may possibly decrease \(r_{0}>0\) so that for \(r<r_{0}\) we also have \[\operatorname{vol}f^{\prime}(x)\mathcal{H}^{n}(B(x,r))\geq\sqrt{\eta}\int_{B(x,r)}\operatorname{vol}f^{\prime}\;\mathrm{d}\mathcal{H}^{n}. \tag{3.6}\] By the area formula, we obtain \[\int_{B(x,r)}\operatorname{vol}f^{\prime}\;\mathrm{d}\mathcal{H}^{n}\geq\mathcal{H}^{n}(f(B(x,r))). \tag{3.7}\] Combining (3.5), (3.6) and (3.7) yields the result. ### Global lower semi-continuity of area in rectifiable metric spaces We shall continue our previous conventions and assume \(n\leq m\) are natural numbers and \(B(x,r)\) denotes the Euclidean ball in \(\mathbb{R}^{n}\). We consider the spaces \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\) to be equipped with the Euclidean norms, unless stated otherwise. **Lemma 3.4**.: _Let \(E\subset\mathbb{R}^{n}\) be a compact set and let \(f\colon E\to\mathbb{R}^{m}\) be a Lipschitz injection. Let \(L\in[0,\infty)\). Then, for every \(\eta\in(0,1)\) there exists \(\delta>0\) such that if \(g\in\operatorname{Lip}_{L}(E,\mathbb{R}^{m})\) satisfies \(\|g-f\|_{\infty}\leq\delta\), then_ \[\mathcal{H}^{n}(g(E))\geq\eta\mathcal{H}^{n}(f(E)).\] Proof.: If \(\mathcal{H}^{n}(f(E))=0\), the statement obviously holds, so we may assume \(\mathcal{H}^{n}(f(E))>0\). Firstly, let \(L_{0}\) denote the Lipschitz constant of \(f\). Let \(C_{0}=\sqrt{m}(L+2L_{0})\). Find \(\varepsilon>0\) such that \[\sqrt{\eta}\mathcal{H}^{n}(f(E))-\sqrt{\eta}\varepsilon L_{0}^{n}-\varepsilon\geq\eta\mathcal{H}^{n}(f(E)). \tag{3.8}\] Using the McShane extension theorem, we find an extension of \(f\), denoted again by \(f\), such that \(f\) is \(\sqrt{m}L_{0}\)-Lipschitz. Let \(S\subset E\) be the set of density points of \(E\) and \(\operatorname{vol}f^{\prime}\). Then by the Lebesgue differentiation theorem, we have \[\mathcal{H}^{n}(E\setminus S)=0. \tag{3.9}\] Let \(x\in S\). Then by Corollary 3.3, there is some \(1\geq r_{x}>0\) and \(\delta_{x}>0\) such that for all \(r\leq r_{x}\) and \(g\in C(B(x,r),\mathbb{R}^{m})\) with \(\|g-f\|_{\ell^{\infty}(B(x,r))}\leq\delta_{x}r\), it holds that \[\mathcal{H}^{n}(g(B(x,r)))\geq\sqrt{\eta}\mathcal{H}^{n}(f(B(x,r))). \tag{3.10}\] Observe that since \(B(S,1)\) is bounded, there exists some \(\Delta>0\) such that for any countable sequence of disjoint balls \(B_{i}\) with radii \(r_{i}\) satisfying \(B_{i}\subset B(S,1)\) for each \(i\), we have \[\sum_{i}r_{i}^{n}\leq\Delta. \tag{3.11}\] As \(x\) is a density point of \(E\), we have \[\lim_{r\to 0_{+}}\frac{\mathcal{H}^{n}(B(x,r)\setminus E)}{r^{n}}=0,\] and so we may possibly reduce our \(r_{x}>0\) so that for \(r\leq r_{x}\) we also get \[\mathcal{H}^{n}(g(B(x,r)\setminus E))\leq\frac{\varepsilon}{\Delta}r^{n}\quad\text{for any }g\in\operatorname{Lip}_{C_{0}}(\mathbb{R}^{n},\mathbb{R}^{m}). \tag{3.12}\] Now the family of balls \(\mathfrak{B}=\{B(x,r):x\in S,r\leq r_{x}\}\) forms a Vitali cover of \(S\).
Hence, using the Vitali covering theorem and recalling (3.9), there is a countable disjoint family of balls \(B_{i}\) in \(\mathfrak{B}\) such that \[\mathcal{H}^{n}(E\setminus\bigcup_{i}B_{i})=0.\] Using continuity of measure, there is some \(i_{0}\in\mathbb{N}\) such that even \[\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}B_{i})\leq\varepsilon. \tag{3.13}\] On denoting \(B_{i}=B(x_{i},r_{i})\) and letting \(\delta_{i}=\delta_{x_{i}}\), \(i\in\{1,\ldots,i_{0}\}\), and using (3.10) and (3.12), we obtain for each \(i\in\{1,\ldots,i_{0}\}\) \[\mathcal{H}^{n}(g(B_{i}))\geq\sqrt{\eta}\mathcal{H}^{n}(f(B_{i}))\quad\text{for }g\in C(B_{i},\mathbb{R}^{m})\text{ with }\|g-f\|_{\ell^{\infty}(B_{i})}\leq\delta_{i}r_{i} \tag{3.14}\] and \[\mathcal{H}^{n}(g(B_{i}\setminus E))\leq\frac{\varepsilon}{\Delta}r_{i}^{n}\quad\text{for }g\in\operatorname{Lip}_{C_{0}}(\mathbb{R}^{n},\mathbb{R}^{m}). \tag{3.15}\] By (3.15), we now have \[\begin{split}\mathcal{H}^{n}(g(B_{i}\cap E))&=\mathcal{H}^{n}(g(B_{i}\setminus(B_{i}\setminus E)))\geq\mathcal{H}^{n}(g(B_{i})\setminus g(B_{i}\setminus E))\\ &\geq\mathcal{H}^{n}(g(B_{i}))-\mathcal{H}^{n}(g(B_{i}\setminus E))\geq\mathcal{H}^{n}(g(B_{i}))-\frac{\varepsilon}{\Delta}r_{i}^{n},\end{split} \tag{3.16}\] for \(i\in\{1,\ldots,i_{0}\}\), provided \(g\in\operatorname{Lip}_{C_{0}}(\mathbb{R}^{n},\mathbb{R}^{m})\). Since the \(f(B_{i}\cap E)\) are pairwise disjoint compact sets (as \(f\) is a continuous injection on the compact set \(E\)), there is some \(\rho>0\) such that \[\operatorname{dist}(f(B_{i}\cap E),f(B_{j}\cap E))\geq\rho\quad\text{for all }i,j\in\{1,\ldots,i_{0}\}\text{ with }i\neq j.\] Observe that if \(g\in C(\mathbb{R}^{n},\mathbb{R}^{m})\) satisfies \(\|g-f\|_{\ell^{\infty}(\mathbb{R}^{n})}\leq\frac{\rho}{4}\), then \[\operatorname{dist}(g(B_{i}\cap E),g(B_{j}\cap E))\geq\frac{\rho}{2}\quad\text{for all }i,j\in\{1,\ldots,i_{0}\}\text{ with }i\neq j.\] In particular, the sets \(g(B_{i}\cap E)\) are pairwise disjoint. 
Let \[\delta=\min\{\tfrac{\rho}{4},\min_{i=1,\ldots,i_{0}}\delta_{i}r_{i}\}.\] If \(g\in\operatorname{Lip}_{C_{0}}(\mathbb{R}^{n},\mathbb{R}^{m})\) satisfies \(\|g-f\|_{\ell^{\infty}(\mathbb{R}^{n})}\leq\delta\), then we have \[\mathcal{H}^{n}(g(E)) \geq\mathcal{H}^{n}(g(E\cap\bigcup_{i=1}^{i_{0}}B_{i}))=\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(g(B_{i}\cap E))\] \[\stackrel{{(3.16)}}{{\geq}}\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(g(B_{i}))-\frac{\varepsilon}{\Delta}\sum_{i=1}^{i_{0}}r_{i}^{n}\stackrel{{(3.14),(3.11)}}{{\geq}}\sqrt{\eta}\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(f(B_{i}))-\varepsilon\] \[=\sqrt{\eta}\mathcal{H}^{n}(f(\bigcup_{i=1}^{i_{0}}B_{i}))-\varepsilon\geq\sqrt{\eta}\mathcal{H}^{n}(f(E))-\sqrt{\eta}\mathcal{H}^{n}(f(E\setminus\bigcup_{i=1}^{i_{0}}B_{i}))-\varepsilon\] \[\stackrel{{(3.13)}}{{\geq}}\sqrt{\eta}\mathcal{H}^{n}(f(E))-\sqrt{\eta}L_{0}^{n}\varepsilon-\varepsilon\stackrel{{(3.8)}}{{\geq}}\eta\mathcal{H}^{n}(f(E)),\] where in the unlabelled equalities we use disjoint additivity of measure, and in the last unlabelled inequality we use the inclusion \[f(E)\subset f(\bigcup_{i=1}^{i_{0}}B_{i})\cup f(E\setminus\bigcup_{i=1}^{i_{0}}B_{i}).\] Now let \(g\in\operatorname{Lip}_{L}(E,\mathbb{R}^{m})\) satisfy \[\|g-f\|_{\ell^{\infty}(E)}\leq\frac{1}{\sqrt{m}}\delta.\] Take \(d=g-f|_{E}\) and, using McShane's extension theorem, find an extension thereof onto the entire \(\mathbb{R}^{n}\) such that \(d\in\operatorname{Lip}_{\sqrt{m}(L+L_{0})}(\mathbb{R}^{n},\mathbb{R}^{m})\) and \(\|d\|_{\ell^{\infty}(\mathbb{R}^{n})}\leq\delta\). Then \(\widetilde{g}=d+f\) is an extension of \(g\) such that \(\widetilde{g}\in\operatorname{Lip}_{C_{0}}(\mathbb{R}^{n},\mathbb{R}^{m})\) and \[\|\widetilde{g}-f\|_{\ell^{\infty}(\mathbb{R}^{n})}\leq\delta.\] Whence \(\mathcal{H}^{n}(g(E))=\mathcal{H}^{n}(\widetilde{g}(E))\geq\eta\mathcal{H}^{n}(f(E))\) by the above calculation. 

Note that the assumption \(g\in\operatorname{Lip}_{L}(\mathbb{R}^{n},\mathbb{R}^{m})\), as opposed to merely \(g\in\operatorname{Lip}(\mathbb{R}^{n},\mathbb{R}^{m})\) or even \(g\in C(\mathbb{R}^{n},\mathbb{R}^{m})\), is required only to obtain the estimate (3.12). Therefore, in some cases, the assumption is superfluous; for example, if \(E=\mathbb{R}^{n}\) or, more generally, if \(\Omega\subset\mathbb{R}^{n}\) is open and \(E=\overline{\Omega}\) or \(E=\Omega\).

**Remark 3.5**.: Let \(\Omega\subset\mathbb{R}^{n}\) be open and bounded and let \(f\colon\overline{\Omega}\to\mathbb{R}^{m}\) be a Lipschitz injection. Then, for every \(\eta\in(0,1)\) there exists \(\delta>0\) such that if \(g\in C(\overline{\Omega},\mathbb{R}^{m})\) satisfies \(\|g-f\|_{\ell^{\infty}(\overline{\Omega})}\leq\delta\), then \[\mathcal{H}^{n}(g(\Omega))\geq\eta\mathcal{H}^{n}(f(\overline{\Omega})).\] 

We follow up with a version of Lemma 3.4 for metric spaces which are biLipschitz images of Euclidean sets.

**Lemma 3.6**.: _Let \(K\) be a compact metric space for which there is a set \(F\subset\mathbb{R}^{n}\) and a biLipschitz bijection \(I\colon F\to K\). Assume \(f\colon K\to\mathbb{R}^{m}\) is biLipschitz and let \(L\in[0,\infty)\). 
Then for every \(\eta\in(0,1)\), there exists \(\delta>0\) such that if \(g\in\operatorname{Lip}_{L}(K,\mathbb{R}^{m})\) satisfies \(\|g-f\|_{\infty}\leq\delta\), then_ \[\mathcal{H}^{n}(g(K))\geq\eta\mathcal{H}^{n}(f(K)).\] 

Proof.: Firstly, we observe that for any \(g\in\operatorname{Lip}_{L}(K,\mathbb{R}^{m})\), we have \(g\circ I\in\operatorname{Lip}_{CL}(F,\mathbb{R}^{m})\), where \(C\) is the Lipschitz constant of \(I\). Let \(\varphi=f\circ I\colon F\to\mathbb{R}^{m}\). As \(f\) and \(I\) are biLipschitz, so is \(\varphi\), hence Lemma 3.4 gives a \(\tilde{\delta}>0\) such that if \(\psi\in\operatorname{Lip}_{CL}(F,\mathbb{R}^{m})\) satisfies \(\|\varphi-\psi\|_{\ell^{\infty}(F)}\leq\tilde{\delta}\), then \[\mathcal{H}^{n}(\psi(F))\geq\eta\mathcal{H}^{n}(\varphi(F)). \tag{3.17}\] Let \(\delta=\frac{\tilde{\delta}}{C}\). Assume \(g\in\operatorname{Lip}_{L}(K,\mathbb{R}^{m})\) satisfies \(\|f-g\|_{\ell^{\infty}(K)}\leq\delta\). Denote \(\psi=g\circ I\). Then \(\psi\in\operatorname{Lip}_{CL}(F,\mathbb{R}^{m})\) and \(\|\varphi-\psi\|_{\ell^{\infty}(F)}\leq\tilde{\delta}\), so (3.17) holds. From this, we have \[\mathcal{H}^{n}(g(K))=\mathcal{H}^{n}(\psi(I^{-1}(K)))=\mathcal{H}^{n}(\psi(F))\geq\eta\mathcal{H}^{n}(\varphi(F))=\eta\mathcal{H}^{n}(f(I(F)))=\eta\mathcal{H}^{n}(f(K)).\] 

We shall fix a complete metric space \(X\) and an \(n\)-rectifiable subset \(E\). Define a functional \[\mathcal{A}_{E}(g)=\mathcal{H}^{n}(g(E)), \tag{3.18}\] for any \(g\colon X\to\mathbb{R}^{m}\). For \(L\in[0,\infty)\), denote \(\Lambda_{L}=(\operatorname{Lip}_{L}(X,\mathbb{R}^{m}),\|\cdot\|_{\infty})\).

**Theorem 3.7**.: _For any \(L\in[0,\infty)\), the functional \(\mathcal{A}_{E}\) is lower semi-continuous on \(\Lambda_{L}\)._

Proof.: Firstly, we shall assume \(\mathcal{H}^{n}(E)<\infty\). We show that for \(C>0\), the set \[\{f\in\Lambda_{L}:\mathcal{H}^{n}(f(E))>C\} \tag{3.19}\] is open. To that end, let \(f\) be an element of this set. We split the proof into two parts. First, we modify the decomposition from Lemma 2.6 (ii). For \(i\in\mathbb{N}\), let \(E_{i}\), \(F_{i}\) be from the aforementioned lemma. Let \(\widetilde{M}_{1}=f(E_{1})\) and for \(i>1\), let \[\widetilde{M}_{i}=f(E_{i})\setminus(f(E_{1})\cup\cdots\cup f(E_{i-1})).\] Using Lemma 2.6 (ii), we obtain \[\mathcal{H}^{n}(f(E)\setminus\bigcup_{i}\widetilde{M}_{i})=0.\] Let now \(\varepsilon>0\) and \(\eta\in(0,1)\) be such that \[\eta\mathcal{H}^{n}(f(E))-\eta\varepsilon>C. \tag{3.20}\] By continuity of measure, we find \(i_{0}\in\mathbb{N}\) such that \[\mathcal{H}^{n}(f(E)\setminus\bigcup_{i=1}^{i_{0}}\widetilde{M}_{i})<\varepsilon.\] By inner regularity of \(\mathcal{H}^{n}\), there are compact sets \(M_{i}\subset\widetilde{M_{i}}\), \(i\in\{1,\ldots,i_{0}\}\), such that \[\mathcal{H}^{n}(f(E)\setminus\bigcup_{i=1}^{i_{0}}M_{i})<\varepsilon. \tag{3.21}\] Let \(K_{i}=f^{-1}(M_{i})\). As the \(\widetilde{M}_{i}\) are disjoint, so are the \(M_{i}\) and the \(K_{i}\). As \(f\) is continuous, the \(K_{i}\) are also compact. From (3.21) we have \[\mathcal{H}^{n}(\bigcup_{i=1}^{i_{0}}f(K_{i}))>\mathcal{H}^{n}(f(E))-\varepsilon. \tag{3.22}\] Note that we do not require the \(K_{i}\) to cover much of \(E\); indeed, if \(f\) is highly non-injective, then the \(K_{i}\) necessarily cover very little of \(E\), as the \(f(K_{i})\) are disjoint. In the second part, we use the decomposition above and solve the problem on each \(K_{i}\) separately via Lemma 3.6. 
As the \(f(K_{i})\) are disjoint compact sets, there is a \(\rho>0\) such that for every \(i,j\in\{1,\ldots,i_{0}\}\) with \(i\neq j\) we have \[\operatorname{dist}(f(K_{i}),f(K_{j}))>\rho.\] Observe that if \(g\in\Lambda_{L}\) with \(\|g-f\|_{\infty}\leq\frac{\rho}{4}\), then \[\operatorname{dist}(g(K_{i}),g(K_{j}))>\frac{\rho}{2};\] in particular, the sets \(g(K_{i})\) are disjoint. By Lemma 3.6, for each \(i\in\{1,\ldots,i_{0}\}\), there is a \(\delta_{i}>0\) such that if \(g\in\Lambda_{L}\) satisfies \(\|g-f\|_{\ell^{\infty}}\leq\delta_{i}\), then \[\mathcal{H}^{n}(g(K_{i}))\geq\eta\mathcal{H}^{n}(f(K_{i})). \tag{3.23}\] Let now \(\delta=\min\{\frac{\rho}{4},\min_{i=1,\ldots,i_{0}}\delta_{i}\}\). Then, for any \(g\in\Lambda_{L}\) with \(\|g-f\|_{\infty}\leq\delta\), disjointness of the \(g(K_{i})\) yields \[\mathcal{H}^{n}(g(E)) \geq\mathcal{H}^{n}(g(\bigcup_{i=1}^{i_{0}}K_{i}))=\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(g(K_{i}))\stackrel{{(3.23)}}{{\geq}}\eta\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(f(K_{i}))\] \[\geq\eta\mathcal{H}^{n}(\bigcup_{i=1}^{i_{0}}f(K_{i}))\stackrel{{(3.22)}}{{>}}\eta\mathcal{H}^{n}(f(E))-\eta\varepsilon\stackrel{{(3.20)}}{{>}}C.\] We have shown that to each \(f\) in the set in (3.19) there is a \(\delta\) such that the \(\delta\)-ball around \(f\) in \(\Lambda_{L}\) lies in the set. Hence the set is open and we are done.

The general case, i.e. \(\mathcal{H}^{n}(E)=\infty\), can be reduced to the finite case in the following way. Let \(f\in\Lambda_{L}\) be such that \[\mathcal{H}^{n}(f(E))>C.\] Then, as \(E\) is \(n\)-rectifiable, its \(\mathcal{H}^{n}\)-measure is \(\sigma\)-finite, whence there is some \(\mathcal{H}^{n}\)-measurable set \(K\subset E\) with \(\mathcal{H}^{n}(K)<\infty\) and such that \[\mathcal{H}^{n}(f(K))>C.\] Now we simply use the lower semi-continuity of \(\mathcal{A}_{K}\) on \((\operatorname{Lip}_{L}(K,\mathbb{R}^{m}),\|\cdot\|_{\infty})\) together with the trivial estimate \(\mathcal{H}^{n}(g(E))\geq\mathcal{H}^{n}(g(K))\). 

**Theorem 3.8**.: _The functional_ \[f\mapsto\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}\] _is lower semi-continuous on \(\Lambda_{L}\) for every \(L\in[0,\infty)\)._

Proof.: Suppose \(C\in[0,\infty)\) and \(f\in\Lambda_{L}\) is such that \[\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}>C.\] Find the sets \(E_{i}\) from Lemma 2.8 so that by (2.5) we have \[\sum_{i}\mathcal{H}^{n}(f(E_{i}))=\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}>C.\] Now there is some \(i_{0}\in\mathbb{N}\) such that \[\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(f(E_{i}))>C.\] By Theorem 3.7 (applied to each \(E_{i}\)), there are \(\delta_{i}>0\), \(i=1,\ldots,i_{0}\), such that if \(g\in\Lambda_{L}\) satisfies \[\|f-g\|_{\ell^{\infty}(E_{i})}\leq\delta_{i}\quad\text{for all }i\in\{1,\ldots,i_{0}\},\] then \[\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(g(E_{i}))>C.\] Let \(\delta=\min_{i}\delta_{i}\). Then if \(g\in\Lambda_{L}\) is such that \(\|f-g\|_{\infty}\leq\delta\), we have, by Lemma 2.9 together with the fact that the \(E_{i}\) are disjoint and the area formula (2.4), \[\int_{E}J_{E}g\;\mathrm{d}\mathcal{H}^{n}\geq\sum_{i=1}^{i_{0}}\int_{E_{i}}J_{E_{i}}g\;\mathrm{d}\mathcal{H}^{n}\geq\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(g(E_{i}))>C.\] 

## 4 General density statements

The purpose of this section is to introduce some density results in the spaces \(\operatorname{Lip}_{L}^{\mathrm{str}}(X,\mathbb{R}^{m})\) and \(\operatorname{Lip}_{L}(X,\mathbb{R}^{m})\) for a general complete metric space \(X\). The adopted approach is the natural one arising from the use of Lemma 2.6. We construct a Lipschitz function on some pieces of the \(n\)-rectifiable space \(E\) and then extend it onto \(X\). 
However, this is highly non-trivial, as usually it is necessary to do this with no increase (or, in some sense, arbitrarily small increase) in the Lipschitz constant. This is the purpose of the following lemma. The proof follows, almost to the word, the proof of [4, Lemma 4.6].

**Lemma 4.1** (Lipschitz extension lemma).: _Let \(X\) be a metric space and \((Y,\|\cdot\|)\) a normed linear space. Further, let \(L\in[0,\infty)\), \(N\in\mathbb{N}\) and let \(S_{i}\subset X\), \(i=1,\ldots,N\). Let \(\delta>0\) and assume \(\rho_{i}\in(0,1]\), \(i=1,\ldots,N\), are such that the sets \(B(S_{i},\rho_{i})\) are disjoint. Assume that \(f\colon X\to Y\) is an \(L\)-Lipschitz function and let \(g_{i}\colon B(S_{i},\rho_{i})\to Y\) be \(L\)-Lipschitz functions such that_ \[\|g_{i}-f\|_{\ell^{\infty}(B(S_{i},\rho_{i}))}\leq\delta\rho_{i}\] _for each \(i=1,\ldots,N\). Then, there exists an \((L+2\delta)\)-Lipschitz map \(g\colon X\to Y\) such that \(g=g_{i}\) on each \(S_{i}\),_ \[\|g-f\|_{\ell^{\infty}(X)}<\delta\] _and \(g=f\) outside \(\bigcup_{i}B(S_{i},\rho_{i})\)._

Proof.: On each \(B(S_{i},\rho_{i})\) we may write \[g_{i}=f+E_{i},\] where \(\|E_{i}\|_{\ell^{\infty}}\leq\delta\rho_{i}\). Define \(\chi_{i}\colon X\to\mathbb{R}\) by \[\chi_{i}(x)=\frac{\max\{\frac{1}{2}\rho_{i}-\operatorname{dist}(x,S_{i}),0\}}{\frac{1}{2}\rho_{i}}.\] Then the \(\chi_{i}\) have disjoint supports contained in \(B(S_{i},\frac{1}{2}\rho_{i})\). Hence, it is valid to define \(g\colon X\to Y\) by \[g=f+\sum_{i=1}^{N}\chi_{i}E_{i}.\] From this definition and the fact that each \(\rho_{i}\leq 1\), all of the stated properties of \(g\), except for the Lipschitz constant, immediately follow. It remains to show that for any \(x,y\in X\), we have \[\|g(x)-g(y)\|\leq Ld(x,y)+2\delta d(x,y). \tag{4.1}\] If there is an \(i=1,\dots,N\) such that \(\chi_{i}(x),\chi_{i}(y)>0\), then one can show (4.1) mutatis mutandis as in [4, Lemma 4.6]. If there are \(i\neq j\) such that \(\chi_{i}(x)>0\) and \(\chi_{j}(y)>0\), then, using the disjointness of the balls \(B(S_{i},\rho_{i})\) and \(B(S_{j},\rho_{j})\), one can show \[\big(\tfrac{1}{2}\rho_{i}-\operatorname{dist}(x,S_{i})\big)+\big(\tfrac{1}{2}\rho_{j}-\operatorname{dist}(y,S_{j})\big)\leq d(x,y). \tag{4.2}\] Since \(|\chi_{i}(x)|\|E_{i}(x)\|\leq\delta\rho_{i}\,\frac{\frac{1}{2}\rho_{i}-\operatorname{dist}(x,S_{i})}{\frac{1}{2}\rho_{i}}=2\delta\big(\tfrac{1}{2}\rho_{i}-\operatorname{dist}(x,S_{i})\big)\), and similarly for \(j\), we obtain from (4.2) \[\|g(y)-g(x)\| =\|f(y)-f(x)+\chi_{j}(y)E_{j}(y)-\chi_{i}(x)E_{i}(x)\|\] \[\leq Ld(x,y)+|\chi_{i}(x)|\|E_{i}(x)\|+|\chi_{j}(y)|\|E_{j}(y)\|\] \[\leq Ld(x,y)+2\delta\big(\tfrac{1}{2}\rho_{i}-\operatorname{dist}(x,S_{i})\big)+2\delta\big(\tfrac{1}{2}\rho_{j}-\operatorname{dist}(y,S_{j})\big)\] \[\leq Ld(x,y)+2\delta d(x,y).\] What remains to show is the case when \(\chi_{i}(x)>0\) holds but \(\chi_{j}(y)=0\) for all \(j=1,\dots,N\); then \(\operatorname{dist}(y,S_{i})\geq\frac{1}{2}\rho_{i}\), which implies \(\tfrac{1}{2}\rho_{i}-\operatorname{dist}(x,S_{i})\leq d(x,y)\). In this case we similarly have \[\|g(y)-g(x)\| =\|f(y)-f(x)-\chi_{i}(x)E_{i}(x)\|\] \[\leq Ld(x,y)+|\chi_{i}(x)|\|E_{i}(x)\|\] \[\leq Ld(x,y)+2\delta\big(\tfrac{1}{2}\rho_{i}-\operatorname{dist}(x,S_{i})\big)\] \[\leq Ld(x,y)+2\delta d(x,y).\] 

The following simple observation is the main idea behind how to avoid losing measure (of \(f(E)\)) by overlapping.

**Lemma 4.2**.: _Let \(m,n\in\mathbb{N}\) and \(S\subset\mathbb{R}^{m}\) be an \(\mathcal{H}^{n}\)-measurable set with \(\mathcal{H}^{n}(S)<\infty\). Let \(M\) be any set of \(n\)-dimensional affine subspaces of \(\mathbb{R}^{m}\). Then the set_ \[\{T\in M:\mathcal{H}^{n}(T\cap S)>0\}\] _is countable._

Proof.: Suppose the statement fails. 
Then there exists some \(\varepsilon>0\) such that the set \[\{T\in M:\mathcal{H}^{n}(T\cap S)>\varepsilon\}\] is infinite (indeed uncountable). Therefore, we may find a countable family \(T_{i}\), \(i\in\mathbb{N}\), of distinct elements of this set. As \(\mathcal{H}^{n}(T_{i}\cap T_{j})=0\) whenever \(j\neq i\), and as \(S\) is \(\mathcal{H}^{n}\)-measurable and so is every \(T_{i}\), we have \[\mathcal{H}^{n}(S)\geq\mathcal{H}^{n}(S\cap\bigcup_{i\in\mathbb{N}}T_{i})=\sum_{i\in\mathbb{N}}\mathcal{H}^{n}(S\cap T_{i})\geq\sum_{i\in\mathbb{N}}\varepsilon=\infty,\] a contradiction. 

The idea of the following sequence of statements is to show that a Lipschitz function may be approximated in the strong distance \(\|\cdot\|_{\infty}+\operatorname{Lip}(\cdot)\) by Lipschitz functions \(g\) which simultaneously have very little overlap and "almost" satisfy \(g_{\#}\mathcal{H}^{n}_{|E}\ll\mathcal{H}^{n}\). This is firstly done assuming \(E\) is a normed set.

**Lemma 4.3**.: _Let \(E\subset\mathbb{R}^{n}\) be a compact set and let \(S\subset\mathbb{R}^{m}\) be \(\mathcal{H}^{n}\)-measurable with \(\mathcal{H}^{n}(S)<\infty\). Let \(|\cdot|_{a}\) and \(|\cdot|_{b}\) be norms on \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\) respectively. Let \(C>0\), \(L\in[0,\infty)\) and let \(f\colon E_{a}\to\mathbb{R}^{m}_{b}\) be an \(L\)-Lipschitz function. Then for every \(\varepsilon>0\), there is a function \(g\colon E\to\mathbb{R}^{m}\) and \(\mathcal{H}^{n}\)-measurable sets \(F\subset E\), \(N\subset\mathbb{R}^{m}\) such that_

* (i) \(g\) _is Lipschitz as a map_ \(g\colon E_{a}\to\mathbb{R}^{m}_{b}\)_, with Lipschitz constant_ \(\operatorname{Lip}(g)<L\)_,_
* (ii) \(\|g-f\|_{\infty}<\varepsilon\)_,_
* (iii) \(\operatorname{Lip}_{E_{a}\to\mathbb{R}^{m}_{b}}(g-f)<\varepsilon\)_,_
* (iv) \(\mathcal{H}^{n}_{a}(E\setminus F)<\frac{1}{L}C\)_,_
* (v) \(\mathcal{H}^{n}(N)=0\) _and_ \(\{u\in\mathbb{R}^{m}:\#g^{-1}(u)>1\}\subset g(E\setminus F)\cup N\)_,_
* (vi) _the set_ \(F\) _admits a decomposition_ \(F=\bigcup_{i=1}^{i_{0}}F_{i}\)_, for some_ \(i_{0}\in\mathbb{N}\)_, such that for each_ \(i=1,\ldots,i_{0}\)_,_ \(F_{i}\) _is_ \(\mathcal{H}^{n}\)_-measurable and_ \(g_{|F_{i}}\) _is a restriction of an affine map with positive volume,_
* (vii) \(\mathcal{H}^{n}(g(F)\cap S)=0\)_._

Proof.: First, as \(B(E,1)\) is bounded, we can find \(\Delta<\infty\) so that for any countable disjoint system of balls \(B_{i}=B(x_{i},r_{i})\subset B(E,1)\), we have \[\sum_{i}r_{i}^{n}\leq\Delta.\] Let \(\sigma\in(0,1)\) be such that \[\Delta(1-\sigma^{n})<\frac{1}{2}\frac{1}{L}C. \tag{4.3}\] We may assume that \(L_{0}=\operatorname{Lip}(f)<L\). Let \(\delta\in(0,\frac{1}{\sqrt{m}}\varepsilon)\) be such that \(L_{0}+3\delta<L\). We use Lusin's theorem to find an \(\mathcal{H}^{n}\)-measurable set \(\widetilde{E}\subset E\) with \[\mathcal{H}^{n}(E\setminus\widetilde{E})<\frac{1}{2}\frac{1}{L}C, \tag{4.4}\] such that \(x\mapsto f^{\prime}(x)\) is uniformly continuous on \(\widetilde{E}\). This implies that \(x\mapsto\operatorname{vol}f^{\prime}(x)\) is also uniformly continuous, and it also allows us to obtain uniform approximations with derivatives in the following way. For this \(\delta\), there is \(r_{0}>0\) such that for every \(r\leq r_{0}\) and every density point \(x\in\widetilde{E}\) of \(\widetilde{E}\), we have \[|f(y)-f(z)-f^{\prime}(x)(y-z)|_{b}<\frac{1}{2}\delta(1-\sigma)|y-z|_{a}\quad\text{for all }y,z\in B(x,r)\cap\widetilde{E}.\] Moreover, the operator norm \(\|f^{\prime}(x)\|_{a\to b}\) is no larger than \(L_{0}\). 
Let now \(x\) and \(r\) be as above and consider the function \(d\colon B(x,r)\cap\widetilde{E}\to\mathbb{R}^{m}\) given by \[d(y)=f(y)-f(x)-f^{\prime}(x)(y-x)\quad\text{for }y\in B(x,r)\cap\widetilde{E}.\] Then for \(y,z\in B(x,r)\cap\widetilde{E}\) we may estimate \[|d(y)-d(z)|_{b}=|f(y)-f(z)-f^{\prime}(x)(y-z)|_{b}<\frac{1}{2}\delta(1-\sigma)|y-z|_{a},\] which means that \(d\) is \((\frac{1}{2}\delta(1-\sigma))\)-Lipschitz on the relevant domain. Hence, there is an open neighbourhood \(\widetilde{U}_{x}\) of \(f^{\prime}(x)\) in \(\mathcal{L}(\mathbb{R}^{n}_{a},\mathbb{R}^{m}_{b})\), the space of linear operators between \(\mathbb{R}^{n}_{a}\) and \(\mathbb{R}^{m}_{b}\), such that for each \(A\in\widetilde{U}_{x}\) it holds that \[\|A\|_{a\to b}<L_{0}+\delta,\] \[|f(y)-f(x)-A(y-x)|_{b}<\delta(1-\sigma)r\quad\text{for all }y\in B(x,r)\cap E\] and \[\operatorname{Lip}_{B(x,r)\cap\widetilde{E}}(y\mapsto f(y)-f(x)-A(y-x))<\delta(1-\sigma).\] Observe also that \(U_{x}=\{A\in\widetilde{U}_{x}:\operatorname{vol}A>0\}\) is open and not empty. Using Vitali's covering theorem, we find a countable number of disjoint balls \(B_{i}=B(x_{i},r_{i})\) and open, non-empty sets \(U_{i}\subset\mathcal{L}(\mathbb{R}_{a}^{n},\mathbb{R}_{b}^{m})\) such that \[|f(y)-f(x_{i})-A(y-x_{i})|_{b}\leq\delta(1-\sigma)r_{i}\quad\text{for all }y\in B_{i}\cap E\text{ and }A\in U_{i}, \tag{4.5}\] \[\operatorname{Lip}_{B_{i}\cap\widetilde{E}}(y\mapsto f(y)-f(x_{i})-A(y-x_{i}))<\delta(1-\sigma)\quad\text{for any }A\in U_{i}, \tag{4.6}\] \[\mathcal{H}^{n}(\widetilde{E}\setminus\bigcup_{i}B_{i})=0,\] and \(\operatorname{vol}A>0\) for any \(A\in U_{i}\). Observe that now \[\mathcal{H}^{n}(\widetilde{E}\setminus\bigcup_{i}\sigma B_{i})\leq\Delta(1-\sigma^{n}).\] Whence, by (4.3) and (4.4), there is an index \(i_{0}\in\mathbb{N}\) such that \[\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}\sigma B_{i})<\frac{1}{L}C. \tag{4.7}\] Suppose now that \(d\colon\bigcup_{i=1}^{i_{0}}\sigma B_{i}\cap\widetilde{E}\to\mathbb{R}^{m}\) satisfies, for each \(i=1,\ldots,i_{0}\), \[d(y)=f(y)-f(x_{i})-A_{i}(y-x_{i})\quad\text{for some }A_{i}\in U_{i}\text{ and all }y\in\sigma B_{i}\cap\widetilde{E}. \tag{4.8}\] Then by (4.6), \(d\) is \((\delta(1-\sigma))\)-Lipschitz on each \(\sigma B_{i}\cap\widetilde{E}\). If \(i\neq j\) and \(y\in\sigma B_{i}\cap\widetilde{E}\), \(z\in\sigma B_{j}\cap\widetilde{E}\), then, since \(d(x_{i})=d(x_{j})=0\) by the defining formula (4.8), \[|d(y)-d(z)|_{b}\leq|d(y)-d(x_{i})|_{b}+|d(x_{i})-d(x_{j})|_{b}+|d(x_{j})-d(z)|_{b}\stackrel{{(4.5)}}{{\leq}}\delta(1-\sigma)r_{i}+\delta(1-\sigma)r_{j}.\] On the other hand, \[|y-z|_{a}\geq(1-\sigma)r_{i}+(1-\sigma)r_{j},\] as \(B_{i}\cap B_{j}=\emptyset\). This means that \(d\) is \(\delta\)-Lipschitz on the set \(\bigcup_{i=1}^{i_{0}}\sigma B_{i}\cap\widetilde{E}\). We construct the \(g_{i}\) inductively on the sets \(B_{i}\) for \(i=1,\ldots,i_{0}\). Let \(S_{0}=S\). By Lemma 4.2, there is some \(A_{1}\in U_{1}\) such that \(\mathcal{H}^{n}([f(x_{1})+A_{1}(\mathbb{R}^{n})]\cap S_{0})=0\). Define \[g_{1}(y)=f(x_{1})+A_{1}(y-x_{1})\quad\text{for }y\in B_{1}.\] Assume we have defined \(g_{i}\) for \(i=1,\ldots,k-1\), \(k\leq i_{0}\). Let \(S_{k}=\bigcup_{i=1}^{k-1}g_{i}(B_{i})\cup S_{0}\). As \(\mathcal{H}^{n}(S_{k})<\infty\), using Lemma 4.2, there is an \(A_{k}\in U_{k}\) such that \(\mathcal{H}^{n}([f(x_{k})+A_{k}(\mathbb{R}^{n})]\cap S_{k})=0\). 
Let \[g_{k}(y)=f(x_{k})+A_{k}(y-x_{k})\quad\text{for }y\in B_{k}.\] Thus, we have constructed \((L_{0}+\delta)\)-Lipschitz functions \(g_{i}\colon B_{i}\to\mathbb{R}^{m}\) satisfying \[\|g_{i}-f\|_{\ell^{\infty}(B_{i}\cap E)}\leq\delta(1-\sigma)r_{i}.\] Now we let \[d=f-g_{i}\quad\text{on }B_{i}\cap\widetilde{E},\,i=1,\ldots,i_{0}.\] By construction, \(d\) satisfies (4.8). Whence, by our choice of \(\delta\), \(d\) admits an extension (denoted again by \(d\)) onto the entire \(\mathbb{R}^{n}\) such that \(\operatorname{Lip}(d)<\varepsilon\) and \(\|d\|_{\infty}<\varepsilon\). Indeed, here we have just used the McShane extension together with the estimate \(\delta<\frac{1}{\sqrt{m}}\varepsilon\). We let \(g=f-d\) on \(E\). This immediately implies that (i), (ii), and (iii) are satisfied. We let \(F_{i}=\sigma B_{i}\cap\widetilde{E}\) (so that \(g=g_{i}\) on each \(F_{i}\)) and \[N=\bigcup_{i=1}^{i_{0}}S_{i}\cap g(F_{i}).\] Then by construction \(\mathcal{H}^{n}(N)=0\). Let \(F=\bigcup_{i=1}^{i_{0}}F_{i}\). From the construction, (vi) and (vii) follow immediately, and (iv) holds due to (4.3), (4.4) and (4.7). Finally, to show (v), suppose \(u\in\mathbb{R}^{m}\) has two preimages under \(g\) neither of which lies in \(E\setminus F\), i.e. there are distinct \(x,y\in F\) such that \(g(x)=g(y)=u\). Then there are some \(i,j\in\{1,\ldots,i_{0}\}\) such that \(x\in F_{i}\), \(y\in F_{j}\) and, without loss of generality, \(j\leq i\). As \(g\) is by construction injective on each \(F_{i}\), we have \(j<i\). Therefore, by definition of \(S_{i}\), necessarily \(g(y)\in S_{i}\). On the other hand, \(x\in F_{i}\) implies \(g(x)\in g(F_{i})\). Altogether \(u\in S_{i}\cap g(F_{i})\) and so \(u\in N\). 

Now we may use Lemma 2.6 to push the results from normed sets into general metric spaces.

**Theorem 4.4**.: _Let \(X\) be a complete metric space and let \(E\subset X\) be an \(n\)-rectifiable subset with \(\mathcal{H}^{n}(E)<\infty\), let \(L\in[0,\infty)\) and let \(|\cdot|_{b}\) be a norm on \(\mathbb{R}^{m}\). Suppose \(f\colon X\to\mathbb{R}^{m}_{b}\) is an \(L\)-Lipschitz function. Then to each \(\varepsilon>0\) and \(C>0\), there is a function \(g\colon X\to\mathbb{R}^{m}\) and \(\mathcal{H}^{n}\)-measurable sets \(F\subset E\), \(N\subset\mathbb{R}^{m}\) such that_

* (i) \(g\colon X\to\mathbb{R}^{m}_{b}\) _is Lipschitz with constant_ \(\operatorname{Lip}(g)<L\)_,_
* (ii) \(\|g-f\|_{\infty}<\varepsilon\)_,_
* (iii) \(\operatorname{Lip}(g-f)<\varepsilon\)_,_
* (iv) \(\mathcal{H}^{n}(E\setminus F)<\frac{1}{L}C\)_,_
* (v) \(\mathcal{H}^{n}(N)=0\) _and_ \(\{u\in g(E):\#g^{-1}(u)>1\}\subset g(E\setminus F)\cup N\)_,_
* (vi) \(J_{E}g>0\) \(\mathcal{H}^{n}\)_-a.e. on_ \(F\)_._

Proof.: We may assume that \(\operatorname{Lip}(f)=L_{0}<L\). Let \(0<\theta<\infty\) and find \(\delta>0\) such that \(\delta<\varepsilon\) and \(L_{0}+\delta<L\). Let \(C_{i}>0\), \(i\in\mathbb{N}\), be a sequence for which \[\frac{1}{2L}C+(1+\theta)^{n}\frac{1}{L}\sum_{i}C_{i}<\frac{1}{L}C. \tag{4.9}\] By Lemma 2.6, we find compact sets \(E_{i}\subset E\), \(K_{i}\subset\mathbb{R}^{n}\), norms \(|\cdot|_{a_{i}}\) on \(\mathbb{R}^{n}\) and \((1+\theta)\)-biLipschitz bijections \(I_{i}\colon(K_{i},|\cdot|_{a_{i}})\to E_{i}\) such that \[\mathcal{H}^{n}(E\setminus\bigcup_{i}E_{i})=0.\] Find \(i_{0}\in\mathbb{N}\) such that \[\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}E_{i})<\frac{1}{2L}C. \tag{4.10}\] As the \(E_{i}\) are compact, there is some \(\rho>0\) such that \(B(E_{i},\rho)\cap B(E_{j},\rho)=\emptyset\) provided \(i,j\in\{1,\ldots,i_{0}\}\), \(i\neq j\). 
Let now \(0\leq C_{i}^{0}\leq 1\), \(i=1,\ldots,i_{0}\), be such that \[\sum_{i=1}^{i_{0}}\sqrt{m}\varepsilon C_{i}^{0}\max\{(1+\theta),\tfrac{1}{\rho}\}\leq\delta. \tag{4.11}\] For each \(i=1,\ldots,i_{0}\) we let \(\widetilde{f}_{i}=f\circ I_{i}\colon(K_{i},|\cdot|_{a_{i}})\to\mathbb{R}^{m}_{b}\), a \(((1+\theta)L_{0})\)-Lipschitz function. Denoting \(S_{1}=\emptyset\), we use Lemma 4.3 to find a \(((1+\theta)L_{0})\)-Lipschitz function \(\widetilde{g}_{1}\colon(K_{1},|\cdot|_{a_{1}})\to\mathbb{R}^{m}_{b}\) and sets \(\widetilde{F}_{1}\subset K_{1}\), \(N_{1}\subset\mathbb{R}^{m}\) such that

* (i)\({}^{1}\) \(\|\widetilde{g}_{1}-\widetilde{f}_{1}\|_{\ell^{\infty}(K_{1})}<\varepsilon C_{1}^{0}\),
* (ii)\({}^{1}\) \(\operatorname{Lip}_{(K_{1},|\cdot|_{a_{1}})\to\mathbb{R}^{m}_{b}}(\widetilde{g}_{1}-\widetilde{f}_{1})<\varepsilon C_{1}^{0}\),
* (iii)\({}^{1}\) \(\mathcal{H}^{n}_{a_{1}}(K_{1}\setminus\widetilde{F}_{1})<\frac{1}{L}C_{1}\),
* (iv)\({}^{1}\) \(\mathcal{H}^{n}(N_{1})=0\),
* (v)\({}^{1}\) \(\{u\in\widetilde{g}_{1}(K_{1}):\#\widetilde{g}_{1}^{-1}(u)>1\}\subset\widetilde{g}_{1}(K_{1}\setminus\widetilde{F}_{1})\cup N_{1}\),
* (vi)\({}^{1}\) \(\operatorname{vol}\widetilde{g}_{1}^{\prime}>0\) \(\mathcal{H}^{n}\)-a.e. on \(\widetilde{F}_{1}\),
* (vii)\({}^{1}\) \(\mathcal{H}^{n}(S_{1}\cap\widetilde{g}_{1}(\widetilde{F}_{1}))=0\).

Suppose now that \(\widetilde{g}_{i}\) have been constructed for \(i=1,\ldots,k-1\), where \(k\in\{2,\ldots,i_{0}\}\). We set \[S_{k}=\bigcup_{i=1}^{k-1}\widetilde{g}_{i}(K_{i}).\] Using Lemma 4.3 once again, we find a \(((1+\theta)L_{0})\)-Lipschitz function \(\widetilde{g}_{k}\colon(K_{k},|\cdot|_{a_{k}})\to\mathbb{R}_{b}^{m}\) and sets \(\widetilde{F}_{k}\subset K_{k}\), \(N_{k}\subset\mathbb{R}^{m}\) such that

* (i)\({}^{k}\) \(\|\widetilde{g}_{k}-\widetilde{f}_{k}\|_{\ell^{\infty}(K_{k})}<\varepsilon C_{k}^{0}\),
* (ii)\({}^{k}\) \(\operatorname{Lip}_{(K_{k},|\cdot|_{a_{k}})\to\mathbb{R}_{b}^{m}}(\widetilde{g}_{k}-\widetilde{f}_{k})<\varepsilon C_{k}^{0}\),
* (iii)\({}^{k}\) \(\mathcal{H}_{a_{k}}^{n}(K_{k}\setminus\widetilde{F}_{k})<\frac{1}{L}C_{k}\),
* (iv)\({}^{k}\) \(\mathcal{H}^{n}(N_{k})=0\),
* (v)\({}^{k}\) \(\{u\in\widetilde{g}_{k}(K_{k}):\#\widetilde{g}_{k}^{-1}(u)>1\}\subset\widetilde{g}_{k}(K_{k}\setminus\widetilde{F}_{k})\cup N_{k}\),
* (vi)\({}^{k}\) \(\operatorname{vol}\widetilde{g}_{k}^{\prime}>0\) \(\mathcal{H}^{n}\)-a.e. on \(\widetilde{F}_{k}\),
* (vii)\({}^{k}\) \(\mathcal{H}^{n}(S_{k}\cap\widetilde{g}_{k}(\widetilde{F}_{k}))=0\).

For each \(i=1,\ldots,i_{0}\), we let \(F_{i}=I_{i}(\widetilde{F}_{i})\) and \(g_{i}=\widetilde{g}_{i}\circ I_{i}^{-1}\). Moreover, we define \[d_{i}(x)=\begin{cases}g_{i}(x)-f(x)&\text{if }x\in E_{i},\\ 0&\text{if }x\in X\setminus B(E_{i},\rho).\end{cases}\] Using the fact that the \(B(E_{i},\rho)\) are pairwise disjoint together with the properties (i)\({}^{k}\) and (ii)\({}^{k}\), we observe that \(\|d_{i}\|_{\infty}\leq\varepsilon C_{i}^{0}\) and \(\operatorname{Lip}(d_{i})\leq\max\{(1+\theta)\varepsilon C_{i}^{0},\frac{1}{\rho}\varepsilon C_{i}^{0}\}\) on the relevant domains. Using the McShane extension, we extend each \(d_{i}\) onto the entire \(X\), denoting the extensions again by \(d_{i}\), thereby obtaining functions satisfying

* (a)\({}^{i}\) \(\|d_{i}\|_{\ell^{\infty}(X)}<\varepsilon C_{i}^{0}\),
* (b)\({}^{i}\) \(\operatorname{Lip}_{X\to\mathbb{R}_{b}^{m}}d_{i}\leq\sqrt{m}\varepsilon C_{i}^{0}\max\{(1+\theta),\tfrac{1}{\rho}\}\),
* (c)\({}^{i}\) \(d_{i}=0\) on \(E_{j}\) for each \(j\neq i\).

We let \[d=\sum_{i=1}^{i_{0}}d_{i}\quad\text{and}\quad g=f+d. 
\tag{4.12}\] As the function \(\operatorname{Lip}(\cdot)\) is sub-additive on \(\operatorname{Lip}(X,\mathbb{R}_{b}^{m})\), using (b)\({}^{i}\) and (4.11) we have \[\operatorname{Lip}(d)\leq\sum_{i=1}^{i_{0}}\sqrt{m}\varepsilon C_{i}^{0}\max\{(1+\theta),\tfrac{1}{\rho}\}\leq\delta.\] This, as \(\delta<\varepsilon\), implies (iii) and, as \(L_{0}+\delta<L\), implies (i). Similarly, using (a)\({}^{i}\) and (4.11), we also obtain (ii). By (c)\({}^{i}\), \(g=g_{i}\) on each \(E_{i}\). Therefore, using the fact that the \(I_{i}\) are \((1+\theta)\)-biLipschitz together with the properties (i)\({}^{k}\)–(vii)\({}^{k}\), we observe the properties

* (1) \(\mathcal{H}^{n}(E_{i}\setminus F_{i})\leq(1+\theta)^{n}\mathcal{H}^{n}_{a_{i}}(K_{i}\setminus\widetilde{F}_{i})\leq(1+\theta)^{n}\frac{1}{L}C_{i}\),
* (2) \(\mathcal{H}^{n}(N_{i})=0\) and it holds that \(\{u\in g_{i}(E_{i}):\#g_{i}^{-1}(u)>1\}\subset g_{i}(E_{i}\setminus F_{i})\cup N_{i}\),
* (3) \(J_{E}g>0\) \(\mathcal{H}^{n}\)-a.e. on each \(F_{i}\).

Let \[F=\bigcup_{i=1}^{i_{0}}F_{i}.\] Now (vi) holds by (3). Moreover, we may estimate \[\mathcal{H}^{n}(E\setminus F)=\mathcal{H}^{n}(\bigcup_{i=1}^{i_{0}}E_{i}\setminus F_{i})+\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}E_{i}),\] where, by the property (1) and the estimate (4.10), the last expression is estimated from above by \[(1+\theta)^{n}\frac{1}{L}\sum_{i=1}^{i_{0}}C_{i}+\frac{1}{2L}C.\] Whence (iv) follows from the choice of the \(C_{i}\)'s (4.9). It remains to find \(N\) and show (v). To that end, we simply let \[N=\left(\bigcup_{i=1}^{i_{0}}N_{i}\right)\cup\bigcup_{i=1}^{i_{0}}S_{i}\cap g(F_{i}).\] As \(g(F_{i})=\widetilde{g}_{i}(\widetilde{F}_{i})\), using (vii)\({}^{k}\), we see that \(\mathcal{H}^{n}(N)=0\). To show (v), suppose \(u\in g(E)\) and \(g^{-1}(u)\cap(E\setminus F)=\emptyset\) and assume \(u\) has two distinct preimages under \(g\). It suffices to show that \(u\in N\). There are \(x,y\in F\) such that \(g(x)=g(y)=u\). By definition of \(F\), we may assume that there are \(i,j\in\{1,\ldots,i_{0}\}\), \(j\leq i\), such that \(x\in F_{i}\), \(y\in F_{j}\). Firstly, assume that \(j=i\). Then \(\#\widetilde{g}_{i}^{-1}(u)>1\) and so \(u\in\widetilde{g}_{i}(K_{i}\setminus\widetilde{F}_{i})\cup N_{i}\). However, \(u\not\in\widetilde{g}_{i}(K_{i}\setminus\widetilde{F}_{i})=g_{i}(E_{i}\setminus F_{i})\) as \(g^{-1}(u)\cap(E\setminus F)=\emptyset\). Therefore it is necessary that \(u\in N_{i}\subset N\). Secondly, assume that \(j<i\). Then \(g(y)\in S_{i}\) by the definition of \(S_{i}\) and \(g(x)\in g(F_{i})\) simply because \(x\in F_{i}\). Therefore \(u\in S_{i}\cap g(F_{i})\subset N\). Either way \(u\in N\) and we are done. 

**Corollary 4.5**.: _Let \(n<m\), let \(X\) be a complete metric space and let \(E\subset X\) be an \(n\)-rectifiable subset. Let \(L\in[0,\infty)\). Then the set_ \[\{f\in\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(f(E))\}\] _is residual in \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\)._

Proof.: Firstly, we may write \(E=\bigcup_{i=1}^{\infty}E_{i}\), where the \(E_{i}\) form an increasing sequence of \(\mathcal{H}^{n}\)-measurable sets with \(\mathcal{H}^{n}(E_{i})<\infty\). Now if a function \(f\in\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\) satisfies \[\int_{E_{i}}J_{E_{i}}f\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(f(E_{i}))\] on each \(E_{i}\), then, letting \(i\to\infty\) and using continuity of measure on both sides, it also satisfies the assertion. 
As a countable intersection of residual sets is residual, we may assume, without loss of generality, that \(\mathcal{H}^{n}(E)<\infty\). By the area formula (2.4), it suffices to show that the sets \[\{f\in\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}-\mathcal{H}^{n}(f(E))<\frac{1}{i}\}\] are open and dense in \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\) for every \(i\in\mathbb{N}\). Density follows immediately from Theorem 4.4 by taking \(C\) sufficiently small (depending on \(i\)). For openness, it is sufficient to observe that the functional \[f\mapsto\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}-\mathcal{H}^{n}(f(E)) \tag{4.13}\] is upper semi-continuous on \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\). The functional \[f\mapsto\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}\] is easily seen to be continuous on \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\). Moreover, the functional \(f\mapsto\mathcal{H}^{n}(f(E))\) is lower semi-continuous on \(\operatorname{Lip}_{L}(X,\mathbb{R}^{m})\), and therefore also lower semi-continuous on \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\); hence \(f\mapsto-\mathcal{H}^{n}(f(E))\) is upper semi-continuous on \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\). Altogether, the functional in (4.13) is a sum of upper semi-continuous functionals and as such is upper semi-continuous. 

The proof of the following corollary is now immediate upon recalling Baire's theorem and the preceding corollary, together with the fact that sets which are dense in \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\) are also dense in \(\operatorname{Lip}_{L}(X,\mathbb{R}^{m})\).

**Corollary 4.6**.: _Let \(n<m\), let \(X\) be a complete metric space and let \(E\subset X\) be an \(n\)-rectifiable subset. Let \(L\in[0,\infty)\). Then the set_ \[\{f\in\operatorname{Lip}_{L}(X,\mathbb{R}^{m}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(f(E))\}\] _is dense in \(\operatorname{Lip}_{L}(X,\mathbb{R}^{m})\)._

**Corollary 4.7**.: _Let \(n\leq m\), let \(X\) be a complete metric space and let \(E\subset X\) be an \(n\)-rectifiable subset. Let \(L\in[0,\infty)\). Then the set of the \(L\)-Lipschitz functions \(f\colon X\to\mathbb{R}^{m}\) such that_ \[f_{\#}\mathcal{H}_{|E}^{n}\ll\mathcal{H}_{\mathbb{R}^{m}}^{n} \tag{4.14}\] _is residual in \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\)._

Proof.: As in the proof of Corollary 4.5, we may assume \(\mathcal{H}^{n}(E)<\infty\). Recalling the area formula (2.7), it is easily seen that for a Lipschitz function \(f\colon X\to\mathbb{R}^{m}\) one has (4.14) if and only if \(J_{E}f>0\) \(\mathcal{H}^{n}\)-a.e. in \(E\). We can write \[\begin{split}&\{f\in\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m}):J_{E}f>0\;\mathcal{H}^{n}\text{-a.e.\ on }E\}\\ =&\bigcap_{i\in\mathbb{N}}\{f\in\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m}):\mathcal{H}^{n}(\{x\in E:J_{E}f(x)=0\})<\tfrac{1}{i}\}.\end{split} \tag{4.15}\] It is therefore enough to show that the sets on the right hand side of (4.15) are open and dense. Density follows immediately from Theorem 4.4 by choosing \(C>0\) sufficiently small (we use particularly the properties (iv) and (vi) therein). Openness follows by a rearrangement argument. 
Assume \(\varphi_{k}\in L^{\infty}(E)=L^{\infty}(E,\mathcal{H}_{|E}^{n})\) is a sequence of non-negative functions which converges in \(L^{\infty}(E)\) to some function \(\varphi\in L^{\infty}(E)\). Then their non-increasing rearrangements satisfy \(\varphi_{k}^{*}\to\varphi^{*}\) in \(L^{\infty}([0,\mathcal{H}^{n}(E)],\mathcal{L}^{1})\). Here the non-increasing rearrangement is defined as \[\varphi^{*}(t)=\inf\{\lambda\geq 0:\mathcal{H}^{n}(\{x\in E:\varphi(x)>\lambda\})\leq t\}\quad\text{for }t\in[0,\mathcal{H}^{n}(E)].\] Suppose further that \(\mathcal{H}^{n}(\{x\in E:\varphi_{k}(x)=0\})\geq\tfrac{1}{i}\). Then \(\varphi_{k}^{*}(t)=0\) for all \(t\in(\mathcal{H}^{n}(E)-\tfrac{1}{i},\mathcal{H}^{n}(E)]\), and consequently \(\varphi^{*}(t)=0\) for all \(t\in(\mathcal{H}^{n}(E)-\tfrac{1}{i},\mathcal{H}^{n}(E)]\) as well. We apply this to the Jacobians of the relevant functions to show that the complements of the sets on the right hand side of (4.15) are closed. To that end, let \(i\in\mathbb{N}\) be fixed and let \(f_{k},f\in\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\) be such that \(f_{k}\to f\) in \(\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m})\) and \[\mathcal{H}^{n}(\{x\in E:J_{E}f_{k}(x)=0\})\geq\frac{1}{i}\quad\text{for all }k\in\mathbb{N}.\] By our assumption on the convergence of the \(f_{k}\), we have \(J_{E}f_{k}\to J_{E}f\) in \(L^{\infty}(E)\). Therefore, by the argument above, \((J_{E}f)^{*}(t)=0\) for all \(t\in(\mathcal{H}^{n}(E)-\tfrac{1}{i},\mathcal{H}^{n}(E)]\). By the definition of the non-increasing rearrangement, this implies \(\mathcal{H}^{n}(\{x\in E:J_{E}f(x)=0\})\geq\tfrac{1}{i}\). Therefore the set \[\{f\in\operatorname{Lip}_{L}^{\operatorname{str}}(X,\mathbb{R}^{m}):\mathcal{H}^{n}(\{x\in E:J_{E}f(x)=0\})\geq\tfrac{1}{i}\}\] is closed and we are done. 

## 5 Residuality of functions with images of large measure

The purpose of this section is to provide residuality results in the positive direction. Firstly, using the tools developed in the previous Sections 3 and 4, we provide a useful characterisation of the residuality of the sets \[\{f\in\operatorname{Lip}_{1}(X,\mathbb{R}_{b}^{m}):\mathcal{H}^{n}(f(E))\geq\Delta\}=\bigcap_{i\in\mathbb{N}}\{f\in\operatorname{Lip}_{1}(X,\mathbb{R}_{b}^{m}):\mathcal{H}^{n}(f(E))>\Delta-\tfrac{1}{i}\}.\] This is the subject of the first subsection. In the second subsection, we use this characterisation to prove the relevant residuality results assuming \(X\) is a normed set and \(E\) its subset. This only works under particular assumptions on the norm (and also on the norm \(|\cdot|_{b}\) in the target space). We do not discuss sharpness of these conditions. Finally, in the third subsection, we concentrate on the particular instance of Euclidean norms. In this case we are able not only to get the best possible \(\Delta=\mathcal{H}^{n}(E)\), but also to push these results into the setting of \(n\)-rectifiable subsets of \(\mathbb{R}^{k}\) (as opposed to mere subsets of \(\mathbb{R}^{n}\)). It is there that we prove our first main result, Theorem 1.1.

### Characterisations of residuality

**Lemma 5.1**.: _Let \(n\leq m\), let \(X\) be a complete metric space and \(E\subset X\) an \(n\)-rectifiable set. Let \(|\cdot|_{b}\) be a norm on \(\mathbb{R}^{m}\). The functionals_ \[f\mapsto\mathcal{H}^{n}_{|\cdot|_{2}}(f(E))\] _and_ \[f\mapsto\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}\] _are lower semi-continuous on \(\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\) for any \(L\in[0,\infty)\)._

Proof.: This follows from Theorems 3.7 and 3.8, respectively; note that, by the equivalence of norms on \(\mathbb{R}^{m}\), the space \(\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\) is a subset of \(\mathrm{Lip}_{cL}(X,\mathbb{R}^{m})\) for a suitable \(c>0\), equipped with the same supremum metric. 
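The asymmetry in Lemma 5.1 is worth illustrating: the image-measure functional is lower semi-continuous, but it is very far from being upper semi-continuous. The following toy numerical sketch (not part of the proofs; the helper names `boustrophedon` and `unit_speed` are ours and purely illustrative) takes \(n=1\), \(m=2\) and builds injective unit-speed polylines \(g_k\colon[0,L_k]\to\mathbb{R}^2\) that converge uniformly to the constant map \(f\equiv 0\) while \(\mathcal{H}^{1}(g_k([0,L_k]))=L_k\approx 1\) stays bounded away from \(\mathcal{H}^{1}(f([0,L_k]))=0\). Lemma 5.1 says that the opposite jump, the image measure dropping in the limit, cannot occur.

```python
import numpy as np

def boustrophedon(delta, strips):
    """Vertices of an injective back-and-forth polyline inside [0, delta]^2."""
    pts = []
    for i, x in enumerate(np.linspace(0.0, delta, strips)):
        ys = (0.0, delta) if i % 2 == 0 else (delta, 0.0)
        pts.append((x, ys[0]))
        pts.append((x, ys[1]))
    return np.array(pts)

def unit_speed(pts, samples=4000):
    """Sample the arclength (unit-speed, hence 1-Lipschitz) parametrization."""
    lens = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(lens)])
    t = np.linspace(0.0, cum[-1], samples)
    curve = np.column_stack([np.interp(t, cum, pts[:, 0]),
                             np.interp(t, cum, pts[:, 1])])
    return t, curve, cum[-1]

for k in [2, 4, 8, 16]:
    delta = 1.0 / k                      # g_k maps into the square [0, delta]^2
    t, curve, length = unit_speed(boustrophedon(delta, strips=k))
    # discrete check of the Lipschitz constant along consecutive samples
    lip = np.max(np.linalg.norm(np.diff(curve, axis=0), axis=1) / np.diff(t))
    sup_norm = np.max(np.linalg.norm(curve, axis=1))   # uniform distance to f == 0
    # the polyline is injective, so H^1 of its image equals its length
    print(f"k={k:2d}  ||g_k||_inf={sup_norm:.4f}  Lip<={lip:.4f}  H^1(image)={length:.4f}")
```

As \(k\) grows, the printed sup-norm tends to \(0\) while the image measure stays near \(1\), showing concretely that upper semi-continuity fails even within \(\operatorname{Lip}_{1}\).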
**Theorem 5.2**.: _Let \(n\leq m\), let \(X\) be a complete metric space and let \(E\subset X\) be an \(n\)-rectifiable subset. Let \(|\cdot|_{b}\) be a norm on \(\mathbb{R}^{m}\) and suppose that \(L\in[0,\infty)\) and \(C>0\). Consider the following statements._

* (i) _The set_ \(\mathcal{A}_{\geq C}=\{f\in\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b}):\mathcal{H}^{n}(f(E))\geq C\}\) _is residual in_ \(\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\)_._
* (ii) _The sets_ \(\mathcal{A}_{>\widetilde{C}}=\{f\in\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b}):\mathcal{H}^{n}(f(E))>\widetilde{C}\}\) _are dense in_ \(\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\) _for all_ \(\widetilde{C}<C\)_._
* (iii) _The set_ \(\mathcal{V}_{\geq C}=\{f\in\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}\geq C\}\) _is residual in_ \(\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\)_._
* (iv) _The sets_ \(\mathcal{V}_{>\widetilde{C}}=\{f\in\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}>\widetilde{C}\}\) _are dense in_ \(\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\) _for all_ \(\widetilde{C}<C\)_._

_If \(n<m\) then all of the statements are mutually equivalent. If \(n=m\) then (i) and (ii) are false and (iii) and (iv) are equivalent._

Proof.: If \(n=m\), (i) and (ii) are false since, in fact, any sequence \(f_{k}\colon X\to\mathbb{R}^{m}_{b}\) with \(f_{k}\to 0\) uniformly satisfies \(\mathcal{H}^{n}(f_{k}(E))\to 0\), as \(f_{k}(E)\) is contained in a ball of radius \(\|f_{k}\|_{\infty}\). If \(n<m\), then the equivalence of (i) and (ii) follows from Lemma 5.1, as we can write \[\mathcal{A}_{\geq C}=\bigcap_{i\in\mathbb{N}}\mathcal{A}_{>(C-\frac{1}{i})}.\] Similarly, the equivalence of (iii) and (iv) (for any \(n\leq m\)) is also obtained from Lemma 5.1. Now let \(n<m\). The fact that (ii) implies (iv) holds since the area formula (2.7) gives \(\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}\geq\mathcal{H}^{n}(f(E))\). It remains to show that (iv) implies (ii). To that end, fix \(\widetilde{C}<C\) and \(f\in\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\). From (iv) we can find, for each \(\varepsilon>0\), some \(g\in\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\) with \(\|g-f\|_{\infty}<\varepsilon\) satisfying \[\int_{E}J_{E}g\;\mathrm{d}\mathcal{H}^{n}>\widetilde{C}.\] As the functional \(g\mapsto\int_{E}J_{E}g\;\mathrm{d}\mathcal{H}^{n}\) is lower semi-continuous, there is some \(\delta>0\) such that for any \(h\in\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\) satisfying \(\|g-h\|_{\infty}<\delta\) we still have \[\int_{E}J_{E}h\;\mathrm{d}\mathcal{H}^{n}>\widetilde{C}. \tag{5.1}\] From Corollary 4.6, we see that the set of functions \(h\) satisfying the improved area formula \[\int_{E}J_{E}h\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(h(E)) \tag{5.2}\] is dense in \(\mathrm{Lip}_{L}(X,\mathbb{R}^{m})\). Therefore, there exists some \(h\in\mathrm{Lip}_{L}(X,\mathbb{R}^{m}_{b})\) satisfying \(\|g-h\|_{\infty}<\delta\) and (5.2). By (5.1), such an \(h\) satisfies \[\mathcal{H}^{n}(h(E))=\int_{E}J_{E}h\;\mathrm{d}\mathcal{H}^{n}>\widetilde{C}.\] As \(\delta>0\) may be reduced to an arbitrarily small number, we have shown that \(\mathcal{A}_{>\widetilde{C}}\) is dense, which is (ii). 

### Positive results in normed sets

Recalling Lemma 2.6, it would seem that a good starting point for tackling \(n\)-rectifiable metric spaces is the study of metric spaces which are merely subsets of \(\mathbb{R}^{n}\) equipped with a distance induced by a particular norm \(|\cdot|_{a}\). 
The object of this section is to provide a sufficient condition (on \(|\cdot|_{a}\)) guaranteeing that there exists some \(\lambda>0\) such that the set \[\{f\in\operatorname{Lip}_{1}(E_{a},\mathbb{R}^{m}):\mathcal{H}^{n}(f(E))>\lambda\mathcal{H}^{n}(E)\}\] is residual in \(\operatorname{Lip}_{1}(E_{a},\mathbb{R}^{m})\) for all \(m>n\) and all bounded \(\mathcal{H}^{n}\)-measurable sets \(E\subset\mathbb{R}^{n}\). We shall work in a slightly more general setting and allow a general norm (which we denote exclusively by \(|\cdot|_{b}\)) on the target space \(\mathbb{R}^{m}\) as well.

It should be noted that, right now, both on the domain and on the target side we are working either with the whole Euclidean space or a subset thereof. We do equip it with a different metric (norm), but the metric is always equivalent to the Euclidean one. This implies that the induced Hausdorff measures (of any dimension) are always equivalent. However, the available area formulas are far more conveniently used if the Hausdorff measures considered on either side are induced by the Euclidean distance. This is, up to a constant, without loss of generality. In other words, up to a constant, whenever we write \(\mathcal{H}^{n}\), one may replace it with \(\mathcal{H}^{n}_{a}\) (on the domain) or \(\mathcal{H}^{n}_{b}\) (on the target).

There is, however, a small caveat to what is said above. If we want to prove an estimate holding for an entire _family_ of norms, then constants matter. This is just a small technical detail; however, to avoid confusion, we will state some of our results in a "duplicate" form: one dealing with the Euclidean Hausdorff measure and one dealing with the Hausdorff measure induced by the particular norm.

Let us fix some notation for the entirety of this section. We shall assume that \(n,m\in\mathbb{N}\) satisfy \(n\leq m\), and that \(|\cdot|_{a}\) and \(|\cdot|_{b}\) are norms on \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\) respectively.

**Observation 5.3**.: _Let \(A\colon\mathbb{R}\to\mathbb{R}^{m}\) be a linear map. Let \(u\in\mathbb{R}^{m}\) and if \(A\neq 0\), assume also that \(u=\kappa A(1)\) for some \(\kappa\in\mathbb{R}\) with \(|\kappa|\geq 1\). Then for every \(\varepsilon>0\), there exists a Lipschitz curve \(\gamma\colon\mathbb{R}\to A(\mathbb{R})\) such that_

1. \(\|\gamma-A\|_{\infty}<\varepsilon\)_,_
2. \(\gamma^{\prime}\) _exists everywhere in_ \(\mathbb{R}\) _up to a discrete set of points,_
3. _if_ \(x\in\mathbb{R}\) _is such that_ \(\gamma^{\prime}(x)\) _exists, then_ \(\gamma^{\prime}(x)=\pm u\)_._

Proof.: If \(A=0\) or \(|\kappa|=1\), the proof is obvious. Assume now \(A\neq 0\) and, by symmetry (replacing \(u\) with \(-u\) if necessary), \(\kappa>1\). There is a partition of \(\mathbb{R}\) into intervals \([a_{i},b_{i}]\), \(i\in\mathbb{Z}\), with \(a_{i}=b_{i-1}\), such that on each \([a_{i},b_{i}]\) we can define \(\gamma\) to be an affine curve with \(\gamma^{\prime}=\pm u\) (the sign depending on the parity of \(i\)), \(\gamma(b_{i})=\gamma(a_{i+1})\) for each \(i\), and such that \(\|\gamma-A\|_{\ell^{\infty}([a_{i},b_{i}])}\) is comparable to \(|b_{i}-a_{i}|\). Here we needed to use the assumption that \(\kappa>1\), as otherwise \(A\) would "run away" from \(\gamma\). For any \(\eta>0\), the partition can be made such that \(|a_{i}-b_{i}|\leq\eta\) for all \(i\in\mathbb{Z}\), while still having some \(\delta>0\) such that \(|b_{i}-a_{i}|\geq\delta\) for all \(i\in\mathbb{Z}\). The derivative \(\gamma^{\prime}\) then exists everywhere except at the endpoints of the intervals, which form a discrete set of points. 
By making the partition fine enough, that is, taking \(\eta>0\) small enough and recalling that \(|b_{i}-a_{i}|\) is comparable to \(\|\gamma-A\|_{\ell^{\infty}([a_{i},b_{i}])}\), we can make it so that (i) holds. 

Given a linear space \(Y\) of dimension \(n\) and a map \(I\colon Y\to Y\) which is diagonalizable, i.e. there are eigenvectors \((u_{1},\ldots,u_{n})\) of \(I\) which form a basis of \(Y\), we say that \(\widetilde{I}\) is a _sign permutation of \(I\)_ if there are \(j(i)\in\{0,1\}\), \(i=1,\ldots,n\), such that \[\widetilde{I}(u_{i})=(-1)^{j(i)}I(u_{i}).\] We denote by \(\operatorname{sp}I\) the set of all sign permutations of \(I\). Observe that the set \(\operatorname{sp}I\) is independent of the particular choice of eigenvector basis \((u_{1},\ldots,u_{n})\), and indeed \(\operatorname{sp}I\) _only_ depends on \(I\) (and \(Y\)). We shall further adopt the name _non-shrinking_ to refer to diagonalizable linear maps whose eigenvalues have absolute values greater than or equal to \(1\).

The following definition is a useful way of describing the possibility of approximating a linear map with a piecewise affine map which has large volume almost everywhere.

**Definition 5.4**.: Let \(A\in B_{a\to b}\) be of full rank and let \(\lambda>0\). We say that \(A\) _admits a \(\lambda\)-inflation_ if there exists a diagonalizable map \(I\colon A(\mathbb{R}^{n})\to A(\mathbb{R}^{n})\) satisfying the following properties. The absolute value of every eigenvalue of \(I\) is no smaller than \(1\) (i.e. \(I\) is non-shrinking), and for every \(\widetilde{I}\in\operatorname{sp}I\) we have

* (i) \(\|\widetilde{I}\circ A\|_{a\to b}\leq 1\),
* (ii) \(\operatorname{vol}(\widetilde{I}\circ A)\geq\lambda\).

**Remark 5.5**.: In the case that \(|\cdot|_{b}\) is Euclidean (and hence invariant under linear reflections), it is enough to verify \(\|I\circ A\|_{a\to b}\leq 1\) and one does not need to consider sign permutations of \(I\).

**Definition 5.6**.: The pair of norms \(|\cdot|_{a}\) on \(\mathbb{R}^{n}\) and \(|\cdot|_{b}\) on \(\mathbb{R}^{m}\) is said to form a \(\lambda\)_-inflating pair_ for a \(\lambda>0\) if every linear map \(A\in B_{a\to b}\) of full rank admits a \(\lambda\)-inflation. We also write that \((|\cdot|_{a},|\cdot|_{b})\) forms a \(\lambda\)-inflating pair.

Some geometric intuition for the preceding definitions is in order. If \(A\) admits a \(\lambda\)-inflation, this means that the convex set \(A(B_{a})\subset A(\mathbb{R}^{n})\) (having a non-empty interior in \(A(\mathbb{R}^{n})\)) can be inflated in such a way that \(I(A(B_{a}))\subset B_{b}\cap A(\mathbb{R}^{n})\) and that the \(n\)-dimensional (Euclidean) Hausdorff measure of \(I(A(B_{a}))\) is sufficiently large (this depends on the norms and \(\lambda\)). Moreover, this inflation must be achieved using a map which admits a diagonal form with respect to some basis of \(A(\mathbb{R}^{n})\), and this map may not shrink in any direction. It is useful to note that this basis corresponds via \(A^{-1}\colon A(\mathbb{R}^{n})\to\mathbb{R}^{n}\) to a basis of \(\mathbb{R}^{n}\). The requirement that the inflation be diagonal will become clear once we prove the principal result. The reason we require that \(I\) does not shrink in any direction (a condition on its eigenvalues) is so that we can use Observation 5.3 (this corresponds to the condition on \(\kappa\) therein).

**Example 5.7**.: Let \(|\cdot|_{a}=|\cdot|_{2}\) and \(|\cdot|_{b}=|\cdot|_{2}\). 
Then every linear map \(A\in B_{2\to 2}\) of full rank admits a \(1\)-inflation. Indeed, any such map \(A\) is of the form \(A=S\circ D\circ R\), where \(R\in\operatorname{O}(n)\), \(S\in\operatorname{O}(m)\) and \(D\) is diagonal with diagonal values less than or equal to one in magnitude. Moreover, as \(A\) is of full rank, the diagonal values of \(D\) are nonzero and so \(D\) admits the inverse \(D^{-1}\colon D(\mathbb{R}^{n})\to\mathbb{R}^{n}\). Let \(E\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) be the diagonal matrix with only \(1\)'s on the diagonal and let \(\overline{A}=S\circ E\circ R\). It is now an easy exercise to show that \(\overline{A}=IA\) for \[I=S\circ E\circ D^{-1}\circ S^{-1}\colon A(\mathbb{R}^{n})\to A(\mathbb{R}^{n}). \tag{5.3}\] From (5.3), as \(E\circ D^{-1}\) is diagonal, \(I\) is diagonalizable, and both properties (i) and (ii) from the definition of a \(1\)-inflation are satisfied.

**Proposition 5.8**.: _Let \(A\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) be an affine map of the form \(A=A_{\operatorname{lin}}+u\), where \(A_{\operatorname{lin}}\) is a linear map and \(u\in\mathbb{R}^{m}\). Assume that \(A_{\operatorname{lin}}\) is of full rank, \(L_{0}=\|A_{\operatorname{lin}}\|_{a\to b}\leq 1\) and \(A_{\operatorname{lin}}\) admits a \(\lambda\)-inflation for some \(\lambda>0\). Let \(E\subset\mathbb{R}^{n}\) be a bounded \(\mathcal{H}^{n}\)-measurable set. Then, for every \(\varepsilon>0\), there is \(g\in\operatorname{Lip}_{L_{0}}(E_{a},\mathbb{R}^{m}_{b})\) such that_

* (i) \(\|g-A\|_{\infty}<\varepsilon\) _and_
* (ii) \(\operatorname{vol}g^{\prime}\geq L_{0}\lambda\) \(\mathcal{H}^{n}\)_-a.e. in_ \(E\)_._

Proof.: By a standard scaling argument, we may assume \(L_{0}=1\). We may assume, without loss of generality, that \(A\) is linear. We first construct \(g\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) and then restrict to \(E\). By our assumption on \(A\), there is a diagonalizable map \(I\colon A(\mathbb{R}^{n})\to A(\mathbb{R}^{n})\) whose eigenvectors \((u_{1},\dots,u_{n})=(A(x_{1}),\dots,A(x_{n}))\) form a basis of \(A(\mathbb{R}^{n})\) and which satisfies the properties from Definition 5.4. Note that \((x_{1},\dots,x_{n})\) form a basis of \(\mathbb{R}^{n}\). Fix \(i\in\{1,\dots,n\}\). Recalling Observation 5.3, there is a Lipschitz curve \(\gamma_{i}\colon\mathbb{R}\to\operatorname{span}\{u_{i}\}\) such that \[|\gamma_{i}(t)-A(tx_{i})|<\varepsilon\quad\text{for all }t\in\mathbb{R},\] and \[\gamma_{i}^{\prime}(t)=\pm I(A(x_{i}))\quad\text{for all }t\in\mathbb{R}\text{ up to a discrete set}. \tag{5.4}\] Denote by \(t\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) the coordinate function with respect to the basis \((x_{1},\dots,x_{n})\), i.e. we have \(x=\sum_{i=1}^{n}t_{i}(x)x_{i}\) for each \(x\in\mathbb{R}^{n}\). We now simply let \[g(x)=\sum_{i=1}^{n}\gamma_{i}(t_{i}(x))\quad\text{for }x\in\mathbb{R}^{n}.\] As each \(\gamma_{i}\) is Lipschitz, \(g\) is also Lipschitz. Fix an \(i\in\{1,\dots,n\}\). Given an arbitrary \(x\in\mathbb{R}^{n}\) such that \(\gamma_{i}^{\prime}(t_{i}(x))\) exists, since \(t\) is a linear map, we have \(t^{\prime}(x)=t\). 
Therefore, using (5.4), we have, for any \(\alpha_{j}\in\mathbb{R}\), \(j=1,\dots,n\), \[(\gamma_{i}\circ t_{i})^{\prime}(x)(\alpha_{1}x_{1}+\dots+\alpha_{n}x_{n}) =\gamma_{i}^{\prime}(t_{i}(x))(t_{i}^{\prime}(x))(\alpha_{1}x_{1}+\dots+\alpha_{n}x_{n})\] \[=\gamma_{i}^{\prime}(t_{i}(x))(\alpha_{i})=\pm\alpha_{i}I(A(x_{i})).\] Therefore, by the definition of \(g\), we have \[g^{\prime}(x)(\alpha_{1}x_{1}+\cdots+\alpha_{n}x_{n})=\sum_{j=1}^{n}\pm\alpha_{j}I(A(x_{j})).\] This means that for every \(x\in\mathbb{R}^{n}\) outside a countable union of hyperplanes (in particular, for \(\mathcal{H}^{n}\)-a.e. \(x\)), \(g^{\prime}(x)=\widetilde{I}A\) for some \(\widetilde{I}\in\operatorname{sp}I\). Therefore, by (i) in Definition 5.4, \(g\) is a Lipschitz function on the entire \(\mathbb{R}^{n}\) satisfying \(\|g^{\prime}(x)\|_{a\to b}\leq 1\) for every such \(x\in\mathbb{R}^{n}\). This, by Lemma 2.1, implies that \(g\) is \(1\)-Lipschitz between \(\mathbb{R}^{n}_{a}\) and \(\mathbb{R}^{m}_{b}\). The restriction \(g_{|E}\) therefore satisfies \(g_{|E}\in\operatorname{Lip}_{1}(E_{a},\mathbb{R}^{m}_{b})\) and (i). By the property (ii) in Definition 5.4, using once again the fact that \(g^{\prime}(x)=\widetilde{I}A\) for \(\mathcal{H}^{n}\)-a.e. \(x\in E\), (ii) is satisfied as well. 

**Proposition 5.9**.: _Let \(E\subset\mathbb{R}^{n}\) be a bounded \(\mathcal{H}^{n}\)-measurable set and suppose \(E\subset D\subset\mathbb{R}^{n}\). Let \(f\colon D\to\mathbb{R}^{m}\) and suppose \(f\in\operatorname{Lip}_{1}(D_{a},\mathbb{R}^{m}_{b})\). Assume that \((|\cdot|_{a},|\cdot|_{b})\) forms a \(\lambda\)-inflating pair for some \(\lambda>0\). Then to each \(\varepsilon>0\) and \(\eta\in(0,1)\) there exists \(g\colon D\to\mathbb{R}^{m}\) such that_

* (i) \(g\in\operatorname{Lip}_{1}(D_{a},\mathbb{R}^{m}_{b})\)_,_
* (ii) \(\|g-f\|_{\infty}<\varepsilon\)_,_
* (iii) \(\int_{E}\operatorname{vol}g^{\prime}\,\mathrm{d}\mathcal{H}^{n}\geq\eta\lambda\mathcal{H}^{n}(E)\)_._

Proof.: If \(\mathcal{H}^{n}(E)=0\), the statement obviously holds, so we may assume \(\mathcal{H}^{n}(E)>0\). We may assume, without loss of generality, that the Lipschitz constant of \(f\) is strictly less than \(1\). We therefore have \(L_{0}=\max\{\operatorname{Lip}_{a\to b}(f),\frac{1+\eta}{2}\}<1\). Find \(C>0\) such that \[\lambda L_{0}\mathcal{H}^{n}(E)-\lambda L_{0}2C\geq\eta\lambda\mathcal{H}^{n}(E). \tag{5.5}\] Find \(\Delta>0\) such that for any sequence \(B_{i}=B_{a}(x_{i},r_{i})\subset B_{a}(E,1)\), \(x_{i}\in E\), of disjoint balls we have \[\sum_{i}r_{i}^{n}\leq\Delta.\] Let \(\sigma\in(0,1)\) be such that \[\Delta(1-\sigma^{n})<C. \tag{5.6}\] Find \(\delta\in(0,\frac{1}{2}\varepsilon)\) such that \(L_{0}+2\delta<1\). Then, for any density point \(x\in E\) of \(E\) such that \(f^{\prime}(x)\) exists, we find \(r_{x}\in(0,1]\) such that \[|f(y)-f(x)-f^{\prime}(x)(y-x)|_{b}\leq\delta(1-\sigma)|y-x|_{a}\quad\text{for all }y\in B_{a}(x,r_{x})\cap D.\] In a standard way, using the Vitali covering theorem and continuity of measure, we thereby obtain a finite sequence of disjoint balls \(B_{i}=B_{a}(x_{i},r_{i})\cap D\), \(x_{i}\in E\), \(i=1,\ldots,i_{0}\), in \(D_{a}\) such that \[\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}B_{i})<C\] and for each \(i\in\{1,\ldots,i_{0}\}\) we have an affine map \(A_{i}\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) with \[|f(y)-A_{i}(y)|_{b}<\delta(1-\sigma)r_{i}\quad\text{for all }y\in B_{i}.\] As \(f\) is \(L_{0}\)-Lipschitz, we may also assume that the linear part of each \(A_{i}\) lies in \(L_{0}B_{a\to b}\). 
From the density of maps of full rank in \(B_{a\to b}\), we may assume that each \(A_{i}\) is of full rank. Recalling Proposition 5.8, we find \(g_{i}\colon B_{i}\to\mathbb{R}^{m}\) such that \(g_{i}\in\operatorname{Lip}_{L_{0}}((B_{i})_{a},\mathbb{R}^{m}_{b})\), \[\operatorname{vol}g_{i}^{\prime}\geq L_{0}\lambda\quad\mathcal{H}^{n}\text{-a.e. in }B_{i},\] and \[\|g_{i}-A_{i}\|_{\ell^{\infty}(B_{i})}\leq\delta(1-\sigma)r_{i}.\] Using the extension Lemma 4.1 (for \(X=D\)), there is a function \(g\colon D\to\mathbb{R}^{m}\) such that

* (a) \(\operatorname{Lip}_{a\to b}(g)\leq L_{0}+2\delta\),
* (b) \(g=g_{i}\) on \(\sigma B_{i}\) for each \(i\in\{1,\dots,i_{0}\}\),
* (c) \(\|g-f\|_{\ell^{\infty}(D)}\leq 2\delta(1-\sigma)\leq\varepsilon\).

By our choice of \(\delta\) and (a), we have \(g\in\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m})\), which is (i), and from (c) we get (ii). It remains to show (iii). Firstly, by (5.6), we have \[\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}\sigma B_{i})\leq\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}B_{i})+\mathcal{H}^{n}(\bigcup_{i=1}^{i_{0}}B_{i}\setminus\sigma B_{i})\leq C+\Delta(1-\sigma^{n})<2C.\] Moreover, by (b), \(\operatorname{vol}g^{\prime}\geq L_{0}\lambda\) \(\mathcal{H}^{n}\)-a.e. in each \(\sigma B_{i}\), and using also (5.5) it follows that \[\int_{E}\operatorname{vol}g^{\prime}\;\mathrm{d}\mathcal{H}^{n} \geq\sum_{i=1}^{i_{0}}\int_{\sigma B_{i}\cap E}\operatorname{vol}g^{\prime}\;\mathrm{d}\mathcal{H}^{n}\geq\lambda L_{0}\sum_{i=1}^{i_{0}}\mathcal{H}^{n}(\sigma B_{i}\cap E)\] \[\geq\lambda L_{0}\left(\mathcal{H}^{n}(E)-\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}\sigma B_{i})\right)\geq\lambda L_{0}\mathcal{H}^{n}(E)-\lambda L_{0}2C\] \[\geq\eta\lambda\mathcal{H}^{n}(E).\] 

We are now ready to prove the main result of this section.

**Theorem 5.10**.: _Suppose \(n\leq m\) and let \(|\cdot|_{a}\) be a norm on \(\mathbb{R}^{n}\) and \(|\cdot|_{b}\) a norm on \(\mathbb{R}^{m}\). Let \(E\subset\mathbb{R}^{n}\) be bounded and \(\mathcal{H}^{n}\)-measurable and let \(E\subset D\subset\mathbb{R}^{n}\). Let \(\lambda>0\) and assume \((|\cdot|_{a},|\cdot|_{b})\) forms a \(\lambda\)-inflating pair. The set_ \[\{f\in\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m}):\int_{E}\operatorname{vol}f^{\prime}\;\mathrm{d}\mathcal{H}^{n}\geq\lambda\mathcal{H}^{n}(E)\}\] _is residual in \(\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m})\). Suppose \(n<m\). Then the set_ \[\{f\in\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m}):\mathcal{H}^{n}(f(E))\geq\lambda\mathcal{H}^{n}(E)\}\] _is residual in \(\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m})\)._

Proof.: Firstly, we realize that, by Haar's theorem, \(\mathcal{H}^{n}\) is a constant multiple of \(\mathcal{H}_{a}^{n}\). Therefore, we may use the general result of Theorem 5.2, and it suffices to show that \[\{f\in\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m}):\int_{E}\operatorname{vol}f^{\prime}\;\mathrm{d}\mathcal{H}^{n}>\widetilde{\lambda}\mathcal{H}^{n}(E)\}\] is dense for every \(\widetilde{\lambda}<\lambda\), provided \(n\leq m\). This follows from Proposition 5.9. 

By virtue of Haar's theorem, we can obtain the relevant result with the "correct" Hausdorff measure on the domain side.

**Corollary 5.11**.: _Suppose \(n\leq m\), let \(|\cdot|_{a}\) be a norm on \(\mathbb{R}^{n}\) and let \(|\cdot|_{b}\) be a norm on \(\mathbb{R}^{m}\). Let \(E\subset\mathbb{R}^{n}\) be bounded and \(\mathcal{H}^{n}\)-measurable and let \(E\subset D\subset\mathbb{R}^{n}\). Let \(\lambda>0\) and assume \((|\cdot|_{a},|\cdot|_{b})\) forms a \((\operatorname{vol}(|\cdot|_{a})\lambda)\)-inflating pair. 
The set_

\[\{f\in\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m}):\int_{E}\operatorname{vol}f^{\prime}\;\mathrm{d}\mathcal{H}^{n}\geq\lambda\mathcal{H}_{a}^{n}(E)\}\]

_is residual in \(\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m})\). Suppose \(n<m\). Then the set_

\[\{f\in\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m}):\mathcal{H}^{n}(f(E))\geq\lambda\mathcal{H}_{a}^{n}(E)\}\]

_is residual in \(\operatorname{Lip}_{1}(D_{a},\mathbb{R}_{b}^{m})\)._

Proof.: As \(\mathcal{H}^{n}_{a}\) is a Haar measure on \(\mathbb{R}^{n}\) and \(\mathcal{H}^{n}_{a}(B_{a})=2^{n}\), we have \(\mathcal{H}^{n}_{a}=\operatorname{vol}(|\cdot|_{a})\,\mathcal{H}^{n}\) by Haar's theorem. The rest follows from Theorem 5.10.

Recalling Example 5.7, we immediately obtain the following Euclidean result.

**Corollary 5.12**.: _Suppose \(n\leq m\). Let \(E\subset\mathbb{R}^{n}\) be bounded and \(\mathcal{H}^{n}\)-measurable and let \(E\subset D\subset\mathbb{R}^{n}\). The set_

\[\{f\in\operatorname{Lip}_{1}(D,\mathbb{R}^{m}):\int_{E}\operatorname{vol}f^{\prime}\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(E)\}\]

_is residual in \(\operatorname{Lip}_{1}(D,\mathbb{R}^{m})\). Suppose \(n<m\). Then the set_

\[\{f\in\operatorname{Lip}_{1}(D,\mathbb{R}^{m}):\mathcal{H}^{n}(f(E))=\mathcal{H}^{n}(E)\}\]

_is residual in \(\operatorname{Lip}_{1}(D,\mathbb{R}^{m})\)._

### Strongest possible results in Euclidean spaces.

While the results of the last subsection were "local" in the sense that they required the \(n\)-rectifiable set to be in fact a normed piece of \(\mathbb{R}^{n}\), in the following, we push some of the results to general _Euclidean_ \(n\)-rectifiable sets, obtaining a proof of Theorem 1.1.

**Theorem 5.13**.: _Let \(n\leq k\), \(n\leq m\) and suppose \(E\subset\mathbb{R}^{k}\) is \(n\)-rectifiable and satisfies \(\mathcal{H}^{n}(E)<\infty\). Let \(E\subset D\subset\mathbb{R}^{k}\). Let \(f\in\operatorname{Lip}_{1}(D,\mathbb{R}^{m})\). Then for every \(\varepsilon>0\) and \(\eta\in(0,1)\), there is a \(g\in\operatorname{Lip}_{1}(D,\mathbb{R}^{m})\) such that_

* _(i)_ \(\|g-f\|_{\infty}\leq\varepsilon\)_,_
* _(ii)_ \(\int_{E}J_{E}g\;\mathrm{d}\mathcal{H}^{n}>\eta\mathcal{H}^{n}(E)\)_._

Proof.: If \(\mathcal{H}^{n}(E)=0\), the statement obviously holds, so we may assume \(\mathcal{H}^{n}(E)>0\). Without loss of generality, \(L_{0}=\operatorname{Lip}(f)<1\). Find \(\theta>0\) such that \((1+\theta)^{2}L_{0}<1\). Find \(\eta_{0}\in(0,1)\) and \(C>0\) such that

\[\frac{\eta_{0}}{(1+\theta)^{2}}(\mathcal{H}^{n}(E)-C)>\eta\mathcal{H}^{n}(E). \tag{5.7}\]

Using [7, Lemma 3.2.2] (which is the Euclidean version of Lemma 2.6), we find countably many \(E_{i}\subset E\) Borel and disjoint, \(F_{i}\subset\mathbb{R}^{n}\) and \((1+\theta)\)-biLipschitz maps \(I_{i}\colon F_{i}\to E_{i}\) such that

\[\mathcal{H}^{n}(E\setminus\bigcup_{i}E_{i})=0.\]

Using continuity and inner regularity of \(\mathcal{H}^{n}\), there is some \(i_{0}\in\mathbb{N}\) such that

\[\mathcal{H}^{n}(E\setminus\bigcup_{i=1}^{i_{0}}E_{i})<C \tag{5.8}\]

and we may assume that the \(E_{i}\) are compact. There is some \(r\in(0,\frac{1}{3}\varepsilon)\) such that for every \(i,j\in\{1,\ldots,i_{0}\}\) with \(i\neq j\) we have

\[\operatorname{dist}(E_{i},E_{j})\geq r. \tag{5.9}\]

Let \(\varepsilon_{0}\in(0,\frac{1}{3}\varepsilon)\) be such that

\[\frac{\varepsilon_{0}}{r}+L_{0}<1. \tag{5.10}\]

Fix \(i=1,\ldots,i_{0}\) and let \(\varphi_{i}=f\circ I_{i}\colon F_{i}\to\mathbb{R}^{m}\). Then \(\varphi_{i}\) is \(L_{0}(1+\theta)\)-Lipschitz. 
Whence, by Corollary 5.12 (we are using only density), there is \(\psi_{i}\colon F_{i}\to\mathbb{R}^{m}\) such that

* (a) \(\psi_{i}\in\operatorname{Lip}_{L_{0}(1+\theta)}(F_{i},\mathbb{R}^{m})\),
* (b) \(\|\psi_{i}-\varphi_{i}\|_{\ell^{\infty}(F_{i})}\leq\varepsilon_{0}\),
* (c) \(\int_{F_{i}}\operatorname{vol}\psi_{i}^{\prime}\;\mathrm{d}\mathcal{H}^{n}\geq\eta_{0}\mathcal{H}^{n}(F_{i})\).

We let \(g_{i}\colon E_{i}\to\mathbb{R}^{m}\) be given by \(g_{i}=\psi_{i}\circ I_{i}^{-1}\). We define \(S=\bigcup_{i=1}^{i_{0}}E_{i}\) and let \(g\colon S\to\mathbb{R}^{m}\) be given by

\[g(x)=g_{i}(x)\quad\text{for }x\in E_{i},\ i=1,\ldots,i_{0}.\]

By (a), (b), (5.9) and (5.10), \(g\) is \(1\)-Lipschitz. By (b), we have

\[\|g-f\|_{\ell^{\infty}(S)}\leq\varepsilon_{0} \tag{5.11}\]

and by (c), (5.8) and (5.7), we have

\[\int_{S}J_{E}g\;\mathrm{d}\mathcal{H}^{n}\geq\frac{1}{(1+\theta)^{2}}\eta_{0}\mathcal{H}^{n}(S)\geq\frac{1}{(1+\theta)^{2}}\eta_{0}(\mathcal{H}^{n}(E)-C)>\eta\mathcal{H}^{n}(E). \tag{5.12}\]

It remains to extend \(g\). To that end, let

\[c(x)=\begin{cases}g(x)&\text{if }x\in S,\\ f(x)&\text{if }x\in D\setminus B(S,r).\end{cases}\]

From (5.10) and (5.11) it follows that \(c\) is \(1\)-Lipschitz. Using Kirszbraun's extension theorem, we find a \(1\)-Lipschitz extension of \(c\) onto \(D\). This extension also extends \(g\) on \(S\) and we will denote it by \(g\). By (5.12), we have (ii) and so it remains to show (i). Let \(x\in B(S,r)\setminus S\) and find \(y\in S\) with \(|x-y|=r\). Then

\[|g(x)-f(x)|\leq|g(x)-g(y)|+|g(y)-f(y)|+|f(y)-f(x)|\leq|x-y|+\varepsilon_{0}+|x-y|\leq r+\varepsilon_{0}+r<\varepsilon.\]

This shows that (i) holds and we are done.

**Theorem 5.14** (Restatement of Theorem 1.1).: _Let \(n\leq k\), \(n\leq m\) and suppose \(E\subset\mathbb{R}^{k}\) is \(n\)-rectifiable. Let \(E\subset D\subset\mathbb{R}^{k}\). Then the set_

\[\{f\in\operatorname{Lip}_{1}(D,\mathbb{R}^{m}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(E)\}\]

_is residual in \(\operatorname{Lip}_{1}(D,\mathbb{R}^{m})\). Moreover, if \(n<m\), then the set_

\[\{f\in\operatorname{Lip}_{1}(D,\mathbb{R}^{m}):\mathcal{H}^{n}(f(E))=\mathcal{H}^{n}(E)\}\]

_is residual in \(\operatorname{Lip}_{1}(D,\mathbb{R}^{m})\)._

Proof.: We may reduce to the case \(\mathcal{H}^{n}(E)<\infty\) in the standard way as \(\mathcal{H}^{n}_{|E}\) is \(\sigma\)-finite. Recalling Theorem 5.2, it suffices to show that the set

\[\{f\in\operatorname{Lip}_{1}(D,\mathbb{R}^{m}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{n}>\eta\mathcal{H}^{n}(E)\}\]

is dense for every \(\eta\in(0,1)\). This follows from Theorem 5.13.

The result of Theorem 5.14 is, as many other residuality results, strange in the sense that constructing any specific examples of Lipschitz maps that satisfy the required properties is highly non-trivial. Even considering \(E=\mathbb{S}^{1}\), the unit circle embedded in \(\mathbb{R}^{2}\), it is not clear at all how to construct a \(1\)-Lipschitz map into \(\mathbb{R}\) having the tangential Jacobian equal to \(\pm 1\)\(\mathcal{H}^{1}\)-a.e. If we were allowed to have \(\operatorname{Lip}(f)=1+\varepsilon\), this would be easy (one may, for example, consider a parametrization of \(E\) of speed \(1\) and locally invert it on small intervals), but there is no natural way of sending \(\varepsilon\to 0\). 
If, in this case, we consider maps into \(\mathbb{R}^{2}\), it is once again difficult to construct any \(f\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) which is \(1\)-Lipschitz, satisfies \(\mathcal{H}^{1}(f(E))=\mathcal{H}^{1}(E)\) and is _not_ a linear isometry.

## 6 Negative results in normed spaces

While in the Euclidean space very strong results hold, this fails in more general spaces. In fact, it is enough to consider different (finite dimensional) normed spaces for some of the results to fail completely. The purpose of this section is to provide conditions on norms \(|\cdot|_{a}\) and \(|\cdot|_{b}\) so that sets of the form

\[\{f\in\operatorname{Lip}(\Omega_{a},\mathbb{R}_{b}^{m}):\mathcal{H}^{n}(f(\Omega))\geq\Delta\}\]

are _not_ dense in \(\operatorname{Lip}(\Omega_{a},\mathbb{R}_{b}^{m})\). Here the most general instance of \(\Omega\) is a bounded open set; further generality is possible but not of much interest to us. The sets being open makes several technical steps significantly easier.

Recall the definition of a strongly extremal point from Definition 2.2. We begin with a simple observation about strongly extremal points. Recall that a linear map \(P\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) is called a _linear projection_ if \(P\circ P=P\).

**Proposition 6.1**.: _Suppose \(|\cdot|_{a}\) is a norm on \(\mathbb{R}^{n}\). If \(u\in\partial B_{a}\) is strongly extremal, then it is extremal. Moreover, for \(u\in\partial B_{a}\), the following statements are equivalent:_

* _(i)_ \(u\) _is a strongly extremal point of_ \(B_{a}\)_,_
* _(ii) there exists a linear projection_ \(P\colon\mathbb{R}^{n}\to\operatorname{span}\{u\}\) _such that_ \(P^{-1}(u)\cap B_{a}=\{u\}\)_,_
* _(iii) there exists a linear projection_ \(P\colon\mathbb{R}^{n}\to\operatorname{span}\{u\}\) _such that if_ \(u_{n}\in B_{a}\) _is a sequence with_ \(P(u_{n})\to u\)_, then_ \(u_{n}\to u\)_._

Proof.: If (i) holds, then there is an affine tangent \(T\) to \(B_{a}\) at \(u\) with \(T\cap B_{a}=\{u\}\). Taking \(P\colon\mathbb{R}^{n}\to\operatorname{span}\{u\}\) to be the linear map satisfying \(P^{-1}(u)=T\), we see that both (ii) and (iii) hold. On the other hand, if (ii) or (iii) hold, we can take \(T=P^{-1}(u)\) and in either case, we get \(T\cap B_{a}=\{u\}\), which means that (i) holds. 
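The projection criterion (iii) is easy to probe numerically. In the following sketch (our own illustration; the setup is not taken from the text), \(P\) is the orthogonal projection onto \(\operatorname{span}\{e_{1}\}\) in \(\mathbb{R}^{2}\): for the Euclidean ball, points whose projection is \(\delta\)-close to \(e_{1}\) are uniformly close to \(e_{1}\), while for the sup-norm ball, whose face midpoint \(e_{1}\) is not strongly extremal, they need not be.

```python
import numpy as np

# Illustration of criterion (iii) in Proposition 6.1 with u = e1 and
# P = orthogonal projection onto span{e1}. For the Euclidean ball, points
# whose projection is delta-close to e1 are uniformly close to e1; for the
# sup-norm ball, where e1 is the midpoint of a face, they are not.
rng = np.random.default_rng(2)
delta = 0.01
pts = rng.uniform(-1.0, 1.0, size=(500_000, 2))    # dense sample of B_infty

near = pts[pts[:, 0] >= 1.0 - delta]                # P(w) close to e1
print("sup-norm ball:", np.abs(near[:, 1]).max())   # stays of order 1

in_b2 = pts[np.linalg.norm(pts, axis=1) <= 1.0]     # points of B_2
near2 = in_b2[in_b2[:, 0] >= 1.0 - delta]
print("Euclidean ball:",
      np.linalg.norm(near2 - np.array([1.0, 0.0]), axis=1).max())
# of order sqrt(2 * delta), i.e. small
```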
**Definition 6.2**.: Suppose \(n,m\in\mathbb{N}\), \(2\leq n\leq m\). Let \(|\cdot|_{a}\) be a norm on \(\mathbb{R}^{n}\) and \(|\cdot|_{b}\) a norm on \(\mathbb{R}^{m}\). For any \(u\in\mathbb{R}^{m}\) satisfying \(\|(u|0)\|_{a\to b}\leq 1\), we define the _maximal volume_ of \(u\) as the quantity \(\operatorname{mv}u=\operatorname{mv}_{a\to b}u\in[0,\infty)\) given by

\[\operatorname{mv}u=\sup\{\operatorname{vol}(u|V):V\in(\mathbb{R}^{m})^{n-1},\,\|(u|V)\|_{a\to b}\leq 1\}.\]

The following observation, stating essentially that \(\operatorname{mv}\) is upper semi-continuous, is an immediate consequence of the fact that \(\operatorname{vol}\) is \(\frac{1}{2}\)-Hölder (it is even Lipschitz, but that does require a proof) and that \(\|\cdot\|_{a\to b}\) is Lipschitz.

**Observation 6.3**.: _Suppose \(n,m\in\mathbb{N}\), \(2\leq n\leq m\). Let \(|\cdot|_{a}\) be a norm on \(\mathbb{R}^{n}\) and \(|\cdot|_{b}\) a norm on \(\mathbb{R}^{m}\). Let \(u\in\mathbb{R}^{m}\) be such that \(\|(u|0)\|_{a\to b}\leq 1\). Then, to each \(\delta>0\), there is \(\varepsilon>0\) such that for every \(\widetilde{u}\in B_{b}(u,\varepsilon)\) the following holds. If \(V\in(\mathbb{R}^{m})^{n-1}\) is such that \(\|(\widetilde{u}|V)\|_{a\to b}\leq 1\), then_

\[\operatorname{vol}(\widetilde{u}|V)\leq\operatorname{mv}u+\delta.\]

We shall also require a particular property of integral averages, which is the subject of the following lemma. As this holds in an arbitrary finite measure space, we state it in full generality.

**Lemma 6.4**.: _Let \((R,\mu)\) be a measure space with \(0<\mu(R)<\infty\) and let \(\psi\colon R\to\mathbb{R}\) be \(\mu\)-measurable. Let \(K>0\), \(\delta>0\) and \(N\in\mathbb{N}\) be given. Then there is \(\varepsilon>0\) such that if_

\[\begin{split}&\psi\leq K\quad\text{a.e. in }R\quad\text{and}\\ & K(1-\varepsilon)\leq\frac{1}{\mu(R)}\int_{R}\psi\;\mathrm{d}\mu,\end{split} \tag{6.1}\]

_then \(\mu(\{\psi\geq K-\delta\})\geq\mu(R)(1-\frac{1}{N})\)._

Proof.: Denote \(\lambda=\frac{1}{\mu(R)}\mu(\{\psi\geq K-\delta\})\). Then for every \(\varepsilon>0\), assuming (6.1) holds, one has

\[\begin{split} K(1-\varepsilon)&\leq\frac{1}{\mu(R)}\int_{R}\psi\;\mathrm{d}\mu=\frac{1}{\mu(R)}\left(\int_{\{\psi\geq K-\delta\}}\psi\;\mathrm{d}\mu+\int_{\{\psi<K-\delta\}}\psi\;\mathrm{d}\mu\right)\\ &\leq\frac{1}{\mu(R)}(\mu(R)\lambda K+(1-\lambda)\mu(R)(K-\delta))=\lambda K+(1-\lambda)(K-\delta).\end{split}\]

The above inequality is equivalent to

\[\frac{-K\varepsilon+\delta}{\delta}\leq\lambda. \tag{6.2}\]

We may find an \(\varepsilon>0\) so that the left hand side of (6.2) is greater than or equal to \(1-\frac{1}{N}\). This, however, implies that \(\lambda\geq 1-\frac{1}{N}\) and the statement follows. 
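A concrete admissible value of \(\varepsilon\) can be read off from (6.2): its left hand side is at least \(1-\frac{1}{N}\) as soon as \(\varepsilon\leq\frac{\delta}{KN}\). The sketch below tests this value on randomly generated two-valued functions; the explicit formula is our reading of the proof and is not stated in the text.

```python
import numpy as np

# Numerical sanity check of Lemma 6.4 with the admissible choice
# eps = delta / (K * N), read off from inequality (6.2).
rng = np.random.default_rng(0)
K, delta, N, M = 1.0, 0.2, 10, 100_000
eps = delta / (K * N)

for trial in range(500):
    # Two-valued test function: a random fraction p of points sit far
    # below K, the rest sit exactly at K (so psi <= K everywhere).
    p = rng.uniform(0.0, 1.0 / N)
    psi = np.where(rng.random(M) < p, K - 2.0 * delta, K)
    if psi.mean() >= K * (1.0 - eps):           # hypothesis (6.1)
        frac = np.mean(psi >= K - delta)
        assert frac >= 1.0 - 1.0 / N, (trial, p, frac)
print("Lemma 6.4 check passed with eps =", eps)
```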
**Theorem 6.5**.: _Suppose \(n,m\in\mathbb{N}\), \(2\leq n\leq m\) and denote \(Q=[-1,1]^{n}\). Let \(|\cdot|_{a}\) be a norm on \(\mathbb{R}^{n}\) and \(|\cdot|_{b}\) a norm on \(\mathbb{R}^{m}\). Let \(u\in\mathbb{R}^{m}\) be a strongly extremal point of \(B_{b}\) such that \(\|(u|0)\|_{a\to b}=1\). Then for any sequence \(g_{i}\in\operatorname{Lip}_{1}(Q_{a},\mathbb{R}^{m}_{b})\) such that \(g_{i}\to(u|0)\) it holds that_

\[\lim_{i\to\infty}\mathcal{H}^{n}(\{x\in Q:\operatorname{vol}g_{i}^{\prime}(x)>\operatorname{mv}u\})=0.\]

Proof.: Fix \(N\in\mathbb{N}\) and \(\sigma>0\). Recalling Observation 6.3 and using the fact that differentials of \(1\)-Lipschitz maps have operator norm at most \(1\) wherever they exist, it is enough to show that there exists \(\varepsilon>0\) such that if \(g\in\operatorname{Lip}_{1}(Q_{a},\mathbb{R}^{m}_{b})\) satisfies \(\|g-(u|0)\|_{\ell^{\infty}(Q_{a},\mathbb{R}^{m}_{b})}\leq\varepsilon\), then

\[\|\tfrac{\partial g}{\partial e_{1}}-u\|_{b}\leq\sigma\quad\text{on a set }M\text{ of }\mathcal{H}^{n}\text{-measure at least }2^{n}-\tfrac{2^{n}}{N}. \tag{6.3}\]

As \(u\) is strongly extremal, by Proposition 6.1, we find a linear projection \(P\colon\mathbb{R}^{m}\to\operatorname{span}\{u\}\) with the following property. Whenever \(w^{\alpha}\in B_{b}\) satisfy \(P(w^{\alpha})\to u\) as \(\alpha\to\infty\), we have \(w^{\alpha}\to u\). Hence, we may find \(\delta>0\) such that for every \(w\in B_{b}\),

\[|P(w)-u|_{b}\leq\delta\quad\text{implies}\quad|w-u|_{b}\leq\sigma. \tag{6.4}\]

Finally, fixing \(K=1\), \(R=[-1,1]\) and \(\mu=\mathcal{L}^{1}\), find \(\varepsilon>0\) from Lemma 6.4.

Fix now \(t_{2},\dots,t_{n}\in[-1,1]\) and consider the Lipschitz curve

\[\varphi(t)=g(t,t_{2},\dots,t_{n}).\]

Since \(\|(u|0)\|_{a\to b}\leq 1\) and \(u\in\partial B_{b}\), we infer that \((1,0,\dots,0)^{T}\in\partial B_{a}\), which means that the restriction of \(|\cdot|_{a}\) to \(\operatorname{span}\{(1,0,\dots,0)^{T}\}\) is the Euclidean distance. Whence, as \(\varphi\) is \(1\)-Lipschitz, we have

\[|\varphi^{\prime}(t)|_{b}\leq 1\quad\text{for a.e. }t\in[-1,1].\]

Applying the fundamental theorem of calculus, we obtain

\[\int_{-1}^{1}\varphi^{\prime}(s)\;\mathrm{d}s=\varphi(1)-\varphi(-1)\in B_{b}(2u,2\varepsilon),\]

i.e.

\[\left|\int_{-1}^{1}\varphi^{\prime}(s)\;\mathrm{d}s-2u\right|_{b}\leq 2\varepsilon,\]

hence, recalling that \(P\) has operator norm \(1\),

\[\left|\int_{-1}^{1}P(\varphi^{\prime}(s))\;\mathrm{d}s-2u\right|_{b}\leq 2\varepsilon.\]

If we now identify \(\operatorname{span}\{u\}\) with \(\mathbb{R}\) by assigning \(\lambda\in\mathbb{R}\) to \(\lambda u\), we may use the reverse triangle inequality and obtain

\[\int_{-1}^{1}P(\varphi^{\prime}(s))\;\mathrm{d}s\geq 2-2\varepsilon,\]

in the sense of the described identification. By the choice of \(\varepsilon\), we find a Borel set \(M(t_{2},\dots,t_{n})\subset[-1,1]\) with

\[\mathcal{H}^{1}(M(t_{2},\dots,t_{n}))\geq 2-\tfrac{2}{N}\]

such that

\[|P(\varphi^{\prime})-u|_{b}\leq\delta\quad\text{on }M(t_{2},\dots,t_{n}).\]

By definition, whenever \(\varphi^{\prime}\) exists, one has \(\varphi^{\prime}=\frac{\partial g}{\partial e_{1}}\). Whence, the _Borel_ set

\[M=\{x\in Q:|P(\tfrac{\partial g}{\partial e_{1}}(x))-u|_{b}\leq\delta\}\]

has the following property. For any choice of \(t_{2},\ldots,t_{n}\in[-1,1]\), the one-dimensional sections of \(M\) satisfy

\[\{t\in[-1,1]:(t,t_{2},\ldots,t_{n})\in M\}\supset M(t_{2},\ldots,t_{n}).\]

Whence Fubini's theorem gives

\[\mathcal{H}^{n}(M)\geq 2^{n}-\frac{2^{n}}{N}.\]

By the definition of \(M\) and the choice of \(\delta\) (6.4), we have

\[|\tfrac{\partial g}{\partial e_{1}}-u|_{b}\leq\sigma\quad\text{on }M,\]

which is (6.3) as we wanted.

**Corollary 6.6**.: _Suppose \(n,m\in\mathbb{N}\), \(2\leq n\leq m\). Let \(\Omega\subset\mathbb{R}^{n}\) be bounded and open. Let \(|\cdot|_{a}\) be a norm on \(\mathbb{R}^{n}\) and \(|\cdot|_{b}\) a norm on \(\mathbb{R}^{m}\). Let \(u\in\mathbb{R}^{m}\) be a strongly extremal point of \(B_{b}\) such that \(\|(u|0)\|_{a\to b}=1\). Then for any sequence \(g_{i}\in\operatorname{Lip}_{1}(\Omega_{a},\mathbb{R}^{m}_{b})\) such that \(g_{i}\to(u|0)\) it holds that_

\[\lim_{i\to\infty}\mathcal{H}^{n}(\{x\in\Omega:\operatorname{vol}g_{i}^{\prime}(x)>\operatorname{mv}u\})=0.\]

_In particular, if \(\operatorname{mv}u=0\), then the sets_

\[\{f\in\operatorname{Lip}_{1}(\Omega_{a},\mathbb{R}^{m}_{b}):\int_{\Omega}\operatorname{vol}f^{\prime}\;\mathrm{d}\mathcal{H}^{n}\geq\Delta\}\]

_and_

\[\{f\in\operatorname{Lip}_{1}(\Omega_{a},\mathbb{R}^{m}_{b}):\mathcal{H}^{n}(f(\Omega))\geq\Delta\}\]

_are dense in \(\operatorname{Lip}_{1}(\Omega_{a},\mathbb{R}^{m}_{b})\) if and only if \(\Delta=0\)._

Proof.: Any open bounded set \(\Omega\subset\mathbb{R}^{n}\) may be arbitrarily well (with respect to measure) filled with a finite set of non-overlapping cubes, thus the first statement easily follows from Theorem 6.5. The non-density of the first set then follows immediately from the first statement. The statement about the non-density of the second set follows from the area formula and the non-density of the first set. 
**Example 6.7**.: Let \(n,m\in\mathbb{N}\), \(m\geq n\) and denote \(u=(1,0,\ldots,0)^{T}\in\mathbb{R}^{m}\). It is an easy observation that \(\operatorname{mv}_{\infty\to 2}u=0\). Indeed, one may even show that for \(V\in(\mathbb{R}^{m})^{n-1}\) one has \(\|(u|V)\|_{\infty\to 2}\leq 1\) if and only if \(V=0\). This in particular shows that for any open bounded set \(\Omega\subset\mathbb{R}^{n}\), the set

\[\{f\in\operatorname{Lip}_{1}(\Omega_{\infty},\mathbb{R}^{m}_{2}):\mathcal{H}^{n}(f(\Omega))\geq\Delta\}\]

is dense in \(\operatorname{Lip}_{1}(\Omega_{\infty},\mathbb{R}^{m}_{2})\) if and only if \(\Delta=0\).

In fact, the idea of the example above can easily be used to show a far stronger statement.

**Theorem 6.8** (Restatement of Theorem 1.2).: _Let \(n,m\in\mathbb{N}\), \(m\geq n\). Suppose \(|\cdot|_{a}\) is a norm on \(\mathbb{R}^{n}\) such that \(\partial B_{a}\) contains a non-extremal point of \(B_{a}\). Suppose further that \(|\cdot|_{b}\) is an arbitrary norm on \(\mathbb{R}^{m}\). Then, for any open bounded set \(\Omega\subset\mathbb{R}^{n}\), the sets_

\[\{f\in\operatorname{Lip}_{1}(\Omega_{a},\mathbb{R}^{m}_{b}):\int_{\Omega}\operatorname{vol}f^{\prime}\;\mathrm{d}\mathcal{H}^{n}\geq\Delta\}\]

_and_

\[\{f\in\operatorname{Lip}_{1}(\Omega_{a},\mathbb{R}^{m}_{b}):\mathcal{H}^{n}(f(\Omega))\geq\Delta\}\]

_are dense in \(\operatorname{Lip}_{1}(\Omega_{a},\mathbb{R}^{m}_{b})\) if and only if \(\Delta=0\)._

Proof.: Let us denote by \(e_{1},\ldots,e_{n}\) the canonical vectors in \(\mathbb{R}^{n}\). By our assumptions, there is a point \(x\in\partial B_{a}\) which is non-extremal in \(B_{a}\). There is a linear invertible map \(A\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) such that \(A(x)=e_{1}\) and \(e_{1}+\operatorname{span}\{e_{2},\ldots,e_{n}\}\) is an affine tangent to \(A(B_{a})\). As \(A\colon(\mathbb{R}^{n},|\cdot|_{a})\to(\mathbb{R}^{n},|\cdot|_{A(a)})\) is an isometry, and the statement we are proving is invariant under isometries, we may assume that \(x=e_{1}\) and \(e_{1}+\operatorname{span}\{e_{2},\ldots,e_{n}\}\) is an affine tangent to \(B_{a}\) at \(x=e_{1}\).

As \(\partial B_{b}\) is compact, there exists \(u\in\partial B_{b}\) maximizing the quantity \(|u|_{2}\). By taking the unique supporting hyperplane to the Euclidean ball of radius \(|u|_{2}\) at \(u\), one can easily observe that \(u\) is a strongly extremal point of \(B_{b}\). Therefore, in particular, it is also extremal. Clearly \(\|(u|0)\|_{a\to b}=1\) as \((u|0)(B_{a})=\{tu:t\in[-1,1]\}\), and so \(\operatorname{mv}u\) is well defined. By Corollary 6.6, it is enough to show that \(\operatorname{mv}u=0\).

Suppose that \(V\in(\mathbb{R}^{m})^{n-1}\) is such that \(\operatorname{vol}(u|V)>0\). It suffices to prove that \(\|(u|V)\|_{a\to b}>1\). To that end, let \(l\subset B_{a}\) be any non-degenerate line segment having \(e_{1}\) as its midpoint. As \((u|V)\) is of full rank, \((u|V)l\) is a non-degenerate line segment having \(u\) as its midpoint. As \(u\) is extremal in \(B_{b}\), this implies that \((u|V)l\not\subset B_{b}\), which, as \(l\subset B_{a}\), implies \(\|(u|V)\|_{a\to b}>1\) and we are done. 
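The degeneracy exploited in Example 6.7 is also easy to confirm numerically. Since \(x\mapsto|Mx|_{2}\) is convex, the \(\ell^{\infty}\to\ell^{2}\) operator norm of a matrix \(M\) is attained at a vertex of the cube, so it can be computed exactly by enumerating sign vectors. The sketch below (with illustrative dimensions and random data) shows that appending any nonzero block \(V\) to \(u=e_{1}\) pushes the norm above \(1\).

```python
import itertools
import numpy as np

# Numerical illustration of Example 6.7: for u = e1 in R^m, any nonzero
# completion V forces the ell^infty -> ell^2 operator norm of (u|V)
# strictly above 1. The norm is computed exactly over the extreme points
# of the cube, since x -> |Mx|_2 is convex on B_infty.
def op_norm_inf_to_2(M):
    n = M.shape[1]
    return max(np.linalg.norm(M @ np.array(signs))
               for signs in itertools.product((-1.0, 1.0), repeat=n))

n, m = 3, 4
rng = np.random.default_rng(1)
u = np.zeros(m)
u[0] = 1.0

M0 = np.column_stack([u] + [np.zeros(m)] * (n - 1))
print(op_norm_inf_to_2(M0))   # exactly 1: the supremum in mv is attained at V = 0

V = 0.1 * rng.standard_normal((m, n - 1))
M = np.column_stack([u, *V.T])
print(op_norm_inf_to_2(M))    # strictly larger than 1 for nonzero V
```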
## 7 Results in metric spaces

The goal of this final section is to present results in general metric spaces. Of course, the "generality" here is fairly limited by the fact that even in the case of normed spaces, the relevant results simply need not be true. Therefore, our positive results concentrate on \(n\)-rectifiable metric spaces whose \(\mathcal{H}^{n}\)-a.a. approximate tangents are \(\lambda\)-inflating.

**Definition 7.1**.: Given \(n\in\mathbb{N}\), we denote by \(\mathcal{N}(n)\) the set of all norms on \(\mathbb{R}^{n}\) and by \(\sim\) we understand the equivalence relation on \(\mathcal{N}(n)\) given by \(|\cdot|_{a_{1}}\sim|\cdot|_{a_{2}}\) if and only if there is an invertible linear map \(A\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) such that \(A(B_{a_{1}})=B_{a_{2}}\). The space \(\mathcal{N}(n)/_{\sim}\) is called the _Banach-Mazur compactum_. Given \(\lambda>0\), \(n\leq m\in\mathbb{N}\) and a norm \(|\cdot|_{b}\) on \(\mathbb{R}^{m}\), we let

\[\mathcal{N}^{b}_{\mathrm{infl}(\lambda)}(n)=\{|\cdot|_{a}\in\mathcal{N}(n):(|\cdot|_{a},|\cdot|_{b})\text{ forms a }\lambda\text{-inflating pair}\}.\]

Recall that the sets \(\widetilde{E}_{i}\) are pairwise disjoint and compact. Using the property described in Lemma 2.6 separately for each \(i\), we may suitably re-index so as to obtain, for each \(\varepsilon_{0}>0\) and each \(\theta>0\), a finite family of sets \(G_{j}\subset\widetilde{E}\), \(j\in\{1,\ldots,j_{0}\}\) such that

* (a) each \(G_{j}\) is open in \(\widetilde{E}\),
* (b) \(\mathcal{H}^{n}(\widetilde{E}\setminus\bigcup_{j=1}^{j_{0}}G_{j})<\varepsilon_{0}\),
* (c) the sets \(\overline{G_{j}}^{\widetilde{E}}\) are pairwise disjoint,
* (d) for each \(j\in\{1,\ldots,j_{0}\}\), there is a norm \(|\cdot|_{a_{j}}\in\mathcal{N}^{b}_{\mathrm{infl}(\lambda)}(n)\), a set \(F_{j}\subset F\) and a map \(I_{j}\colon(F_{j},|\cdot|_{a_{j}})\to G_{j}\) which is a \((1+\theta)\)-biLipschitz bijection.

For an open non-empty set \(G\subset\widetilde{E}\) and \(\sigma>0\), we let

\[G^{\sigma}=\{\xi\in G:d_{X}(\xi,\widetilde{E}\setminus G)\geq\sigma\}.\]

We shall need two properties of this construction. Firstly, by continuity of measure, it holds that

\[\lim_{\sigma\to 0}\mathcal{H}^{n}(G\setminus G^{\sigma})=0. \tag{7.3}\]

Secondly, one has \(B_{\widetilde{E}}(G^{\sigma},\sigma)\subset\overline{G}^{\widetilde{E}}\). Here, for convenience, we define \(B_{\widetilde{E}}(\emptyset,\sigma)=\emptyset\). Therefore, by (c), the sets \(B_{\widetilde{E}}(G_{j}^{\sigma},\sigma)\) are disjoint. As there is a finite number of sets \(G_{j}\), using (7.3), there is some \(\sigma>0\) such that the \(B_{\widetilde{E}}(G_{j}^{\sigma},\sigma)\) are disjoint and

\[\mathcal{H}^{n}(\widetilde{E}\setminus\bigcup_{j=1}^{j_{0}}G_{j}^{\sigma})<\varepsilon_{0}. \tag{7.4}\]

Let \(0<\delta_{0}\leq\delta\) be such that \((1+\theta)^{2}L_{0}+2\frac{\delta_{0}}{\sigma}\leq 1\). Fix \(j\in\{1,\ldots,j_{0}\}\) and let \(\widetilde{f}_{j}\colon F_{j}\to\mathbb{R}_{b}^{m}\) be given by \(\widetilde{f}_{j}=f\circ I_{j}\). 
Now

\[\widetilde{f}_{j}\in\operatorname{Lip}_{(1+\theta)L_{0}}((F_{j},|\cdot|_{a_{j}}),\mathbb{R}_{b}^{m}),\]

whence we may use Theorem 5.10 (we require only density) to find \(\widetilde{g}_{j}\in\operatorname{Lip}_{(1+\theta)L_{0}}((F_{j},|\cdot|_{a_{j}}),\mathbb{R}_{b}^{m})\) such that \(\|\widetilde{f}_{j}-\widetilde{g}_{j}\|_{\ell^{\infty}(F_{j})}\leq\delta_{0}\) and

\[\int_{F_{j}}J_{F_{j}}\widetilde{g}_{j}\;\mathrm{d}\mathcal{H}^{n}\geq\lambda\operatorname{vol}(|\cdot|_{a_{j}})\mathcal{H}_{|\cdot|_{2}}^{n}(F_{j})=\lambda\mathcal{H}_{a_{j}}^{n}(F_{j}).\]

Let \(g_{j}\colon G_{j}\to\mathbb{R}_{b}^{m}\) be given by \(g_{j}=\widetilde{g}_{j}\circ I_{j}^{-1}\). Then \(g_{j}\in\operatorname{Lip}_{(1+\theta)^{2}L_{0}}(G_{j},\mathbb{R}_{b}^{m})\) and, by the area formula, it satisfies

\[\begin{split}\int_{G_{j}}J_{G_{j}}g_{j}\;\mathrm{d}\mathcal{H}^{n}&=\int_{g_{j}(G_{j})}\#g_{j}^{-1}(u)\;\mathrm{d}\mathcal{H}^{n}(u)=\int_{\widetilde{g}_{j}(F_{j})}\#\widetilde{g}_{j}^{-1}(u)\;\mathrm{d}\mathcal{H}^{n}(u)\\ &=\int_{F_{j}}J_{F_{j}}\widetilde{g}_{j}\;\mathrm{d}\mathcal{H}^{n}\geq\lambda\mathcal{H}_{a_{j}}^{n}(F_{j})\geq\frac{\lambda}{1+\theta}\mathcal{H}_{X}^{n}(G_{j}).\end{split} \tag{7.5}\]

Now we may use Lemma 4.1 to find a function \(g\colon\widetilde{E}\to\mathbb{R}_{b}^{m}\) such that \(\left\|g-f\right\|_{\ell^{\infty}(\widetilde{E})}\leq\delta\) and \(g=g_{j}\) on each \(G_{j}^{\sigma}\). Moreover, we may require \(\operatorname{Lip}(g)\leq(1+\theta)^{2}L_{0}+2\frac{\delta_{0}}{\sigma}\leq 1\). It remains to show that (7.1) holds. Using the disjointness of the \(G_{j}\)'s, then (7.5), then (7.4) together with the bound \(\operatorname*{ess\,sup}_{E}J_{E}g\leq K_{b}\), and finally the choice of \(\varepsilon_{0}\), we may estimate

\[\begin{split}\int_{\widetilde{E}}J_{\widetilde{E}}g\;\mathrm{d}\mathcal{H}^{n}&\geq\sum_{j=1}^{j_{0}}\int_{G_{j}}J_{G_{j}}g\;\mathrm{d}\mathcal{H}^{n}\geq\sum_{j=1}^{j_{0}}\int_{G_{j}^{\sigma}}J_{G_{j}}g_{j}\;\mathrm{d}\mathcal{H}^{n}\\ &=\sum_{j=1}^{j_{0}}\int_{G_{j}}J_{G_{j}}g_{j}\;\mathrm{d}\mathcal{H}^{n}-\sum_{j=1}^{j_{0}}\int_{G_{j}\setminus G_{j}^{\sigma}}J_{G_{j}}g_{j}\;\mathrm{d}\mathcal{H}^{n}\\ &\geq\sum_{j=1}^{j_{0}}\frac{\lambda}{1+\theta}\mathcal{H}_{X}^{n}(G_{j})-\Big{(}\operatorname*{ess\,sup}_{E}J_{E}g\Big{)}\mathcal{H}^{n}(\widetilde{E}\setminus\bigcup_{j=1}^{j_{0}}G_{j}^{\sigma})\\ &>\sum_{j=1}^{j_{0}}\frac{\lambda}{1+\theta}\mathcal{H}_{X}^{n}(G_{j})-K_{b}\varepsilon_{0}\geq\frac{\lambda}{1+\theta}(\mathcal{H}_{X}^{n}(\widetilde{E})-\varepsilon_{0})-K_{b}\varepsilon_{0}\\ &\geq\eta\lambda\mathcal{H}_{X}^{n}(\widetilde{E}).\end{split}\]

**Theorem 7.4**.: _Suppose that \(n,m\in\mathbb{N}\), \(n\leq m\), \(X\) is a complete metric space and \(E\subset X\) is an \(n\)-rectifiable subset. Suppose \(|\cdot|_{b}\) is a norm on \(\mathbb{R}^{m}\). Let \(\lambda>0\) and assume that for \(\mathcal{H}^{n}\)-a.e. \(\xi\in E\), one has_

\[T(E,\xi)\in\mathcal{N}^{b}_{\mathrm{infl}(\lambda)}(n).\]

_Then for each \(\varepsilon>0\), there is a set \(\widetilde{E}\subset E\) with \(\mathcal{H}^{n}(E\setminus\widetilde{E})<\varepsilon\) and such that the set_

\[\{f\in\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}_{b}^{m}):\int_{\widetilde{E}}J_{\widetilde{E}}f\;\mathrm{d}\mathcal{H}^{n}\geq\lambda\mathcal{H}^{n}(\widetilde{E})\}\]

_is residual in \(\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}_{b}^{m})\). 
Moreover, if \(m>n\), then the set_

\[\{f\in\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}_{b}^{m}):\mathcal{H}^{n}(f(\widetilde{E}))\geq\lambda\mathcal{H}^{n}(\widetilde{E})\}\]

_is residual in \(\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}_{b}^{m})\)._

Proof.: Once again, we may reduce to the case \(\mathcal{H}^{n}(E)<\infty\) as \(\mathcal{H}_{|E}^{n}\) is \(\sigma\)-finite. Due to Theorem 5.2, it is sufficient to show density of

\[\{f\in\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}_{b}^{m}):\int_{\widetilde{E}}J_{\widetilde{E}}f\;\mathrm{d}\mathcal{H}^{n}>\lambda(1-\tfrac{1}{i})\mathcal{H}^{n}(\widetilde{E})\}\]

for each \(i\in\mathbb{N}\). However, this is Theorem 7.3.

Recall the definition of strongly \(n\)-rectifiable sets, Definition 2.10, and the subsequent characterisation, Lemma 2.11. For these spaces we have the following result in the spirit of Theorem 5.13. In relation to this, note in particular that any \(1\)-rectifiable metric subset of a complete metric space is also strongly \(1\)-rectifiable, since \(\mathcal{N}(1)=[|\cdot|_{2}]\), where \(|\cdot|_{2}\) is the Euclidean norm (absolute value) on \(\mathbb{R}\).

**Corollary 7.5**.: _Suppose \(n\in\mathbb{N}\) and assume that \(E\) is a strongly \(n\)-rectifiable metric space. Then, to each \(\varepsilon>0\), there is a set \(\widetilde{E}\subset E\) satisfying \(\mathcal{H}^{n}(E\setminus\widetilde{E})<\varepsilon\) such that for every \(m\geq n\), the set_

\[\{f\in\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}^{m}):\int_{\widetilde{E}}J_{\widetilde{E}}f\;\mathrm{d}\mathcal{H}^{n}=\mathcal{H}^{n}(\widetilde{E})\}\]

_is residual in \(\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}^{m})\). Moreover, for any \(m>n\), the set_

\[\{f\in\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}^{m}):\mathcal{H}^{n}(f(\widetilde{E}))=\mathcal{H}^{n}(\widetilde{E})\}\]

_is residual in \(\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}^{m})\). In particular, for any \(m\geq n\) a typical \(f\in\mathrm{Lip}_{1}(\widetilde{E},\mathbb{R}^{m})\) satisfies \(J_{\widetilde{E}}f=1\)\(\mathcal{H}^{n}\)-a.e. in \(\widetilde{E}\)._

Proof.: Recalling Example 5.7, we see that \((\mathbb{R}^{n}_{2},\mathbb{R}^{m}_{2})\) forms a \(1\)-inflating pair for every \(m\geq n\). Therefore the statement follows from Theorem 7.4.

In the one-dimensional case, if one assumes also the target dimension \(m\) to be equal to \(1\), it is possible to use McShane's extension to obtain the following.

**Theorem 7.6**.: _Suppose \(X\) is a complete metric space and \(E\) a \(1\)-rectifiable subset with \(\mathcal{H}^{1}(E)<\infty\). Then_

\[\{f\in\operatorname{Lip}_{1}(X,\mathbb{R}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{1}=\mathcal{H}^{1}(E)\}\]

_is residual in \(\operatorname{Lip}_{1}(X,\mathbb{R})\). In particular, a typical \(f\in\operatorname{Lip}_{1}(X,\mathbb{R})\) satisfies \(J_{E}f=1\)\(\mathcal{H}^{1}\)-a.e. in \(E\)._

Proof.: Every \(1\)-rectifiable metric subspace of a complete metric space is strongly \(1\)-rectifiable. Therefore Corollary 7.5 together with McShane's extension implies density of the set

\[\{f\in\operatorname{Lip}_{1}(X,\mathbb{R}):\int_{E}J_{E}f\;\mathrm{d}\mathcal{H}^{1}=\mathcal{H}^{1}(E)\}.\]

Whence Theorem 5.2 yields the result.

**Theorem 7.7**.: _Suppose \(X\) is a complete metric space and \(E\) is its strongly \(n\)-rectifiable subspace. Denote by \(E^{*}\) the set of points of \(E\) where the approximate tangent to \(E\) exists and is Euclidean. Let \(k\in\mathbb{N}\), \(k\leq n\) and suppose \(K\subset E^{*}\) is \(k\)-rectifiable in \(X\) (or equivalently in \(E\) or \(E^{*}\)). 
Then \(K\) is strongly \(k\)-rectifiable._

Proof.: Suppose \(x\in K\) is such that \(T(K,x)\) exists. We show that \(T(K,x)=[|\cdot|_{\mathbb{R}^{k}_{2}}]\). Let \(|\cdot|_{a}\in T(K,x)\). To each \(\theta>0\), we find \(r>0\), Borel sets \(\widetilde{K}\subset K\), \(\widetilde{E}\subset E^{*}\), \(H_{r}\subset\mathbb{R}^{k}\), \(F_{r}\subset\mathbb{R}^{n}\) and maps \(I_{r}\colon(H_{r},|\cdot|_{a})\to\widetilde{K}\cap B(x,r)\), \(J_{r}\colon(F_{r},|\cdot|_{2})\to\widetilde{E}\cap B(x,r)\) such that

* (a) \(x\) is an \(\mathcal{H}^{k}\)-density point of \(\widetilde{K}\),
* (b) \(x\) is an \(\mathcal{H}^{n}\)-density point of \(E^{*}\),
* (c) both \(I_{r}\) and \(J_{r}\) are \((1+\theta)\)-biLipschitz.

Moreover, this may be done in such a way that \(\widetilde{K}\subset\widetilde{E}\). Let now \(\theta>0\) be fixed and find the \(r>0\) from above. Let \(\iota=J_{r}^{-1}\circ I_{r}\colon H_{r}\to F_{r}\) and observe that \(\iota\) is a well defined \((1+\theta)^{2}\)-biLipschitz map. There exists a density point \(y\) of \(H_{r}\) such that \(\iota(y)\) is a density point of \(F_{r}\) and both \(\iota^{\prime}(y)\) and \((\iota^{-1})^{\prime}(\iota(y))\) exist. In that case, it is necessary that \(\iota^{\prime}(y)\colon\mathbb{R}^{k}\to\mathbb{R}^{n}\) is a linear map and \(\|\iota^{\prime}(y)\|_{a\to 2}\leq(1+\theta)^{2}\). Moreover, \((\iota^{-1})^{\prime}(\iota(y))=(\iota^{\prime}(y))^{-1}\) and so

\[\|(\iota^{\prime}(y))^{-1}\|_{\iota^{\prime}(y)(\mathbb{R}^{k})\to\mathbb{R}^{k}_{a}}\leq(1+\theta)^{2}.\]

All in all, \(\iota^{\prime}(y)\) is a \((1+\theta)^{2}\)-biLipschitz linear map from \(\mathbb{R}^{k}_{a}\) onto a \(k\)-dimensional linear subspace of \(\mathbb{R}^{n}\). By sending \(\theta\to 0\), and observing that all \(k\)-dimensional subspaces of \(\mathbb{R}^{n}\) are linearly isometric to \((\mathbb{R}^{k},|\cdot|_{2})\), we obtain that \((\mathbb{R}^{k},|\cdot|_{a})\) is linearly isometric to \((\mathbb{R}^{k},|\cdot|_{2})\), as we wanted.

The preceding theorem asserts that if our ambient metric space is strongly \(n\)-rectifiable, then all \(k\)-rectifiable subsets of \(E^{*}\) are strongly \(k\)-rectifiable, which, in combination with Corollary 7.5, yields the following corollary.

**Corollary 7.8**.: _Suppose \(n\in\mathbb{N}\) and let \(E\) be a strongly \(n\)-rectifiable subspace of a complete metric space \(X\). Suppose \(k\in\mathbb{N}\), \(k\leq n\). Then, for any \(k\)-rectifiable subset \(K\) of_

\[E^{*}=\{x\in E:T(E,x)\text{ exists and is Euclidean}\}\]

_with \(\mathcal{H}^{k}(K)<\infty\), we have the following. To each \(\varepsilon>0\), there is a set \(\widetilde{K}\subset K\) satisfying \(\mathcal{H}^{k}(K\setminus\widetilde{K})<\varepsilon\) such that for every \(m\geq k\), the set_

\[\{f\in\operatorname{Lip}_{1}(\widetilde{K},\mathbb{R}^{m}):\int_{\widetilde{K}}J_{\widetilde{K}}f\;\mathrm{d}\mathcal{H}^{k}=\mathcal{H}^{k}(\widetilde{K})\}\]

_is residual in \(\operatorname{Lip}_{1}(\widetilde{K},\mathbb{R}^{m})\). Moreover, for any \(m>k\), the set_

\[\{f\in\operatorname{Lip}_{1}(\widetilde{K},\mathbb{R}^{m}):\mathcal{H}^{k}(f(\widetilde{K}))=\mathcal{H}^{k}(\widetilde{K})\}\]

_is residual in \(\operatorname{Lip}_{1}(\widetilde{K},\mathbb{R}^{m})\). In particular, for any \(m\geq k\) a typical \(f\in\operatorname{Lip}_{1}(\widetilde{K},\mathbb{R}^{m})\) satisfies \(J_{\widetilde{K}}f=1\)\(\mathcal{H}^{k}\)-a.e. in \(\widetilde{K}\)._

In case \(k=n\), it suffices to assume \(K\subset E\) instead of \(K\subset E^{*}\), as the exceptional set is \(\mathcal{H}^{k}\)-null. 
**Remark 7.9**.: We bring to attention a particularly important example of a strongly \(n\)-rectifiable metric space. The so-called RCD spaces (see [2] for relevant definitions) are metric measure spaces \((X,\mu)\) such that \(X\) is strongly \(n\)-rectifiable for some \(n\in\mathbb{N}\) (this follows by the combination of [12, Theorem 1.3] and [6, Theorem 0.1]) and \(\mu\ll\mathcal{H}^{n}\) [2, Theorem 8.1] (see also [9, 8]). The fact that \(\mu\ll\mathcal{H}^{n}\) is particularly useful in connection with the moreover part of Corollary 7.8, as one then obtains

\[\int_{\widetilde{K}}J_{\widetilde{K}}f(x)F(x)\;\mathrm{d}\mathcal{H}^{n}(x)=\mu(\widetilde{K})\]

for a typical \(1\)-Lipschitz \(f\). Here \(F\) is the Radon-Nikodym derivative of \(\mu\) with respect to \(\mathcal{H}^{n}\).
2302.04765
Global Stability of a PDE-ODE model for acid-mediated tumor invasion
In this paper, we study the global dynamics of a general reaction-diffusion model based on acid-mediated invasion hypothesis, which is a candidate explanation for the Warburg effect. A key feature of this model is the density-limited tumor diffusion term for tumor cells, which might give rise to the degeneracy of the parabolic equation. Our theoretical results characterize the effects of acid resistance and mutual competition of healthy cells and tumor cells on tumor progression in the long term, i.e., whether the healthy cells and tumor cells coexist or the tumor cells prevail after tumor invasion. The approach relies on the construction of suitable Lyapunov functionals and upper/lower solutions.
Fang Li, Zheng-an Yao, Ruijia Yu
2023-02-09T16:58:07Z
http://arxiv.org/abs/2302.04765v2
# Global Stability of a PDE-ODE model for acid-mediated tumor invasion

###### Abstract

In this paper, we study the global dynamics of a general reaction-diffusion model based on the acid-mediated invasion hypothesis, which is a candidate explanation for the Warburg effect. A key feature of this model is the density-limited tumor diffusion term for tumor cells, which might give rise to the degeneracy of the parabolic equation. Our theoretical results characterize the effects of acid resistance and mutual competition of healthy cells and tumor cells on tumor progression in the long term, i.e., whether the healthy cells and tumor cells coexist or the tumor cells prevail after tumor invasion. The approach relies on the construction of suitable Lyapunov functionals and upper/lower solutions. This paper continues and improves the work begun in [20].

**Keywords**: Reaction-diffusion systems; Lyapunov functional; Global stability

**MSC (2020)**: Primary: 35B35, 35B40, 92C17; Secondary: 35K55, 35K57

## 1 Introduction

In this paper, to understand tumor progression in the long term, we mainly investigate the global dynamics of a reaction-diffusion model in cancer invasion proposed by McGillen et al. in [16]

\[\left\{\begin{aligned} & u_{t}=u\left(1-u-a_{2}v\right)-d_{1}uw,& x\in\Omega,t>0,\\ & v_{t}=D\nabla\cdot\left((1-u)\nabla v\right)+rv\left(1-a_{1}u-v\right)-d_{2}vw,& x\in\Omega,t>0,\\ & w_{t}=\Delta w+c(v-w),& x\in\Omega,t>0,\\ &\partial_{\nu}v=\partial_{\nu}w=0,& x\in\partial\Omega,t>0,\\ & u(x,0)=u_{0}(x),\ v(x,0)=v_{0}(x),\ w(x,0)=w_{0}(x),& x\in\Omega,\end{aligned}\right. \tag{1.1}\]

where \(\Omega\) is a smooth and bounded domain in \(\mathbb{R}^{n}\), \(\nu\) denotes the unit outward normal vector on \(\partial\Omega\), and \(u,v,w\) represent the density functions of healthy cells, tumor cells and lactic acid in the tissue microenvironment, respectively. Also, \(D\), \(d_{1}\), \(d_{2}\), \(r\), \(c\), \(a_{1}\), \(a_{2}\) are positive non-dimensional parameters, where \(D\) is the diffusion rate of tumor cells, \(d_{1}\) and \(d_{2}\) are the death rates of healthy cells and tumor cells caused by the lactic acid, respectively, and \(a_{1}\) and \(a_{2}\) represent the competition coefficients.

The formulation of the model (1.1) is based on the acid-mediated invasion hypothesis [9], which is a candidate explanation for the Warburg effect [21], a widespread preference in tumors for cytosolic glycolysis rather than oxidative phosphorylation for glucose breakdown. Altered glucose metabolism in tumor cells plays a critical role in cancer biology; through this process, the tumor cells change the microenvironment by producing acid [7]. Microenvironmental acidosis is toxic to normal cells since it could lead to cellular necrosis and apoptosis [18, 22]. The acid-mediated invasion hypothesis is motivated by viewing the tumor as an invasive species, which gains a powerful selective advantage by producing lactic acid into the microenvironment, since tumor cells acquire resistance to acidification of the microenvironment while healthy cells are acid-sensitive [9]. 
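Before turning to the analysis, it may help to see the model in action. The following sketch integrates (1.1) in one space dimension by a simple method of lines with homogeneous Neumann boundary conditions; the parameter values, grid and initial data are illustrative choices of ours and are not taken from the paper.

```python
import numpy as np

# Minimal 1-D method-of-lines sketch of system (1.1) on Omega = (0, 1).
# All parameter values below are illustrative only.
D, d1, d2, r, c = 1.0, 0.5, 0.5, 1.0, 1.0
a1, a2 = 0.5, 0.5

N, L = 200, 1.0
h = L / N
x = np.linspace(0.0, L, N + 1)

# Initial data in the spirit of (1.2): healthy tissue with a small tumor seed.
u = 0.9 * np.ones_like(x)
v = 0.1 * np.exp(-100.0 * x**2)
w = 0.1 * np.exp(-100.0 * x**2)

def neumann_laplacian(f):
    # Second difference with homogeneous Neumann boundary (ghost points).
    g = np.empty_like(f)
    g[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
    g[0] = 2.0 * (f[1] - f[0])
    g[-1] = 2.0 * (f[-2] - f[-1])
    return g / h**2

def degenerate_diffusion(u, v):
    # Discretization of (D (1-u) v_x)_x with fluxes at cell interfaces
    # and zero flux through the two boundary points.
    flux = D * 0.5 * ((1 - u[:-1]) + (1 - u[1:])) * (v[1:] - v[:-1]) / h
    div = np.zeros_like(v)
    div[1:-1] = (flux[1:] - flux[:-1]) / h
    div[0] = flux[0] / h
    div[-1] = -flux[-1] / h
    return div

dt, T = 1.0e-5, 2.0   # explicit Euler; dt is kept below h^2 / (2 max(D, 1))
for _ in range(int(T / dt)):
    du = u * (1 - u - a2 * v) - d1 * u * w
    dv = degenerate_diffusion(u, v) + r * v * (1 - a1 * u - v) - d2 * v * w
    dw = neumann_laplacian(w) + c * (v - w)
    u += dt * du
    v += dt * dv
    w += dt * dw

print("final ranges:", u.min(), u.max(), v.min(), v.max(), w.min(), w.max())
```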
Gatenby and Gawlinski were the first to put this hypothesis into a reaction-diffusion framework [8] as follows:

\[\left\{\begin{aligned} & u_{t}=u\left(1-u\right)-d_{1}uw,& x\in\Omega,t>0,\\ & v_{t}=D\nabla\cdot\left((1-u)\nabla v\right)+rv\left(1-v\right),& x\in\Omega,t>0,\\ & w_{t}=\Delta w+c(v-w),& x\in\Omega,t>0,\\ &\partial_{\nu}v=\partial_{\nu}w=0,& x\in\partial\Omega,t>0,\\ & u(x,0)=u_{0}(x),\ v(x,0)=v_{0}(x),\ w(x,0)=w_{0}(x),& x\in\Omega.\end{aligned}\right.\]

A key feature of the Gatenby-Gawlinski model is the density-limited tumor diffusion term in the second equation, which indicates that the tumor will be spatially constrained when the healthy cells reach the carrying capacity. The studies of the Gatenby-Gawlinski model suggest that acidity may play an important role in tumor progression [11]. In 2006, Gatenby et al. furthered their work by generalizing the Gatenby-Gawlinski model and comparing the numerical results with the experimental results. From a mathematical point of view, their work confirms that the acid-mediated tumor invasion model could make detailed predictions [10]. Later, in order to further the understanding of acid-mediated invasion and capture a wider range of tumor behaviors which may be clinically relevant, the model (1.1) was proposed on the basis of the Gatenby-Gawlinski model by incorporating terms representing the mutual competition between healthy cells and tumor cells, and the acid-mediated tumor cell death.

In [16], the invasive behaviors of tumor cells are characterized by numerical methods and an asymptotic traveling wave analysis. Among other things, the linear stability of the steady states in the model (1.1), which reflects the invasive and non-invasive behaviors of tumors, is analyzed in [16] as follows:

* a trivial absence of all species, \((u,v,w)=(0,0,0)\), globally unstable;
* a healthy state, \((u,v,w)=(1,0,0)\), linearly unstable if \(a_{1}<1\) and linearly stable if \(a_{1}>1\);
* a homogeneous tumor state, \[(u,v,w)=\left(0,\left(1+\frac{d_{2}}{r}\right)^{-1},\left(1+\frac{d_{2}}{r}\right)^{-1}\right),\] linearly unstable if \(\frac{d_{2}}{r}>a_{2}+d_{1}-1\) and linearly stable if \(\frac{d_{2}}{r}<a_{2}+d_{1}-1\);
* a heterogeneous state, \((u,v,w)=(u^{*},v^{*},w^{*})\), where \(u^{*}>0,v^{*}>0,w^{*}>0\). Direct computation yields that \((u^{*},v^{*},w^{*})=(1-(a_{2}+d_{1})v_{h},v_{h},v_{h})\), where \[v_{h}:=\frac{1-a_{1}}{1-a_{1}a_{2}+\frac{d_{2}}{r}-a_{1}d_{1}},\quad a_{1}\neq 0.\] Moreover,
  * if \(a_{1}>1\), then \(u^{*},\,v^{*},\,w^{*}\) are positive if and only if \(\frac{d_{2}}{r}<a_{2}+d_{1}-1\). Also, \((u^{*},v^{*},w^{*})\) is linearly unstable;
  * if \(a_{1}<1\), then \(u^{*},\,v^{*},\,w^{*}\) are positive if and only if \(\frac{d_{2}}{r}>a_{2}+d_{1}-1\). Moreover, \((u^{*},v^{*},w^{*})\) is linearly stable.

Based on the analysis of linear stability, when \(a_{1}<1\), the healthy state is locally unstable. Thus tumor invasion could happen, and naturally the heterogeneous state and the homogeneous tumor state are two possible outcomes of the invasive behaviors of tumors. To further understand and characterize tumor progression in the long term, i.e., whether the healthy and tumor cells coexist or the tumor cells prevail after tumor invasion, we analyze the global dynamics of the model (1.1). This issue is much more complicated and far from being understood. 
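The stability table above concerns the linearization of the spatially homogeneous kinetics of (1.1) and is easy to confirm numerically. The sketch below evaluates the Jacobian of the reaction terms at the healthy and homogeneous tumor states for illustrative parameter values with \(a_{1}<1\); both states then admit an eigenvalue with positive real part, consistent with the conditions quoted from [16].

```python
import numpy as np

# Jacobian of the reaction kinetics of (1.1); a quick numerical check of the
# linear stability table from [16]. Parameter values are illustrative.
def jacobian(u, v, w, a1, a2, d1, d2, r, c):
    return np.array([
        [1 - 2*u - a2*v - d1*w, -a2*u,                     -d1*u],
        [-r*a1*v,               r*(1 - a1*u - 2*v) - d2*w, -d2*v],
        [0.0,                   c,                         -c],
    ])

a1, a2, d1, d2, r, c = 0.5, 0.5, 0.5, 0.5, 1.0, 1.0
vt = 1.0 / (1.0 + d2 / r)   # homogeneous tumor state (0, vt, vt)
states = {
    "healthy state (1,0,0)":      (1.0, 0.0, 0.0),
    "homogeneous tumor (0,vt,vt)": (0.0, vt, vt),
}
for name, (u, v, w) in states.items():
    eig = np.linalg.eigvals(jacobian(u, v, w, a1, a2, d1, d2, r, c))
    print(name, "max Re(eig) =", eig.real.max())  # positive in both cases here
```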
In this paper, we focus on studying the global stability of the three nontrivial steady states:

* the healthy state \((1,0,0)\),
* the homogeneous tumor state \(\left(0,\left(1+\frac{d_{2}}{r}\right)^{-1},\left(1+\frac{d_{2}}{r}\right)^{-1}\right)\),
* the heterogeneous state \((u^{*},v^{*},w^{*})\),

when they exist and are locally stable, and we manage to characterize the ranges of the parameters in the model (1.1) where _the local stability implies the global stability._ For clarity, throughout this paper, we always assume that the initial data \(u_{0},v_{0},w_{0}\) satisfy the condition

\[\begin{cases}u_{0}\in W^{2,\infty}(\Omega),\ 0<u_{0}<1\ \text{in}\ \bar{\Omega},\\ v_{0}\in W^{2,\infty}(\Omega),\ v_{0}>0\ \text{in}\ \bar{\Omega},\\ w_{0}\in W^{2,\infty}(\Omega),\ w_{0}>0\ \text{in}\ \bar{\Omega}.\end{cases} \tag{1.2}\]

In [20], the global stability of the heterogeneous state \((u^{*},v^{*},w^{*})\) is studied and partial results are obtained.

**Theorem A** ([20]).: _Let \(a_{1}>0\), \(a_{2}>0\), \(d_{1}\geq 0\), \(d_{2}\geq 0\), \(r>0\), \(D>0\), \(c>0\) satisfy_

\[a_{1}<1,\ \frac{d_{2}}{r}>a_{2}+d_{1}-1\ \text{and} \tag{1.3}\]

\[\frac{d_{2}}{r}<1-a_{1}a_{2}-a_{1}d_{1}, \tag{1.4}\]

_then the solution \((u,v,w)\) to the problem (1.1) exists globally in time and enjoys the property that_

\[||u(\cdot,t)-u^{*}||_{L^{\infty}(\Omega)}+||v(\cdot,t)-v^{*}||_{L^{\infty}(\Omega)}+||w(\cdot,t)-w^{*}||_{L^{\infty}(\Omega)}\to 0\quad\text{exponentially as}\quad t\rightarrow\infty.\]

Our first main result greatly improves Theorem A regarding the global stability of the heterogeneous state \((u^{*},v^{*},w^{*})\).

**Theorem 1.1**.: _In the system (1.1), assume that the non-dimensional parameters \(D\), \(d_{1}\), \(d_{2}\), \(r\), \(c\), \(a_{1}\), \(a_{2}\) are positive, the initial data \((u_{0},v_{0},w_{0})\) satisfies (1.2), \(a_{1}<1\) and \(a_{1}a_{2}<1\). Also assume that one of the following assumptions holds:_

_(i)_

\[d_{1}\leq d_{1}^{h}\quad\text{and}\quad\frac{d_{2}}{r}>d_{1}+a_{2}-1, \tag{1.5}\]

_(ii)_

\[d_{1}>d_{1}^{h}\quad\text{and}\quad\frac{d_{2}}{r}>d_{2}^{h}, \tag{1.6}\]

_where_

\[d_{1}^{h}=\left(\frac{1+\sqrt{1-a_{1}a_{2}}}{1-\sqrt{1-a_{1}}}\right)^{2}-a_{2}, \tag{1.7}\]

\[d_{2}^{h}=\frac{1}{4}\left(\frac{a_{1}(a_{2}+d_{1})}{1+\sqrt{1-a_{1}a_{2}}}+1+\sqrt{1-a_{1}a_{2}}\right)^{2}-1, \tag{1.8}\]

_then the solution \((u,v,w)\) to the system (1.1) exists globally and enjoys the property that_

\[||u(\cdot,t)-u^{*}||_{L^{\infty}(\Omega)}+||v(\cdot,t)-v^{*}||_{L^{\infty}(\Omega)}+||w(\cdot,t)-w^{*}||_{L^{\infty}(\Omega)}\to 0\quad\text{exponentially as}\quad t\rightarrow\infty.\]

As discussed earlier, the heterogeneous state \((u^{*},v^{*},w^{*})\) with \(u^{*}>0\), \(v^{*}>0\), \(w^{*}>0\) exists and is linearly stable if and only if

\[a_{1}<1,\ \frac{d_{2}}{r}>a_{2}+d_{1}-1.\]

Hence the case (i) in Theorem 1.1 indicates that if

\[a_{1}<1,\ a_{1}a_{2}<1,\ d_{1}\leq d_{1}^{h},\]

then the local stability of the heterogeneous state implies its global stability. Moreover, in Theorem A, to guarantee that there exists \(d_{2}>0\) such that the conditions (1.3) and (1.4) are fulfilled, i.e.,

\[a_{2}+d_{1}-1<\frac{d_{2}}{r}<1-a_{1}a_{2}-a_{1}d_{1},\]

the conditions

\[a_{1}a_{2}<1,\ d_{1}<\frac{2}{1+a_{1}}-a_{2}\]

are necessary. 
Then elementary computations yield that when \(a_{1}<1\),

\[d_{1}^{h}=\left(\frac{(1+\sqrt{1-a_{1}a_{2}})(1+\sqrt{1-a_{1}})}{a_{1}}\right)^{2}-a_{2}>\left(\frac{1}{a_{1}}\right)^{2}-a_{2}>\frac{2}{1+a_{1}}-a_{2}.\]

The condition (1.4) in Theorem A also imposes an upper bound for \(d_{2}\), while this is not required in (1.5). Hence the case (i) in Theorem 1.1 improves Theorem A. Furthermore, the case (ii) that \(d_{1}>d_{1}^{h}\) in Theorem 1.1 demonstrates that the global stability of the heterogeneous state is still valid when both \(d_{1}\) and \(d_{2}\) are relatively large, i.e., \(d_{1}>d_{1}^{h}\) and \(\frac{d_{2}}{r}>d_{2}^{h}\). This range is not mentioned in Theorem A. We also point out that in the case (ii), the global stability of the heterogeneous state in the range

\[a_{1}<1,\ a_{1}a_{2}<1,\ d_{1}>d_{1}^{h},\ d_{1}+a_{2}-1<\frac{d_{2}}{r}\leq d_{2}^{h}\]

is still not clear.

Biologically, Theorem 1.1 demonstrates that the introduction of acid-mediated tumor cell death could result in the coexistence of tumor and healthy cells as long as the healthy cells are not very aggressive in consuming resources, i.e., \(a_{1}<1\), and the tumor cells are quite sensitive to acid, i.e., \(d_{2}\) is relatively large compared with \(d_{1}\).

Our second main result is about the global stability of the homogeneous tumor state, denoted by

\[(0,\tilde{v},\tilde{w})=\left(0,\left(1+\frac{d_{2}}{r}\right)^{-1},\left(1+\frac{d_{2}}{r}\right)^{-1}\right).\]

**Theorem 1.2**.: _In the system (1.1), assume that the non-dimensional parameters \(D\), \(d_{1}\), \(d_{2}\), \(r\), \(c\), \(a_{1}\), \(a_{2}\) are positive, the initial data \((u_{0},v_{0},w_{0})\) satisfies (1.2) and \(a_{1}<1\). Moreover, if one of the following situations holds:_

_(i)_

\[d_{1}\leq d_{1}^{c}\quad\text{and}\quad\frac{d_{2}}{r}<\frac{d_{1}+a_{2}}{\max\{a_{1}a_{2},1\}}-1, \tag{1.9}\]

_(ii)_

\[a_{1}a_{2}<1,\quad d_{1}^{c}<d_{1}\leq d_{1}^{h}\quad\text{and}\quad\frac{d_{2}}{r}<d_{1}+a_{2}-1, \tag{1.10}\]

_(iii)_

\[a_{1}a_{2}<1,\quad d_{1}>d_{1}^{h}\quad\text{and}\quad\frac{d_{2}}{r}<d_{2}^{c}, \tag{1.11}\]

_(iv)_

\[a_{1}a_{2}\geq 1,\quad d_{1}>d_{1}^{c}\quad\text{and}\quad\frac{d_{2}}{r}<d_{2}^{c}, \tag{1.12}\]

_where \(d_{1}^{h}\) is defined in (1.7),_

\[d_{1}^{c}=\frac{a_{1}a_{2}}{(1-\sqrt{1-a_{1}})^{2}}-a_{2}, \tag{1.13}\]

_and_

\[d_{2}^{c}=4\left(1-\sqrt{1-a_{1}}+\frac{1}{1-\sqrt{1-a_{1}}}\frac{a_{1}a_{2}}{a_{2}+d_{1}}\right)^{-2}-1, \tag{1.14}\]

_then the solution \((u,v,w)\) to the system (1.1) exists globally and satisfies the property that_

\[||u(\cdot,t)||_{L^{\infty}(\Omega)}+||v(\cdot,t)-\tilde{v}||_{L^{\infty}(\Omega)}+||w(\cdot,t)-\tilde{w}||_{L^{\infty}(\Omega)}\to 0\quad\text{exponentially as}\quad t\to\infty.\]

Recall that the homogeneous tumor state is linearly stable if \(\frac{d_{2}}{r}<d_{1}+a_{2}-1\). Also notice that when \(a_{1}<1\) and \(a_{1}a_{2}=1\), \(d_{1}^{c}\) coincides with \(d_{1}^{h}\). Thus, according to the cases (i) and (ii) in Theorem 1.2, when

\[a_{1}<1,\ a_{1}a_{2}\leq 1,\ d_{1}\leq d_{1}^{h},\]

the local stability of the homogeneous tumor state implies the global stability. Meanwhile, the global stability in the range

\[a_{1}<1,\ a_{1}a_{2}\leq 1,\ d_{1}>d_{1}^{h},\ d_{2}^{c}\leq\frac{d_{2}}{r}<d_{1}+a_{2}-1\]

is still unknown. Moreover, when

\[a_{1}<1,\ a_{1}a_{2}>1,\]

partial results concerning the global stability of the homogeneous tumor state are also obtained in Theorem 1.2. Simply speaking, in this situation, when \(d_{2}\) is relatively small, the homogeneous tumor state is globally stable. 
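The thresholds (1.7), (1.8), (1.13) and (1.14) are explicit, so the hypotheses of Theorems 1.1 and 1.2 can be checked mechanically at any parameter point. The helper below is our own illustration (case (iv) of Theorem 1.2, i.e. \(a_{1}a_{2}\geq 1\), is omitted for simplicity); the sample values of \(a_{1}\), \(a_{2}\), \(d_{1}\) and \(d_{2}/r\) are arbitrary.

```python
import numpy as np

# Evaluate the thresholds (1.7), (1.8), (1.13), (1.14) and test the
# hypotheses of Theorems 1.1 and 1.2 at a sample parameter point.
# Case (iv) of Theorem 1.2 (a1*a2 >= 1) is not encoded here.
def d1_h(a1, a2):
    return ((1 + np.sqrt(1 - a1 * a2)) / (1 - np.sqrt(1 - a1))) ** 2 - a2

def d2_h(a1, a2, d1):
    s = 1 + np.sqrt(1 - a1 * a2)
    return 0.25 * (a1 * (a2 + d1) / s + s) ** 2 - 1

def d1_c(a1, a2):
    return a1 * a2 / (1 - np.sqrt(1 - a1)) ** 2 - a2

def d2_c(a1, a2, d1):
    t = 1 - np.sqrt(1 - a1)
    return 4 * (t + a1 * a2 / (t * (a2 + d1))) ** (-2) - 1

a1, a2, d1, d2_over_r = 0.5, 0.5, 1.0, 2.0   # illustrative values
assert a1 < 1 and a1 * a2 < 1

het = (d1 <= d1_h(a1, a2) and d2_over_r > d1 + a2 - 1) or \
      (d1 > d1_h(a1, a2) and d2_over_r > d2_h(a1, a2, d1))
hom = (d1 <= d1_c(a1, a2) and d2_over_r < (d1 + a2) / max(a1 * a2, 1) - 1) or \
      (d1_c(a1, a2) < d1 <= d1_h(a1, a2) and d2_over_r < d1 + a2 - 1) or \
      (d1 > d1_h(a1, a2) and d2_over_r < d2_c(a1, a2, d1))
print("Theorem 1.1 applies (heterogeneous state):", het)
print("Theorem 1.2 applies (homogeneous tumor state):", hom)
```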
From the viewpoint of biology, Theorem 1.2 demonstrates that as long as the tumor has strong resistance to acid, i.e., \(d_{2}\) is small enough, and the healthy cells are more sensitive to the acid, the tumor cells can invade the healthy state and eliminate the healthy cells completely, regardless of the competition coefficient \(a_{2}\) for tumor cells. This confirms that the acid resistance abilities of healthy and tumor cells play a significant role in tumor progression.

In summary, when \(a_{1}<1\), the healthy state can be invaded by tumor cells, and then Theorems 1.1 and 1.2 together characterize how acid resistance and mutual competition of tumor and healthy cells determine whether tumor cells dominate or the two types of cells coexist at reduced densities in the long term.

It is also worth pointing out that if

\[a_{1}>1,\,\frac{d_{2}}{r}<d_{1}+a_{2}-1,\]

then both the healthy state and the homogeneous tumor state are locally stable. Biologically, this is related to the situation that the healthy cells are aggressive in competition and the tumor cells have strong acid resistance ability, and thus the tumor progression is expected to be delicate.

Our last main result is about the global stability of the healthy state \((1,0,0)\).

**Theorem 1.3**.: _In the system (1.1), assume that the non-dimensional parameters \(D\), \(d_{1}\), \(d_{2}\), \(r\), \(c\), \(a_{1}\), \(a_{2}\) are positive, the initial data \((u_{0},v_{0},w_{0})\) satisfies (1.2), \(v_{0}\leq 1\), and \(a_{1}>1\), \(a_{2}<1\). Moreover, if one of the following situations holds:_

_(i)_

\[d_{1}\leq d_{1}^{r}\quad\text{and}\quad\frac{d_{2}}{r}>a_{1}(d_{1}+a_{2}-1), \tag{1.15}\]

_(ii)_

\[d_{1}>d_{1}^{r}\quad\text{and}\quad\frac{d_{2}}{r}>d_{2}^{r}, \tag{1.16}\]

_where_

\[d_{1}^{r}=(1+\sqrt{1-a_{2}})^{2}-a_{2}, \tag{1.17}\]

\[d_{2}^{r}=\frac{a_{1}}{4}\left(\frac{a_{2}+d_{1}}{1+\sqrt{1-a_{2}}}+1+\sqrt{1-a_{2}}\right)^{2}-a_{1}, \tag{1.18}\]

_then the solution \((u,v,w)\) of the system (1.1) exists globally and enjoys the property that_

\[||u(\cdot,t)-1||_{L^{\infty}(\Omega)}+||v(\cdot,t)||_{L^{\infty}(\Omega)}+||w(\cdot,t)||_{L^{\infty}(\Omega)}\to 0\quad\text{exponentially as}\quad t\to\infty.\]

Based on the analysis of local stability, when \(a_{1}>1\), the healthy state is stable and can prevent the invasion of small amounts of tumor cells. According to Theorem 1.3, to guarantee the global stability of the healthy state, additionally, we need to require that \(a_{2}<1\) and \(d_{2}\) is relatively large compared with \(d_{1}\). This indicates that if both the competition ability and acid resistance of tumor cells are weak, they will not have the chance to survive in the normal tissue. However, this is not biologically realistic, since usually it is the tumor that exhibits strong acid resistance. Thus, conversely, Theorem 1.3 reflects that acid-mediated invasion is quite effective.

Finally, some remarks on the mathematics and on the strategies of our proofs are in order.

1. The system (1.1) can be viewed as a generalization of a classical mutual competition system [17]. On the basis of Theorems 1.1, 1.2 and 1.3, we observe that the product of the competition coefficients \(a_{1}a_{2}\) is a critical quantity in determining the global dynamics. 
To be more specific, thanks to the case (i) in Theorem 1.1 and the cases (i) and (ii) in Theorem 1.2, _the property that local stability implies global stability_ is verified under the conditions that

\[a_{1}<1,\ a_{1}a_{2}<1,\ d_{1}\leq d_{1}^{h}=\left(\frac{1+\sqrt{1-a_{1}a_{2}}}{1-\sqrt{1-a_{1}}}\right)^{2}-a_{2}.\]

However, when \(a_{1}<1,\ a_{1}a_{2}>1\), for any \(d_{1}>0\), there always exist ranges where the homogeneous tumor state is locally stable but the global stability is unknown. Moreover, in Theorem 1.3, the conditions on \(a_{1}\), \(a_{2}\) automatically exclude the strong competition, i.e., \(a_{1}>1\), \(a_{2}>1\). Indeed, in the studies of the classical Lotka-Volterra system for two competing species, the product of the competition coefficients, still denoted by \(a_{1}a_{2}\), also plays an important role in determining the global dynamics. It is known that when \(a_{1}a_{2}<1\), _the property that local stability implies global stability_ is completely verified, while when \(a_{1}a_{2}>1\), the global dynamics becomes very complicated and is far from being understood. See [2, 13] and the references therein for more details. Therefore, back to the system (1.1), it is naturally expected that more complicated phenomena might happen in the unknown ranges.

2. The model (1.1) is a combination of ODE and PDEs. A novel feature of this model is the density-limited tumor diffusion term in the second equation, which might give rise to the degeneracy of the parabolic equation. In [20], where the global stability of the heterogeneous state \((u^{*},v^{*},w^{*})\) is studied, to exclude this possibility of degeneracy, the rectangle method idea is applied to the system, where spatially homogeneous upper and lower solutions are constructed. The conditions on the parameters in Theorem A are imposed to guarantee that the upper solution approaches the lower solution as time goes to infinity.

3. The proofs of our theorems make crucial use of Lyapunov functionals. Inspired by [1, 12], we design proper Lyapunov functionals for each nontrivial steady state to derive \(L^{2}\) convergence of \(w\), where the conditions on the parameters are required to warrant the existence of the desired Lyapunov functionals. Moreover, based on the equation satisfied by \(w\), the \(L^{2}\) convergence of \(w\) is improved to \(L^{\infty}\) convergence. Indeed, the \(L^{2}\) convergence of \(u\), \(v\) is derived at the same time. However, since there is no diffusion term for \(u\) and the density-limited tumor diffusion term for \(v\) might cause the degeneracy of the parabolic equation, the norm of the convergence cannot be improved for \(u\), \(v\).

4. For the heterogeneous state \((u^{*},v^{*},w^{*})\), to prove the \(L^{\infty}\) convergence of \(u\), \(v\), similar to [20], we also employ the rectangle method idea. However, different from [20], where the auxiliary system of ODEs is constructed for the whole system (1.1) to obtain the upper and lower solutions, we only construct the auxiliary system of ODEs for the first two equations in the system (1.1), where \(w\) is replaced by a suitable perturbation of \(w^{*}\). Obviously, the \(L^{\infty}\) convergence of \(w\) to \(w^{*}\) guarantees that this simplification is practicable. Thanks to this simplification, no further requirements on the parameters are needed.

5. 
For the homogeneous tumor state and the healthy state, the auxiliary systems of ODEs are constructed similarly, but the arguments in deriving the convergence relations between lower and upper solutions are different, since in each case the corresponding elements of the lower solutions are reduced to zero and the arguments for the heterogeneous state are not applicable anymore.

This paper is organized as follows. Some preliminary results are prepared in Section 2. Sections 3, 4 and 5 are devoted to Theorems 1.1, 1.2 and 1.3, respectively.

## 2 Preliminary results

The local existence of solutions to the system (1.1), as well as an extensibility criterion, was proved in [20] as follows.

**Lemma 2.1**.: _Let \(a_{1}>0,a_{2}>0,d_{1}\geq 0,d_{2}\geq 0,r>0,D>0\) and \(c>0\), and let (1.2) hold. Then there exist \(T_{\max}\in(0,\infty]\) and a unique triple_

\[(u,v,w)\in C^{1,1}\left(\bar{\Omega}\times[0,T_{\max})\right)\times\left(C^{0}\left(\bar{\Omega}\times[0,T_{\max})\right)\cap C^{2,1}\left(\bar{\Omega}\times(0,T_{\max})\right)\right)^{2}\]

_solving (1.1) classically in \(\Omega\times(0,T_{\max})\). These functions have the properties_

\[0<u<1\quad\text{in }\Omega\times(0,T_{\max}), \tag{2.1}\]

\[0<v\leq\max\left\{1,\left\|v_{0}\right\|_{L^{\infty}(\Omega)}\right\}\quad\text{in }\Omega\times(0,T_{\max}), \tag{2.2}\]

\[0<w\leq\max\left\{1,\left\|v_{0}\right\|_{L^{\infty}(\Omega)},\left\|w_{0}\right\|_{L^{\infty}(\Omega)}\right\}\quad\text{in }\Omega\times(0,T_{\max}). \tag{2.3}\]

_Moreover, we have the following dichotomy:_

\[\text{either}\quad T_{\max}=\infty,\quad\text{or}\quad\limsup_{t\nearrow T_{\max}}\|u(\cdot,t)\|_{L^{\infty}(\Omega)}=1.\]

Thanks to Lemma 2.1, the global existence of the solution to the system (1.1) follows easily.

**Lemma 2.2**.: _Suppose that the assumptions of Lemma 2.1 hold, then \(T_{\max}=+\infty\), namely, (1.1) has a unique global classical solution._

Proof.: To verify that the solution is global, we denote by \(\bar{u}\) the solution of the following ODE

\[\left\{\begin{aligned} &\frac{\mathrm{d}}{\mathrm{d}t}\bar{u}=\bar{u}(1-\bar{u}),\quad t>0,\\ &\bar{u}(0)=\max_{x\in\bar{\Omega}}u_{0}.\end{aligned}\right.\]

By the comparison principle, \(\bar{u}(t)\geq u(t)\) for all \(t\geq 0\). Since \(\bar{u}(0)<1\), it is straightforward to verify that \(\bar{u}\) will not reach \(1\) in finite time; indeed, solving the logistic equation explicitly gives \(\bar{u}(t)=\bar{u}(0)e^{t}/(1-\bar{u}(0)+\bar{u}(0)e^{t})<1\) for all \(t\geq 0\). The dichotomy in Lemma 2.1 then immediately indicates that \(T_{\max}=+\infty\).

The following property is based on elementary analysis and is useful in the proofs of Lemma 3.2, Lemma 4.2 and Lemma 5.2.

**Lemma 2.3**.: _Suppose that \(f(t)\) is a uniformly continuous nonnegative function defined on \((0,+\infty)\) such that \(\int_{0}^{\infty}f(t)\mathrm{d}t<+\infty\), then \(f(t)\to 0\) as \(t\to\infty\)._

Finally, we show how to improve the \(L^{2}\) convergence of \(w\) to \(L^{\infty}\) convergence. Since the arguments for all three nontrivial steady states are the same, we place this in the present section for simplicity. 
Finally, we show how to improve the \(L^{2}\) convergence of \(w\) to \(L^{\infty}\) convergence. Since the arguments for all three nontrivial steady states are the same, we present the result in this section for simplicity.

**Lemma 2.4**.: _Suppose that \((u,v,w)\) is a global solution of (1.1)-(1.2) and satisfies_

\[||w-\mathfrak{w}||_{L^{2}(\Omega)}\to 0\quad as\quad t\to\infty, \tag{2.4}\]

_where \(\mathfrak{w}\in\{w^{*},\tilde{w},0\}\), then_

\[||w-\mathfrak{w}||_{L^{\infty}(\Omega)}\to 0\quad as\quad t\to\infty.\]

Proof.: Recall from Lemma 2.1 that \(v\) is uniformly bounded. By the smoothing property of \((e^{t(\Delta-c)})_{t>0}\)[24], for all \(t>0\), there exists a constant \(c_{n}>0\) such that

\[||w(\cdot,t)||_{W^{1,\infty}(\Omega)}\leq c_{n}||w(\cdot,0)||_{W^{1,\infty}(\Omega)}+c_{n}\Big{\{}\sup_{\tau>0}||v(\cdot,\tau)||_{L^{\infty}(\Omega)}^{(2n-1)/(2n)}\Big{\}}\Big{\{}\sup_{\tau>0}||v(\cdot,\tau)||_{L^{1}(\Omega)}^{1/(2n)}\Big{\}}<\infty. \tag{2.5}\]

Using the Gagliardo-Nirenberg inequality [4], we have

\[||w-\mathfrak{w}||_{W^{\frac{n}{n+1},2(n+1)}(\Omega)}\leq||w-\mathfrak{w}||_{W^{1,\infty}(\Omega)}^{n/(n+1)}||w-\mathfrak{w}||_{L^{2}(\Omega)}^{1/(n+1)}. \tag{2.6}\]

By the fractional Sobolev embedding [5], there exist constants \(0<\beta_{n}<1\) and \(\mathcal{C}_{n}>0\) such that

\[||w-\mathfrak{w}||_{C^{\beta_{n}}(\bar{\Omega})}\leq\mathcal{C}_{n}||w-\mathfrak{w}||_{W^{\frac{n}{n+1},2(n+1)}(\Omega)}. \tag{2.7}\]

The desired conclusion follows immediately from (2.4) and (2.5)-(2.7).

## 3 The heterogeneous state

This section is devoted to the proof of Theorem 1.1, which concerns the global convergence to the heterogeneous state

\[(u^{*},v^{*},w^{*})=(1-(a_{2}+d_{1})v_{h},v_{h},v_{h}),\text{ where }v_{h}:=\frac{1-a_{1}}{1-a_{1}a_{2}+\frac{d_{2}}{r}-a_{1}d_{1}},\ a_{1}\neq 0,\]

where \(u^{*}>0\), \(v^{*}>0\), \(w^{*}>0\). Recall that the heterogeneous state \((u^{*},v^{*},w^{*})\) exists and is linearly stable if and only if

\[a_{1}<1,\ \frac{d_{2}}{r}>a_{2}+d_{1}-1.\]

According to the strategies explained at the end of the introduction, we present the proof in three steps:

* in Section 3.1, we demonstrate the \(L^{\infty}\) convergence of \(w\) to \(w^{*}\) with the help of a Lyapunov functional, the form of which is inspired by [1, 12];
* in Section 3.2, the auxiliary system of ODEs for the first two equations in the system (1.1) is constructed and some of its properties are prepared;
* in Section 3.3, the \(L^{\infty}\) convergence of \(u\), \(v\) to \(u^{*}\), \(v^{*}\) respectively is established.

### \(L^{\infty}\) convergence of \(w\) in the heterogeneous state

To prove the \(L^{\infty}\) convergence of \(w\), the key step is to select the proper Lyapunov functional in the following lemma.

**Lemma 3.1**.: _Suppose that \((u,v,w)\) is the global solution of (1.1)-(1.2) and the assumptions of Theorem 1.1 hold. Define_

\[A_{h}(t)=\int_{\Omega}u(x,t)-u^{*}-u^{*}\ln\frac{u(x,t)}{u^{*}}\,\mathrm{d}x,\]
\[B_{h}(t)=\int_{\Omega}v(x,t)-v^{*}-v^{*}\ln\frac{v(x,t)}{v^{*}}\,\mathrm{d}x,\]
\[C_{h}(t)=\frac{1}{2}\int_{\Omega}(w(x,t)-w^{*})^{2}\,\mathrm{d}x.\]

_Then there exist \(\beta_{h}>0\), \(\eta_{h}>0\) and \(\varepsilon_{h}>0\) such that the functions \(E_{h}(t)\) and \(F_{h}(t)\) defined by_

\[E_{h}(t)=A_{h}(t)+\frac{\beta_{h}}{r}B_{h}(t)+\frac{\eta_{h}}{c}C_{h}(t),\quad t>0 \tag{3.1}\]

_and_

\[F_{h}(t)=\int_{\Omega}(u(x,t)-u^{*})^{2}\mathrm{d}x+\int_{\Omega}(v(x,t)-v^{*})^{2}\mathrm{d}x+\int_{\Omega}(w(x,t)-w^{*})^{2}\mathrm{d}x+\int_{\Omega}\left|\nabla w(x,t)\right|^{2}\mathrm{d}x,\quad t>0, \tag{3.2}\]

_satisfy_

\[E_{h}(t)\geq 0,\quad t>0\]

_as well as_

\[\frac{\mathrm{d}}{\mathrm{d}t}E_{h}(t)\leq-\varepsilon_{h}F_{h}(t). \tag{3.3}\]
Granting Lemma 3.1 for the moment, we establish the \(L^{\infty}\) convergence of \(w\).

**Lemma 3.2**.: _Suppose that the assumptions of Theorem 1.1 hold, and \((u,v,w)\) is the global solution of (1.1)-(1.2), then_

\[||w-w^{*}||_{L^{\infty}(\Omega)}\to 0\quad as\quad t\to\infty.\]

Proof.: Integrating (3.3) from \(0\) to \(+\infty\) and using the fact that \(E_{h}(\cdot)\) is nonnegative, we have

\[\int_{0}^{+\infty}||u-u^{*}||_{L^{2}}^{2}+||v-v^{*}||_{L^{2}}^{2}+||w-w^{*}||_{L^{2}}^{2}+||\nabla w||_{L^{2}}^{2}\mathrm{d}t\leq\frac{1}{\varepsilon_{h}}E_{h}(0)<+\infty.\]

Using standard \(L^{p}\) estimates and Sobolev embedding on \(w\), we obtain that there exist two constants \(\mathcal{C}_{1}>0\) and \(0<\alpha_{n}<1\) such that

\[||w||_{C^{\alpha_{n},\alpha_{n}/2}(\bar{\Omega}\times(k,k+1])}\leq\mathcal{C}_{1},\quad\forall\ k\in\mathbb{N}.\]

Therefore, \(||w(\cdot,t)-w^{*}||_{L^{2}}^{2}\) is uniformly continuous with respect to \(t\). Thus, Lemma 2.3 ensures that

\[||w-w^{*}||_{L^{2}(\Omega)}\to 0\quad as\quad t\to\infty,\]

and the desired conclusion follows from Lemma 2.4.

It remains to prove Lemma 3.1. Notice that to verify (3.3), it suffices to show that the matrix \(\mathbb{P}_{h}\) defined in (3.5), which arises from the Lyapunov functional \(E_{h}(t)\), is positive definite. Thus, to derive optimal results, we first leave the coefficients \(\beta_{h}\) and \(\eta_{h}\) in the energy functional \(E_{h}(t)\) undetermined, and then determine equivalent conditions on the parameters in the system (1.1) which guarantee the existence of positive coefficients \(\beta_{h}\) and \(\eta_{h}\) such that \(\mathbb{P}_{h}\) is positive definite.

Proof of Lemma 3.1.: First of all, we claim that \(A_{h}(t)\) and \(B_{h}(t)\) are nonnegative. In fact, setting \(\mathcal{I}(\mathfrak{u}):=\mathfrak{u}-u^{*}\ln\mathfrak{u}\) for \(\mathfrak{u}>0\) and using Taylor's formula, for all \(x\in\Omega\) and \(t>0\), there exists \(\xi=\xi(x,t)\in(0,1)\) such that

\[\begin{split}&\mathcal{I}(u(x,t))-\mathcal{I}(u^{*})\\ =&\mathcal{I}^{\prime}(u^{*})\cdot(u(x,t)-u^{*})+\frac{1}{2}\,\mathcal{I}^{\prime\prime}(\,\xi u(x,t)+(1-\xi)u^{*})\cdot(u(x,t)-u^{*})^{2}\\ =&\frac{u^{*}}{2(\xi u(x,t)+(1-\xi)u^{*})^{2}}(u(x,t)-u^{*})^{2}\geq 0.\end{split}\]

From the computation above, we obtain that

\[A_{h}(t)=\int_{\Omega}\left(\mathcal{I}(u(x,t))-\mathcal{I}(u^{*})\right)\mathrm{d}x\geq 0.\]

Similarly, \(B_{h}(t)\) is also nonnegative.
Since \(\beta_{h}\) and \(\eta_{h}\) are positive, \(E_{h}(t)\geq 0\) for all \(t\geq 0\). Next, we compute

\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}A_{h}(t)=&\int_{\Omega}\frac{u-u^{*}}{u}u\left(1-u-a_{2}v-d_{1}w\right)\mathrm{d}x\\ =&\int_{\Omega}(u-u^{*})\big{[}(u^{*}-u)+a_{2}(v^{*}-v)+d_{1}(w^{*}-w)\big{]}\mathrm{d}x\\ =&-\int_{\Omega}(u-u^{*})^{2}\mathrm{d}x-a_{2}\int_{\Omega}(u-u^{*})(v-v^{*})\mathrm{d}x-d_{1}\int_{\Omega}(u-u^{*})(w-w^{*})\mathrm{d}x,\end{split}\]

\[\begin{split}\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}t}B_{h}(t)=&\frac{1}{r}\int_{\Omega}\frac{v-v^{*}}{v}\big{[}D\nabla\cdot((1-u)\nabla v)+rv\left(1-v-a_{1}u\right)-d_{2}w\big{]}\mathrm{d}x\\ =&-\frac{Dv^{*}}{r}\int_{\Omega}(1-u)\left|\frac{\nabla v}{v}\right|^{2}\mathrm{d}x+\int_{\Omega}(v-v^{*})\big{[}a_{1}(u^{*}-u)+(v^{*}-v)+\frac{d_{2}}{r}(w^{*}-w)\big{]}\mathrm{d}x\\ =&-a_{1}\int_{\Omega}(u-u^{*})(v-v^{*})\mathrm{d}x-\int_{\Omega}(v-v^{*})^{2}\mathrm{d}x-\frac{d_{2}}{r}\int_{\Omega}(v-v^{*})(w-w^{*})\mathrm{d}x\\ &-\frac{Dv^{*}}{r}\int_{\Omega}(1-u)\left|\frac{\nabla v}{v}\right|^{2}\mathrm{d}x,\end{split}\]

\[\begin{split}\frac{1}{c}\frac{\mathrm{d}}{\mathrm{d}t}C_{h}(t)=&\frac{1}{c}\int_{\Omega}(w-w^{*})\big{[}\Delta w+c(v-w)\big{]}\mathrm{d}x\\ =&-\frac{1}{c}\int_{\Omega}|\nabla w|^{2}\,\mathrm{d}x+\int_{\Omega}(w-w^{*})\big{[}(v-v^{*})+(w^{*}-w)\big{]}\mathrm{d}x\\ =&-\frac{1}{c}\int_{\Omega}|\nabla w|^{2}\,\mathrm{d}x+\int_{\Omega}(v-v^{*})(w-w^{*})\mathrm{d}x-\int_{\Omega}(w-w^{*})^{2}\mathrm{d}x.\end{split}\]

By differentiating (3.1) and substituting the three equations above into it, we obtain

\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}E_{h}(t)=&-\int_{\Omega}(u-u^{*})^{2}\mathrm{d}x-\beta_{h}\int_{\Omega}(v-v^{*})^{2}\mathrm{d}x-\eta_{h}\int_{\Omega}(w-w^{*})^{2}\mathrm{d}x\\ &-(a_{2}+a_{1}\beta_{h})\int_{\Omega}(u-u^{*})(v-v^{*})\mathrm{d}x-d_{1}\int_{\Omega}(u-u^{*})(w-w^{*})\mathrm{d}x\\ &-(\frac{d_{2}}{r}\beta_{h}-\eta_{h})\int_{\Omega}(v-v^{*})(w-w^{*})\mathrm{d}x-\frac{D\beta_{h}v^{*}}{r}\int_{\Omega}(1-u)\left|\frac{\nabla v}{v}\right|^{2}\mathrm{d}x-\frac{\eta_{h}}{c}\int_{\Omega}\left|\nabla w\right|^{2}\mathrm{d}x\\ \leq&-\int_{\Omega}\mathbf{X}^{\mathrm{T}}\mathbb{P}_{h}\mathbf{X}\,\mathrm{d}x-\frac{\eta_{h}}{c}\int_{\Omega}\left|\nabla w\right|^{2}\mathrm{d}x, \end{split} \tag{3.4}\]

where \(\mathbb{P}_{h}\) and \(\mathbf{X}\) are defined by

\[\mathbb{P}_{h}=\left(\begin{array}{ccc}1&\dfrac{a_{2}+a_{1}\beta_{h}}{2}&\dfrac{d_{1}}{2}\\[2mm]\dfrac{a_{2}+a_{1}\beta_{h}}{2}&\beta_{h}&\dfrac{1}{2}\Big{(}\dfrac{d_{2}}{r}\beta_{h}-\eta_{h}\Big{)}\\[2mm]\dfrac{d_{1}}{2}&\dfrac{1}{2}\Big{(}\dfrac{d_{2}}{r}\beta_{h}-\eta_{h}\Big{)}&\eta_{h}\end{array}\right), \tag{3.5}\]

\[\mathbf{X}=(u-u^{*},v-v^{*},w-w^{*})^{\mathbf{T}}\,.\]

In order to verify (3.3), we need to show that there exist positive constants \(\beta_{h}\), \(\eta_{h}\) such that \(\mathbb{P}_{h}\) is positive definite. We claim that this property holds if and only if there exists a positive constant \(\beta_{h}\) satisfying the two following inequalities simultaneously:

\[\Phi_{h}(\beta_{h}):=-a_{1}^{2}\beta_{h}^{2}+2\Big{[}\,2\Big{(}1+\frac{d_{2}}{r}\Big{)}-(a_{1}a_{2}+a_{1}d_{1})\,\Big{]}\beta_{h}-(a_{2}+d_{1})^{2}>0, \tag{3.6}\]

\[\Psi_{h}(\beta_{h}):=-a_{1}^{2}\beta_{h}^{2}+2(2-a_{1}a_{2})\beta_{h}-a_{2}^{2}>0. \tag{3.7}\]

Since a symmetric matrix is positive definite if and only if all its leading principal minors are positive, it remains to verify the positivity of every leading principal minor of \(\mathbb{P}_{h}\). For simplicity, we denote \(\alpha=\frac{1}{2}(a_{2}+a_{1}\beta_{h})\).
First of all, we verify the first two leading principal minors:

\[\mathbf{M_{1}^{h}}:=1>0,\]

\[\mathbf{M_{2}^{h}}:=\begin{vmatrix}1&\alpha\\ \alpha&\beta_{h}\end{vmatrix}=\beta_{h}-\alpha^{2}=\frac{1}{4}\left(-a_{1}^{2}\beta_{h}^{2}+2(2-a_{1}a_{2})\beta_{h}-a_{2}^{2}\right)=\frac{1}{4}\Psi_{h}(\beta_{h}).\]

Thus, (3.7) is equivalent to the positivity of \(\mathbf{M_{2}^{h}}\). Next, we consider \(\det\mathbb{P}_{h}\). Writing \(\mathcal{D}_{h}:=\frac{1}{2}\big{(}\frac{d_{2}}{r}\beta_{h}-\eta_{h}\big{)}\) for brevity, the cofactor expansion along the first row gives

\[\det\mathbb{P}_{h}=\begin{vmatrix}1&\alpha&\frac{d_{1}}{2}\\ \alpha&\beta_{h}&\mathcal{D}_{h}\\ \frac{d_{1}}{2}&\mathcal{D}_{h}&\eta_{h}\end{vmatrix}=\begin{vmatrix}\beta_{h}&\mathcal{D}_{h}\\ \mathcal{D}_{h}&\eta_{h}\end{vmatrix}-\alpha\begin{vmatrix}\alpha&\mathcal{D}_{h}\\ \frac{d_{1}}{2}&\eta_{h}\end{vmatrix}+\frac{d_{1}}{2}\begin{vmatrix}\alpha&\beta_{h}\\ \frac{d_{1}}{2}&\mathcal{D}_{h}\end{vmatrix}\]

\[=\frac{1}{4}\Bigg{[}-\eta_{h}^{2}+2\Big{(}2(\beta_{h}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\beta_{h}-\alpha d_{1}\Big{)}\Big{)}\eta_{h}+\Big{(}2\alpha d_{1}\frac{d_{2}}{r}\beta_{h}-\Big{(}\frac{d_{2}}{r}\beta_{h}\Big{)}^{2}-d_{1}^{2}\beta_{h}\Big{)}\Bigg{]}\,. \tag{3.8}\]

Notice that (3.7) yields

\[2\alpha d_{1}\frac{d_{2}}{r}\beta_{h}-\Big{(}\frac{d_{2}}{r}\beta_{h}\Big{)}^{2}-d_{1}^{2}\beta_{h}<-\beta_{h}\Big{(}\frac{d_{2}}{r}\alpha-d_{1}\Big{)}^{2}\leq 0,\]

so by elementary properties of quadratic polynomials, there exists a positive constant \(\eta_{h}\) such that \(\det\mathbb{P}_{h}>0\) if and only if the following conditions hold:

\[\left\{\begin{aligned} &\Delta_{h}>0,\\ & 2(\beta_{h}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\beta_{h}-\alpha d_{1}\Big{)}\geq 0,\end{aligned}\right. \tag{3.9}\]

where \(\Delta_{h}\) is the discriminant of the quadratic (3.8) in \(\eta_{h}\). By calculating this discriminant and substituting \(\alpha=\frac{1}{2}(a_{2}+a_{1}\beta_{h})\) into it, we find

\[\begin{split}\Delta_{h}:=&4\left\{\Big{[}2(\beta_{h}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\beta_{h}-\alpha d_{1}\Big{)}\Big{]}^{2}+\Big{(}2\alpha d_{1}\frac{d_{2}}{r}\beta_{h}-\Big{(}\frac{d_{2}}{r}\beta_{h}\Big{)}^{2}-d_{1}^{2}\beta_{h}\Big{)}\right\}\\ =&16\Big{[}\Big{(}1+\frac{d_{2}}{r}\Big{)}\beta_{h}-\alpha^{2}-\alpha d_{1}-\frac{1}{4}d_{1}^{2}\Big{]}(\beta_{h}-\alpha^{2})\\ =&\Bigg{\{}-a_{1}^{2}\beta_{h}^{2}+2\Big{[}2\Big{(}1+\frac{d_{2}}{r}\Big{)}-(a_{1}a_{2}+a_{1}d_{1})\Big{]}\beta_{h}-(a_{2}+d_{1})^{2}\Bigg{\}}\times\Bigg{\{}-a_{1}^{2}\beta_{h}^{2}+2(2-a_{1}a_{2})\beta_{h}-a_{2}^{2}\Bigg{\}}\\ =&\Phi_{h}(\beta_{h})\Psi_{h}(\beta_{h}).\end{split}\]

Since we already have (3.7), the equation above implies that \(\Delta_{h}>0\) is equivalent to (3.6). Also, when \(\Delta_{h}>0\) and \(\Psi_{h}(\beta_{h})>0\), we have

\[2(\beta_{h}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\beta_{h}-\alpha d_{1}\Big{)}>\Big{(}1+\frac{d_{2}}{r}\Big{)}\beta_{h}-\alpha^{2}-\alpha d_{1}>\frac{1}{4}\Phi_{h}(\beta_{h})>0,\]

i.e. the second inequality in (3.9) is automatically satisfied. Hence, on the basis of (3.7), there exist positive constants \(\beta_{h}\) and \(\eta_{h}\) such that \(\det\mathbb{P}_{h}>0\) if and only if (3.6) holds. Summing up the discussion above, our assertion has been proved.
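The equivalence just proved is convenient to probe numerically. The following Python sketch (our own sanity check, with illustrative parameter values that are not taken from the theorems) scans for a \(\beta_{h}\) satisfying (3.6)-(3.7), picks \(\eta_{h}\) at the vertex of the quadratic (3.8), and confirms positive definiteness of \(\mathbb{P}_{h}\) through its leading principal minors.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper): a_1 < 1, a_1*a_2 < 1,
# and d_2/r > a_2 + d_1 - 1, so a suitable beta_h is guaranteed to exist.
a1, a2, d1, d2r = 0.5, 0.8, 0.3, 1.0

def Phi_h(b):  # left-hand side of (3.6)
    return -a1**2 * b**2 + 2 * (2 * (1 + d2r) - (a1 * a2 + a1 * d1)) * b - (a2 + d1)**2

def Psi_h(b):  # left-hand side of (3.7)
    return -a1**2 * b**2 + 2 * (2 - a1 * a2) * b - a2**2

def P_h(beta, eta):  # the matrix (3.5)
    alpha = 0.5 * (a2 + a1 * beta)
    off = 0.5 * (d2r * beta - eta)
    return np.array([[1.0,    alpha, d1 / 2],
                     [alpha,  beta,  off   ],
                     [d1 / 2, off,   eta   ]])

for beta in np.linspace(0.01, 50.0, 5000):
    if Phi_h(beta) > 0 and Psi_h(beta) > 0:
        alpha = 0.5 * (a2 + a1 * beta)
        eta = 2 * (beta - alpha**2) + (d2r * beta - alpha * d1)  # vertex of (3.8)
        M = P_h(beta, eta)
        minors = [np.linalg.det(M[:k, :k]) for k in (1, 2, 3)]
        assert all(m > 0 for m in minors)  # P_h is positive definite
        print(f"beta_h = {beta:.3f}, eta_h = {eta:.3f}, leading minors = {minors}")
        break
```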
Now, it remains to show that, under the assumptions of Theorem 1.1, there exists a positive \(\beta_{h}\) which satisfies (3.6) and (3.7) simultaneously. For this purpose, we denote the set of positive solutions of (3.6) by \(S_{1}^{h}:=\big{(}(L_{1}^{h})^{2},\,(R_{1}^{h})^{2}\big{)}\) and the set of positive solutions of (3.7) by \(S_{2}^{h}:=\big{(}(L_{2}^{h})^{2},\,(R_{2}^{h})^{2}\big{)}\). We assume for now that we have

\[\frac{d_{2}}{r}>a_{2}+d_{1}-1, \tag{3.10}\]

which is already contained in the case (1.5) and is indeed necessary, since it comes from the existence and linear stability of the heterogeneous steady state. Thanks to (3.10) and \(a_{1}<1\), \(S_{1}^{h}\) is not empty. On the other hand, \(S_{2}^{h}\) is not empty due to \(a_{1}a_{2}<1\). Hence, the argument above allows us to calculate \(L_{1}^{h}\), \(R_{1}^{h}\), \(L_{2}^{h}\) and \(R_{2}^{h}\). Since \(\Phi_{h}(\beta_{h})=0\) if and only if

\[\begin{split}\beta_{h}&=\frac{1}{2a_{1}^{2}}\left\{2\Big{[}\,2\Big{(}1+\frac{d_{2}}{r}\Big{)}-a_{1}(a_{2}+d_{1})\,\Big{]}\pm 2\sqrt{\Big{[}\,2\Big{(}1+\frac{d_{2}}{r}\Big{)}-a_{1}(a_{2}+d_{1})\,\Big{]}^{2}-a_{1}^{2}(a_{2}+d_{1})^{2}}\,\right\}\\ &=\frac{1}{a_{1}^{2}}\left\{\Big{(}1+\frac{d_{2}}{r}\Big{)}+\Big{(}1+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})\Big{)}\pm 2\sqrt{\Big{(}1+\frac{d_{2}}{r}\Big{)}\Big{(}1+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})\Big{)}}\,\right\}\\ &=\frac{1}{a_{1}^{2}}\left\{\sqrt{1+\frac{d_{2}}{r}}\pm\sqrt{1+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}\right\}^{2}\end{split}\]

and \(\Psi_{h}(\beta_{h})=0\) if and only if

\[\beta_{h}=\frac{1}{a_{1}^{2}}\,\big{\{}1+(1-a_{1}a_{2})\pm 2\sqrt{1-a_{1}a_{2}}\,\big{\}}=\frac{1}{a_{1}^{2}}\Big{(}1\pm\sqrt{1-a_{1}a_{2}}\,\Big{)}^{2},\]

it follows that

\[L_{1}^{h}=\frac{\sqrt{1+\frac{d_{2}}{r}}-\sqrt{1+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}}{a_{1}},\ R_{1}^{h}=\frac{\sqrt{1+\frac{d_{2}}{r}}+\sqrt{1+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}}{a_{1}},\]
\[L_{2}^{h}=\frac{1-\sqrt{1-a_{1}a_{2}}}{a_{1}},\hskip 113.811024ptR_{2}^{h}=\frac{1+\sqrt{1-a_{1}a_{2}}}{a_{1}}.\]

Since \(L_{2}^{h}<\frac{1}{a_{1}}<R_{1}^{h}\), to prove that \(S_{1}^{h}\) and \(S_{2}^{h}\) overlap, we need \(L_{1}^{h}<R_{2}^{h}\), namely

\[\sqrt{1+\frac{d_{2}}{r}}-\sqrt{1+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}<1+\sqrt{1-a_{1}a_{2}}. \tag{3.11}\]

Recall that we also need (3.10); hence, in the following part, we verify that (3.10) and (3.11) hold under the assumptions of Theorem 1.1.

First, we consider the case (1.5). Since in this case we already have (3.10), it remains to show that when \(d_{1}\leq d_{1}^{h}\), where \(d_{1}^{h}\) is defined in (1.7), we can derive (3.11) from (3.10). By rationalizing (3.11), we obtain

\[a_{1}(a_{2}+d_{1})<\big{(}1+\sqrt{1-a_{1}a_{2}}\,\big{)}\left\{\sqrt{1+\frac{d_{2}}{r}}+\sqrt{1+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}\,\right\}. \tag{3.12}\]

Substituting \(\frac{d_{2}}{r}=a_{2}+d_{1}-1\) into the inequality above yields

\[a_{1}\sqrt{a_{2}+d_{1}}<\big{(}1+\sqrt{1-a_{1}a_{2}}\,\big{)}\big{(}1+\sqrt{1-a_{1}}\,\big{)},\]

which is equivalent to \(d_{1}<d_{1}^{h}\). Since \(\frac{d_{2}}{r}\) is strictly greater than \(a_{2}+d_{1}-1\), (3.12) still holds when \(d_{1}=d_{1}^{h}\). Observing that the right hand side of (3.12) increases in \(d_{2}\), we obtain that (3.11) holds whenever (3.10) is satisfied.

Next, we demonstrate that in the case (1.6), we have (3.10) and (3.11). Straightforward computations show that (3.11) holds if and only if

\[\sqrt{1+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}>\frac{a_{1}(a_{2}+d_{1})-(1+\sqrt{1-a_{1}a_{2}})^{2}}{2(1+\sqrt{1-a_{1}a_{2}})}.\]

Since \(d_{1}\geq d_{1}^{h}\), the right hand side of the inequality above is positive.
By squaring this inequality, we obtain that it is equivalent to

\[\frac{d_{2}}{r}\geq\left(\frac{a_{1}(a_{2}+d_{1})-(1+\sqrt{1-a_{1}a_{2}})^{2}}{2(1+\sqrt{1-a_{1}a_{2}})}\right)^{2}+a_{1}(a_{2}+d_{1})-1=d_{2}^{h}.\]

Hence, in this case we have (3.11). On the other hand, it follows from the discussion in the \(d_{1}\leq d_{1}^{h}\) part that when \(d_{1}>d_{1}^{h}\), we have \(d_{2}^{h}>a_{2}+d_{1}-1\). Hence, we also have (3.10).

Summarizing the discussion above, we conclude that the assumptions of Theorem 1.1 suffice to guarantee the existence of positive \(\beta_{h}\) and \(\eta_{h}\) such that \(\mathbb{P}_{h}\) is positive definite. By the definition of a positive definite matrix, there exists \(\varepsilon_{1}>0\) such that

\[\mathbf{X}^{\mathrm{T}}\mathbb{P}_{h}\mathbf{X}\geq\varepsilon_{1}|\mathbf{X}|^{2}.\]

Substituting it into (3.4), we have

\[\frac{\mathrm{d}}{\mathrm{d}t}E_{h}(t)\leq-\varepsilon_{1}\int_{\Omega}|\mathbf{X}|^{2}\mathrm{d}x-\frac{\eta_{h}}{c}\int_{\Omega}|\nabla w|^{2}\,\mathrm{d}x\leq-\varepsilon_{h}F_{h}(t),\]

where \(\varepsilon_{h}=\min\{\varepsilon_{1},\frac{\eta_{h}}{c}\}\).

### Auxiliary problem: systems of ODEs

Since we have obtained the \(L^{\infty}\) convergence of \(w\) in Lemma 3.2, there exists a smooth bounded positive function \(\sigma(t)\), which decays to \(0\) as \(t\to\infty\) and satisfies

\[w^{*}-\sigma(t)\leq w(x,t)\leq w^{*}+\sigma(t),\quad x\in\Omega,\ t\geq 0. \tag{3.13}\]

Then we introduce the auxiliary ODE system as follows:

\[\left\{\begin{aligned} &\frac{\mathrm{d}}{\mathrm{d}t}\bar{u}_{h}=\bar{u}_{h}\big{[}1-\bar{u}_{h}-a_{2}\underline{v}_{h}-d_{1}(w^{*}-\sigma(t))\big{]},&\qquad t>0,\\ &\frac{\mathrm{d}}{\mathrm{d}t}\underline{u}_{h}=\underline{u}_{h}\big{[}1-\underline{u}_{h}-a_{2}\bar{v}_{h}-d_{1}(w^{*}+\sigma(t))\big{]},&\qquad t>0,\\ &\frac{\mathrm{d}}{\mathrm{d}t}\bar{v}_{h}=r\bar{v}_{h}\big{[}1-a_{1}\underline{u}_{h}-\bar{v}_{h}-\frac{d_{2}}{r}(w^{*}-\sigma(t))\big{]},&\qquad t>0,\\ &\frac{\mathrm{d}}{\mathrm{d}t}\underline{v}_{h}=r\underline{v}_{h}\big{[}1-a_{1}\bar{u}_{h}-\underline{v}_{h}-\frac{d_{2}}{r}(w^{*}+\sigma(t))\big{]},&\qquad t>0,\end{aligned}\right. \tag{3.14}\]

with initial data

\[\begin{split}\bar{u}_{h}(0)=\bar{u}_{0}^{h}:=\max\{\max_{\bar{\Omega}}u_{0},u^{*}\},&\quad\underline{u}_{h}(0)=\underline{u}_{0}^{h}:=\min\{\min_{\bar{\Omega}}u_{0},u^{*}\},\\ \bar{v}_{h}(0)=\bar{v}_{0}^{h}:=\max\{\max_{\bar{\Omega}}v_{0},v^{*}\},&\quad\underline{v}_{h}(0)=\underline{v}_{0}^{h}:=\min\{\min_{\bar{\Omega}}v_{0},v^{*}\}.\end{split} \tag{3.15}\]

From (3.15), we infer that the initial data of (3.14) satisfies

\[0<\underline{u}_{0}^{h}\leq u^{*}\leq\bar{u}_{0}^{h}\leq 1,\quad 0<\underline{v}_{0}^{h}\leq v^{*}\leq\bar{v}_{0}^{h}<+\infty. \tag{3.16}\]

By the Picard-Lindelöf theorem, the extension theorem for ODE solutions, as well as the comparison theorem for ODEs, it is standard to obtain the global existence and uniqueness of solutions of (3.14)-(3.15) in the following lemma. We omit its proof and refer to [20, Lemma 3.1] for details.

**Lemma 3.3**.: _There exists a unique global solution of (3.14)-(3.15) satisfying_

\[\begin{split}0<\bar{u}_{h}(t)\leq 1,&\quad 0<\underline{u}_{h}(t)\leq 1,\\ 0<\bar{v}_{h}(t)\leq\max\{\bar{v}_{0}^{h},1\},&\quad 0<\underline{v}_{h}(t)\leq 1.\end{split}\]
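Before turning to the comparison lemmas, it may help to see the squeezing mechanism of (3.14) numerically. The sketch below (an illustration under assumed parameter values, with a hand-picked exponential envelope standing in for \(\sigma(t)\)) integrates (3.14)-(3.15) and shows all four components converging to \((u^{*},u^{*},v^{*},v^{*})\), as Lemmas 3.4-3.6 predict.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed): a_1 < 1, a_1*a_2 < 1, d_2/r > a_2 + d_1 - 1.
a1, a2, d1, d2r, r = 0.5, 0.8, 0.3, 1.0, 1.0
vh = (1 - a1) / (1 - a1 * a2 + d2r - a1 * d1)
u_star, v_star, w_star = 1 - (a2 + d1) * vh, vh, vh

sigma = lambda t: 0.2 * np.exp(-0.5 * t)  # any positive envelope decaying to 0

def rhs(t, y):  # the auxiliary system (3.14)
    ub, ul, vb, vl = y  # (u-bar, u-underline, v-bar, v-underline)
    s = sigma(t)
    return [ub * (1 - ub - a2 * vl - d1 * (w_star - s)),
            ul * (1 - ul - a2 * vb - d1 * (w_star + s)),
            r * vb * (1 - a1 * ul - vb - d2r * (w_star - s)),
            r * vl * (1 - a1 * ub - vl - d2r * (w_star + s))]

# Initial data in the spirit of (3.15): upper data above, lower data below u*, v*.
y0 = [max(0.9, u_star), min(0.1, u_star), max(1.2, v_star), min(0.1, v_star)]
sol = solve_ivp(rhs, (0.0, 80.0), y0, rtol=1e-8, atol=1e-10)
print("final values:", np.round(sol.y[:, -1], 4))
print("targets     :", np.round([u_star, u_star, v_star, v_star], 4))
```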
We first show that \(u^{*}\), \(v^{*}\) are sandwiched by the solution of (3.14)-(3.15).

**Lemma 3.4**.: _The solution of (3.14)-(3.15) satisfies_

\[\underline{u}_{h}(t)\leq u^{*}\leq\bar{u}_{h}(t),\quad\underline{v}_{h}(t)\leq v^{*}\leq\bar{v}_{h}(t),\quad t\geq 0.\]

Proof.: We introduce the notations

\[f_{+}:=\max\{f,0\}\quad and\quad f_{-}:=\min\{f,0\},\]

which enjoy the properties that

\[f_{+}\cdot f_{-}\equiv 0,\quad f\cdot f_{+}=f_{+}^{2}\quad and\quad f\cdot f_{-}=f_{-}^{2}.\]

With the notations above, it remains to show

\[(u^{*}-\bar{u}_{h})_{+}=(\underline{u}_{h}-u^{*})_{+}=(v^{*}-\bar{v}_{h})_{+}=(\underline{v}_{h}-v^{*})_{+}=0,\quad t>0.\]

By the definition of \(u^{*},v^{*},w^{*}\), we have

\[\frac{\mathrm{d}}{\mathrm{d}t}(\bar{u}_{h}-u^{*})=\bar{u}_{h}\big{[}(u^{*}-\bar{u}_{h})+a_{2}(v^{*}-\underline{v}_{h})+d_{1}\sigma(t)\big{]}.\]

Multiplying the above equation with \(-(u^{*}-\bar{u}_{h})_{+}\), we obtain

\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\big{[}(u^{*}-\bar{u}_{h})_{+}\big{]}^{2}=&\bar{u}_{h}\big{[}-(u^{*}-\bar{u}_{h})_{+}^{2}+a_{2}(u^{*}-\bar{u}_{h})_{+}(\underline{v}_{h}-v^{*})-d_{1}\sigma(t)(u^{*}-\bar{u}_{h})_{+}\big{]}\\ \leq&\bar{u}_{h}\big{[}-(u^{*}-\bar{u}_{h})_{+}^{2}+a_{2}(u^{*}-\bar{u}_{h})_{+}(\underline{v}_{h}-v^{*})_{+}\big{]},\end{split}\]

thanks to the positivity of \(\sigma(t)\). Since \(\bar{u}_{h}\leq 1\), by Young's inequality, we obtain

\[\frac{\mathrm{d}}{\mathrm{d}t}\big{[}(u^{*}-\bar{u}_{h})_{+}\big{]}^{2}\leq\frac{a_{2}^{2}}{2}\,[(\underline{v}_{h}-v^{*})_{+}]^{2}.\]

In the same manner, we have

\[\frac{\mathrm{d}}{\mathrm{d}t}\big{[}(\underline{u}_{h}-u^{*})_{+}\big{]}^{2}\leq\frac{a_{2}^{2}}{2}\,[(v^{*}-\bar{v}_{h})_{+}]^{2},\]

\[\frac{\mathrm{d}}{\mathrm{d}t}\big{[}(v^{*}-\bar{v}_{h})_{+}\big{]}^{2}\leq\frac{r}{2}\max\{1,\bar{v}_{0}^{h}\}a_{1}^{2}\,[(\underline{u}_{h}-u^{*})_{+}]^{2}\]

and

\[\frac{\mathrm{d}}{\mathrm{d}t}\big{[}(\underline{v}_{h}-v^{*})_{+}\big{]}^{2}\leq\frac{r}{2}\max\{1,\bar{v}_{0}^{h}\}a_{1}^{2}\,[(u^{*}-\bar{u}_{h})_{+}]^{2}.\]

Summing up the above four inequalities, we find

\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}\Big{\{}\big{[}(u^{*}-\bar{u}_{h})_{+}\big{]}^{2}+\big{[}(\underline{u}_{h}-u^{*})_{+}\big{]}^{2}+\big{[}(v^{*}-\bar{v}_{h})_{+}\big{]}^{2}+\big{[}(\underline{v}_{h}-v^{*})_{+}\big{]}^{2}\Big{\}}\\ \leq&\,k_{0}\Big{\{}\big{[}(u^{*}-\bar{u}_{h})_{+}\big{]}^{2}+\big{[}(\underline{u}_{h}-u^{*})_{+}\big{]}^{2}+\big{[}(v^{*}-\bar{v}_{h})_{+}\big{]}^{2}+\big{[}(\underline{v}_{h}-v^{*})_{+}\big{]}^{2}\Big{\}},\end{split}\]

where \(k_{0}=\frac{1}{2}\max\{a_{2}^{2},\,r\max\{1,\bar{v}_{0}^{h}\}\,a_{1}^{2}\}\). Thanks to (3.16), we have

\[\big{[}(u^{*}-\bar{u}_{0}^{h})_{+}\big{]}^{2}=\big{[}(\underline{u}_{0}^{h}-u^{*})_{+}\big{]}^{2}=\big{[}(v^{*}-\bar{v}_{0}^{h})_{+}\big{]}^{2}=\big{[}(\underline{v}_{0}^{h}-v^{*})_{+}\big{]}^{2}=0.\]

By Gronwall's inequality, we obtain

\[\big{[}(u^{*}-\bar{u}_{h})_{+}\big{]}^{2}=\big{[}(\underline{u}_{h}-u^{*})_{+}\big{]}^{2}=\big{[}(v^{*}-\bar{v}_{h})_{+}\big{]}^{2}=\big{[}(\underline{v}_{h}-v^{*})_{+}\big{]}^{2}=0,\]

which ends the proof.

Secondly, we show that \((\bar{u}_{h},\bar{v}_{h})\) is indeed an upper solution and \((\underline{u}_{h},\underline{v}_{h})\) a lower solution of \((u,v)\) in (1.1)-(1.2).
**Lemma 3.5**.: _Suppose that the assumptions of Theorem 1.1 hold, \((u,v,w)\) is the global solution of (1.1)-(1.2), and \((\bar{u}_{h},\underline{u}_{h},\bar{v}_{h},\underline{v}_{h})\) is the solution of (3.14)-(3.15), then_

\[\underline{u}_{h}(t)\leq u(x,t)\leq\bar{u}_{h}(t),\quad x\in\Omega,\ t\geq 0,\]
\[\underline{v}_{h}(t)\leq v(x,t)\leq\bar{v}_{h}(t),\quad x\in\Omega,\ t\geq 0.\]

Proof.: To simplify the notation, we introduce new variables

\[\bar{U}(x,t):=\bar{u}_{h}(t)-u(x,t),\quad\underline{U}(x,t):=u(x,t)-\underline{u}_{h}(t),\]
\[\bar{V}(x,t):=\bar{v}_{h}(t)-v(x,t),\quad\underline{V}(x,t):=v(x,t)-\underline{v}_{h}(t).\]

With the notation above, we only need to show

\[\bar{U}_{-}=\underline{U}_{-}=\bar{V}_{-}=\underline{V}_{-}\equiv 0,\quad x\in\Omega,\ t\geq 0.\]

Thanks to the key property (3.13), direct computations show that

\[\begin{split}\bar{U}_{t}=&\bar{U}\big{[}1-\bar{u}_{h}-a_{2}\underline{v}_{h}-d_{1}(w^{*}-\sigma)\big{]}+u\big{[}-\bar{U}+a_{2}\underline{V}+d_{1}(w-(w^{*}-\sigma))\big{]}\\ \geq&\big{[}1-u-\bar{u}_{h}-a_{2}\underline{v}_{h}-d_{1}(w^{*}-\sigma)\big{]}\bar{U}+a_{2}u\underline{V}.\end{split}\]

Multiplying the inequality above with \(\bar{U}_{-}\) and using Young's inequality yields

\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}(\bar{U}_{-})^{2}\leq&\big{[}1-u-\bar{u}_{h}-a_{2}\underline{v}_{h}-d_{1}(w^{*}-\sigma)\big{]}\bar{U}_{-}^{2}+a_{2}u\bar{U}_{-}\underline{V}\\ \leq&(1+d_{1}\sigma)\bar{U}_{-}^{2}+a_{2}u\bar{U}_{-}\underline{V}_{-}\leq(1+d_{1}\sigma+\frac{1}{2}u)\bar{U}_{-}^{2}+\frac{a_{2}^{2}}{2}\underline{V}_{-}^{2}.\end{split}\]

Integrating over \(\Omega\), we have

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}(\bar{U}_{-})^{2}\mathrm{d}x\leq\int_{\Omega}(2+d_{1}\sigma+\frac{1}{2}u)\bar{U}_{-}^{2}\,\mathrm{d}x+\frac{a_{2}^{2}}{2}\int_{\Omega}\underline{V}_{-}^{2}\,\mathrm{d}x. \tag{3.17}\]

In the same manner, we obtain

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}(\underline{U}_{-})^{2}\mathrm{d}x\leq\int_{\Omega}(2+d_{1}\sigma+\frac{1}{2}u)\underline{U}_{-}^{2}\,\mathrm{d}x+\frac{a_{2}^{2}}{2}\int_{\Omega}\bar{V}_{-}^{2}\,\mathrm{d}x. \tag{3.18}\]

Now, we consider \(\bar{V}_{-}\) and \(\underline{V}_{-}\).
Similarly,

\[\begin{split}\bar{V}_{t}=&D\nabla\cdot((1-u)\nabla\bar{V})+r\bar{V}\big{[}1-a_{1}\underline{u}_{h}-\bar{v}_{h}-\frac{d_{2}}{r}(w^{*}-\sigma)\big{]}+rv\big{(}a_{1}\underline{U}-\bar{V}+\frac{d_{2}}{r}(w-(w^{*}-\sigma))\big{)}\\ \geq&D\nabla\cdot((1-u)\nabla\bar{V})+r\big{[}1-v-a_{1}\underline{u}_{h}-\bar{v}_{h}-\frac{d_{2}}{r}(w^{*}-\sigma)\big{]}\bar{V}+ra_{1}v\underline{U}.\end{split}\]

Multiplying the inequality above with \(\bar{V}_{-}\), it follows that

\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}(\bar{V}_{-})^{2}\leq&\big{[}D\nabla\cdot((1-u)\nabla\bar{V})\big{]}\bar{V}_{-}+\big{[}1-v-a_{1}\underline{u}_{h}-\bar{v}_{h}-\frac{d_{2}}{r}(w^{*}-\sigma)\big{]}\bar{V}_{-}^{2}+a_{1}v\underline{U}\bar{V}_{-}\\ \leq&\big{[}D\nabla\cdot((1-u)\nabla\bar{V})\big{]}\bar{V}_{-}+\big{[}1-v+\frac{d_{2}}{r}\sigma\big{]}\bar{V}_{-}^{2}+a_{1}v\underline{U}_{-}\bar{V}_{-}\\ \leq&\big{[}D\nabla\cdot((1-u)\nabla\bar{V})\big{]}\bar{V}_{-}+(1+\frac{d_{2}}{r}\sigma+\frac{1}{2}v^{2})\bar{V}_{-}^{2}+\frac{a_{1}^{2}}{2}\underline{U}_{-}^{2}.\end{split}\]

By integrating over \(\Omega\) and integrating by parts, we have

\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}(\bar{V}_{-})^{2}\mathrm{d}x\leq&-D\int_{\Omega}(1-u)|\nabla\bar{V}_{-}|^{2}\mathrm{d}x+\int_{\Omega}(1+\frac{d_{2}}{r}\sigma+\frac{1}{2}v^{2})\bar{V}_{-}^{2}\,\mathrm{d}x+\frac{a_{1}^{2}}{2}\int_{\Omega}\underline{U}_{-}^{2}\,\mathrm{d}x\\ \leq&\int_{\Omega}(1+\frac{d_{2}}{r}\sigma+\frac{1}{2}v^{2})\bar{V}_{-}^{2}\,\mathrm{d}x+\frac{a_{1}^{2}}{2}\int_{\Omega}\underline{U}_{-}^{2}\,\mathrm{d}x. \end{split} \tag{3.19}\]

In the same manner, we obtain

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}(\underline{V}_{-})^{2}\mathrm{d}x\leq\int_{\Omega}(1+\frac{d_{2}}{r}\sigma+\frac{1}{2}v^{2})\underline{V}_{-}^{2}\,\mathrm{d}x+\frac{a_{1}^{2}}{2}\int_{\Omega}\bar{U}_{-}^{2}\,\mathrm{d}x. \tag{3.20}\]

Adding (3.17)-(3.20) together, due to (2.2) and the boundedness of \(\sigma\), there exists a constant \(k_{1}>0\) such that

\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\big{[}(\bar{U}_{-})^{2}+(\underline{U}_{-})^{2}+(\bar{V}_{-})^{2}+(\underline{V}_{-})^{2}\big{]}\mathrm{d}x\leq k_{1}\int_{\Omega}\big{[}(\bar{U}_{-})^{2}+(\underline{U}_{-})^{2}+(\bar{V}_{-})^{2}+(\underline{V}_{-})^{2}\big{]}\mathrm{d}x.\]

Since (3.15) implies that

\[(\bar{U}_{-})^{2}(0)=(\underline{U}_{-})^{2}(0)=(\bar{V}_{-})^{2}(0)=(\underline{V}_{-})^{2}(0)=0,\]

the conclusion follows directly from Gronwall's inequality.

### \(L^{\infty}\) convergence of \(u,v\) in the heterogeneous state

In Lemmas 3.4 and 3.5, we have derived that

\[\underline{u}_{h}(t)\leq u(x,t),\,u^{*}\leq\bar{u}_{h}(t),\quad x\in\Omega,\ t\geq 0,\]
\[\underline{v}_{h}(t)\leq v(x,t),\,v^{*}\leq\bar{v}_{h}(t),\quad x\in\Omega,\ t\geq 0.\]

Now we are ready to prove the \(L^{\infty}\) convergence of \(u\), \(v\) to \(u^{*}\), \(v^{*}\) respectively.

**Lemma 3.6**.: _Suppose that the assumptions of Theorem 1.1 hold, \((u,v,w)\) is the solution of (1.1)-(1.2), then \(u\) and \(v\) satisfy_

\[||u-u^{*}||_{L^{\infty}(\Omega)}+||v-v^{*}||_{L^{\infty}(\Omega)}\to 0\quad as\quad t\to\infty.\]

Proof.: For convenience, we first rewrite (3.14) in a more tractable form.
Thanks to the positivity obtained in Lemma 3.3, we rewrite (3.14) in the following form

\[\left\{\begin{aligned}\frac{(\bar{u}_{h})_{t}}{\bar{u}_{h}}&=1-\bar{u}_{h}-a_{2}\underline{v}_{h}-d_{1}(w^{*}-\sigma(t)),&\qquad t>0,\\ \frac{(\underline{u}_{h})_{t}}{\underline{u}_{h}}&=1-\underline{u}_{h}-a_{2}\bar{v}_{h}-d_{1}(w^{*}+\sigma(t)),&\qquad t>0,\\ \frac{(\bar{v}_{h})_{t}}{\bar{v}_{h}}&=r\big{[}1-a_{1}\underline{u}_{h}-\bar{v}_{h}-\frac{d_{2}}{r}(w^{*}-\sigma(t))\big{]},&\qquad t>0,\\ \frac{(\underline{v}_{h})_{t}}{\underline{v}_{h}}&=r\big{[}1-a_{1}\bar{u}_{h}-\underline{v}_{h}-\frac{d_{2}}{r}(w^{*}+\sigma(t))\big{]},&\qquad t>0.\end{aligned}\right.\]

Straightforward computations show

\[\frac{\mathrm{d}}{\mathrm{d}t}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}=-(\bar{u}_{h}-\underline{u}_{h})+a_{2}(\bar{v}_{h}-\underline{v}_{h})+2d_{1}\sigma(t),\quad t>0, \tag{3.21}\]

\[\frac{\mathrm{d}}{\mathrm{d}t}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}=r\big{[}a_{1}(\bar{u}_{h}-\underline{u}_{h})-(\bar{v}_{h}-\underline{v}_{h})\big{]}+2d_{2}\sigma(t),\quad t>0. \tag{3.22}\]

Introducing the notations

\[\mathcal{A}_{0}:=\frac{1+a_{2}}{(1+a_{1})r},\quad\mathcal{A}_{1}:=\frac{1-a_{1}a_{2}}{1+a_{2}},\quad\mathcal{A}_{2}:=2d_{1}+2d_{2}\mathcal{A}_{0},\]

and adding (3.21) to \(\mathcal{A}_{0}\times\)(3.22), we obtain

\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\Big{)}\leq-\mathcal{A}_{1}\big{[}(\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\big{]}+\mathcal{A}_{2}\sigma(t),\quad t>0. \tag{3.23}\]

Set

\[\mathcal{M}:=\max\Big{\{}1,\ \max\{\mathcal{A}_{0},\tfrac{1}{\mathcal{A}_{0}}\}\max\{\bar{v}_{0}^{h},1\}\max\{\tfrac{1}{u^{*}},\tfrac{1}{v^{*}}\}\Big{\}}.\]

Since \(\sigma(t)\to 0\) as \(t\to\infty\), there exists \(T_{1}>0\) such that

\[\mathcal{A}_{2}\sigma(t)\leq\frac{\mathcal{A}_{1}}{8\mathcal{M}}\min\{u^{*},v^{*}\},\quad t\geq T_{1}.\]

We claim that there exists \(T_{2}\geq T_{1}\) such that

\[\big{[}(\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\big{]}(T_{2})<\frac{1}{4\mathcal{M}}\min\{u^{*},v^{*}\}. \tag{3.24}\]

Indeed, if instead \((\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\geq\frac{1}{4\mathcal{M}}\min\{u^{*},v^{*}\}\) for all \(t\geq T_{1}\), then (3.23) and the choice of \(T_{1}\) would give

\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\Big{)}(t)\leq-\frac{\mathcal{A}_{1}}{8\mathcal{M}}\min\{u^{*},v^{*}\},\quad t\geq T_{1},\]

which would force \(\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\) to become negative in finite time. This contradicts Lemma 3.4, by which \(\bar{u}_{h}\geq u^{*}\geq\underline{u}_{h}\) and \(\bar{v}_{h}\geq v^{*}\geq\underline{v}_{h}\), so that this quantity is nonnegative. Hence (3.24) holds.

Once the upper and lower solutions are close enough to each other at \(T_{2}\), namely, (3.24) holds, we claim that they will not be far from each other again; more specifically, for all \(t\geq T_{2}\),

\[\big{[}(\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\big{]}(t)\leq\frac{1}{3}\min\{u^{*},v^{*}\}. \tag{3.25}\]

In fact, suppose they drift apart again, namely there exists \(T_{3}>T_{2}\) such that

\[\big{[}(\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\big{]}(T_{3})=\frac{1}{4\mathcal{M}}\min\{u^{*},v^{*}\} \tag{3.26}\]

and

\[\big{[}(\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\big{]}(t)<\frac{1}{4\mathcal{M}}\min\{u^{*},v^{*}\},\quad T_{2}\leq t<T_{3}.\]

If \(T_{3}=\infty\), then (3.25) naturally holds.
If \(T_{3}<\infty\), let \(T_{4}>T_{3}\) denote the maximal time such that for all \(T_{3}<t\leq T_{4}\), there holds

\[\big{[}(\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\big{]}(t)\geq\frac{1}{4\mathcal{M}}\min\{u^{*},v^{*}\}.\]

Then it follows directly that for all \(T_{3}<t\leq T_{4}\),

\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\right)(t)\leq-\frac{\mathcal{A}_{1}}{8\mathcal{M}}\min\{u^{*},v^{*}\}.\]

Hence, for all \(T_{3}<t\leq T_{4}\), we have

\[\begin{split}\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(t)\leq&\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(T_{3})-\frac{\mathcal{A}_{1}}{8\mathcal{M}}\min\{u^{*},v^{*}\}(t-T_{3})\\ \leq&\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(T_{3}).\end{split}\]

By the fact that \(\frac{b-a}{b}\leq\ln\frac{b}{a}\leq\frac{b-a}{a}\) if \(b>a>0\), we obtain that for all \(T_{3}<t\leq T_{4}\),

\[\begin{split}&\big{[}(\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\big{]}(t)\\ \leq&\max\{1,\frac{1}{\mathcal{A}_{0}}\}\big{(}\bar{u}_{h}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\bar{v}_{h}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(t)\\ \leq&\max\{1,\frac{1}{\mathcal{A}_{0}}\}\max\{\bar{v}_{0}^{h},1\}\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(T_{3})\\ \leq&\max\{1,\frac{1}{\mathcal{A}_{0}}\}\max\{\bar{v}_{0}^{h},1\}\max\{1,\mathcal{A}_{0}\}\Big{[}\,\frac{1}{\underline{u}_{h}}(\bar{u}_{h}-\underline{u}_{h})+\frac{1}{\underline{v}_{h}}(\bar{v}_{h}-\underline{v}_{h})\Big{]}(T_{3})\\ =&\max\{\mathcal{A}_{0},\frac{1}{\mathcal{A}_{0}}\}\max\{\bar{v}_{0}^{h},1\}\Big{[}\,\frac{1}{\underline{u}_{h}}(\bar{u}_{h}-\underline{u}_{h})+\frac{1}{\underline{v}_{h}}(\bar{v}_{h}-\underline{v}_{h})\Big{]}(T_{3}).\end{split} \tag{3.27}\]

Notice from Lemma 3.4, (3.26) and the definition of \(\mathcal{M}\) that

\[\underline{u}_{h}(T_{3})\geq u^{*}-\frac{1}{4\mathcal{M}}\min\{u^{*},v^{*}\}\geq\frac{3}{4}u^{*},\]
\[\underline{v}_{h}(T_{3})\geq v^{*}-\frac{1}{4\mathcal{M}}\min\{u^{*},v^{*}\}\geq\frac{3}{4}v^{*}.\]

Substituting the inequalities above into (3.27), we have

\[\begin{split}&(\bar{u}_{h}-\underline{u}_{h})(t)+(\bar{v}_{h}-\underline{v}_{h})(t)\\ \leq&\frac{4}{3}\max\{\mathcal{A}_{0},\frac{1}{\mathcal{A}_{0}}\}\max\{\bar{v}_{0}^{h},1\}\max\{\frac{1}{u^{*}},\frac{1}{v^{*}}\}\big{(}(\bar{u}_{h}-\underline{u}_{h})+(\bar{v}_{h}-\underline{v}_{h})\big{)}(T_{3})\\ \leq&\frac{1}{3}\min\{u^{*},v^{*}\},\quad T_{3}<t\leq T_{4}.\end{split} \tag{3.28}\]

From the discussion above, we know that when (3.26) happens, either (3.28) holds or \((\bar{u}_{h}-\underline{u}_{h})(t)+(\bar{v}_{h}-\underline{v}_{h})(t)\) enjoys the sharper bound \(\frac{1}{4\mathcal{M}}\min\{u^{*},v^{*}\}\). Hence, we conclude that (3.25) holds for all \(t\geq T_{2}\). Thanks to (3.25), \(\underline{u}_{h}\) and \(\underline{v}_{h}\) have the positive lower bound \(\frac{2}{3}\min\{u^{*},v^{*}\}\) when \(t\geq T_{2}\). Therefore, there exists a positive constant \(\kappa\leq\frac{2}{3}\min\{u^{*},v^{*}\}\) which is a lower bound of \(\underline{u}_{h}\) and \(\underline{v}_{h}\) uniformly in \(t\); Lemma 3.5 then immediately implies that \(\kappa\) is also a lower bound of \(u\) and \(v\).
Now, we turn back to (3.23) and prove the following convergence property:

\[||\bar{u}_{h}-\underline{u}_{h}||_{L^{\infty}(\Omega)}+||\bar{v}_{h}-\underline{v}_{h}||_{L^{\infty}(\Omega)}\to 0\quad as\quad t\to\infty. \tag{3.29}\]

Direct computation yields that

\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}\leq&-\mathcal{A}_{1}\big{(}\underline{u}_{h}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\underline{v}_{h}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}+\mathcal{A}_{2}\sigma(t)\\ \leq&-2\mathcal{A}_{3}\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}+\mathcal{A}_{2}\sigma(t),\end{split} \tag{3.30}\]

where \(\mathcal{A}_{3}:=\frac{1}{2}\kappa\mathcal{A}_{1}\min\{1,\frac{1}{\mathcal{A}_{0}}\}\). By the comparison principle for ODEs, we obtain

\[\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(t)\leq e^{-2\mathcal{A}_{3}t}\Big{\{}\int_{0}^{t}\mathcal{A}_{2}\sigma(s)e^{2\mathcal{A}_{3}s}\mathrm{d}s+\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(0)\Big{\}}.\]

Thanks to L'Hôpital's rule,

\[\lim_{t\to+\infty}\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(t)\leq\lim_{t\to+\infty}\frac{\mathcal{A}_{2}\sigma(t)e^{2\mathcal{A}_{3}t}}{e^{2\mathcal{A}_{3}t}}=0,\]

and we achieve (3.29) after using the inequalities

\[\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}\geq\frac{\bar{u}_{h}-\underline{u}_{h}}{\bar{u}_{h}},\quad\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\geq\frac{\bar{v}_{h}-\underline{v}_{h}}{\bar{v}_{h}}.\]

Combining (3.29) with Lemma 3.4 and Lemma 3.5, the proof is complete.

Finally, we complete the proof of Theorem 1.1.

Proof of Theorem 1.1.: First, we prove that there exists a constant \(\mathcal{C}_{0}>0\) such that \(E_{h}(t)\) and \(F_{h}(t)\), which are defined in (3.1) and (3.2), satisfy \(E_{h}(t)\leq\mathcal{C}_{0}F_{h}(t)\) for \(t\) large. To this end, we again consider the function \(\mathcal{I}(\mathfrak{u}):=\mathfrak{u}-u^{*}\ln\mathfrak{u}\) for \(\mathfrak{u}>0\). By L'Hôpital's rule, we have

\[\lim_{\mathfrak{u}\to u^{*}}\frac{\mathcal{I}(\mathfrak{u})-\mathcal{I}(u^{*})}{(\mathfrak{u}-u^{*})^{2}}=\lim_{\mathfrak{u}\to u^{*}}\frac{\mathcal{I}^{\prime}(\mathfrak{u})}{2(\mathfrak{u}-u^{*})}=\frac{1}{2u^{*}}.\]

Thanks to Lemma 3.6, there exists \(\mathcal{T}_{1}\) large enough such that for all \(t\geq\mathcal{T}_{1}\), we have

\[A_{h}(t)=\int_{\Omega}u(x,t)-u^{*}-u^{*}\ln\frac{u(x,t)}{u^{*}}\,\mathrm{d}x=\int_{\Omega}\mathcal{I}(u(x,t))-\mathcal{I}(u^{*})\,\mathrm{d}x\leq\frac{1}{u^{*}}\int_{\Omega}\big{(}u(x,t)-u^{*}\big{)}^{2}\mathrm{d}x, \tag{3.31}\]

as well as

\[A_{h}(t)\geq\frac{1}{4u^{*}}\int_{\Omega}\big{(}u(x,t)-u^{*}\big{)}^{2}\mathrm{d}x. \tag{3.32}\]

In the same way, enlarging \(\mathcal{T}_{1}\) if necessary, for all \(t\geq\mathcal{T}_{1}\) we have

\[\frac{1}{4v^{*}}\int_{\Omega}(v-v^{*})^{2}\,\mathrm{d}x\leq B_{h}(t)\leq\frac{1}{v^{*}}\int_{\Omega}(v-v^{*})^{2}\,\mathrm{d}x. \tag{3.33}\]

Combining (3.1), (3.31) and (3.33), we obtain that there exists a constant \(\mathcal{C}_{0}>0\) such that \(E_{h}(t)\leq\mathcal{C}_{0}F_{h}(t)\) for all \(t\geq\mathcal{T}_{1}\).
Now, substituting the above inequality into (3.3) yields

\[\frac{\mathrm{d}}{\mathrm{d}t}E_{h}(t)\leq-\varepsilon_{h}F_{h}(t)\leq-\frac{\varepsilon_{h}}{\mathcal{C}_{0}}E_{h}(t),\quad t\geq\mathcal{T}_{1}.\]

Hence, there exist constants \(\mathcal{C}_{1}>0\) and \(\kappa_{1}>0\) such that

\[E_{h}(t)\leq\mathcal{C}_{1}e^{-\kappa_{1}t},\quad t>0.\]

To obtain the exponential decay rate, we substitute (3.32) and the left inequality of (3.33) into the inequality above, and it follows that there exists a constant \(\mathcal{C}_{2}>0\) such that

\[||u-u^{*}||_{L^{2}(\Omega)}^{2}+||v-v^{*}||_{L^{2}(\Omega)}^{2}+||w-w^{*}||_{L^{2}(\Omega)}^{2}\leq\mathcal{C}_{2}e^{-\kappa_{1}t},\quad t>0.\]

Notice that for all \(\xi\in L^{\infty}(\Omega)\), we have

\[||\xi||_{L^{2n}(\Omega)}\leq||\xi||_{L^{\infty}(\Omega)}^{(n-1)/n}||\xi||_{L^{2}(\Omega)}^{1/n}.\]

By combining the two inequalities above, we derive that there exists a constant \(\mathcal{C}_{3}>0\) such that

\[||u-u^{*}||_{L^{2n}(\Omega)}+||v-v^{*}||_{L^{2n}(\Omega)}+||w-w^{*}||_{L^{2n}(\Omega)}\leq\mathcal{C}_{3}e^{-(\kappa_{1}/(2n))t},\quad t>0. \tag{3.34}\]

Now, we are ready to improve (3.34) to \(L^{\infty}\) convergence. Using the variation-of-constants formula for the third equation of (1.1), for each \(t>2\), we can estimate \(w-w^{*}\):

\[\begin{split}&||w(\cdot,t)-w^{*}||_{L^{\infty}(\Omega)}\\ \leq&||e^{\Delta}(w(\cdot,t-1)-w^{*})||_{L^{\infty}(\Omega)}+\int_{t-1}^{t}||e^{(t-s)\Delta}c(v(\cdot,s)-w(\cdot,s))||_{L^{\infty}(\Omega)}\,\mathrm{d}s\\ :=&\,\mathcal{I}_{1}+\mathcal{I}_{2}.\end{split} \tag{3.35}\]

The standard \(L^{p}\)-\(L^{q}\) estimate for the heat semigroup and (3.34) yield the existence of \(\mathcal{C}_{4}>0\) such that

\[\mathcal{I}_{1}\leq\mathcal{C}_{4}(t-(t-1))^{-1/4}||w(\cdot,t-1)-w^{*}||_{L^{2n}(\Omega)}\leq\mathcal{C}_{3}\mathcal{C}_{4}e^{-(\kappa_{1}/(2n))(t-1)}\]

as well as

\[\begin{split}\mathcal{I}_{2}\leq&\int_{t-1}^{t}\left\|e^{(t-s)\Delta}c\big{(}(v(\cdot,s)-v^{*})-(w(\cdot,s)-w^{*})\big{)}\right\|_{L^{\infty}(\Omega)}\,\mathrm{d}s\\ \leq& c\,\mathcal{C}_{4}\int_{t-1}^{t}(t-s)^{-1/4}\left[\,\|v(\cdot,s)-v^{*}\|_{L^{2n}(\Omega)}+\|w(\cdot,s)-w^{*}\|_{L^{2n}(\Omega)}\right]\mathrm{d}s\\ \leq& c\,\mathcal{C}_{3}\mathcal{C}_{4}e^{-(\kappa_{1}/(2n))(t-1)}.\end{split}\]

Substituting the two inequalities above into (3.35), we finally obtain that there exists a constant \(\mathcal{C}_{5}>0\) such that

\[||w(\cdot,t)-w^{*}||_{L^{\infty}(\Omega)}\leq\mathcal{C}_{5}e^{-(\kappa_{1}/(2n))t}. \tag{3.36}\]

Now, we turn back to (3.30) and derive the decay rates of \(\|u-u^{*}\|_{L^{\infty}}\) and \(\|v-v^{*}\|_{L^{\infty}}\). Thanks to (3.36), we may take \(\sigma(t)=\mathcal{C}_{5}e^{-(\kappa_{1}/(2n))t}\).
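The weighted comparison carried out in the next step rests on a simple scalar mechanism: a linear decay rate competing with an exponentially decaying forcing, so the solution decays at the slower of the two rates. A minimal Python sketch of this mechanism (with made-up rate constants, purely for illustration) reads as follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Worst case of y' <= -2*A3*y + B*exp(-k*t): the solution decays like
# exp(-min(2*A3, k)*t), which motivates the exponent A4 = min(k/2, A3) below.
A3, B, k = 0.6, 1.0, 0.5
sol = solve_ivp(lambda t, y: -2 * A3 * y + B * np.exp(-k * t),
                (0.0, 20.0), [2.0], dense_output=True, rtol=1e-9, atol=1e-13)
rate = min(2 * A3, k)
for t in (5.0, 10.0, 20.0):
    y = sol.sol(t)[0]
    print(f"t = {t:4.0f}:  y = {y:.3e},  y * exp(rate*t) = {y * np.exp(rate * t):.4f}")
# The rescaled column stabilises near B/(2*A3 - k) ~ 1.4286, confirming the rate.
```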
Multiplying (3.30) by \(e^{\mathcal{A}_{4}t}\), where \(\mathcal{A}_{4}=\min\{\frac{\kappa_{1}}{4n},\,\mathcal{A}_{3}\}\), for all \(t>0\) we have

\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{[}e^{\mathcal{A}_{4}t}\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}\Big{]}\leq-\mathcal{A}_{4}\Big{[}e^{\mathcal{A}_{4}t}\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}\Big{]}+\mathcal{C}_{5}\mathcal{A}_{2}e^{-\mathcal{A}_{4}t}.\]

Using the comparison principle for ODEs, we obtain

\[\Big{[}e^{\mathcal{A}_{4}t}\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}\Big{]}(t)\leq e^{-\mathcal{A}_{4}t}\Big{\{}\int_{0}^{t}\mathcal{A}_{2}\mathcal{C}_{5}\,\mathrm{d}s+\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(0)\Big{\}},\]

which immediately implies that

\[\big{(}\ln\frac{\bar{u}_{h}}{\underline{u}_{h}}+\mathcal{A}_{0}\ln\frac{\bar{v}_{h}}{\underline{v}_{h}}\big{)}(t)\to 0\quad\text{exponentially as}\quad t\to\infty.\]

Using Lemma 3.4 and Lemma 3.5, as well as the lower bound on \(\underline{u}_{h}\) and \(\underline{v}_{h}\) obtained in the proof of Lemma 3.6, we obtain

\[||u(\cdot,t)-u^{*}||_{L^{\infty}(\Omega)}+||v(\cdot,t)-v^{*}||_{L^{\infty}(\Omega)}\to 0\quad\text{exponentially as}\quad t\to\infty.\]

The proof is complete.

## 4 The homogeneous tumor state

In this section, we present the proof of Theorem 1.2, which concerns the global convergence to the homogeneous tumor state

\[(0,\tilde{v},\tilde{w})=\left(0,\left(1+\frac{d_{2}}{r}\right)^{-1},\left(1+\frac{d_{2}}{r}\right)^{-1}\right).\]

The main approach of the proof is similar to that of Theorem 1.1. To avoid redundancy, we only present the details where the arguments are crucial or different.

First of all, to prove the \(L^{\infty}\) convergence of \(w\) to \(\tilde{w}\), the key step is the construction of a proper Lyapunov functional in the following lemma, adjusted to the homogeneous tumor state \((0,\tilde{v},\tilde{w})\) on the basis of the Lyapunov functional defined in Lemma 3.1.

**Lemma 4.1**.: _Suppose that the assumptions of Theorem 1.2 hold, \((u,v,w)\) is the global solution of (1.1)-(1.2), and define_

\[A_{c}(t)=\int_{\Omega}u(x,t)\,\mathrm{d}x,\]
\[B_{c}(t)=\int_{\Omega}v(x,t)-\tilde{v}-\tilde{v}\ln\frac{v(x,t)}{\tilde{v}}\,\mathrm{d}x,\]
\[C_{c}(t)=\frac{1}{2}\int_{\Omega}(w(x,t)-\tilde{w})^{2}\,\mathrm{d}x.\]

_Then there exist \(\beta_{c}>0\), \(\eta_{c}>0\) and \(\varepsilon_{c}>0\) such that the functions \(E_{c}(t)\) and \(F_{c}(t)\) defined by_

\[E_{c}(t)=A_{c}(t)+\frac{\beta_{c}}{r}B_{c}(t)+\frac{\eta_{c}}{c}C_{c}(t),\quad t>0, \tag{4.1}\]

\[F_{c}(t)=\int_{\Omega}u(x,t)^{2}\,\mathrm{d}x+\int_{\Omega}(v(x,t)-\tilde{v})^{2}\,\mathrm{d}x+\int_{\Omega}(w(x,t)-\tilde{w})^{2}\,\mathrm{d}x+\int_{\Omega}\left|\nabla w(x,t)\right|^{2}\,\mathrm{d}x,\quad t>0, \tag{4.2}\]

_satisfy_

\[E_{c}(t)\geq 0,\quad t>0,\]

_as well as_

\[\frac{\mathrm{d}}{\mathrm{d}t}E_{c}(t)\leq-\varepsilon_{c}F_{c}(t). \tag{4.3}\]

The idea of the proof of Lemma 4.1 is similar to that of Lemma 3.1. We still provide the details, since it is Lemma 4.1 that requires the conditions imposed on the parameters in Theorem 1.2, and the computations vary due to the change of the Lyapunov functional and of the steady state.
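Before entering the proof, note that the parameter conditions it produces — the inequalities (4.10)-(4.11) derived below, together with \(\delta>1\) and \(a_{1}a_{2}<\delta\) — can be probed numerically in the same spirit as the sketch in Section 3. The following Python fragment (with assumed, illustrative parameter values) simply reports whether a common \(\beta_{c}\) exists.

```python
import numpy as np

# Illustrative parameters (assumed): chosen so that a_1 < 1 and delta > 1.
a1, a2, d1, d2r = 0.5, 1.2, 0.8, 0.4
delta = (a2 + d1) / (1 + d2r)

Phi_c = lambda b: -a1**2 * b**2 + 2 * (2 - a1) * (a2 + d1) * b - (a2 + d1)**2  # (4.10)
Psi_c = lambda b: -a1**2 * b**2 + 2 * (2 * delta - a1 * a2) * b - a2**2        # (4.11)

print(f"delta = {delta:.4f};  delta > 1: {delta > 1};  a1*a2 < delta: {a1 * a2 < delta}")
candidates = [b for b in np.linspace(0.01, 100.0, 10000)
              if Phi_c(b) > 0 and Psi_c(b) > 0]
print("a beta_c satisfying (4.10)-(4.11):",
      f"{candidates[0]:.3f}" if candidates else "none in the scanned range")
```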
Proof of Lemma 4.1.: Similarly to Lemma 3.1, \(A_{c}(t)\), \(B_{c}(t)\) and \(C_{c}(t)\) are nonnegative. For convenience, we denote

\[\delta:=(a_{2}+d_{1})\left(\frac{d_{2}}{r}+1\right)^{-1}.\]

We assume for now that we have

\[\delta>1 \tag{4.4}\]

and

\[a_{1}a_{2}<\delta. \tag{4.5}\]

In the later part of this proof, we will show that (4.4) and (4.5) are actually contained in the cases (1.9)-(1.12). In fact, (4.4), namely \(\frac{d_{2}}{r}<a_{2}+d_{1}-1\), comes from the linear stability of the homogeneous tumor state. (4.4) is already contained in the cases (1.9) and (1.10), and we will verify (4.4) in the cases (1.11) and (1.12) in the comparison between \(d_{2}^{c}\) and \(a_{2}+d_{1}-1\) below. On the other hand, as for (4.5): in the case (1.9), (4.5) is contained in its last inequality; in the cases (1.10) and (1.11), thanks to \(a_{1}a_{2}<1\) and (4.4), (4.5) automatically holds; and we will verify (4.5) in the case (1.12) below.

Due to the fact that \(u\leq 1\), we have

\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}A_{c}(t)=&\int_{\Omega}u\left(1-u-a_{2}v-d_{1}w\right)\mathrm{d}x\\ =&-(\delta-1)\int_{\Omega}u\,\mathrm{d}x-\int_{\Omega}u^{2}\,\mathrm{d}x-a_{2}\int_{\Omega}u(v-\tilde{v})\mathrm{d}x-d_{1}\int_{\Omega}u(w-\tilde{w})\mathrm{d}x\\ \leq&-\delta\int_{\Omega}u^{2}\,\mathrm{d}x-a_{2}\int_{\Omega}u(v-\tilde{v})\mathrm{d}x-d_{1}\int_{\Omega}u(w-\tilde{w})\mathrm{d}x.\end{split} \tag{4.6}\]

Similarly,

\[\begin{split}\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}t}B_{c}(t)=&\frac{1}{r}\int_{\Omega}\frac{v-\tilde{v}}{v}\left[D\nabla\cdot((1-u)\nabla v)+rv\,(1-v-a_{1}u)-d_{2}w\right]\mathrm{d}x\\ =&-\frac{D\tilde{v}}{r}\int_{\Omega}(1-u)\left|\frac{\nabla v}{v}\right|^{2}\mathrm{d}x+\int_{\Omega}(v-\tilde{v})\left[-a_{1}u+(\tilde{v}-v)+\frac{d_{2}}{r}(\tilde{w}-w)\right]\mathrm{d}x\\ =&-a_{1}\int_{\Omega}u(v-\tilde{v})\mathrm{d}x-\int_{\Omega}(v-\tilde{v})^{2}\mathrm{d}x-\frac{d_{2}}{r}\int_{\Omega}(v-\tilde{v})(w-\tilde{w})\mathrm{d}x\\ &-\frac{D\tilde{v}}{r}\int_{\Omega}(1-u)\left|\frac{\nabla v}{v}\right|^{2}\mathrm{d}x,\end{split} \tag{4.7}\]

\[\begin{split}\frac{1}{c}\frac{\mathrm{d}}{\mathrm{d}t}C_{c}(t)=&\frac{1}{c}\int_{\Omega}(w-\tilde{w})\left[\Delta w+c(v-w)\right]\mathrm{d}x\\ =&-\frac{1}{c}\int_{\Omega}\left|\nabla w\right|^{2}\mathrm{d}x+\int_{\Omega}(w-\tilde{w})[(v-\tilde{v})+(\tilde{w}-w)]\mathrm{d}x\\ =&-\frac{1}{c}\int_{\Omega}\left|\nabla w\right|^{2}\mathrm{d}x+\int_{\Omega}(v-\tilde{v})(w-\tilde{w})\mathrm{d}x-\int_{\Omega}(w-\tilde{w})^{2}\mathrm{d}x.\end{split} \tag{4.8}\]

By differentiating (4.1) and substituting (4.6)-(4.8) into it, we obtain

\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}E_{c}(t)\leq&-\delta\int_{\Omega}u^{2}\mathrm{d}x-\beta_{c}\int_{\Omega}(v-\tilde{v})^{2}\mathrm{d}x-\eta_{c}\int_{\Omega}(w-\tilde{w})^{2}\mathrm{d}x\\ &-(a_{2}+a_{1}\beta_{c})\int_{\Omega}u(v-\tilde{v})\mathrm{d}x-d_{1}\int_{\Omega}u(w-\tilde{w})\mathrm{d}x-(\frac{d_{2}}{r}\beta_{c}-\eta_{c})\int_{\Omega}(v-\tilde{v})(w-\tilde{w})\mathrm{d}x\\ &-\frac{D\beta_{c}\tilde{v}}{r}\int_{\Omega}(1-u)\left|\frac{\nabla v}{v}\right|^{2}\mathrm{d}x-\frac{\eta_{c}}{c}\int_{\Omega}\left|\nabla w\right|^{2}\mathrm{d}x\\ \leq&-\int_{\Omega}\mathbf{Y}^{\mathrm{T}}\mathbb{P}_{c}\mathbf{Y}\,\mathrm{d}x-\frac{\eta_{c}}{c}\int_{\Omega}\left|\nabla w\right|^{2}\mathrm{d}x,\end{split} \tag{4.9}\]

where

\[\mathbb{P}_{c}=\left(\begin{array}{ccc}\delta&\dfrac{a_{2}+a_{1}\beta_{c}}{2}&\dfrac{d_{1}}{2}\\[2mm]\dfrac{a_{2}+a_{1}\beta_{c}}{2}&\beta_{c}&\dfrac{1}{2}\Big{(}\dfrac{d_{2}}{r}\beta_{c}-\eta_{c}\Big{)}\\[2mm]\dfrac{d_{1}}{2}&\dfrac{1}{2}\Big{(}\dfrac{d_{2}}{r}\beta_{c}-\eta_{c}\Big{)}&\eta_{c}\end{array}\right)\]
and

\[\mathbf{Y}=(u,v-\tilde{v},w-\tilde{w})^{\mathbf{T}}\,.\]

To verify (4.3), we need to show that there exist positive constants \(\beta_{c}\), \(\eta_{c}\) such that \(\mathbb{P}_{c}\) is positive definite. We claim that this property holds if and only if there exists \(\beta_{c}>0\) satisfying the two following inequalities simultaneously:

\[\Phi_{c}(\beta_{c}):=-a_{1}^{2}\beta_{c}^{2}+2(2-a_{1})(a_{2}+d_{1})\beta_{c}-(a_{2}+d_{1})^{2}>0, \tag{4.10}\]

\[\Psi_{c}(\beta_{c}):=-a_{1}^{2}\beta_{c}^{2}+2(2\delta-a_{1}a_{2})\beta_{c}-a_{2}^{2}>0. \tag{4.11}\]

For simplicity, we still denote \(\alpha:=\frac{1}{2}(a_{2}+a_{1}\beta_{c})\). Similar to the approach used in Lemma 3.1, we consider the leading principal minors of \(\mathbb{P}_{c}\):

\[\mathbf{M_{1}^{c}}:=\delta>0,\]

\[\mathbf{M_{2}^{c}}:=\begin{vmatrix}\delta&\alpha\\ \alpha&\beta_{c}\end{vmatrix}=\delta\beta_{c}-\alpha^{2}=\frac{1}{4}\left(-a_{1}^{2}\beta_{c}^{2}+2(2\delta-a_{1}a_{2})\beta_{c}-a_{2}^{2}\,\right)=\frac{1}{4}\Psi_{c}(\beta_{c});\]

hence, (4.11) holds if and only if \(\mathbf{M_{2}^{c}}>0\). Now, we consider the determinant of \(\mathbb{P}_{c}\) and find the relationship between \(\det\mathbb{P}_{c}>0\) and (4.10)-(4.11). Writing \(\mathcal{D}_{c}:=\frac{1}{2}\big{(}\frac{d_{2}}{r}\beta_{c}-\eta_{c}\big{)}\), we have

\[\begin{split}\det\mathbb{P}_{c}=&\begin{vmatrix}\delta&\alpha&\frac{d_{1}}{2}\\ \alpha&\beta_{c}&\mathcal{D}_{c}\\ \frac{d_{1}}{2}&\mathcal{D}_{c}&\eta_{c}\end{vmatrix}=\delta\begin{vmatrix}\beta_{c}&\mathcal{D}_{c}\\ \mathcal{D}_{c}&\eta_{c}\end{vmatrix}-\alpha\begin{vmatrix}\alpha&\mathcal{D}_{c}\\ \frac{d_{1}}{2}&\eta_{c}\end{vmatrix}+\frac{d_{1}}{2}\begin{vmatrix}\alpha&\beta_{c}\\ \frac{d_{1}}{2}&\mathcal{D}_{c}\end{vmatrix}\\ =&\frac{1}{4}\left[-\delta\eta_{c}^{2}+2\left(2(\delta\beta_{c}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\delta\beta_{c}-\alpha d_{1}\Big{)}\right)\eta_{c}+\left(2\alpha d_{1}\frac{d_{2}}{r}\beta_{c}-\delta\Big{(}\frac{d_{2}}{r}\beta_{c}\Big{)}^{2}-d_{1}^{2}\beta_{c}\right)\right].\end{split} \tag{4.12}\]

Notice from (4.11) that

\[2\alpha d_{1}\frac{d_{2}}{r}\beta_{c}-\delta\Big{(}\frac{d_{2}}{r}\beta_{c}\Big{)}^{2}-d_{1}^{2}\beta_{c}<-\beta_{c}\Big{(}\frac{d_{2}}{r}\alpha-d_{1}\Big{)}^{2}\leq 0,\]

so fundamental properties of quadratic polynomials imply that there exists \(\eta_{c}>0\) such that \(\det\mathbb{P}_{c}>0\) if and only if the following conditions hold:

\[\left\{\begin{aligned} &\Delta_{c}>0,\\ & 2(\delta\beta_{c}-\alpha^{2})+\left(\frac{d_{2}}{r}\delta\beta_{c}-\alpha d_{1}\right)\geq 0,\end{aligned}\right. \tag{4.13}\]

where \(\Delta_{c}\) is the discriminant of the quadratic (4.12) in \(\eta_{c}\).
By calculating this discriminant and substituting \(\alpha=\frac{1}{2}(a_{2}+a_{1}\beta_{c})\) into it, we find

\[\begin{split}\Delta_{c}=&4\left\{\Big{[}2(\delta\beta_{c}-\alpha^{2})+\Big{(}\delta\frac{d_{2}}{r}\beta_{c}-\alpha d_{1}\Big{)}\Big{]}^{2}+\delta\Big{(}2\alpha d_{1}\frac{d_{2}}{r}\beta_{c}-\delta\Big{(}\frac{d_{2}}{r}\beta_{c}\Big{)}^{2}-d_{1}^{2}\beta_{c}\Big{)}\right\}\\ =&16\Big{[}\,\delta\Big{(}1+\frac{d_{2}}{r}\Big{)}\beta_{c}-\alpha^{2}-\alpha d_{1}-\frac{1}{4}d_{1}^{2}\,\Big{]}(\delta\beta_{c}-\alpha^{2})\\ =&\Big{\{}-a_{1}^{2}\beta_{c}^{2}+2(2-a_{1})(a_{2}+d_{1})\beta_{c}-(a_{2}+d_{1})^{2}\Big{\}}\times\Big{\{}-a_{1}^{2}\beta_{c}^{2}+2(2\delta-a_{1}a_{2})\beta_{c}-a_{2}^{2}\Big{\}}\\ =&\Phi_{c}(\beta_{c})\Psi_{c}(\beta_{c}).\end{split}\]

Since we already have (4.11), it follows from the equations above that \(\Delta_{c}>0\) if and only if (4.10) holds. Also, when \(\Delta_{c}>0\) and \(\Psi_{c}(\beta_{c})>0\),

\[2(\delta\beta_{c}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\delta\beta_{c}-\alpha d_{1}\Big{)}>\delta\Big{(}1+\frac{d_{2}}{r}\Big{)}\beta_{c}-\alpha^{2}-\alpha d_{1}>\frac{1}{4}\Phi_{c}(\beta_{c})>0,\]

which shows that the second inequality in (4.13) is automatically satisfied. Therefore, on the basis of (4.11), there exist \(\beta_{c}>0\) and \(\eta_{c}>0\) such that \(\det\mathbb{P}_{c}>0\) if and only if (4.10) holds. Summing up the discussion above, our assertion has been proved.

Now, it remains to show that under the assumptions of Theorem 1.2, there exists \(\beta_{c}>0\) which satisfies (4.10) and (4.11) simultaneously. For convenience, we denote the set of positive solutions of (4.10) by \(S_{1}^{c}:=((L_{1}^{c})^{2},\,(R_{1}^{c})^{2})\) and the set of positive solutions of (4.11) by \(S_{2}^{c}:=((L_{2}^{c})^{2},\,(R_{2}^{c})^{2})\). Thanks to \(a_{1}<1\) and (4.5), \(S_{1}^{c}\) and \(S_{2}^{c}\) are not empty. Direct calculations show that

\[L_{1}^{c}=\frac{1-\sqrt{1-a_{1}}}{a_{1}}\sqrt{a_{2}+d_{1}},\quad R_{1}^{c}=\frac{1+\sqrt{1-a_{1}}}{a_{1}}\sqrt{a_{2}+d_{1}},\]
\[L_{2}^{c}=\frac{1}{a_{1}}(\sqrt{\delta}-\sqrt{\delta-a_{1}a_{2}}),\quad R_{2}^{c}=\frac{1}{a_{1}}(\sqrt{\delta}+\sqrt{\delta-a_{1}a_{2}}).\]

By the fact that

\[R_{1}^{c}>\frac{\sqrt{a_{2}+d_{1}}}{a_{1}}>\frac{\sqrt{\delta}}{a_{1}}>L_{2}^{c},\]

\(S_{1}^{c}\) and \(S_{2}^{c}\) overlap if and only if \(L_{1}^{c}<R_{2}^{c}\), namely

\[(1-\sqrt{1-a_{1}})\sqrt{a_{2}+d_{1}}<\sqrt{\delta}+\sqrt{\delta-a_{1}a_{2}}. \tag{4.14}\]

Therefore, it remains to show that in each case of Theorem 1.2, we have (4.4), (4.5) and (4.14).

First, we investigate the case \(d_{1}\leq d_{1}^{c}\). By rationalizing (4.14), we have

\[\sqrt{\delta}<\frac{a_{1}a_{2}}{(1-\sqrt{1-a_{1}})\sqrt{a_{2}+d_{1}}}+\sqrt{\delta-a_{1}a_{2}}.\]

Then, squaring both sides of the above inequality yields

\[a_{1}a_{2}-\left(\frac{a_{1}a_{2}}{(1-\sqrt{1-a_{1}})\sqrt{a_{2}+d_{1}}}\right)^{2}<\frac{2a_{1}a_{2}\sqrt{\delta-a_{1}a_{2}}}{(1-\sqrt{1-a_{1}})\sqrt{a_{2}+d_{1}}}. \tag{4.15}\]

Thanks to \(d_{1}\leq d_{1}^{c}\), the left hand side of the above inequality is nonpositive, and the inequality automatically holds. Hence, we have verified the case (1.9).

Next, we turn to the case \(d_{1}>d_{1}^{c}\). Note that once (4.15) holds, (4.5) follows directly.
Direct computations show that (4.14), or equivalently (4.15), is equivalent to

\[\frac{d_{2}}{r}<4\left(1-\sqrt{1-a_{1}}+\frac{1}{1-\sqrt{1-a_{1}}}\frac{a_{1}a_{2}}{a_{2}+d_{1}}\right)^{-2}-1=d_{2}^{c}.\]

Since \(d_{2}\) must also satisfy (4.4), it is now necessary to determine for which \(d_{1}\) we have \(d_{2}^{c}\geq a_{2}+d_{1}-1\), so that the binding restriction is \(\frac{d_{2}}{r}<a_{2}+d_{1}-1\) rather than \(\frac{d_{2}}{r}<d_{2}^{c}\). It is not difficult to verify that when \(a_{1}a_{2}\geq 1\), we have \(d_{2}^{c}\leq a_{2}+d_{1}-1\). Hence, the assumptions in the case (1.12) are sufficient to obtain (4.4), (4.5) and (4.14). When \(a_{1}a_{2}<1\), the inequality \(d_{2}^{c}\geq a_{2}+d_{1}-1\) is equivalent to

\[\left(\frac{1-\sqrt{1-a_{1}a_{2}}}{1-\sqrt{1-a_{1}}}\right)^{2}-a_{2}\leq d_{1}\leq\left(\frac{1+\sqrt{1-a_{1}a_{2}}}{1-\sqrt{1-a_{1}}}\right)^{2}-a_{2}=d_{1}^{h}, \tag{4.16}\]

where \(d_{1}^{h}\) is defined in (1.7). Since \(a_{1}a_{2}<1\), we have

\[\left(\frac{1-\sqrt{1-a_{1}a_{2}}}{1-\sqrt{1-a_{1}}}\right)^{2}-a_{2}<d_{1}^{c},\]

so the restriction on \(d_{1}\) from the left side of (4.16) can be neglected. Therefore, when \(a_{1}a_{2}<1\): if \(d_{1}^{c}<d_{1}\leq d_{1}^{h}\), we need \(\frac{d_{2}}{r}<a_{2}+d_{1}-1\), and if \(d_{1}>d_{1}^{h}\), we need \(\frac{d_{2}}{r}<d_{2}^{c}\); these are the cases (1.10) and (1.11), respectively.

Summarizing the discussion above, we conclude that under the assumptions of Theorem 1.2 there exist \(\beta_{c}>0\) and \(\eta_{c}>0\) such that \(\mathbb{P}_{c}\) is positive definite. Namely, there exists a constant \(\varepsilon_{2}>0\) such that

\[\mathbf{Y}^{\mathrm{T}}\mathbb{P}_{c}\mathbf{Y}\geq\varepsilon_{2}|\mathbf{Y}|^{2}.\]

Substituting it into (4.9), we have

\[\frac{\mathrm{d}}{\mathrm{d}t}E_{c}(t)\leq-\varepsilon_{2}\int_{\Omega}|\mathbf{Y}|^{2}\,\mathrm{d}x-\frac{\eta_{c}}{c}\int_{\Omega}|\nabla w|^{2}\,\mathrm{d}x\leq-\varepsilon_{c}F_{c}(t), \tag{4.17}\]

where \(\varepsilon_{c}=\min\{\varepsilon_{2},\frac{\eta_{c}}{c}\}\). On the basis of Lemma 4.1, the \(L^{\infty}\) convergence of \(w\) to \(\tilde{w}\) can be verified as follows.

**Lemma 4.2**.: _Suppose that the assumptions of Theorem 1.2 hold and \((u,v,w)\) is the global solution of (1.1)-(1.2), then_

\[||w-\tilde{w}||_{L^{\infty}(\Omega)}\to 0\quad as\quad t\to\infty.\]

The proof is omitted since it is the same as that of Lemma 3.2. Then, thanks to Lemma 4.2, there exists a smooth bounded positive function \(\gamma(t)\), which decays to \(0\) as \(t\to\infty\) and satisfies

\[\tilde{w}-\gamma(t)\leq w(x,t)\leq\tilde{w}+\gamma(t),\quad x\in\Omega,\ t\geq 0,\]

and the auxiliary ODE system is introduced as follows:

\[\left\{\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}\bar{u}_{c}&=\bar{u}_{c}\big{[}1-\bar{u}_{c}-a_{2}\underline{v}_{c}-d_{1}(\tilde{w}-\gamma(t))\big{]},&t>0,\\ \frac{\mathrm{d}}{\mathrm{d}t}\bar{v}_{c}&=r\bar{v}_{c}\big{[}1-\bar{v}_{c}-\frac{d_{2}}{r}(\tilde{w}-\gamma(t))\big{]},&t>0,\\ \frac{\mathrm{d}}{\mathrm{d}t}\underline{v}_{c}&=r\underline{v}_{c}\big{[}1-a_{1}\bar{u}_{c}-\underline{v}_{c}-\frac{d_{2}}{r}(\tilde{w}+\gamma(t))\big{]},&t>0,\end{aligned}\right. \tag{4.18}\]
with initial data

\[\bar{u}_{c}(0)=\bar{u}_{0}^{c}:=\max_{\bar{\Omega}}u_{0},\quad\bar{v}_{c}(0)=\bar{v}_{0}^{c}:=\max\{\max_{\bar{\Omega}}v_{0},\tilde{v}\},\quad\underline{v}_{c}(0)=\underline{v}_{0}^{c}:=\min\{\min_{\bar{\Omega}}v_{0},\tilde{v}\}. \tag{4.19}\]

From (4.19), we infer that the initial data of (4.18) satisfy

\[0<\bar{u}_{0}^{c}\leq 1,\quad 0<\underline{v}_{0}^{c}\leq\tilde{v}\leq\bar{v}_{0}^{c}<+\infty. \tag{4.20}\]

Parallel to Lemmas 3.3, 3.4, 3.5 in Section 3.2 and Lemma 3.6 in Section 3.3, in the following lemmas we derive some estimates related to the auxiliary ODE system (4.18)-(4.19) and then verify the \(L^{\infty}\) convergence of \(u\), \(v\) to \(0\), \(\tilde{v}\) respectively.

**Lemma 4.3**.: _The auxiliary ODE system (4.18)-(4.19) admits a unique global solution carrying the property_

\[0<\bar{u}_{c}(t)\leq 1,\quad 0<\bar{v}_{c}(t)\leq\max\{\bar{v}_{0}^{c},1\},\quad 0<\underline{v}_{c}(t)\leq 1,\quad t\geq 0.\]

**Lemma 4.4**.: _The solution of (4.18)-(4.19) satisfies_

\[\underline{v}_{c}(t)\leq\tilde{v}\leq\bar{v}_{c}(t),\quad t\geq 0.\]

**Lemma 4.5**.: _Suppose that the assumptions of Theorem 1.2 hold. Let \((u,v,w)\) be the solution of (1.1)-(1.2), and \((\bar{u}_{c},\bar{v}_{c},\underline{v}_{c})\) be the solution of (4.18)-(4.19), then_

\[0\leq u(x,t)\leq\bar{u}_{c}(t),\quad x\in\Omega,\ t\geq 0,\]
\[\underline{v}_{c}(t)\leq v(x,t)\leq\bar{v}_{c}(t),\quad x\in\Omega,\ t\geq 0.\]

**Lemma 4.6**.: _Suppose that the assumptions of Theorem 1.2 hold, and \((u,v,w)\) is the global solution of (1.1)-(1.2). Then_

\[||u||_{L^{\infty}(\Omega)}+||v-\tilde{v}||_{L^{\infty}(\Omega)}\to 0\quad as\quad t\to\infty.\]

The proofs of Lemmas 4.3, 4.4 and 4.5 are omitted since they are similar to (and simpler than) those of Lemmas 3.3, 3.4 and 3.5, respectively. However, the proof of Lemma 3.6 cannot be applied to Lemma 4.6, since for the homogeneous tumor state \((0,\tilde{v},\tilde{w})\), \((0,\underline{v}_{c})\) is the lower solution of \((u,v)\), as demonstrated in Lemma 4.5. Our strategy here is to use the theory of competitive ODE systems.

Proof of Lemma 4.6.: Thanks to Lemmas 4.4 and 4.5, it suffices to show

\[||\bar{u}_{c}||_{L^{\infty}(\Omega)}+||\bar{v}_{c}-\underline{v}_{c}||_{L^{\infty}(\Omega)}\to 0\quad as\quad t\to\infty,\]

where \((\bar{u}_{c},\bar{v}_{c},\underline{v}_{c})\) is the solution of (4.18)-(4.19). Since the equation of \(\bar{v}_{c}\) in (4.18) is independent of \(\bar{u}_{c}\) and \(\underline{v}_{c}\), we directly obtain that \(\bar{v}_{c}(t)\to\tilde{v}\) as \(t\to+\infty\). Hence, we only need to prove that for any \(0<\varepsilon<\varepsilon_{0}\), where \(\varepsilon_{0}=\varepsilon_{0}(a_{1},a_{2},d_{1},d_{2},r)\) is small enough, there exists \(\mathcal{T}=\mathcal{T}(\varepsilon)\) such that \(\forall\,t>\mathcal{T}\),

\[||\bar{u}_{c}||_{L^{\infty}(\Omega)}+||\tilde{v}-\underline{v}_{c}||_{L^{\infty}(\Omega)}<\varepsilon. \tag{4.21}\]

Thanks to the selection of \(\gamma(t)\), we know that for all \(\varepsilon>0\), there exists \(\mathcal{T}^{\prime}>0\) such that for all \(t>\mathcal{T}^{\prime}\), \(\gamma(t)\leq\frac{1}{2}\left(\max\{\,d_{1},\frac{d_{2}}{r}\}\right)^{-1}\varepsilon\).
Treating \(\mathcal{T}^{\prime}\) as the initial time, we let \((\bar{\mathfrak{u}},\underline{\mathfrak{p}})\) be the solution of the following ODE system:

\[\left\{\begin{aligned}&\frac{\mathrm{d}}{\mathrm{d}t}\bar{\mathfrak{u}}=\bar{\mathfrak{u}}\big{[}1-\bar{\mathfrak{u}}-a_{2}\underline{\mathfrak{p}}-d_{1}\tilde{w}+\frac{1}{2}\varepsilon\big{]},&t>\mathcal{T}^{\prime},\\&\frac{\mathrm{d}}{\mathrm{d}t}\underline{\mathfrak{p}}=r\underline{\mathfrak{p}}\big{[}1-a_{1}\bar{\mathfrak{u}}-\underline{\mathfrak{p}}-\frac{d_{2}}{r}\tilde{w}-\frac{1}{2}\varepsilon\big{]},&t>\mathcal{T}^{\prime},\end{aligned}\right. \tag{4.22}\]

with initial data

\[\bar{\mathfrak{u}}(\mathcal{T}^{\prime})=\bar{u}_{c}(\mathcal{T}^{\prime}),\quad\underline{\mathfrak{p}}(\mathcal{T}^{\prime})=\underline{v}_{c}(\mathcal{T}^{\prime}).\]

Since \(\bar{u}_{c}\) and \(\underline{v}_{c}\) in (4.18) form a competitive ODE system, it is not difficult to verify that \(\bar{\mathfrak{u}}(t)\geq\bar{u}_{c}(t)\) and \(\underline{\mathfrak{p}}(t)\leq\underline{v}_{c}(t)\) for all \(t\geq\mathcal{T}^{\prime}\). Hence, to prove (4.21), it suffices to demonstrate that there exists \(\mathcal{T}=\mathcal{T}(\varepsilon)>\mathcal{T}^{\prime}\) such that \(\forall\,t>\mathcal{T}\),

\[||\bar{\mathfrak{u}}(t)||_{L^{\infty}}+||\underline{\mathfrak{p}}(t)-\tilde{v}||_{L^{\infty}}<\varepsilon.\]

Consider the following two straight lines in the \((\bar{\mathfrak{u}},\underline{\mathfrak{p}})\)-plane:

\[\mu:1-\bar{\mathfrak{u}}-a_{2}\underline{\mathfrak{p}}-d_{1}\tilde{w}+\frac{1}{2}\varepsilon=0,\]
\[\nu:1-a_{1}\bar{\mathfrak{u}}-\underline{\mathfrak{p}}-\frac{d_{2}}{r}\tilde{w}-\frac{1}{2}\varepsilon=0.\]

First of all, we show that under the assumptions of Theorem 1.2, system (4.22) has no positive steady state. If \(a_{1}a_{2}=1\), then \(\mu\) and \(\nu\) are parallel. If \(a_{1}a_{2}\neq 1\), direct computations show that \(\mu\) and \(\nu\) intersect at \((\mathfrak{u}_{0},\mathfrak{v}_{0})\), where

\[\mathfrak{u}_{0}=\frac{1}{1-a_{1}a_{2}}\left(1-\delta+\frac{1}{2}(a_{2}+1)\varepsilon\right),\]

\[\mathfrak{v}_{0}=\left(1+\frac{d_{2}}{r}\right)^{-1}\left[1+\frac{a_{1}}{a_{1}a_{2}-1}\left(1+\frac{d_{2}}{r}-a_{2}-d_{1}\right)+\frac{2a_{1}a_{2}+a_{1}-1}{2(1-a_{1}a_{2})}\varepsilon\right].\]

We immediately see that when \(\varepsilon_{0}\) is sufficiently small and \(a_{1}a_{2}<1\), we have \(\mathfrak{u}_{0}<0\). Hence, in the cases (1.10) and (1.11) there is no positive steady state. It remains to consider the cases (1.9) and (1.12) when \(a_{1}a_{2}>1\).
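Before completing the case analysis, the dynamics encoded by the nullclines \(\mu\) and \(\nu\) can be previewed numerically: integrating (4.22) for sample admissible parameters shows \((\bar{\mathfrak{u}},\underline{\mathfrak{p}})\) approaching the semi-trivial state. A sketch (parameter values are ours, chosen so that (4.4) holds with \(a_{1}a_{2}>1\); we take \(\tilde{w}=(1+\frac{d_{2}}{r})^{-1}\) as in the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, d1, d2_over_r, r, eps = 1.2, 1.0, 0.8, 0.5, 1.0, 0.01
wt = 1 / (1 + d2_over_r)  # homogeneous tumor level w-tilde

def rhs(t, y):
    u, p = y  # (u-bar, p-underline) in the notation of (4.22)
    du = u * (1 - u - a2 * p - d1 * wt + 0.5 * eps)
    dp = r * p * (1 - a1 * u - p - d2_over_r * wt - 0.5 * eps)
    return [du, dp]

sol = solve_ivp(rhs, (0, 200), [0.5, 0.5], rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])  # approaches (0, (1 + d2/r)^(-1) - eps/2) ~ (0, 0.662)
```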
In the case (1.9), thanks to its last inequality, we have

\[\mathfrak{v}_{0}<-\left(1+\frac{d_{2}}{r}\right)^{-1}\frac{d_{1}}{a_{2}}<0.\]

In the case (1.12), since \(\mathfrak{v}_{0}\) increases in \(d_{2}\) and \(a_{1}a_{2}>1\), we have

\[\mathfrak{v}_{0}<\left(1+\frac{d_{2}}{r}\right)^{-1}\left\{1+\frac{a_{1}}{a_{1}a_{2}-1}\left[4\left(1-\sqrt{1-a_{1}}+\frac{1}{1-\sqrt{1-a_{1}}}\frac{a_{1}a_{2}}{a_{2}+d_{1}}\right)^{-2}-a_{2}-d_{1}\right]\right\}.\]

Since the right-hand side of the above inequality decreases in \(d_{1}\), we obtain that

\[\mathfrak{v}_{0}<\left(1+\frac{d_{2}}{r}\right)^{-1}\left\{1+\frac{a_{1}}{a_{1}a_{2}-1}\left[4\left(1-\sqrt{1-a_{1}}+\frac{1}{1-\sqrt{1-a_{1}}}\frac{a_{1}a_{2}}{a_{2}+d_{1}^{c}}\right)^{-2}-a_{2}-d_{1}^{c}\right]\right\}=\left(1+\frac{d_{2}}{r}\right)^{-1}\left\{1-\frac{a_{1}}{\left(1-\sqrt{1-a_{1}}\right)^{2}}\right\}<0.\]

Next, to describe the trajectory of \((\bar{\mathfrak{u}},\underline{\mathfrak{p}})\), it remains to consider the semi-trivial steady states of (4.22). Without loss of generality, we assume that (4.22) has the two positive semi-trivial steady states

\[\left(1-d_{1}\left(1+\frac{d_{2}}{r}\right)^{-1}+\frac{1}{2}\varepsilon,0\right)\quad\text{and}\quad\left(0,\,\left(1+\frac{d_{2}}{r}\right)^{-1}\!-\frac{1}{2}\varepsilon\right).\]

Since it is not difficult to verify that \(\nu\) stays above \(\mu\) in the \((\bar{\mathfrak{u}},\underline{\mathfrak{p}})\)-plane when \(\varepsilon_{0}\) is small enough, it follows from the ODE theory of competitive systems that

\[(\bar{\mathfrak{u}},\underline{\mathfrak{p}})\to\left(0,\,\left(1+\frac{d_{2}}{r}\right)^{-1}\!-\frac{1}{2}\varepsilon\right),\quad t\to+\infty.\]

Hence, there exists \(\mathcal{T}\geq\mathcal{T}^{\prime}\) such that for all \(t>\mathcal{T}\), we have \(\bar{\mathfrak{u}}<\frac{1}{4}\varepsilon\) and \(\tilde{v}-\underline{\mathfrak{p}}<\frac{1}{4}\varepsilon+\frac{1}{2}\varepsilon=\frac{3}{4}\varepsilon\), which implies that (4.21) holds for all \(t>\mathcal{T}\).

Now we are ready to prove Theorem 1.2 by establishing the desired quantitative convergence statement on stabilization.

Proof of Theorem 1.2.: With Lemma 2.4 and Lemma 4.6 at hand, by arguments similar to the method we used to prove (3.36), we obtain that \(w\) converges exponentially to \(\tilde{w}\) in \(L^{\infty}\) as \(t\to\infty\). Therefore, there exist two constants \(\mathcal{C}_{6}>0\) and \(\kappa_{2}>0\) such that \(\gamma\) can be selected as \(\gamma(t)=\mathcal{C}_{6}e^{-\kappa_{2}t}\). To obtain the exponential decay rates of \(u\) and \(v\), we turn back to (4.18). First, by substituting the new \(\gamma\) into the second equation of (4.18), we have

\[\frac{\mathrm{d}}{\mathrm{d}t}(\bar{v}_{c}-\tilde{v})=r\bar{v}_{c}\Big{[}-(\bar{v}_{c}-\tilde{v})+\frac{d_{2}}{r}\mathcal{C}_{6}e^{-\kappa_{2}t}\Big{]}\leq-r\tilde{v}(\bar{v}_{c}-\tilde{v})+\max\Big{\{}1,\|v_{0}\|_{L^{\infty}(\Omega)}\Big{\}}\,\mathcal{C}_{6}d_{2}e^{-\kappa_{2}t}.\]

By multiplying the above inequality with \(e^{\mathcal{A}_{5}t}\), where \(\mathcal{A}_{5}=\frac{1}{2}\min\{r\tilde{v},\,\kappa_{2}\}\), it is not difficult to obtain that \(\bar{v}_{c}\) tends to \(\tilde{v}\) exponentially. Next, we consider the decay rate of \(\bar{u}_{c}\).
(4.21) implies that there exists \(\mathcal{T}_{2}>0\) such that for all \(t>\mathcal{T}_{2}\), we have

\[\frac{\mathrm{d}}{\mathrm{d}t}\bar{u}_{c}=\bar{u}_{c}\Big{[}-\Big{(}\frac{a_{2}+d_{1}}{1+\frac{d_{2}}{r}}-1\Big{)}-\bar{u}_{c}+a_{2}(\tilde{v}-\underline{v}_{c})+\mathcal{C}_{6}d_{1}e^{-\kappa_{2}t}\Big{]}\leq-\frac{1}{2}\Big{(}\frac{a_{2}+d_{1}}{1+\frac{d_{2}}{r}}-1\Big{)}\bar{u}_{c}.\]

Since the coefficient in front of \(\bar{u}_{c}\) is negative due to (4.4), we directly obtain that \(\bar{u}_{c}\) decays exponentially to \(0\) when \(t>\mathcal{T}_{2}\). Finally, we consider the equation of \(\underline{v}_{c}\) in (4.18). When \(t\) is sufficiently large,

\[\frac{\mathrm{d}}{\mathrm{d}t}(\tilde{v}-\underline{v}_{c})=-r\underline{v}_{c}(\tilde{v}-\underline{v}_{c})+r\underline{v}_{c}\Big{(}\frac{d_{2}}{r}\mathcal{C}_{6}e^{-\kappa_{2}t}+a_{1}\bar{u}_{c}\Big{)}\leq-\frac{1}{2}r\tilde{v}(\tilde{v}-\underline{v}_{c})+r\tilde{v}\Big{(}\frac{d_{2}}{r}\mathcal{C}_{6}e^{-\kappa_{2}t}+a_{1}\bar{u}_{c}\Big{)}.\]

By the discussion above, the second term on the right-hand side of the above inequality decays exponentially to \(0\) when \(t>\mathcal{T}_{2}\). Hence, similar to the case of \(\bar{v}_{c}\), we obtain that \(\underline{v}_{c}\) tends to \(\tilde{v}\) exponentially when \(t\) is sufficiently large. Summarizing the discussion above and using Lemma 4.5, we have

\[||u||_{L^{\infty}(\Omega)}+||v-\tilde{v}||_{L^{\infty}(\Omega)}\leq\bar{u}_{c}+||\bar{v}_{c}-\tilde{v}||_{L^{\infty}(\Omega)}+||\underline{v}_{c}-\tilde{v}||_{L^{\infty}(\Omega)}\to 0\quad\text{exponentially as}\quad t\to\infty.\]

The proof is complete.

## 5 The healthy state

This section is devoted to the proof of Theorem 1.3, which is about the global convergence to the healthy state

\[(u,v,w)=(1,0,0).\]

The main idea of the proof is similar to those of Theorems 1.1 and 1.2. The first key step is still the construction of a proper Lyapunov functional in the following lemma.

**Lemma 5.1**.: _Suppose that the assumptions of Theorem 1.3 hold, and \((u,v,w)\) is the global solution of (1.1)-(1.2). Define_

\[A_{r}(t)=\int_{\Omega}(u(x,t)-1-\ln u(x,t))\,\mathrm{d}x,\]
\[B_{r}(t)=\int_{\Omega}v(x,t)\,\mathrm{d}x,\]
\[C_{r}(t)=\frac{1}{2}\int_{\Omega}w^{2}(x,t)\,\mathrm{d}x.\]

_Then there exist \(\beta_{r}>0\), \(\eta_{r}>0\) and \(\varepsilon_{r}>0\) such that the functions \(E_{r}(t)\) and \(F_{r}(t)\) defined by_

\[E_{r}(t)=A_{r}(t)+\frac{\beta_{r}}{r}B_{r}(t)+\frac{\eta_{r}}{c}C_{r}(t),\quad t>0, \tag{5.1}\]

_and_

\[F_{r}(t)=\int_{\Omega}(u(x,t)-1)^{2}\,\mathrm{d}x+\int_{\Omega}v(x,t)^{2}\,\mathrm{d}x+\int_{\Omega}w(x,t)^{2}\,\mathrm{d}x+\int_{\Omega}\left|\nabla w(x,t)\right|^{2}\mathrm{d}x,\quad t>0, \tag{5.2}\]

_satisfy_

\[E_{r}(t)\geq 0,\quad t\geq 0, \tag{5.3}\]

_as well as_

\[\frac{\mathrm{d}}{\mathrm{d}t}E_{r}(t)\leq-\varepsilon_{r}F_{r}(t). \tag{5.4}\]

We will present the proof of this lemma in detail at the end, since it explains why the conditions on the parameters in Theorem 1.3 are required. Thanks to Lemma 5.1, the \(L^{\infty}\) convergence of \(w\) to zero is established as follows.
**Lemma 5.2**.: _Suppose that the assumptions of Theorem 1.3 hold and \((u,v,w)\) is the global solution of (1.1)-(1.2), then_

\[||w||_{L^{\infty}(\Omega)}\to 0\quad as\quad t\to\infty.\]

Again, Lemma 5.2 indicates that there exists a smooth bounded positive function \(\theta(t)\), which decays to \(0\) as \(t\to\infty\) and satisfies

\[w(x,t)\leq\theta(t),\quad x\in\Omega,\ t\geq 0.\]

Thus we introduce the following auxiliary ODE system:

\[\left\{\begin{aligned}\frac{\mathrm{d}}{\mathrm{d}t}\underline{u}_{r}&=\underline{u}_{r}\big{[}1-\underline{u}_{r}-a_{2}\bar{v}_{r}-d_{1}\theta(t)\big{]},&\quad t>0,\\ \frac{\mathrm{d}}{\mathrm{d}t}\bar{v}_{r}&=r\bar{v}_{r}\big{[}1-a_{1}\underline{u}_{r}-\bar{v}_{r}\big{]},&\quad t>0,\end{aligned}\right. \tag{5.5}\]

with initial data

\[\underline{u}_{r}(0)=\underline{u}_{0}^{r}:=\min_{\bar{\Omega}}u_{0},\quad\bar{v}_{r}(0)=\bar{v}_{0}^{r}:=\max_{\bar{\Omega}}v_{0}. \tag{5.6}\]

Next, under the assumptions of Theorem 1.3, again parallel to Lemmas 3.3 and 3.5 in Section 3.2, we derive the following estimates related to the auxiliary ODE system (5.5)-(5.6):

* \(0<\underline{u}_{r}(t)\leq 1,\ 0<\bar{v}_{r}(t)\leq 1,\quad t\geq 0\,\);
* \(\underline{u}_{r}(t)\leq u(x,t)<1,\ \ 0<v(x,t)\leq\bar{v}_{r}(t),\quad x\in\Omega,\ t\geq 0.\)

Then, since the auxiliary ODE system (5.5) is also competitive, similarly to the proof of Lemma 4.6 we have

\[||u-1||_{L^{\infty}(\Omega)}+||v||_{L^{\infty}(\Omega)}\to 0\quad\text{as}\quad t\to\infty.\]

To complete the proof of Theorem 1.3, the last step is to show

\[||u-1||_{L^{\infty}(\Omega)}+||v||_{L^{\infty}(\Omega)}+||w||_{L^{\infty}(\Omega)}\to 0\quad\text{exponentially as}\quad t\to\infty,\]

by arguments similar to those used for the homogeneous tumor state at the end of Section 4. We omit the details since they are similar and simpler. It remains to prove Lemma 5.1.

Proof of Lemma 5.1.: Thanks to the assumption \(v_{0}\leq 1\), by the comparison principle for parabolic equations, we obtain that \(v(x,t)\leq 1\) for all \(x\in\Omega\) and \(t>0\). Straightforward computations show

\[\frac{\mathrm{d}}{\mathrm{d}t}A_{r}(t)=-\int_{\Omega}(u-1)^{2}\,\mathrm{d}x-a_{2}\int_{\Omega}(u-1)v\,\mathrm{d}x-d_{1}\int_{\Omega}(u-1)w\,\mathrm{d}x, \tag{5.7}\]

\[\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}t}B_{r}(t)=-a_{1}\int_{\Omega}(u-1)v\,\mathrm{d}x-(a_{1}-1)\int_{\Omega}v\,\mathrm{d}x-\int_{\Omega}v^{2}\,\mathrm{d}x-\frac{d_{2}}{r}\int_{\Omega}vw\,\mathrm{d}x\leq-a_{1}\int_{\Omega}(u-1)v\,\mathrm{d}x-a_{1}\int_{\Omega}v^{2}\,\mathrm{d}x-\frac{d_{2}}{r}\int_{\Omega}vw\,\mathrm{d}x, \tag{5.8}\]

where we used \(a_{1}>1\) and \(v\leq 1\), so that \(-(a_{1}-1)\int_{\Omega}v\,\mathrm{d}x\leq-(a_{1}-1)\int_{\Omega}v^{2}\,\mathrm{d}x\), and

\[\frac{1}{c}\frac{\mathrm{d}}{\mathrm{d}t}C_{r}(t)=-\frac{1}{c}\int_{\Omega}|\nabla w|^{2}\,\mathrm{d}x+\int_{\Omega}vw\,\mathrm{d}x-\int_{\Omega}w^{2}\,\mathrm{d}x. \tag{5.9}\]
By differentiating (5.1) and substituting (5.7)-(5.9) into it, we obtain

\[\frac{\mathrm{d}}{\mathrm{d}t}E_{r}(t)\leq-\int_{\Omega}\mathbf{Z}^{\mathrm{T}}\mathbb{P}_{r}\mathbf{Z}\,\mathrm{d}x-\frac{\eta_{r}}{c}\int_{\Omega}\left|\nabla w\right|^{2}\mathrm{d}x, \tag{5.10}\]

where

\[\mathbb{P}_{r}=\left(\begin{array}{ccc}1&\dfrac{a_{2}+a_{1}\beta_{r}}{2}&\dfrac{d_{1}}{2}\\ \dfrac{a_{2}+a_{1}\beta_{r}}{2}&a_{1}\beta_{r}&\dfrac{\frac{d_{2}}{r}\beta_{r}-\eta_{r}}{2}\\ \dfrac{d_{1}}{2}&\dfrac{\frac{d_{2}}{r}\beta_{r}-\eta_{r}}{2}&\eta_{r}\end{array}\right) \tag{5.11}\]

and

\[\mathbf{Z}=(u-1,v,w)^{\mathbf{T}}\,.\]

Similar to the previous two cases, we claim that there exist two positive constants \(\beta_{r}\) and \(\eta_{r}\) such that \(\mathbb{P}_{r}\) is positive definite if and only if there exists \(\beta_{r}>0\) satisfying the following two inequalities simultaneously:

\[\Phi_{r}(\beta_{r}):=-a_{1}^{2}\beta_{r}^{2}+2\Big{[}\,2\Big{(}a_{1}+\frac{d_{2}}{r}\Big{)}-a_{1}(a_{2}+d_{1})\,\Big{]}\beta_{r}-(a_{2}+d_{1})^{2}>0, \tag{5.12}\]

\[\Psi_{r}(\beta_{r}):=-a_{1}^{2}\beta_{r}^{2}+2(2a_{1}-a_{1}a_{2})\beta_{r}-a_{2}^{2}>0. \tag{5.13}\]

For simplicity, we denote \(\alpha:=\frac{1}{2}(a_{2}+a_{1}\beta_{r})\). To verify the claim, we compute the leading principal minors of \(\mathbb{P}_{r}\):

\[\mathbf{M_{1}^{r}}:=1,\]

\[\mathbf{M_{2}^{r}}:=\begin{vmatrix}1&\alpha\\ \alpha&a_{1}\beta_{r}\end{vmatrix}=a_{1}\beta_{r}-\alpha^{2}=\frac{1}{4}\left(-a_{1}^{2}\beta_{r}^{2}+2(2a_{1}-a_{1}a_{2})\beta_{r}-a_{2}^{2}\right)=\frac{1}{4}\Psi_{r}(\beta_{r}),\]

and (5.13) holds if and only if \(\mathbf{M_{2}^{r}}>0\). Now, we consider the determinant of \(\mathbb{P}_{r}\):

\[\det\mathbb{P}_{r}=\frac{1}{4}\left\{-\eta_{r}^{2}+2\Big{(}2(a_{1}\beta_{r}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\beta_{r}-\alpha d_{1}\Big{)}\Big{)}\eta_{r}+\Big{(}2\alpha d_{1}\frac{d_{2}}{r}\beta_{r}-\Big{(}\frac{d_{2}}{r}\beta_{r}\Big{)}^{2}-a_{1}d_{1}^{2}\beta_{r}\Big{)}\right\}. \tag{5.14}\]

Notice from (5.13) and \(a_{1}>1\) that

\[2\alpha d_{1}\frac{d_{2}}{r}\beta_{r}-\Big{(}\frac{d_{2}}{r}\beta_{r}\Big{)}^{2}-a_{1}d_{1}^{2}\beta_{r}<-\beta_{r}\Big{(}\frac{d_{2}}{r}\alpha-d_{1}\Big{)}^{2}\leq 0,\]

so it follows from fundamental properties of quadratic polynomials that there exists \(\eta_{r}>0\) such that \(\det\mathbb{P}_{r}>0\) if and only if

\[\left\{\begin{aligned}&\Delta_{r}>0,\\&2(a_{1}\beta_{r}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\beta_{r}-\alpha d_{1}\Big{)}\geq 0,\end{aligned}\right. \tag{5.15}\]

where \(\Delta_{r}\) is the discriminant of the quadratic (5.14). By calculating this discriminant and substituting \(\alpha=\frac{1}{2}(a_{2}+a_{1}\beta_{r})\) into it, we have

\[\Delta_{r}=16\Big{[}\Big{(}a_{1}+\frac{d_{2}}{r}\Big{)}\beta_{r}-\alpha^{2}-\alpha d_{1}-\frac{1}{4}d_{1}^{2}\,\Big{]}(a_{1}\beta_{r}-\alpha^{2})=\Phi_{r}(\beta_{r})\Psi_{r}(\beta_{r}).\]

Since we already have (5.13), it follows from the above equations that \(\Delta_{r}>0\) if and only if (5.12) holds. Also, when \(\Delta_{r}>0\) and \(\Psi_{r}(\beta_{r})>0\),

\[2(a_{1}\beta_{r}-\alpha^{2})+\Big{(}\frac{d_{2}}{r}\beta_{r}-\alpha d_{1}\Big{)}>\Big{(}a_{1}+\frac{d_{2}}{r}\Big{)}\beta_{r}-\alpha^{2}-\alpha d_{1}>\frac{1}{4}\Phi_{r}(\beta_{r})>0,\]

which implies that the second inequality in (5.15) is automatically satisfied. Therefore, on the basis of (5.13), there exist two positive constants \(\beta_{r}\) and \(\eta_{r}\) such that \(\det\mathbb{P}_{r}>0\) if and only if (5.12) holds.
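As in the previous case, the factorization \(\Delta_{r}=\Phi_{r}(\beta_{r})\Psi_{r}(\beta_{r})\) can be verified symbolically; a minimal sketch of such a check (ours, with `d2r` standing for \(d_{2}/r\)):

```python
import sympy as sp

b, a1, a2, d1, d2r = sp.symbols('beta a_1 a_2 d_1 q', positive=True)
al = (a2 + a1 * b) / 2  # alpha = (a2 + a1*beta_r)/2

Delta = 16 * ((a1 + d2r) * b - al**2 - al * d1 - d1**2 / 4) * (a1 * b - al**2)
Phi = -a1**2 * b**2 + 2 * (2 * (a1 + d2r) - a1 * (a2 + d1)) * b - (a2 + d1)**2
Psi = -a1**2 * b**2 + 2 * (2 * a1 - a1 * a2) * b - a2**2
print(sp.simplify(sp.expand(Delta - Phi * Psi)))  # 0
```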
Summing up the discussion above, the claim is proved. Now, it remains to show that under the assumptions of Theorem 1.3, there exists \(\beta_{r}>0\) which satisfies (5.12) and (5.13) simultaneously. For this purpose, we denote the set of positive solutions of (5.12) by \(S_{1}^{r}:=((L_{1}^{r})^{2},\,(R_{1}^{r})^{2})\) and that of (5.13) by \(S_{2}^{r}:=((L_{2}^{r})^{2},\,(R_{2}^{r})^{2})\). We assume for now that we have

\[\frac{d_{2}}{r}>a_{1}(a_{2}+d_{1}-1), \tag{5.16}\]

which is already contained in the case (1.15). Thanks to (5.16), \(S_{1}^{r}\) is nonempty. On the other hand, \(S_{2}^{r}\) is nonempty due to \(a_{2}<1\). Then, direct computations show that

\[L_{1}^{r}=\frac{1}{a_{1}}\left(\sqrt{a_{1}+\frac{d_{2}}{r}}-\sqrt{a_{1}+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}\right),\]
\[R_{1}^{r}=\frac{1}{a_{1}}\left(\sqrt{a_{1}+\frac{d_{2}}{r}}+\sqrt{a_{1}+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}\right),\]
\[L_{2}^{r}=\frac{1}{\sqrt{a_{1}}}\big{(}1-\sqrt{1-a_{2}}\,\big{)},\quad R_{2}^{r}=\frac{1}{\sqrt{a_{1}}}\big{(}1+\sqrt{1-a_{2}}\,\big{)}.\]

Since \(\frac{d_{2}}{r}>a_{1}(a_{2}+d_{1}-1)\), we have

\[R_{1}^{r}>\frac{\sqrt{a_{2}+d_{1}}}{\sqrt{a_{1}}}>\frac{1}{\sqrt{a_{1}}}>L_{2}^{r},\]

so we only need \(L_{1}^{r}<R_{2}^{r}\), namely

\[\sqrt{a_{1}+\frac{d_{2}}{r}}-\sqrt{a_{1}+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}<\sqrt{a_{1}}\big{(}1+\sqrt{1-a_{2}}\,\big{)}, \tag{5.17}\]

for \(S_{1}^{r}\) and \(S_{2}^{r}\) to overlap. Recall that we also need (5.16); hence, in the following we verify that (5.16) and (5.17) hold under the assumptions of Theorem 1.3.

First, we consider the case (1.15). Since in this case we already have (5.16), it remains to show that when \(d_{1}\leq d_{1}^{r}\), where \(d_{1}^{r}\) is defined in (1.17), we can derive (5.17) from (5.16). Rationalizing (5.17), we obtain

\[\frac{\sqrt{a_{1}}(a_{2}+d_{1})}{1+\sqrt{1-a_{2}}}<\sqrt{a_{1}+\frac{d_{2}}{r}}+\sqrt{a_{1}+\frac{d_{2}}{r}-a_{1}(a_{2}+d_{1})}. \tag{5.18}\]

Substituting \(\frac{d_{2}}{r}=a_{1}(a_{2}+d_{1}-1)\) into the inequality above yields

\[\sqrt{a_{1}(a_{2}+d_{1})}<1+\sqrt{1-a_{2}},\]

which is equivalent to \(d_{1}<d_{1}^{r}\). Since under (5.16) the quantity \(\frac{d_{2}}{r}\) is strictly larger than \(a_{1}(a_{2}+d_{1}-1)\), and the right-hand side of (5.18) increases in \(d_{2}\), we obtain that (5.18), and hence (5.17), still holds when \(d_{1}=d_{1}^{r}\). Next, we demonstrate that in the case (1.16), we have (5.16) and (5.17). Direct computations show that (5.17) is equivalent to \(\frac{d_{2}}{r}>d_{2}^{r}\), where \(d_{2}^{r}\) is defined in (1.18). Hence, we have (5.17) in the case (1.16). On the other hand, it follows from the discussion of the case \(d_{1}\leq d_{1}^{r}\) that when \(d_{1}>d_{1}^{r}\), we have \(d_{2}^{r}>a_{1}(a_{2}+d_{1}-1)\); hence, in the case (1.16) we also have (5.16). Summarizing the discussion above, we conclude that the assumptions of Theorem 1.3 suffice to show the existence of positive \(\beta_{r}\) and \(\eta_{r}\) such that \(\mathbb{P}_{r}\) is positive definite.
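Concretely, for admissible parameters such a pair \((\beta_{r},\eta_{r})\) can be exhibited numerically by checking the eigenvalues of (5.11); a sketch (all numbers are illustrative choices of ours with \(a_{1}>1\) and \(a_{2}<1\)):

```python
import numpy as np

def P_r(beta, eta, a1, a2, d1, d2_over_r):
    """Assemble the matrix P_r of (5.11)."""
    al = 0.5 * (a2 + a1 * beta)
    m = 0.5 * (d2_over_r * beta - eta)
    return np.array([[1.0,      al,        0.5 * d1],
                     [al,       a1 * beta, m],
                     [0.5 * d1, m,         eta]])

M = P_r(beta=1.0, eta=0.6, a1=1.5, a2=0.3, d1=0.2, d2_over_r=1.0)
print(np.linalg.eigvalsh(M))  # all eigenvalues positive: P_r is positive definite
```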
Since \(\mathbb{P}_{r}\) is positive definite, there exists a constant \(\varepsilon_{3}>0\) such that

\[\mathbf{Z}^{\mathrm{T}}\mathbb{P}_{r}\mathbf{Z}\geq\varepsilon_{3}|\mathbf{Z}|^{2}.\]

Substituting it into (5.10), we have

\[\frac{\mathrm{d}}{\mathrm{d}t}E_{r}(t)\leq-\varepsilon_{3}\int_{\Omega}|\mathbf{Z}|^{2}\,\mathrm{d}x-\frac{\eta_{r}}{c}\int_{\Omega}|\nabla w|^{2}\,\mathrm{d}x\leq-\varepsilon_{r}F_{r}(t), \tag{5.19}\]

where \(\varepsilon_{r}=\min\{\varepsilon_{3},\frac{\eta_{r}}{c}\}\).
2307.08959
A Large and Variable Leading Tail of Helium in a Hot Saturn Undergoing Runaway Inflation
Atmospheric escape shapes the fate of exoplanets, with statistical evidence for transformative mass loss imprinted across the mass-radius-insolation distribution. Here we present transit spectroscopy of the highly irradiated, low-gravity, inflated hot Saturn HAT-P-67 b. The Habitable Zone Planet Finder (HPF) spectra show a detection of up to 10% absorption depth of the 10833 Angstrom Helium triplet. The 13.8 hours of on-sky integration time over 39 nights sample the entire planet orbit, uncovering excess Helium absorption preceding the transit by up to 130 planetary radii in a large leading tail. This configuration can be understood as the escaping material overflowing its small Roche lobe and advecting most of the gas into the stellar -- and not planetary -- rest frame, consistent with the Doppler velocity structure seen in the Helium line profiles. The prominent leading tail serves as direct evidence for dayside mass loss with a strong day-/night- side asymmetry. We see some transit-to-transit variability in the line profile, consistent with the interplay of stellar and planetary winds. We employ 1D Parker wind models to estimate the mass loss rate, finding values on the order of $2\times10^{13}$ g/s, with large uncertainties owing to the unknown XUV flux of the F host star. The large mass loss in HAT-P-67 b represents a valuable example of an inflated hot Saturn, a class of planets recently identified to be rare as their atmospheres are predicted to evaporate quickly. We contrast two physical mechanisms for runaway evaporation: Ohmic dissipation and XUV irradiation, slightly favoring the latter.
Michael Gully-Santiago, Caroline V. Morley, Jessica Luna, Morgan MacLeod, Antonija Oklopčić, Aishwarya Ganesh, Quang H. Tran, Zhoujian Zhang, Brendan P. Bowler, William D. Cochran, Daniel M. Krolikowski, Suvrath Mahadevan, Joe P. Ninan, Guðmundur Stefánsson, Andrew Vanderburg, Joseph A. Zalesky, Gregory R. Zeimann
2023-07-18T03:55:28Z
http://arxiv.org/abs/2307.08959v1
# A Large and Variable Leading Tail of Helium in a Hot Saturn Undergoing Runaway Inflation

###### Abstract

Atmospheric escape shapes the fate of exoplanets, with statistical evidence for transformative mass loss imprinted across the mass-radius-insolation distribution. Here we present transit spectroscopy of the highly irradiated, low-gravity, inflated hot Saturn HAT-P-67 b. The Habitable Zone Planet Finder (HPF) spectra show a detection of up to 10% absorption depth of the 10833 A Helium triplet. The 13.8 hours of on-sky integration time over 39 nights sample the entire planet orbit, uncovering excess Helium absorption preceding the transit by up to 130 planetary radii in a large leading tail. This configuration can be understood as the escaping material overflowing its small Roche lobe and advecting most of the gas into the stellar--and not planetary--rest frame, consistent with the Doppler velocity structure seen in the Helium line profiles. The prominent leading tail serves as direct evidence for dayside mass loss with a strong day-/night- side asymmetry. We see some transit-to-transit variability in the line profile, consistent with the interplay of stellar and planetary winds. We employ 1D Parker wind models to estimate the mass loss rate, finding values on the order of \(2\times 10^{13}\) g/s, with large uncertainties owing to the unknown XUV flux of the F host star. The large mass loss in HAT-P-67 b represents a valuable example of an inflated hot Saturn, a class of planets recently identified to be rare as their atmospheres are predicted to evaporate quickly. We contrast two physical mechanisms for runaway evaporation: Ohmic dissipation and XUV irradiation, slightly favoring the latter.

Exoplanet atmospheres, Exoplanet atmospheric dynamics, Stellar winds, Exoplanet atmospheric variability

## 1 Introduction

Whatever the cause, some large fraction of planets undergo transformative atmospheric escape, and the signal should be widely discernable. Such signals have been searched for, and increasingly found, in many transiting planet systems with at least 28 detections to date (Dos Santos, 2022). Uncertainty in system ages, evaporation timescales, X-ray/UV radiation, and dominating physical mechanisms degrade our ability to foretell if any given planet will exhibit ongoing signatures of atmospheric escape. Episodic stellar wind gusts and other forms of astrophysical variability could also subdue the appearance of atmospheric escape, even where we expect it most. The over 57 published non-detections of atmospheric escape (Dos Santos, 2022; Guilluy et al., 2023) must encode these natural whims in a way that we have not yet disentangled.

Nevertheless, we can boost our chances of witnessing active and significant atmospheric escape by targeting sources that seem predisposed to loss. These intrinsic or extrinsic factors may include proximity to the host star, low surface gravity, and high energy incident radiation. Inflated hot Saturns stand out as an especially extreme category that should exhibit mass loss. This category is defined as having masses comparable to Saturn's (\(M_{\rm p}\sim 0.3M_{\rm Jup}\)), with equilibrium temperatures high enough to expect radius inflation (\(T_{\rm eq}>1000\) K). Their low gravitational potentials should let go of their atmospheres more readily than their hot Jupiter counterparts.
Lower gravity also implies larger atmospheric scale heights, making them easier to detect in transmission spectroscopy. And their large transit depths and short periods should make them readily detectable in transit searches in large numbers, like hot Jupiters. But inflated hot Saturns are rare (Thorngren & Fortney, 2018). The cause for their underabundance remains an open question, with at least two conceivable explanations. Tidal migration mechanisms--either high-eccentricity or disk-based--could hypothetically proceed in a mass-dependent manner, efficiently for Jupiter-mass planets, but inefficiently for the lower-mass Saturns (Thorngren & Fortney, 2018; Dawson & Johnson, 2018). In this scenario, sub-Saturn mass planets never make it to the close-in orbital separations that would lead to the conditions needed for inflation in the first place. Alternatively--and most consequentially for atmospheric escape--another explanation may prevail. Inflated hot Saturns may either form _in-situ_ or effectively migrate to close-in orbital separations (Dawson & Johnson, 2018), but once they arrive, the intense irradiation overheats the planet. This heating leads to runaway inflation and, ultimately, complete disintegration. In this scenario, the inflationary half-life becomes so short that the probability of observing members in the class decreases sharply with density, causing the apparent lack of inflated sub-Saturns (Thorngren et al., 2023). Batygin et al. (2011) predicted runaway inflation of hot Saturns as a consequence of the Ohmic dissipation mechanism. Here, lightly thermally ionized atmospheric flows induce drag in a planetary magnetic field, weakly coupling the stellar incident energy into the planetary interior. A key prediction of Ohmic dissipation is that the anomalous heating efficiency \(\epsilon(T_{\rm eq})\) should exhibit a peak around \(T_{\rm eq}\sim 1500-2000\) K (Menou, 2012; Rogers & Komacek, 2014; Ginzburg & Sari, 2016). Thorngren & Fortney (2018) favored Ohmic dissipation as the mechanism responsible for inflating hot Jupiters by showing that the observed sample of inflated planets implies an anomalous heating efficiency peak at equilibrium temperatures of \(\sim 1500\) K. Therefore Ohmic dissipation stands out as a leading physical mechanism for inflation and mass loss. Recently, Thorngren et al. (2023) showed that hot Saturns can undergo catastrophic erosion by stellar X-ray and Extreme UV (XUV) photoevaporative mass loss. Planets with densities less than \(\sim\)0.1 g cm\({}^{-3}\) achieve mass loss rates of up to \(10^{3}\)\(M_{\oplus}\)/Gyr, setting up a positive feedback loop: the planet increases in radius, overflowing its Roche lobe, fueling greater mass loss, and increasing in radius even further. This vicious cycle systematically depopulates the mass-radius plane with a cliff defined by the \(\rho_{p}\sim\)0.1 g cm\({}^{-3}\) dividing line. A runaway inflation scenario predicts an inevitable and profound mass loss rate for inflated hot Saturns. As the planet's atmosphere overflows its Roche lobe at an ever-increasing pace, instantaneous mass loss rates may exceed \(\dot{M}>10^{13}\) g/s, over 10\(\times\) larger than those previously seen in atmospheric escape measurements to date (Dos Santos, 2022). Inflated hot Saturns, therefore, make excellent targets for direct measurement of atmospheric escape and for testing the underlying mechanisms of mass loss. Large mass loss alone does not guarantee detectability. 
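As a unit sanity check (our arithmetic, not a result from the cited works), the two rate conventions used above are easy to interconvert; \(10^{13}\) g/s corresponds to roughly 50 \(M_{\oplus}\)/Gyr:

```python
M_EARTH_G = 5.972e27            # Earth mass in grams
GYR_S = 1e9 * 365.25 * 86400.0  # seconds per gigayear

mdot_gs = 1e13  # an instantaneous mass loss rate in g/s
print(mdot_gs * GYR_S / M_EARTH_G)  # ~53 Earth masses per Gyr
```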
The ability to detect even immense mass loss hinges on its observability in spectral tracers. Ly\(\alpha\), He i 10833 A, and H\(\alpha\) have emerged as the most amenable to detection (Seager & Sasselov, 2000; Vidal-Madjar et al., 2003; Jensen et al., 2012; Yan & Henning, 2018; Oklopcic & Hirata, 2018; Spake et al., 2018; Dos Santos, 2022; Owen et al., 2023), but each of these has its own observational limitations. Ly\(\alpha\) can suffer from interstellar medium H i absorption censoring its low-velocity line core, for example. Here we focus on He i 10833 A, which offers some advantages. In particular, metastable Helium's ability to be observed at high spectral resolution from the ground has been especially valuable for evincing velocity substructure of the escaping gas motion relative to the exoplanet restframe (Alonso-Floriano et al., 2019; Ninan et al., 2020), and convincingly associating the signal to an exoplanetary origin as opposed to stellar contamination (Cauley et al., 2018). He i has resulted in at least 14 systems with detections (Dos Santos, 2022)1. The sample of detections includes both hot Jupiters and lower mass planets but lacks inflated hot Saturns owing to their intrinsic rarity below the 0.1 g cm\({}^{-3}\) threshold. An observational picture of mass loss in inflated hot Saturns appears to be lacking for this reason. Footnote 1: See also Table S1 of Zhang et al. 2023 for a compilation of exoplanets with detections and non-detections of the helium excess. The Habitable Zone Planet Finder (HPF) Helium Exospheres program has been conducting a survey of exoplanets to search for atmosphere loss via the He i 10833 A metastable triplet. The survey's multiple-visit sampling strategy has enabled a search for atmospheric loss at large out-of-transit separations from the planet, recently revealing giant tidal tails of Helium escaping the hot Jupiter HAT-P-32 b (Zhang et al., 2023). Here we present a multi-year observational campaign searching for atmospheric escape in HAT-P-67 b, a transiting inflated hot Saturn orbiting an F5 subgiant at an orbital separation of 0.06 AU and a 4.8-day orbital period (Zhou et al., 2017). Its strong insolation (\(T_{\rm eq}\sim 2000\) K) combined with HAT-P-67 b's extremely low surface gravity (\(\log g_{p}<2.3\) dex) makes it an exceptional candidate for strong atmospheric mass loss via Roche lobe overflow. Its \(\sim\)0.05 g cm\({}^{-3}\) density places it below the 0.1 g cm\({}^{-3}\) threshold predicted to exhibit runaway inflation (Thorngren et al., 2023). Figures 1 and 2 show how much of an outlier HAT-P-67 b is: large, low mass, and heavily irradiated. The evolutionary state of HAT-P-67 b offers even more intriguing possibilities. The planet may have undergone re-inflation (Saunders et al., 2022; Grunblatt et al., 2022, 2023), as the host star evolved through the subgiant phase- with the planet's insolation increasing with the star's rapid luminosity jump in this part of the short-lived HR diagram. The tidal gravity of the nearby massive (\(M_{\star}\sim 1.6~{}M_{\odot}\)) host star may amplify mass loss rates even further (Erkaev et al., 2007; Thorngren et al., 2023). Its status as a rare inflated hot Saturn makes HAT-P-67 b a promising laboratory, uniquely suited for testing the runaway inflationary predictions of Ohmic dissipation and XUV photoevaporation. We assemble both archival, previously published, and new observations of the HAT-P-67 system, chronicled in Section 2. 
In Section 3, we refine the stellar and planet properties based on those observations, including distance, radius, and orbit re-analyses. The Helium excess detection is presented in Section 3.5, with an analysis of the signal's trend with orbital phase. We use this orbital structure to reconstruct the geometry and mass loss of the escaping Helium exosphere with 1D Parker wind models (Section 4). We assess the physical mechanisms (Section 5) giving rise to the escaping atmosphere, weighing the distinctive predictions of XUV irradiation and Ohmic dissipation. Finally, in Section 6, we question the assumptions in our approach, discuss the overall congruence of predictions and observations, and highlight some implications for future exosphere studies.

Figure 1: Overview of exoplanet demographics. The pixel bins reflect the observed density of over 5000 planets accessed from the NASA Exoplanet Archive. Detections of atmospheric escape are common among large planets with strong insolation. HAT-P-67 b is among the most inflated planets known.

Figure 2: Mass-radius trends for inflated hot sub-Saturns and hot Jupiters, with layout following Figure 2 of Thorngren & Fortney (2018) and updated with NASA Exoplanet Archive confirmed planets. The trend lines show the mass-radius relationship for equilibrium temperatures of 500, 1000, 1250, 1500, and 2000 K, assuming the mean composition and mean heating efficiency from the original figure. HAT-P-67 b (\(\star\) symbol) stands alone in a region defined by the lack of inflated sub-Saturns and explained by short lifetimes from runaway inflation and mass loss.

## 2 Observations

### Habitable Zone Planet Finder (HPF)

The Habitable Zone Planet Finder Spectrograph (HPF; Mahadevan et al., 2012, 2014; Metcalf et al., 2019) on the queue-scheduled 10-meter _Hobby-Eberly Telescope_ (HET; Ramsey et al., 1998) operates in the near-IR from \(8100-12800\) A spanning the \(z\), \(Y\), and \(J\) bands at spectral resolving power \(R=55,000\). The HET fixed-elevation design (Shetrone et al., 2007) limits the observability of HAT-P-67 to less than 1 hour "tracks" for a fixed range of hour angles before (east track) and after (west track) the star transits the meridian. Whereas conventional steerable telescopes could conduct continuous point-and-stare observations of HAT-P-67 for hours, HET cannot. In practice, this limitation means that in-transit and out-of-transit observational phases were rarely possible on the same night. Instead, we organized the observations into four campaigns to coincide with HAT-P-67 b transits on UT dates 2020 April 28, 2020 May 22, 2020 June 15, and 2022 April 29. These campaigns have out-of-transit observations at least one night before and one night after, and often two nights on either side of the transit. Two more transit snapshots were obtained in 2022 June-July without the accompanying visits immediately before and after. The in-transit campaigns had up to 14 exposures per HET track, with integration times between 5-8.5 minutes. We also obtained random-in-phase lower-priority "P1-P4" reconnaissance observations (Shetrone et al., 2007). These out-of-transit snapshot observations typically received 4 or fewer individual exposures. We observed HAT-P-67 with HPF for a total of 41 visits on 39 unique nights, with two of those nights observing both the east and west HET tracks. The total on-source integration time exceeds 13.8 hours, with \(N=152\) individual spectra possessing a typical signal-to-noise ratio of 80 per pixel.
Table 1 summarizes the log of observations. The observation schedule was strategically restricted to the spring season when the Barycentric Earth Radial Velocity (BERV) would Doppler shift telluric absorption lines sufficiently far away from the core of the Helium 10833 A feature (Spake et al., 2022). The small BERV still means the redward Helium line wing has significant telluric contamination between 10834\(-\)10836 A, but the core and blue line wings appear relatively pristine. Sky emission lines land in this region but are more easily mitigated by HPF's simultaneous sky reference fiber. Only 19 out of the 39 nights possess A0V telluric calibration standard stars. These standard stars were used to spot-check our telluric masking. Figure 3 indicates the epochs of select HPF visits overlaid on the _TESS_ time-series lightcurve.

### TESS Light Curves

HAT-P-67 was observed with the _Transiting Exoplanet Survey Satellite_ (TESS, Ricker et al., 2014) in Sectors 24, 26, 51, 52, 53 with 2-minute cadence, and in Sector 25 with 30-minute (FFI) cadence. The Sector 25 FFI data fell just barely off the TESS detectors, in collateral pixels where no starlight ever hit the detector. We also assembled a comparison sample of about 1000 lightcurves to interpret the prevalence of lightcurve modulation and stellar activity among broadly F subgiant-like stars. We selected sources based on _Gaia_ DR3 \(T_{\rm eff}\) estimates and similar \(\log g\), and availability of at least one sector of TESS 2-minute cadence data. We visually spot-checked these lightcurves to understand artifacts and windowing effects.

### Gaia DR3

The stellar system consists of a binary with an M dwarf companion HAT-P-67B (_Gaia DR3 1358614983131339904_) separated on-sky by \(9\farcs 0\) (Mugrauer, 2019), well-separated from the planet-host star and not a source of contamination for the HPF observations. HAT-P-67A (_Gaia DR3 1358614983131339392_) has a parallax of \(2.69\pm 0.01\) mas in _Gaia_ DR3, placing it at about 372\(\pm\)1.4 pc. Minor corrections to the parallax uncertainty (El-Badry et al., 2021) and bias (Lindegren et al., 2021) appear negligible for the \(G=9.98\) mag source. The DR3 parallax places the system about 8.7% farther than previously estimated by Zhou et al. (2017), which adopted a _Gaia_ DR1-informed parallax of \(2.92\pm 0.23\) mas, including a systematic \(-0.325\) mas bias term (Stassun and Torres, 2016). The wide companion HAT-P-67B has nearly identical parallax (\(2.58\pm 0.05\) mas) and proper motions, confirming its interpretation as co-moving, with a projected separation of about 3400 AU. The IAU naming convention would demand HAT-P-67A_b_ to refer to the planet, which orbits the primary. Hereafter, we simply drop the A designation for notational simplicity since the wide companion will not factor into our analysis.

### ASAS-SN, DASCH, and ZTF

We retrieved ground-based photometry with the _All-Sky Automated Survey for Supernovae_ (ASAS-SN) using the Sky-Portal (Shappee et al., 2014; Kochanek et al., 2017). The precision of ASAS-SN was too low to perceive stellar variability, so we can place a relatively uninformative limit of \(<5\%\) stellar variability on years-long timescales. Similarly, HAT-P-67 appears in the Harvard Plate Archive, with 5809 measurements digitized through the Digital Access to a Sky Century at Harvard (DASCH) program, spanning over 120 years of coarse photometric monitoring. In principle, these datasets could inform long-term variability trends such as stellar cycles.
In practice, the 0.15 magnitude jitter appears too coarse to perceive any genuine astrophysical variability, with no conspicuous trend seen. We can therefore place a relatively mild constraint that the star appears stable at the \(\sim 30\%\) level over periods of tens to hundreds of years. HAT-P-67A was saturated in ZTF (Bellm et al., 2019), but the M-dwarf companion HAT-P-67B had up to thousands of visits across several years. The data quality appeared too poor to perceive any genuine astrophysical variability, with the indication of some lunar background signals in the periodogram.

## 3 Analysis

### Gaia DR3

Zhou et al. (2017) previously derived stellar radius estimates of 2.1-2.7 \(R_{\odot}\) through Spectral Energy Distribution (SED) and isochrone fitting as part of a joint orbit fit. We systematically increase those stellar radius estimates by 8.7% to match the greater Gaia _DR3_ distance (§2.3). For a fixed \(R_{p}/R_{\star}\) from the measured transit depth, the larger \(R_{\star}\) implies a proportionally larger planet radius. This update systematically decreases the estimate for the already-low density of HAT-P-67 b by 28%, to a mere \(<\)0.035 g cm\({}^{-3}\), albeit with significant uncertainties from the weak mass constraint. The luminosity increases by 18%, to about 10.3 \(L_{\odot}\).

Figure 3: Overview of all available TESS Sectors showing 24 full or partial transits with 34 visits with HPF (vertical gray bars), 7 of which coincide with transits. The rest of the visits sample out-of-transit phases. The 8 HPF visits between days 2050 and 2690 are not shown. Note the large time breaks between some consecutive panels, which correspond roughly to the duration of a TESS sector.

### TESS Light Curve

Previously, HAT-P-67 b transits had only been detected with _HATNet_ (Bakos et al., 2004) and followed up with KeplerCam on the FLWO 1.2 m telescope (Zhou et al., 2017). These ground-based photometers were not intended to measure weak, long-term stellar variability signals. We, therefore, examined the _TESS_ lightcurves for out-of-transit photometric variability. The revised precision and continuous coverage of _TESS_ can also refine the orbital solution.

#### 3.2.1 Revised exoplanet orbital parameters

We assembled a composite lightcurve by stitching TESS Sectors 24, 26, 51, 52, 53, which were reduced with the default SPOC pipeline (Caldwell et al., 2020), and lightly post-processed with lightkurve (Barentsen et al., 2019). These TESS data exhibit an RMS scatter of better than 1 part-per-thousand (ppt) at native 2-minute sampling. We fit a Keplerian orbit model to the TESS lightcurve using the exoplanet framework (Foreman-Mackey et al., 2021). We obtained revised orbital properties shown in Table 2. These properties are broadly consistent with the previously reported values from Zhou et al. (2017) and the updated ephemeris of Ivshina & Winn (2022). Figure 3 shows an overview of all the TESS Sectors with a Gaussian Process trendline in green highlighting the out-of-transit modulations, and the exoplanet transit model in orange. Figure 4 shows the best-fit orbit overlaid in purple on the detrended TESS lightcurve, which is binned in the orange dots. Table 2 lists only one of the previous orbit determinations--also assuming a circular orbit--compared with the orbital solution reported here.
\begin{table}
\begin{tabular}{l c c c c c}
\hline\hline
UTC & Track & Desc. & BJD & \(N_{\rm exp}\) & \(\phi\) \\
 & & & 2457000.00+ & & \\
\hline
2020-04-27 & E & Pre & 1966.78 & 4 & -0.192 \\
2020-04-28 & E & Transit & 1967.79 & 12 & 0.018 \\
2020-04-29 & E & Control & 1968.78 & 4 & 0.225 \\
2020-05-20 & W & Pre & 1989.95 & 4 & -0.373 \\
2020-05-21 & W & Pre & 1990.94 & 4 & -0.169 \\
2020-05-22 & E & Transit & 1991.72 & 14 & -0.006 \\
2020-05-23 & W & Control & 1992.93 & 4 & 0.246 \\
2020-05-24 & W & Baseline & 1993.94 & 4 & 0.455 \\
2020-06-13 & W & Pre & 2013.89 & 3 & -0.397 \\
2020-06-14 & W & Pre & 2014.89 & 4 & -0.190 \\
2020-06-15 & E & Pre & 2015.64 & 5 & -0.033 \\
2020-06-15 & E & Transit & 2015.66 & 4 & -0.029 \\
2020-06-15 & W & Transit & 2015.89 & 9 & 0.019 \\
2020-06-16 & W & Control & 2016.87 & 4 & 0.224 \\
2020-06-18 & W & Pre & 2018.88 & 4 & -0.360 \\
2020-07-22 & W & Pre & 2052.79 & 2 & -0.309 \\
2020-08-01 & W & Pre & 2062.75 & 2 & -0.240 \\
2021-01-31 & E & Pre & 2246.03 & 2 & -0.136 \\
2021-02-01 & E & Post & 2247.02 & 2 & 0.070 \\
2021-02-24 & E & Pre & 2269.96 & 1 & -0.161 \\
2021-02-26 & E & Control & 2271.94 & 2 & 0.250 \\
2021-03-04 & E & Baseline & 2277.93 & 2 & 0.496 \\
2021-03-31 & E & Post & 2304.86 & 2 & 0.094 \\
2022-04-28 & E & Pre & 2697.79 & 4 & -0.217 \\
2022-04-29 & E & Transit & 2698.78 & 14 & -0.011 \\
2022-04-30 & E & Control & 2699.77 & 4 & 0.196 \\
2022-05-01 & E & Baseline & 2700.77 & 4 & 0.403 \\
2022-05-02 & E & Pre & 2701.77 & 4 & -0.390 \\
2022-06-20 & E & Pre & 2750.64 & 3 & -0.230 \\
2022-06-22 & E & Control & 2752.64 & 3 & 0.187 \\
2022-06-22 & W & Control & 2752.86 & 3 & 0.232 \\
2022-06-23 & W & Baseline & 2753.86 & 3 & 0.439 \\
2022-06-26 & E & Transit & 2756.64 & 1 & 0.017 \\
2022-07-01 & W & Post & 2761.85 & 1 & 0.101 \\
2022-07-10 & W & Pre & 2770.82 & 1 & -0.035 \\
2022-07-11 & W & Control & 2771.81 & 1 & 0.172 \\
2022-07-13 & W & Pre & 2773.82 & 3 & -0.411 \\
2022-07-15 & W & Transit & 2775.78 & 1 & -0.003 \\
2022-07-16 & W & Control & 2776.80 & 1 & 0.209 \\
2022-07-20 & W & Post & 2780.77 & 1 & 0.034 \\
2022-07-29 & W & Pre & 2789.74 & 1 & -0.101 \\
2022-07-30 & W & Post & 2790.78 & 1 & 0.115 \\
\hline
\end{tabular}
Note. – All exposure times were 308.85 s except for observations from 2020-07-22 to 2021-03-31, which were 511.20 s. Column 2 describes the HET "track", restricted to either East (E) or West (W). The last column lists the normalized orbital phase \(\phi\), with zero at mid-transit.
\end{table}
Table 1: HPF Observation Log

Figure 4: Best fit orbit overlaid on the composite TESS lightcurve. Revised planet properties are consistent with Zhou et al. (2017), with a systematic shift from a revised _Gaia_ DR3 distance.

#### 3.2.2 Stellar rotation rate from periodogram analysis

The TESS Sector 26 lightcurve exhibits a weak \(\sim\)3 ppt peak-to-valley out-of-transit modulation. TESS Sectors 51-53 do not show as conspicuous a modulation signal but still show some ostensibly stellar variability. The ebb and flow of the modulation can be seen in the minimally processed TESS lightcurve in Figure 3. We fit the TESS lightcurve modulation with a quasiperiodic Gaussian Process (GP) model using celerite (Foreman-Mackey et al., 2017; Foreman-Mackey, 2018). The GP model fit to the entire composite lightcurve yields a 4.7-day period; when fit to individual sectors alone, the periods hover around 5.9 days.
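A simple periodogram cross-check of modulation periods like these can be sketched as follows (illustrative only, with a synthetic stand-in for a detrended out-of-transit light curve; this is not the quasi-periodic GP fit described above):

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic light curve: a 4.7-day, 1.5 ppt modulation plus white noise,
# sampled every 2 minutes for 25 days.
rng = np.random.default_rng(0)
time = np.arange(0, 25, 2 / 1440)  # days
flux = 1 + 1.5e-3 * np.sin(2 * np.pi * time / 4.7) \
         + 1e-3 * rng.standard_normal(time.size)

frequency, power = LombScargle(time, flux).autopower(
    minimum_frequency=1 / 20, maximum_frequency=1 / 1)
print(f"Peak period: {1 / frequency[np.argmax(power)]:.2f} d")  # ~4.7 d
```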
The signal is weak enough, especially in Sectors 51-53, that _TESS_ instrumental systematics may contribute some of the out-of-transit modulation that we see in Figure 3. We can independently constrain the expected stellar rotation period based on the stellar radius estimate, \(v\sin i_{\star}\), and the observation of the planet's orbital inclination \(i_{p}\sim 90^{\circ}\) from orbit fitting and Doppler tomography (Zhou et al., 2017). We assume spin-orbit alignment, \(i_{\star}\sim i_{p}\). We adopt a high limit of \(v\sin i=35.8\pm 1.1\) km/s and a lower \(v\sin i=30.9\pm 2\) km/s value if macroturbulence is accounted for. We obtain a range of \(3.2<P_{\rm rot}<4.8\) days. This range is typical for F stars that have not yet evolved too far into the subgiant branch (Avallone et al., 2022). The geometrical constraint comports with the 4.7-day GP-based modulation period derived from the stitched TESS lightcurve but is lower than the 5.9-day per-Sector fits, slightly preferring the lower 4.7-day modulation as the stellar rotation period. The 5.9-day value would require a stellar radius over \(3.2R_{\odot}\), which appears implausible.
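Under spin-orbit alignment the geometric estimate is simply \(P_{\rm rot}=2\pi R_{\star}/(v\sin i_{\star})\); a quick sketch with the values quoted above (our simplified error handling; the quoted range in the text also folds in the measurement uncertainties):

```python
import numpy as np

R_SUN_KM, DAY_S = 6.957e5, 86400.0

def p_rot_days(r_star_rsun, vsini_kms):
    """Rotation period in days for sin(i_star) ~ 1."""
    return 2 * np.pi * r_star_rsun * R_SUN_KM / vsini_kms / DAY_S

# vsini with and without the macroturbulence correction; R_star = 2.65 +/- 0.12
for vsini in (35.8, 30.9):
    print(vsini, [round(p_rot_days(r, vsini), 2) for r in (2.53, 2.65, 2.77)])
```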
### Revised system evolutionary state

The slightly increased distance and radius imply an 18% more luminous host star than previously estimated. Figure 5 shows evolutionary tracks using the solar-metallicity MESA (Paxton et al., 2011, 2013, 2015) Isochrones and Stellar Tracks (MIST; Dotter, 2016; Choi et al., 2016) for HAT-P-67, which has [Fe/H]\(=-0.08\pm 0.05\) (Zhou et al., 2017). The tracks show that a 1.8 Gyr, 1.64 \(M_{\odot}\) HAT-P-67 could be at the end of its main sequence lifetime (teal track), yielding a gradual rise in incident radiation over the lifetime of HAT-P-67 b. Alternatively, a 1.54 \(M_{\odot}\) evolutionary track would send HAT-P-67 along the subgiant branch at an age over 2.2 Gyr, with a rapid 50% increase in luminosity, potentially leading to "re-inflation" of the planet (Thorngren et al., 2021). Either evolutionary track appears consistent with the observed SED, stellar surface gravity, and available constraints from spectral fitting. We, therefore, adopt a MIST-based 2.0\(\pm\)0.2 Gyr age for the system, in-between the previous Geneva and Dartmouth-based scenarios (Zhou et al., 2017). Additional metallicity and rotation effects could slightly alter the evolutionary tracks and therefore increase uncertainty in inferred ages and masses.

Figure 5: MIST evolutionary model tracks for HAT-P-67. The \(T_{\rm eff}\) and luminosity of HAT-P-67 (gray circle) are consistent with either the late main sequence or recently evolved subgiant. The labeled numbers indicate age in Gyr.

### HPF analysis I: RV fitting

We conducted orbit fitting via precision radial velocity (PRV) measurements following the procedures described in Tran et al. (2021) and updated for joint lightcurve and RV fits (Tran et al., 2022), but without a joint Gaussian Process modeling procedure (Tran et al., 2023). Here we included the new TESS lightcurves and the existing Keck HIRES points reported in Table 3 of Zhou et al. (2017). The HPF points exhibited an RV jitter of \(\sim 164\) m s\({}^{-1}\), a few times larger than the typical Keck HIRES measurements, due to the lower information content in the near-IR than the visible for this relatively rapidly rotating F star. So even though the HPF points were more numerous, their marginal value for RV orbit determination was subdued. The period and \(T_{0}\) were fixed from the _TESS_ transits, and we assume a circular orbit with \(K>10\) m s\({}^{-1}\). The semiamplitude prior acts as a mass constraint, excluding planets with masses so low that the observed radius would exceed the Hill radius (Roche lobe overflow). Figure 6 shows the fitted RV semiamplitude \(K=33^{+21}_{-15}\) m s\({}^{-1}\), a relatively weak constraint but consistent with broadly Saturn-mass planets. Table 2 lists the revised properties, with minor differences from the existing values. The semiamplitude prior accounts for the apparent improvement in the mass constraint. Hypothetically, a truly Roche lobe overflow planet could be consistent with the available data, so our mass constraint could be considered an upper limit.

\begin{table}
\begin{tabular}{c c c}
\hline\hline
Parameter & Zhou et al. 2017 & This Work \\
\hline
\(R_{\star}\) (\(R_{\odot}\)) & \(2.546^{+0.009}_{-0.009}\) & \(2.65\pm 0.12\) \\
\(P\) (days) & \(4.8101025^{+4.3\times 10^{-7}}_{-3.3\times 10^{-7}}\) & \(4.8101046\pm 4.7\times 10^{-6}\) \\
\(T_{c}\) (BTJD) & \(-1038.61530^{+0.00076}_{-0.0004}\) & \(2694.027\pm 0.001\) \\
\(T_{14}\) (hours) & \(6.9888\pm 0.046\) & \(7.062\pm 0.028\) \\
\(R_{p}/R_{\star}\) & \(0.0834\pm 0.0017\) & \(0.08396^{+0.0035}_{-0.0004}\) \\
\(a/R_{\star}\) & \(5.691^{+0.057}_{-0.124}\) & \(5.036^{+0.089}_{-0.086}\) \\
\(b\equiv a\cos i/R_{\star}\) & \(0.12^{+0.12}_{-0.08}\) & \(0.509^{+0.026}_{-0.029}\) \\
\(K\) (m s\({}^{-1}\)) & \(<36\,(1\sigma)\) & \(33^{+21}_{-15}\) \\
RV jitter (m s\({}^{-1}\)) & \(<59\) & \(164\) \\
RV sys. (km s\({}^{-1}\)) & \(-1.4\pm 0.5\) & \(-0.07\pm 0.03\) \\
\(M_{p}\) (\(M_{J}\)) & \(0.34^{+0.25}_{-0.19}\) & \(0.32^{+0.21}_{-0.15}\) \\
\(i\) (deg) & \(88.8^{+1.1}_{-1.3}\) & \(84.19^{+0.43}_{-0.41}\) \\
\(a\) (AU) & \(0.06505^{+0.00273}_{-0.0070}\) & \(0.062\pm 0.003\) \\
\hline
\end{tabular}
\end{table}
Table 2: Revised orbital and planetary parameters

Figure 6: RV orbit fit including both HPF and Keck HIRES data points. The RV information content in HPF is much less than in Keck HIRES for this F5 spectral type, and so the joint fit constrains the mass to roughly a Saturn mass.
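For orientation, the quoted semiamplitude maps onto a planet mass through the circular-orbit relation \(K=(2\pi G/P)^{1/3}M_{p}\sin i/M_{\star}^{2/3}\) (valid for \(M_{p}\ll M_{\star}\)); a rough sketch with the adopted system values (ours; it ignores the uncertainties and priors of the actual fit):

```python
import numpy as np

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_SUN, M_JUP = 1.989e30, 1.898e27  # kg

def planet_mass_mjup(K_ms, P_days, M_star_msun, inc_deg):
    """Invert the circular-orbit RV semiamplitude for M_p (assumes M_p << M_star)."""
    P = P_days * 86400.0
    Mstar = M_star_msun * M_SUN
    Mp = K_ms * Mstar**(2 / 3) / ((2 * np.pi * G / P)**(1 / 3)
                                  * np.sin(np.radians(inc_deg)))
    return Mp / M_JUP

print(f"{planet_mass_mjup(33.0, 4.8101, 1.64, 84.2):.2f} M_Jup")  # ~0.4 M_Jup
```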
Footnote 2: [https://github.com/grzeimann/Goldilocks_Documentation](https://github.com/grzeimann/Goldilocks_Documentation) The sky fiber and target fiber have slightly different throughputs and illumination properties, with the target fiber receiving 93% of the flux of the sky reference fiber on average. This ratio depends on wavelength and season at the few percent level. We quantified the wavelength dependence by acquiring calibration observations of blank sky in both the target and sky fiber. We applied this wavelength-based scale factor to each target spectrum's associated reference sky fiber to achieve sky-line subtraction residuals typically less than the photon noise (Gully-Santiago et al., 2022). A few lines exhibit residuals that may arise from genuine differences in the local atmospheric conditions between the target and sky fiber. The HET's fixed-altitude design means that the airmass remains relatively constant across all pointings, lowering the telluric line variability compared to fully steerable telescopes, which may sample a wider range of airmasses. Variability in atmospheric conditions still makes it difficult to mitigate telluric lines to within the photon noise limit, so we masked spectral regions predicted to have significant telluric lines with a template generated by telfit (Gullikson et al., 2014). We then shifted the spectral coordinates to their common barycentric-corrected reference frame (Wright and Eastman, 2014), as implemented in astropy(Astropy Collaboration et al., 2013, 2018, 2022). The spectral continuum was flattened and normalized to two pre-selected continuum indices highlighted as vertical blue bands in Figure 7. The sky-subtraction, telluric masking, barycentric correction, flattening, and all of our other standard pre-processing steps were carried out in the open-source Python interface muler(Gully-Santiago et al., 2022). Figure 7 shows a zoom-in on the He 10833 A region of interest, with all 152 individual exposures overlaid. We see variability in the Helium line of up to 10%, much greater than the \(<1\%\) pixel-to-pixel variation. The feature width spans about 3 A. Figure 8 shows the spectra for four campaigns, with before and after spectra showing conspicuous excess absorption during transit and 1 day before transit but negligible excess absorption after transit. The individual transits show significant morphological variation, with substructure in the bulk line-of-sight velocity distribution. The 13.8 hours of exposure, combined with HAT-P-67 b's short (\(P\sim 4.8\) day) period, means that a large fraction of the orbit has been collected, with some phases (especially near transit) heavily sampled, and some other out-of-transit phases sampled more sparsely. The largest gap spans merely 0.18 in phase, as seen in the Equivalent Width time series in Figure 9. Figure 6: RV orbit fit including both HPF and Keck HIRES data points. The RV information content in HPF is much less than in Keck HIRES for this F5 spectral type, and so the joint fit constrains the mass to roughly a Saturn mass. For visualization purposes, we constructed a 2-D intensity phase scan, binning in phase and wavelength, and spanning the entire orbit of HAT-P-67 b. Figure 10 overlays the planet's approximately \(\pm 150\) km/s orbital Doppler velocity, with the stellar rest frame velocity demarcated by the vertical line. The vanishing \(<36\) m/s reflex motion of the star is imperceptible at this scale.
The horizontal dashed lines indicate the moments of transit ingress (-0.03), mid-center (0.0), and egress (+0.03), demarcated as TRANSIT in the figure. We define 4 additional distinct groups of phases with adjectival qualifiers in anticipation of the need to discuss bulk trends: PRE, POST, CONTROL, and BASELINE. The BASELINE phases define the out-of-transit baseline. The "control" spectra designate phases just before the baseline; they are not used to define the baseline, allowing us to inspect minor correlated structure in the spectra without dividing it out. Table 1 lists the adjectival qualifiers for each HPF visit. The in-transit phases exhibit peak excess absorption of over 10%. A large absorption signal can be seen preceding transit ingress, with some significant absorption evident before \(-0.2\) in phase. The egress drop-off is extremely sharp, with almost no excess absorption directly after transit, as seen in the Equivalent Width (EW) time series (Figure 9). We construct a fractional residual absorption spectrum by subtracting off and re-normalizing to the non-varying baseline. We define the non-varying baseline spectrum as the average over phases \(0.4-0.5\), which exhibited stable spectra with the least absorption. Figure 10 shows that significant absorption can be seen at the level of a few percent at \(-0.37\) in phase, rising to 10% just before and during transit. The abrupt dropoff in helium excess at planet egress is conspicuous. The underlying structure of the metastable He i triplet consists of three quantum components, two of which (\(J=1\) and 2) are typically blended owing to Doppler broadening from the finite temperature of the gas \(T_{0}\). HAT-P-67 b exhibits blending of all three components (\(J=0\), 1, and 2) into a single broad Gaussian-like feature seen in Figure 7. Figure 8: Four observing campaigns centered on an in-transit epoch with before-and-after visits typically separated by 1 night. The after-transit spectra tend to show negligible absorption. Line-of-sight velocity substructure can be seen in the before and during transit epochs. Figure 7: Overlay of all 152 individual HPF exposures of HAT-P-67, spanning 2020-2022. Variability is seen in the He i 10833 Å triplet near the vertical orange shaded band but not in the adjacent Si line. These snapshot spectra were barycentric corrected and continuum flattened with a linear fit to the regions in the blue vertical bands. Sharp telluric absorption lines have been masked in regions near 10835 and 10837.5 Å. The mere observation of this broadening implies that HAT-P-67 b probes a larger velocity dispersion than typical measurements and may be associated with either a higher gas temperature, complex planetary wind flows, or some mix of both. Whatever the cause, we treat the feature as a single Gaussian line for the purpose of estimating bulk characteristics of the Helium excess absorption feature. We restricted the fitting to the 10828\(-\)10838 A region of the residual spectra. The model constructed in this way has a total of 4 parameters: amplitude \(A\), location \(\lambda_{c}\), Gaussian width \(\sigma\), and constant offset. We repeated a similar process for the nearby Si line at 10830 A as a control sample. Excess is confidently detected from phases \(-0.3\) to \(+0.03\). The full-width at half maximum (FWHM) of the feature is about 1.8\(-\)2.6 A, equivalent to a line-of-sight velocity distribution in the range of 50-75 km/s. Individual fits to the He feature show a typical line center uncertainty of \(\pm 0.1\) A or better.
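For reference, the single-Gaussian fit and the FWHM-to-velocity conversion above can be reproduced with standard tooling. The following is a minimal sketch using synthetic data in place of the actual HPF residual spectra; the variable names are ours and not part of any pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light, km/s

def gaussian(lam, amplitude, lam_c, sigma, offset):
    # 4-parameter model: amplitude A, location lambda_c, width sigma, offset
    return amplitude * np.exp(-0.5 * ((lam - lam_c) / sigma) ** 2) + offset

# Stand-in residual spectrum: a ~5% deep feature near the stellar rest frame.
lam = np.linspace(10828.0, 10838.0, 200)
flux = gaussian(lam, -0.05, 10833.2, 0.9, 0.0) + np.random.normal(0, 0.003, lam.size)

popt, _ = curve_fit(gaussian, lam, flux, p0=[-0.05, 10833.4, 1.0, 0.0])
amplitude, lam_c, sigma, offset = popt

fwhm_angstrom = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
fwhm_kms = fwhm_angstrom / lam_c * C_KMS  # e.g. a 2.2 A FWHM maps to ~60 km/s
```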
The out-of-transit phases exhibit a line center position of 10833.4 A, consistent with the stellar He i rest frame zero velocity. The excess absorption resides slightly blueward of the stellar zero restframe velocity reference. As mentioned previously, the telluric masking near 10836 A censors our view of the existence (or not) of any redshifted lobe in this region. The excess detections exhibit a bulk blueshift of up to 30 km/s. The excess absorption spectrum exhibits an increasing line-of-sight velocity distribution as the planet transits, going from 60 km/s at ingress, up to 100 km/s at egress. Finally, we detect a much weaker but still significant post-egress blue-shifted absorption. This lobe appears to decrease linearly in wavelength, corresponding to a characteristic blueshift increasing from near zero at transit midpoint to 80 km/s at 12 hours after transit. It exhibits a slightly lower line-of-sight velocity distribution of 50 km/s, about half of the peak during transit. We examined other spectral lines using the same methodology as above, finding no detectable variability in the Ca ii infrared triplet, Hydrogen lines, or other neutral metal lines. We show some of the diagnostic plots for these non-detections in Appendix Section A. ### Keck HIRES We retrieved 22 epochs of archival _Keck HIRES_ spectra via the Keck Observatory Archive (KOA). Of these, 19 were acquired through the iodine cell as reported by Zhou et al. (2017), providing RV orbit constraints. The 14'' slit height of the C2 decker appears to cause order overlap in Ca ii H and K lines, which contaminated the spectral extraction for 14 of the 22 spectra. The 3''.5 B2 decker allowed the faithful extraction of this region for 6 spectra, which exhibit no perceptible Ca ii H and K variability. The H\(\alpha\) region exhibited no perceptible variability, but the timing of these spectra coincidentally missed the orbital phases (\(-0.3\) to \(+0.03\)) where we see the greatest He i 10833 A absorption excess, leaving open the possibility that detectable H\(\alpha\) absorption could be present at more favorable orbital phases. New spectra or a more careful extraction of the archival C2 decker spectra would be needed. Figure 10: Absorption depth phase scan over the entire HAT-P-67 b orbit from 41 visits with HPF. The planet orbital rest frame is shown as the black sine wave. Gaps in data at 10834-10836 Å arise from telluric masking. Unmasked telluric lines are perceptible at 10825 Å. Figure 9: Equivalent Width lightcurve of Helium absorption in HAT-P-67 b. The system exhibits a characteristic absorption leading up to transit, followed by a sharp decline after transit passage. The EWs were computed in the orange shaded band (10832.3\(-\)10834.2 Å) in Figure 7. ### Velocity substructure Figure 11 shows the centroid positions of the Helium excess feature. The bulk velocity drifts of the planet's outflow can be seen with a few conspicuous trends. The pre-transit phases (\(-0.3\) to \(+0.03\)) show a slight tendency towards blueshift relative to the stellar restframe, with an accelerating blueshift from phases \(-0.3\) to \(-0.15\), albeit with some visit-to-visit scatter. The post-transit observations (\(+0.03\) to \(+0.15\)) exhibit weak-albeit-significant equivalent widths, as denoted by their smaller marker size. The centroids dramatically accelerate to larger bulk blueshifts, reaching wavelengths as short as 10831 A. At the moment of transit midcenter, the centroids exhibit a redshifted absorption relative to the planet restframe.
Measurements from April 2022 and May 2020 transits partially overlap in phase coverage while yielding slightly different line centroid locations. Both time series sequences appear consistent with near-zero bulk restframe velocity but with significant variability in the line profile substructure. The May 2020 sequence shows a slight trend towards alignment with the planet restframe. The April 2020 and June 2020 partial transits sampled phases close to planet egress, with a clear blueshift relative to both the star and planet. Overall, this pattern of velocity centroids appears consistent with the majority of the gas rapidly settling into the stellar restframe, but with a large spread in the line-of-sight velocity distribution, as we would expect from a range of launch angles. We revisit the interpretations and caveats for this substructure in the next sections. ## 4 Atmospheric escape ### Signal Inconsistent with Stellar Activity Interpretation The planet's orbital period is close to the stellar rotational period, so the prospect of stellar activity contamination arises. Overall, the HPF spectra disfavor a stellar activity interpretation for a few reasons. First, a hypothetical stellar origin of Helium 10833 A variability should be accompanied by variability in other tracers. Instead, the Ca ii Infrared Triplet (IRT) shows stable line profiles in our HPF spectra. The Appendix Section A discusses these and other non-detections. Second, the velocity substructure and phasing appear inconsistent with stellar activity: the gradual rise and then abrupt truncation of the Helium excess at the moment of planet egress (Figure 9) appears inconsistent with a more smoothly varying stellar variability. Finally, the weak post-egress excess appears blueshifted to wavelengths as short as 10830 A, which would fall entirely outside of the original Helium spectral line's rotationally broadened profile, ruling out a heritage from the star's surface. Figure 11: Line centroid positions of the He i 10833 Å feature. A slight blueshift relative to the stellar restframe (vertical line) can be seen, with a weak excess blueshift trend after planet egress. The line-of-sight velocity distribution of 2-3 Å means that the gas distribution exhibits both advancing and receding lobes at all phases other than post-egress. The recent discovery of giant tidal tails in HAT-P-32 b (Zhang et al., 2023) provides an additional anchor. HAT-P-32 b is in many ways the most analogous known system to HAT-P-67 b, having comparably low mass and a comparably large radius for similar insolation around an F star. In HAT-P-32 b the planetary interpretation is unambiguous owing to the different stellar rotation and orbital periods. In Section 5.4 we show that radius inflation mechanisms expect similar and large mass loss rates for both of these systems. ### Size and geometry of outflow Helium absorption occurs predominantly before the planet is in-transit, indicating a _leading_ tail escaping the planet. The leading tail is detected 0.1 AU away from the planet, or 130 planetary radii, well outside of the Roche lobe at \(<2.6\) planetary radii. We are detecting Roche lobe overflow: escaping material that is entirely unbound from the planet. The bulk of the detectable unbound gas may be understood as following its own Keplerian trajectory on a slightly shorter-period orbit, pulling away from the planet parallel to the orbital path. Such a scenario may arise from preferential emission on the planet dayside.
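The quoted Roche-lobe scale can be checked with a back-of-the-envelope Hill-radius estimate from the tabulated system parameters, taking a stellar mass of \(\sim\)1.6 \(M_{\odot}\) from Section 3.3 (a rough approximation, not the full Roche geometry):

```python
# Rough Hill-radius estimate for HAT-P-67 b in units of planetary radii.
AU_KM, R_JUP_KM = 1.496e8, 7.149e4   # kilometers
M_JUP_IN_M_SUN = 9.55e-4

a_km = 0.062 * AU_KM                 # semi-major axis (Table 2)
m_p = 0.34 * M_JUP_IN_M_SUN          # planet mass in solar masses (Table 2)
m_star = 1.6                         # stellar mass in solar masses (Section 3.3)
r_p_km = 2.085 * R_JUP_KM            # planet radius (Table 3)

r_hill_km = a_km * (m_p / (3.0 * m_star)) ** (1.0 / 3.0)
print(r_hill_km / r_p_km)            # ~2.5, consistent with the ~2.6-2.7 quoted
```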
The large size of the gas stream overfills the transit chord, which means both the advancing blue side and receding red side of the rotating star are nearly continuously obscured in this spin-orbit-aligned system (Zhou et al., 2017). If the planet transits near the stellar equator (Zhou et al., 2017), the transit chord covers about 10% of the projected stellar surface area. Hypothetically, a nearly 100% opaque, optically thick stream of gas covering only the transit chord could reproduce the observed 10% absorption excess. Alternatively--and more likely--an optically thin gas stream has some additional vertical extent perpendicular to the plane of orbit, which would act to overfill the entire projected stellar disk. Assuming the gas has a spatially uniform optical depth of about 0.1 would reproduce the 10% absorption excess. The leading tail geometry seen in HAT-P-67 b contrasts with the morphology of the HAT-P-32 b system, which exhibits giant symmetric tidal tails with comparable absorption depth preceding and trailing the planet (Zhang et al., 2023). These systems represent two of the most extended features associated with exoplanets, and we compare and contrast them in Section 6. ### 1D Parker wind models and mass loss rates We explore one-dimensional (1D) models, which offer ease of interpretation and rapid calculation. We employ the open-source p-winds code (Dos Santos et al., 2022), which implements a transonic 1D Parker wind model with radiative transfer of the Helium 10833 A triplet (Oklopcic and Hirata, 2018; Lampón et al., 2020). The predicted Helium ionization depends sensitively on the XUV flux (Oklopcic, 2019). Ideally, we would have a measurement of the XUV spectrum of HAT-P-67, but the limited facilities and distance of the source preclude these challenging observations. Instead, we constructed panchromatic X-ray to visible SEDs by scaling and stitching together synthetic spectra to provide a range of high and low \(L_{X}/L_{\rm bol}\). We chose a 6400 K PHOENIX (Husser et al., 2013) solar metallicity photospheric spectral template with \(\log g=3.5\)--the closest PHOENIX grid point to the published and updated values available--scaled to the solid angle seen by HAT-P-67 b. The F7IV-V \(\tau\) Boo serves as the closest analog with available synthetic X-ray coronal spectra (Sanz-Forcada et al., 2011). But HAT-P-67 resides closer to the Kraft break than does \(\tau\) Boo, and the move to hotter F stars may be associated with a weakening of the stellar wind and other atmospheric changes leading to the prospect of much lower XUV luminosity for HAT-P-67 (Avallone et al., 2022). So there remains significant uncertainty about the applicability of the \(\tau\) Boo XUV spectrum to HAT-P-67. We scaled the synthetic SED of \(\tau\) Boo retrieved from X-exoplanets (Sanz-Forcada et al., 2011) so that the integral of ground-state ionizing photons (\(\lambda<504\) A) exhibited \(L_{X}/L_{\rm bol}\in[10^{-6},10^{-4}]\). Figure 12 shows two conceivable SEDs with these high and low radiation hardnesses. Figure 12: Synthetic XUV radiation scenarios for HAT-P-67 b. The low and high XUV scenarios correspond to \(L_{X}/L_{\rm bol}\) of \(10^{-6}\) and \(10^{-4}\), where \(L_{X}\) is defined from the He i 1 \({}^{1}\)S-ionizing photons that trigger the recombination cascade needed to populate the He i 2 \({}^{3}\)S metastable state. Photons capable of ionizing out of the 2 \({}^{3}\)S state depopulate it and suppress the observability of He i 10833 Å. The XUV flux of HAT-P-67 is uncertain.
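The rescaling just described reduces to a single normalization integral. A schematic sketch with placeholder arrays (not the exact procedure used to construct Figure 12) is:

```python
import numpy as np

def rescale_xuv(wave_angstrom, flux, l_bol, target_ratio, cut=504.0):
    # Scale the coronal portion of the SED so that the integrated ionizing
    # luminosity below `cut` (the He ground-state threshold) matches the
    # target L_X / L_bol; longward of the cut the SED is left untouched.
    xuv = wave_angstrom < cut
    l_x = np.trapz(flux[xuv], wave_angstrom[xuv])
    scaled = flux.copy()
    scaled[xuv] *= target_ratio * l_bol / l_x
    return scaled

# e.g., the low- and high-hardness scenarios bracketing the uncertainty:
# sed_lo = rescale_xuv(wave, flux, l_bol, 1e-6)
# sed_hi = rescale_xuv(wave, flux, l_bol, 1e-4)
```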
The SED shows that the XUV corona model does not extend redward to the 2600 A photons capable of ionizing out of the 2 \({}^{3}\)S Helium metastable state. This apparent deficit should be negligible since the F spectral type obtains most of its NUV photons (\(\lambda\sim 2600\) A) from the Wien side of the photospheric spectrum, so the hardness of the radiation stems mostly from the assumptions of the corona flux level. The wavelength-dependent cross section for absorption (not shown) is largest just blueward of the ionization thresholds, so the differences in the spectrum between 504 and 1000 A have relatively little impact on the overall ionization out of the metastable state. Equipped with these two SEDs of differing radiation hardness, we explored different mass loss rates and exosphere temperatures. Figure 13 shows one example model spectrum with \(\dot{M}=2\times 10^{13}\) g/s and \(T_{0}=14\,000\) K, with an XUV spectrum possessing \(L_{x}/L_{\rm bol}=10^{-5}\). This mass loss rate would imply a characteristic lifetime of \(<\)1000 Myr. The model spectrum exhibits approximately the same equivalent width as our HPF observations, making it a hypothetical scenario among a family of partially degenerate solutions. At least a few shortcomings limit the applicability of this 1D Parker wind model. First, the model-dependent line profile (_i.e._ width and depth) is degenerate with XUV flux, \(\dot{M}\), and \(T_{0}\), and so a range of these parameters can be fine-tuned to obtain a large range of mass loss rates consistent with the data and our limited understanding of the XUV flux. These known degeneracies have been pointed out previously (Vissapragada et al., 2022; Oklopcic, 2019), but the problem for HAT-P-67 b appears somewhat more acute since XUV data are scarce for F-type stars near the Kraft break. Second, the 1D model breaks down when attempting to explain the inherently 3D leading tail geometry. ### Direct Evidence for Preferential Dayside Mass Loss Several physical phenomena could conceivably control the geometry and extent of the escaping material. The stellar potential controls the overall geometry through tides and the Coriolis force, resulting in lobe morphologies that lead and trail the planet (McCann et al., 2019). Here we explore the three predominant dynamical effects: orbital shear, stellar wind confinement, and day-/night-side mass loss asymmetries. Figure 14 shows an illustration of Keplerian shear adapted to the system properties of HAT-P-67 b. In this shear-dominated scenario, the planetary wind launches primarily from the dayside, with relatively little or no wind launched from the nightside. The planet wind initially launches radially outward from the exobase, with the strongest wind located near the sub-stellar point, the line connecting the planet to the star along the vertical axis in the figure. Inefficient grazing-incidence heating near the terminator subdues the mass loss in the \(x\)-direction, meaning that the wind is not hemispherical in shape but is instead concentrated along the star-planet line. The gas increasingly experiences the star's Keplerian potential past the Roche lobe, accelerating in the direction of orbital motion, \(+x\). The accelerating column eventually overtakes the planet completely, with the prospect of extending to hundreds of planetary radii. The mass loss has to be large enough to shield the material to make it observable in the metastable He i 10833 A triplet.
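The shear picture implies a short overtaking timescale. For gas released near the Roche lobe, the relative Keplerian drift is roughly \((3/2)\Omega\,\Delta r\), so an order-of-magnitude estimate (not a dynamical model) gives:

```python
import numpy as np

P_s = 4.81 * 86400.0               # orbital period in seconds
omega = 2.0 * np.pi / P_s          # orbital angular frequency
dr_km = 2.6 * 2.085 * 7.149e4      # ~one Roche-lobe/Hill radius in km

dv_kms = 1.5 * omega * dr_km       # ~9 km/s drift ahead of the planet
t_lead_s = 0.1 * 1.496e8 / dv_kms  # time to lead the planet by ~0.1 AU
print(dv_kms, t_lead_s / P_s)      # ~9 km/s; roughly four orbital periods
```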
The observation of such a prominent leading tail leads us to the inescapable conclusion that HAT-P-67 b is predominantly losing mass on the planet's dayside. An isotropic mass loss would manifest a comparably conspicuous trailing tail, which we do not observe. Importantly, orbital shear, stellar wind confinement, preferential dayside mass loss, and radiative transfer must all conspire to create the conditions of high enough column density to populate the He i metastable triplet to detectable levels while imbuing the velocity substructure that we see. The detailed 3D modeling of the interplay of these effects is beyond the scope of the current work, but is feasible with adaptations to existing 3D simulations (MacLeod and Oklopcic, 2022). Figure 13: Simulated 1D Parker wind model of Helium absorption in HAT-P-67 b. The spectrum was generated with a mass loss rate of \(2\times 10^{13}\) g/s, exosphere gas temperature of \(14\,000\) K, and with an XUV luminosity \(L_{x}/L_{\rm bol}=10^{-5}\) intermediate between those shown in Figure 12. The p-winds model yields the He i 10833 triplet lines (colored thin lines), with a cumulative feature in black resembling the data, shown as the May 2020 in-transit mean. Additional 3D velocity dispersion of the gas can explain differences between the line shapes of the model and data. Overall the 1D models may be too simplistic to represent the inherently 3D structure of the outflow. Here we interpret the bulk velocity substructure in Figure 11 under these conceptual gas dynamics mechanisms. We attribute the mild pre-transit arc to stellar wind acceleration. The stellar wind is initially too weak to plow the dense escaping gas until a separation of 60-130 planetary radii (+0.15 orbital phase), the inflection point in an enormous bow shock. Gas at these separations has had enough time to diffuse, both out of the orbital plane and to slightly larger radial shells, in turn lowering the column density along the line-of-sight. The lower column density provides both fewer Helium atoms to participate in absorption and less overall NUV shielding, leading to greater fractional ionization and further depopulation of the He i metastable state. The weak post-egress tail can be understood as follows. A weak nightside mass loss means that there is both less overall column density and less NUV photoionization shielding, subduing the overall He i metastable state's signal strength. The blueshift arises from the Keplerian shear that lags the planet, and the ever-blueshifting centroid reflects the stellar wind's greater ability to carry away the lower total inertia of the sparser nightside material. In order to examine the in-transit velocity structure, we have to consider the interplay of two subtle geometrical effects. First, the planet's Roche lobe is small enough (2.7 \(R_{\rm p}\)) that the majority of the projected stellar disk should be filled with escaping material unbound from the planet, even at the time of mid-transit. In other words, only a small fraction of the He i excess signal would be expected to trace the planet's motion, as opposed to the hypothetical in-transit signal for WASP-107 b that definitively traces out the planet's orbital path (MacLeod and Oklopcic, 2022). Second, the star's non-negligible rotational velocity sets up a configuration analogous to Doppler Tomography in the Rossiter-McLaughlin (RM) effect, but distinct in a subtle way.
Whereas traditional Doppler Tomography treats the scanning reticle as an opaque planetary disk, here the reticle resembles a transmissive filter with a wavelength center and width that varies in space and time. The interplay of spatial and spectral illumination and absorption can yield minor second-order effects, and overall we anticipate those effects to be secondary compared to the mere existence of absorbing material spanning the entire stellar disk. ### Variable planetary wind Figures 8 and 11 show differences in the Helium line profiles over months-long and years-long timescales. The line profiles are qualitatively similar, but show slightly different tendencies towards redshifting. For example, the centroid of the Helium line during the April 2022 transit appears slightly blueshifted relative to the stellar rest frame. In comparison, the May 2020 transit probes the same planetary phases, yet resides slightly redshifted at transit midcenter. We interpret this line profile variability as indicative of genuine planetary wind variability, which can arise from the interplay of stellar and planetary winds (Murray-Clay et al., 2009) and weather-like feedbacks in the planetary upper atmosphere. ## 5 Mechanisms driving atmospheric escape ### XUV Irradiation-driven mass loss XUV photoevaporation heats the upper layers in the atmosphere, driving a hydrodynamic wind (Murray-Clay et al., 2009). Photoevaporation stands out as offering a natural cause of the observed leading tail attributable to dayside/nightside differences in mass loss: the greatest supersonic motions arise from the sub-stellar point on the planetary dayside. The effect may be boosted if dayside/nightside energy transport proceeds inefficiently (Murray-Clay et al., 2009). Such a scenario is illustrated in Figure 14. Figure 14: Schematic of Keplerian orbital shear. Locations outside the Roche lobe experience orbital shear from the Keplerian potential. A wind launched primarily from the dayside will tend to form a leading tail. The combined effects of photoevaporation, anomalous heating, and tidal gravity are predicted to have especially drastic outcomes for inflated hot Saturns (Thorngren et al., 2023) such as HAT-P-67 b, where the low densities lead to large mass loss rates: \[\dot{M}=\frac{3}{4}\frac{\eta F_{\rm XUV}}{GK_{t}\rho_{\rm XUV}}, \tag{1}\] where \(K_{t}\) is the tidal gravity term, \(\rho_{\rm XUV}\) is the planet density determined using the radius at which XUV radiation is deposited, and \(\eta\sim 0.4\) comes from Caldiroli et al. (2022). Assuming \(\log L_{\rm XUV}/L_{\rm bol}=-4.2\), we would expect HAT-P-67 b to exhibit \(\dot{M}\sim 10^{2}\) M\({}_{\oplus}\)/Gyr (\(2\times 10^{13}\) g/s). At a current \(M_{\rm p}\sim 95\) M\({}_{\oplus}\) and an accelerating mass loss rate, its lifetime would measure in the \(<\)100 Myr range. The Thorngren et al. (2023) simulations focused on 0.75-1.25 \(M_{\odot}\) host stars; the higher mass \(\sim\)1.6 \(M_{\odot}\) HAT-P-67 would deliver greater tidal gravity for a given semi-major axis, and possibly lower \(\log L_{\rm XUV}/L_{\rm bol}\) compared to more active G stars. Nevertheless, we can project the time series trends in their Figures 3 and 4 to recreate a qualitative history and fate of HAT-P-67 b under the assumptions of XUV photoevaporation. Broadly, the planet would have started with an initial mass up to 50% greater than at present, with \(R\sim 1.4\)\(R_{\rm Jup}\), for an initial density of \(\sim\)0.2 g cm\({}^{-3}\).
It would lose mass at a rate of tens of Earth masses per Gyr for its 1 Gyr lifetime, expanding modestly until it reaches the critical \(\sim\)0.1 g cm\({}^{-3}\) threshold, at which point the mass loss rate increases to its current value. It will last only tens of Myr in its current state before losing almost all of its envelope and settling as a \(5-15\) M\({}_{\oplus}\) core with a final radius of 0.2\(-\)0.3 R\({}_{\rm Jup}\). ### Ohmic dissipation-driven mass loss The Ohmic dissipation mechanism made two key observable predictions. First, the anomalous heating efficiency, \(\epsilon\), should initially increase as a function of equilibrium temperature, then degrade as equilibrium temperatures exceed about 2000 K. Second, hot Saturns (\(\lesssim 0.5M_{\rm Jup}\)) should undergo runaway evaporation, whereas hot Jupiters (\(\gtrsim 1M_{\rm Jup}\)) should reach stable equilibrium--albeit inflated--radii after Gyr timescales. Both of these outcomes predated data that could validate them, and these outcomes differ from the behavior of other heating mechanisms, such as photoevaporation or tides. Ohmic dissipation may therefore be responsible for both dramatic atmospheric escape and significant radius inflation. Batygin et al. (2011) showed that Ohmic heating acts to inflate hot Jupiters, with planets \(\lesssim 0.5M_{\rm Jup}\) overflowing their Roche lobes and evaporating on Gyr timescales. The Ohmic heating scenario requires only a modest planetary magnetic field (\(\gtrsim\)1 G) and an equilibrium temperature great enough to thermally ionize some modest fraction of neutral metals, such as the low-ionization species Na i and K i. The "sweet spot" for this phenomenon appears to prefer equilibrium temperatures in the range of \(1500<T_{\rm eq}<2000\) K (Batygin et al., 2011; Menou, 2012; Ginzburg and Sari, 2016; Thorngren and Fortney, 2018; Knierim et al., 2022), where the conductivity is strong enough to cause an effective drag without being so strong that magnetic braking slows the planetary wind. HAT-P-67 b's \(\sim\)2000 K sits at the higher end, but still in a region of high Ohmic dissipation heating efficiency. A proposed order-of-magnitude scaling law predicts an inflation timescale, Eq. 20 in Batygin et al. (2011): \[\tau_{\rm inf}\sim\left(\frac{0.01}{\epsilon}\right)\left(\frac{M}{M_{J}}\right)^{2}\left(\frac{R_{J}}{R}\right)^{3}\left(\frac{1500{\rm K}}{T_{\rm eff}}\right)^{4}{\rm Gyr}, \tag{2}\] yielding an incredibly short \(<\)5 Myr timescale for HAT-P-67 b, assuming a typical \(\epsilon\sim 0.01\). The order of magnitude of this inflation timescale is so fleetingly short that--according to this scenario--HAT-P-67 b must be in the runaway stage of inflation, rapidly losing mass and growing in surface area to fuel a positive feedback loop. Under this interpretation, HAT-P-67 b would represent an example of a new category of planet system that is doomed to evaporate entirely due to the Ohmic dissipation mechanism, as predicted by Batygin et al. (2011). At Roche lobe overflow, a 5 Myr inflation timescale may imply an instantaneous \(\dot{M}_{\rm infl}>10^{3}\) M\({}_{\oplus}\)/Gyr. A few caveats complicate the unambiguous causal interpretation of Ohmic dissipation. First, the scaling law arguments that produced Equation 2 were only proposed as coarse estimates, with numerical simulations needed to quantify inflation timescales for individual systems.
Accordingly, a 10\(\times\) higher inflation timescale of 50 Myr--allowed by the coarse scaling law--would yield \(\dot{M}_{\rm infl}\sim 10^{14}\) g/s, still a very large mass loss rate, and only a factor of 5 away from the baseline 1D model. Factors-of-a-few uncertainties in the Ohmic dissipation efficiency \(\epsilon\) may also contribute. Second, numerical simulations (Wu and Lithwick, 2013) and analytic theory (Ginzburg and Sari, 2016) indicate that Ohmic dissipation can stall the contraction of hot Jupiters but cannot easily "re-inflate" them after having undergone a traditional cooling curve. Heat transfer from the atmosphere to a cooled core appears to proceed too slowly, on the order of tens of Gyr (Ginzburg and Sari, 2016). This path dependence of Ohmic dissipation would restrict the allowed evolutionary histories of HAT-P-67 b, requiring it to have arrived at its current location within a few to tens of Myr. This short time window prefers a physical mechanism such as disk migration, which would be faster than secular eccentric migration with an outer companion. _In-situ_ formation of a hot Saturn at these close-in separations may be implausible (Dawson and Johnson, 2018). Other caveats like unknowns in planetary magnetic fields, zonal band geometries, and dayside/nightside temperature differences make it impossible to uniquely prescribe Ohmic dissipation, and instead, several additional factors may also be at play (Sarkis et al., 2021). ### Reinflation Evolved stars increase in luminosity, delivering greater insolation to planets at a fixed separation. The heightened equilibrium temperature can cause mature planets to inflate, a phenomenon known as _reinflation_. Such re-inflated hot planets around red giant stars have been recently found by _TESS_ (Grunblatt et al., 2022, 2023), though not all planets around evolved stars appear to re-inflate (Saunders et al., 2022). Zhou et al. (2017) estimated that HAT-P-67 b received about twice the incident flux as a Zero Age Main Sequence (ZAMS) HAT-P-67, based on comparison to the Geneva isochrones. In Section 3.3, MIST evolutionary tracks showed two equally plausible states: the tail-end of the main sequence or a recently evolved subgiant. Figure 15 employs these tracks to quantify the prospects for re-inflation of HAT-P-67 b. We further assume that the orbital location has not changed over the system lifetime, that the planet's response to anomalous heating stimulus is instantaneous, and that the anomalous heating efficiency \(\hat{\epsilon}_{G}\) peaks at \(T_{\rm eq}=1750\) K as derived by Thorngren and Fortney (2018). We conclude that reinflation likely does not have a significant effect on the evolution of HAT-P-67 b: inflation timescales remain relatively unchanged over the planet's lifetime, despite a 50% spike in incident radiation in the subgiant scenario. The reason is subtle, as we describe next. The second panel from the top illustrates a countervailing effect: the increase in flux actually triggers a decrease in anomalous heating efficiency, \(\epsilon\). The greater stellar energy couples into the planet less effectively than the weaker radiation did, roughly balancing out. Together the effects nearly cancel when computing the inflationary lifetimes, yielding nearly identical curves in the bottom panel: a secularly evolving main sequence history gives almost the same inflation timescale as a rapidly increasing subgiant.
It is important to emphasize that the anomalous heating efficiency found by Thorngren and Fortney (2018) is agnostic to what the actual heating mechanism is: either XUV irradiation or Ohmic dissipation must obey the trend of \(\hat{\epsilon}_{G}(T_{\rm eq})\). The third panel recreates the numerically computed curve for 0.5 \(M_{\rm Jup}\), \(T_{\rm eq}=\)1800 K from Batygin et al. (2011), which should be considered as representative since it was not necessarily tailored to the evolutionary history of HAT-P-67 b. Nevertheless, the similarity of the curve is remarkable since it arrives at approximately the correct planet radius at the right age for about the right initial mass. ### Weighing the causes for a lack of sub-Saturns When applied to an ensemble of planet systems, both the XUV irradiation and Ohmic dissipation mechanisms expect a void in the planet mass-radius plane. However, the theories make different quantitative predictions for the placement of the dividing lines between stable and unstable populations. We illustrate these differences in Figure 16, which shades the mass-radius plane with predictions for the inflation timescale under HAT-P-67 b-like conditions. The XUV irradiation timescale better matches the distribution of planets, with the 0.1 g cm\({}^{-3}\) density contour setting a conspicuous dividing line in density. The Ohmic dissipation shading predicts short inflation timescales extending into a region of Jupiter-mass planets with radii commonly observed. The better match of observed exoplanet demographics to the XUV irradiation shading disfavors Ohmic dissipation. Under either scenario, HAT-P-67 b resides in an extremely short-lifetime region. Its nearest analog, HAT-P-32 b, offers an interesting test case since it has recently been shown to exhibit a large helium excess (Zhang et al., 2023). It resides just slightly denser than the 0.1 g cm\({}^{-3}\) dividing line, with XUV irradiation predicting an over 10\(\times\) longer inflationary timescale for HAT-P-32 b than for HAT-P-67 b, while Ohmic dissipation expects merely a factor of 3 difference. HAT-P-32 b exhibits a more symmetric leading and trailing tail, whereas HAT-P-67 b stands out as exhibiting evidence for preferential mass loss on the highly irradiated planet dayside. These differences may make this pair an especially valuable laboratory for developing theories of atmospheric escape. Finally, we consider the prospect that the planetary mass of HAT-P-67 b is much lower than the 1-\(\sigma\) estimate, such that the white-light planetary radius _is_ the Hill radius (we have previously assumed the Hill radius is 2.7 planetary radii). This extreme scenario could manifest enormous mass loss rates, with the atmosphere's reservoir of mass fleeing the gravitational potential well directly, without the cushion of an exosphere. This terminal Roche lobe overflow should have observational consequences. In particular, heavy elements would easily leak into the planetary wind, yielding possibly many observable metal lines in the UV. ### Other planets likely to be evaporating The key figure-of-merit can be distilled to \(\tau_{\rm infl}\), which can be thought of as an atmospheric escape spectroscopy metric (Kempton et al., 2018) for inflated planets in the shaded regime of Figure 16. Table 3 presents the rank-ordered list of planets by this metric, indicating that they are mostly smaller and more massive than HAT-P-67 b.
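For orientation, the two nominal values quoted in Sections 5.1 and 5.2 follow directly from Equations 1 and 2. A back-of-the-envelope check, with our assumed constants made explicit (and with \(K_{t}\) and \(\rho_{\rm XUV}\) crudely approximated, so the results should be read as order-of-magnitude only), is:

```python
# Rough evaluation of Equations 1 and 2 for HAT-P-67 b (cgs units).
G = 6.674e-8
M_JUP, R_JUP, AU = 1.898e30, 7.149e9, 1.496e13
L_SUN = 3.828e33

m, r, a = 0.34 * M_JUP, 2.085 * R_JUP, 0.062 * AU
rho = m / (4.0 / 3.0 * 3.1416 * r**3)        # bulk density, ~0.05 g/cm^3

# Equation 1: eta ~ 0.4, log L_XUV/L_bol = -4.2, L_bol ~ 11 L_sun assumed;
# K_t ~ 1 and rho_XUV ~ bulk density are crude simplifications here.
f_xuv = 10**-4.2 * 11.0 * L_SUN / (4.0 * 3.1416 * a**2)
mdot = 0.75 * 0.4 * f_xuv / (G * 1.0 * rho)  # ~2e13 g/s, as quoted

# Equation 2: epsilon ~ 0.01, T_eq ~ 1900 K.
tau_inf_myr = 1e3 * (0.01 / 0.01) * 0.34**2 * (1.0 / 2.085) ** 3 * (1500.0 / 1900.0) ** 4
print(mdot, tau_inf_myr)                     # ~2e13 g/s and ~5 Myr
```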
Only KELT-11 b shows a shorter nominal inflationary timescale than HAT-P-67 b, owing to its low mass and an equilibrium temperature residing closer to the peak of \(\epsilon_{G}(T_{\rm eq})\). We predict that these few other inflated HAT-P-67 b analogs should show evidence for significant atmospheric escape, comparable to what we see for HAT-P-67 b. These sources make excellent targets for He i 10833 observations and/or other atmospheric escape diagnostics. As emphasized, there are significant uncertainties in the inflation timescale, but at least its order-of-magnitude value gives us a quantitative and justified way to prioritize target selection. ## 6 Discussion ### Cause for stellar modulation As discussed in Section 3.2.2, the _TESS_ lightcurve exhibits up to 0.36% peak-to-valley modulation amplitude, with a characteristic timescale of \(P=4.7-5.9\) days. Some sectors exhibit lower amplitudes. The conventional interpretation would assign these cyclical modulations to the familiar stellar activity: surface features--either starspots, faculae, or plage--entering and exiting the projected stellar disk on a stellar rotation period comparable to the 4.8-day orbital period of planet b. The cause for such orbital and rotational synchronization would then be either coincidence or secular star-planet tidal interactions. For the latter, disk migration could have naturally ceased near the co-rotation radius, leaving the planet to reside near the stellar \(P_{\rm rot}\). Alternatively, Star Planet Magnetic Interaction (SPMI) could be at play (Strugarek, 2018). In this scenario, the magnetic field of the planet permeates the space between the star and the planet. Magnetic perturbations propagate along the planetary magnetic field at the Alfven speed. The planet can interact with the star if the Alfven speed exceeds the stellar wind speed controlling the bulk motion of the intervening medium. The non-detection of variability in the Ca ii H and K lines and H\(\alpha\) lines appears to disfavor this SPMI interpretation since SPMI could be expected to cause variations in these diagnostics. Figure 15: Conceivable evolutionary scenarios for HAT-P-67 b. The planet's equilibrium temperature increases as the stellar luminosity gradually increases on the main sequence. The second panel from the top shows the corresponding estimate for anomalous heating efficiency from Thorngren and Fortney (2018). The Ohmic dissipation model makes predictions for runaway inflation, depending on the incident stellar flux. The heuristic scaling relation for the inflation timescale predicts vanishingly short inflationary lifetimes in this extreme regime but illustrates the countervailing effects of the top two panels. It is hypothetically possible--albeit unlikely--that mass loss could be great enough to produce variability in the wide-band _TESS_ lightcurve. The HPF spectra reveal up to 10% signal depth over a few Angstroms. The TESS bandpass barely includes 10833 A, at a location of diminishing throughput. The Helium signal alone would manifest as a mere \(\sim 4\) ppm flux loss when integrated over a TESS-throughput-weighted F-star spectrum--negligible compared to the observed 0.36% peak-to-valley modulation. An ensemble of additional lines in the red-optical cannot realistically resemble the TESS modulation. The inventory of such conceivable atomic lines detectable from the planet in the TESS bandpass numbers merely a few, with the Ca ii infrared triplet and H\(\alpha\) chief among them (Linssen and Oklopcic, 2023).
A putative H\(\alpha\) line would have to be about 30% deep and 10 A wide to manifest perceptibly in the TESS data. Such an implausibly deep and wide line likely would have been observed as perturbations in the Keck HIRES spectra, even with their limited phasing. Hypothetically, dust dredged up in the mass loss process could cause a large enough broadband continuum flux loss to manifest in TESS. We may expect to see some variable reddening in that case. ### Implications for exosphere detection in non-transiting planets We measured a large extent of Helium escape along the arc of the orbit, but we necessarily can place only coarse constraints on the vertical extent--in the direction perpendicular to the orbital plane, \(\pm z\). We estimate the detectable vertical extent must be at least a few stellar radii, such that we still could have detected Helium escape in a hypothetical HAT-P-67 b-like system even if it were non-transiting, in a "near miss" configuration. Figure 16: Runaway inflation timescale in the exoplanet mass-radius diagram. The shading of the left panel shows \(\tau_{\rm infl}\) from Ohmic dissipation; the right panel shows \(\tau_{\rm infl}\equiv M/\dot{M}\) from photoionization-driven mass loss. Individual planets with confident mass and radius detections are shown as small black dots. Helium non-detections are shown as red squares, and atmospheric escape detections from any origin as green circles (Dos Santos, 2022). Under either scenario, HAT-P-67 b resides in a sparsely populated region with a short inflationary timescale, making it unstable to runaway evaporation with large mass loss expected. Photoionization better predicts the depopulation of sources in the upper-left low-density region of the diagram, defined by the 0.1 g cm\({}^{-3}\) iso-density dashed white line. We propose a new category of semi-transiting planet, dubbed "exospheric grazers", in which the planet does not produce a detectable white-light transit depth, but _does_ produce measurable _line_-based exospheric absorption. This category appears to have been neglected due to the assumption that exospheres are only easily detectable within several planetary radii. While large tails have been seen previously in Ly\(\alpha\), the scarcity of UV resources has prohibited searches of this kind. The discovery of large Helium tails in HAT-P-32 b (Zhang et al., 2023) and now HAT-P-67 b suggests that the identification of these exospheric grazers may be achievable with current instrumentation and possibly existing archival data from near-IR RV planet searches. Non-transiting, strongly irradiated planets with well-constrained orbits may make ideal targets for the detection of this phenomenon. ## 7 Conclusions We have presented a multi-year spectroscopic survey of HAT-P-67 b, a low-density, heavily irradiated Saturn-mass (or lower) planet. We identified a large leading tail, with a much weaker trailing tail, which we found to be direct evidence of preferential dayside mass loss. HAT-P-67 b stands out as an outlier in the mass-radius plane, which we examine through the lens of different mechanisms for anomalous heating. Both XUV irradiation and Ohmic dissipation predict runaway inflation for such inflated hot Saturns, and we quantitatively weigh these two scenarios, finding XUV irradiation to be more straightforwardly predictive of the overall demographic population of exoplanets observed to date. We report in-transit line profile variability, which we attribute to the delicate interplay of planetary and stellar winds.
We identify several avenues for future work, including additional monitoring of the line profile variability to probe the stellar-and-planetary wind interaction. The large signal should be perceptible in other spectral tracers, such as metal lines in the UV. We offer a list of other planets that may be likely to exhibit mass loss under the Ohmic dissipation and XUV irradiation scenarios. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & & & & (OD) & (XUV) \\ Planet & Mass & Radius & \(T_{\rm eq}\) & \(\tau_{\rm infl}\) & \(\tau_{\rm infl}\) \\ & \(M_{\rm Jup}\) & \(R_{\rm Jup}\) & K & Myr & Myr \\ \hline HAT-P-67 b & \(0.34^{+0.25}_{-0.19}\) & \(2.085^{+0.096}_{-0.071}\) & \(1900^{+25}_{-25}\) & 4 & 40 \\ KELT-11 b & \(0.171\pm 0.015\) & \(1.35\pm 0.1\) & \(1710^{+51}_{-48}\) & 3 & 90 \\ HAT-P-65 b & \(0.527\pm 0.083\) & \(1.89\pm 0.13\) & \(1930^{+45}_{-24}\) & 9 & 100 \\ WASP-127 b & \(0.1647^{+0.0214}_{-0.0172}\) & \(1.311^{+0.025}_{-0.029}\) & \(1400^{+24}_{-24}\) & 7 & 200 \\ HAT-P-32 b & \(0.68^{+0.11}_{-0.1}\) & \(1.98\pm 0.045\) & \(1840^{+17}_{-7}\) & 10 & 200 \\ WASP-153 b & \(0.39\pm 0.02\) & \(1.55^{+0.1}_{-0.08}\) & \(1700^{+40}_{-40}\) & 10 & 300 \\ HATS-26 b & \(0.65\pm 0.076\) & \(1.75\pm 0.21\) & \(1920^{+61}_{-41}\) & 20 & 400 \\ Kepler-7 b & \(0.441^{+0.043}_{-0.042}\) & \(1.622\pm 0.013\) & \(1630^{+10}_{-10}\) & 10 & 400 \\ HATS-56 b & \(0.602\pm 0.035\) & \(1.688^{+0.039}_{-0.055}\) & \(1900^{+16}_{-16}\) & 20 & 400 \\ Kepler-12 b & \(0.432^{+0.053}_{-0.051}\) & \(1.754^{+0.031}_{-0.036}\) & \(1480^{+30}_{-30}\) & 20 & 400 \\ TOI-954 b & \(0.174^{+0.018}_{-0.017}\) & \(0.852^{+0.053}_{-0.062}\) & \(1530^{+123}_{-16}\) & 20 & 500 \\ WASP-174 b & \(0.33\pm 0.091\) & \(1.437\pm 0.05\) & \(1530^{+17}_{-17}\) & 10 & 500 \\ HAT-P-64 b & \(0.58^{+0.18}_{-0.13}\) & \(1.703\pm 0.07\) & \(1770^{+22}_{-16}\) & 20 & 500 \\ WASP-172 b & \(0.47\pm 0.1\) & \(1.57\pm 0.1\) & \(1740^{+60}_{-60}\) & 10 & 500 \\ HAT-P-40 b & \(0.48\pm 0.13\) & \(1.52\pm 0.17\) & \(1770^{+33}_{-33}\) & 20 & 500 \\ \hline \end{tabular} Note. -- The XUV inflation timescale assumes \(L_{X}/L_{\rm bol}=6.3\times 10^{-4}\) (Equation 5 of Sanz-Forcada et al. 2011), which is an overestimate for HAT-P-67 and old/slowly-rotating host stars. \end{table} Table 3: Coarse Inflation Timescales for Other Systems ## Appendix A Spectral variability non-detections ### Calcium IR Triplet and other line diagnostics We examined the Ca ii IR Triplet lines (Ca IRT) for evidence of variability. No variation was seen in these features, which were pre-processed in the same way as the Helium feature. Figure 17 shows a similar phase plot as Figure 10, adapted to the line at 8662 A. We see no conspicuous variability at this or any of the other Ca IRT lines. We also found no detectable variability in the Pa\(\delta\) line at 10050 A, nor other deep lines at 10330 A and elsewhere. He i 10833 appears to be the only conspicuously variable line in the HPF spectrum. This material is based upon work supported by the National Aeronautics and Space Administration under Grant Number 80NSSC21K0650 for the ADAP program, 80NSSC20K0257 for the XRP program, and 80NSSC22K0181 through the TESS Guest Investigator program issued through the Science Mission Directorate. C.V.M. acknowledges support from the Alfred P. Sloan Foundation under grant number FG-2021-16592.
Support for program HST-AR-15805.001-A was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. Based on observations obtained with the Hobby-Eberly Telescope (HET), which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universitaet Muenchen, and Georg-August Universitaet Goettingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. These results are based on observations obtained with the Habitable-zone Planet Finder Spectrograph on the HET. The HPF team acknowledges support from NSF grants AST-1006676, AST-1126413, AST-1310885, AST-1517592, AST-1310875, ATI 2009889, ATI-2009982, AST-2108512, AST-2108801, and the NASA Astrobiology Institute (NNA09DA76A) in the pursuit of precision radial velocities in the NIR. The HPF team also acknowledges support from the Heising-Simons Foundation via grant 2017-0494. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University and the Eberly College of Science. GS acknowledges support provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51519.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This research has made use of NASA's Astrophysics Data System. HET (HPF), TESS, ASAS, Exoplanet Archive pandas(McKinney, 2010), emcee(Foreman-Mackey et al., 2013), matplotlib(Hunter, 2007), numpy(Harris et al., 2020), scipy(Virtanen et al., 2020), ipython(Perez & Granger, 2007), seaborn(Waskom et al., 2014), astropy(Astropy Collaboration et al., 2022), muler(Gully-Santiago et al., 2022), lightkurve(Barentsen et al., 2019), telfit(Gullikson et al., 2014), exoplanet(Foreman-Mackey et al., 2021), jupyter(Kluyver et al., 2016), p-winds(Dos Santos et al., 2022), HxRGproc (Ninan et al., 2018)
2301.12457
EvoX: A Distributed GPU-accelerated Framework for Scalable Evolutionary Computation
Inspired by natural evolutionary processes, Evolutionary Computation (EC) has established itself as a cornerstone of Artificial Intelligence. Recently, with the surge in data-intensive applications and large-scale complex systems, the demand for scalable EC solutions has grown significantly. However, most existing EC infrastructures fall short of catering to the heightened demands of large-scale problem solving. While the advent of some pioneering GPU-accelerated EC libraries is a step forward, they also grapple with some limitations, particularly in terms of flexibility and architectural robustness. In response, we introduce EvoX: a computing framework tailored for automated, distributed, and heterogeneous execution of EC algorithms. At the core of EvoX lies a unique programming model to streamline the development of parallelizable EC algorithms, complemented by a computation model specifically optimized for distributed GPU acceleration. Building upon this foundation, we have crafted an extensive library comprising a wide spectrum of 50+ EC algorithms for both single- and multi-objective optimization. Furthermore, the library offers comprehensive support for a diverse set of benchmark problems, ranging from dozens of numerical test functions to hundreds of reinforcement learning tasks. Through extensive experiments across a range of problem scenarios and hardware configurations, EvoX demonstrates robust system and model performances. EvoX is open-source and accessible at: https://github.com/EMI-Group/EvoX.
Beichen Huang, Ran Cheng, Zhuozhao Li, Yaochu Jin, Kay Chen Tan
2023-01-29T15:00:16Z
http://arxiv.org/abs/2301.12457v10
# EvoX: A Distributed GPU-accelerated Library ###### Abstract During the past decades, evolutionary computation (EC) has demonstrated promising potential in solving various complex optimization problems of relatively small scales. Nowadays, however, ongoing developments in modern science and engineering are bringing increasingly grave challenges to the conventional EC paradigm in terms of scalability. As problem scales increase, on the one hand, the encoding spaces (i.e., dimensions of the decision vectors) are intrinsically larger; on the other hand, EC algorithms often require growing numbers of function evaluations (and probably larger population sizes as well) to work properly. To meet such emerging challenges, not only are delicate algorithm designs required, but, more importantly, a high-performance computing framework is indispensable. Hence, we develop a distributed GPU-accelerated algorithm library -- EvoX. First, we propose a generalized workflow for implementing general EC algorithms. Second, we design a scalable computing framework for running EC algorithms on distributed GPU devices. Third, we provide user-friendly interfaces to both researchers and practitioners for benchmark studies as well as extended real-world applications. To comprehensively assess the performance of EvoX, we conduct a series of experiments, including: (i) scalability test via numerical optimization benchmarks with problem dimensions/population sizes up to millions; (ii) acceleration test via a neuroevolution task with multiple GPU nodes; (iii) extensibility demonstration via the application to reinforcement learning tasks on the OpenAI Gym. The code of EvoX is available at [https://github.com/EMI-Group/EvoX](https://github.com/EMI-Group/EvoX). Scalable evolutionary computation, algorithm library, GPU computing, distributed computing, neuroevolution. ## I Introduction With the development of modern science and engineering, various emerging optimization problems are posing stiff challenges to optimization algorithms. Although evolutionary computation (EC) has been shown to be a promising tool for solving complex optimization problems of relatively small scales, a consensus has emerged that the conventional EC paradigm suffers from the curse of dimensionality [1, 2] - the phenomenon that the search space of a problem grows exponentially with the number of dimensions. Since EC algorithms often rely on random search/sampling to find solutions, the very large number of possible solutions can make it difficult for EC algorithms to explore high-dimensional, complex search spaces effectively. Additionally, the computational complexities of EC algorithms may also grow with the number of dimensions, making them slow and impractical for large-scale optimization problems [3]. To improve the scalability of EC algorithms, researchers have made persistent efforts during the past decade [4, 5, 6]. In the early days, most efforts were mainly dedicated to making improvements from the methodology point of view, by proposing tailored algorithm frameworks/operators. For example, cooperative coevolution (CC) [7, 8, 9] is among the popular frameworks tailored for improving the scalability of EC algorithms. In the CC framework, a problem is divided into a collection of lower-dimensional subproblems.
Each subproblem is optimized individually and its population of candidate solutions is coevolved with the other subproblems in a round-robin fashion; then, representative solutions from each subproblem are combined to form a context vector, which is used to evaluate the overall solution; finally, the context vector is updated iteratively and serves as the context for cooperation between the subproblems. Some other researchers proposed various tailored operators with better scalability, including variants of the differential evolution (DE) [10, 11, 12], the particle swarm optimization (PSO) [13, 14, 15, 16], as well as the estimation of distribution algorithms (EDAs) [17, 18, 19, 20], among many others. Undoubtedly, algorithmic innovations can improve the scalability of EC algorithms. Nowadays, however, when considering the performance of an algorithm, it is also important to take into consideration the essential roles of modern computing architectures/devices [21]. As the most indicative example, the rapid development of modern deep learning algorithms can be attributed, in part, to the advancement of GPUs for training and running deep neural networks more efficiently. Inherently, the population-based nature of EC algorithms also makes it possible to parallelize the computing process. There have been some attempts to improve the scalability of EC algorithms through GPU computing or distributed computing: a parallelized version of DE using GPU computing was proposed, handling continuous problems with up to 1000 dimensions [22]; a parallel version of the compact genetic algorithm for CPU/GPU architectures was proposed and tested on OneMax and noisy OneMax with up to one billion variables [23]; an improved GPU-based model for MA-SW-Chains was proposed and tested on a scaled version of the CEC'2013 large-scale benchmarking suite on functions with up to 100 million decision variables [24]. Despite a few individual works along this line, there has not yet been a systematic research effort in the EC field similar to what has been seen in the deep learning area. The literature has already demonstrated the potential for EC algorithms to improve scalability, whether through algorithmic innovations or hardware acceleration. However, there is still much room for improvement and further research, especially in terms of developing more efficient and effective algorithms by utilizing modern computing architectures and devices to their full potential. Additionally, it would be beneficial to investigate the potential applications of EC in various domains and industries. Nonetheless, there are three main issues that are currently hindering the further development of scalable EC. **Workflow**: In a typical EC workflow, the main components (i.e., crossover/mutation, fitness evaluation, selection) are executed sequentially in a main loop. While this structure is organized and cohesive, it is not compatible with asynchronous computing methods such as GPU computing or distributed computing, making it challenging to implement or modify algorithms for improved flexibility and concurrency. **Computational Cost**: The main source of computational cost in EC algorithms is the fitness evaluations needed for population-based stochastic search. From a statistical perspective, it is often necessary to increase the number of samples (i.e., fitness evaluations) exponentially as the dimension of the search space grows linearly in order to accurately approximate a target distribution.
**Running Environment**: While most EC algorithms were initially developed for solving pure numerical optimization problems and thus have no specific requirements for the running environment, the running environments for large-scale optimization tasks are often specific to the task at hand. For example, neuroevolution tasks are often closely tied to deep learning scenarios, which require running environments with specialized software/hardware support.

To push the boundary of EC towards better scalability and wider applicability by addressing the aforementioned issues, we develop a distributed GPU-accelerated algorithm library - EvoX. In summary, the main contributions are:

**-** We propose a generalized EC workflow. On the one hand, it fully decouples the implementation of algorithms, problems, user-friendly monitors, and possible population decoders or fitness transformers, bringing more flexibility and generality to EC than before. On the other hand, complete modularization makes it possible for both researchers and practitioners to easily parallelize EC algorithms via high-performance computing frameworks.

**-** We develop a scalable computing framework for running EC algorithms on distributed GPUs. Based on the proposed generalized EC workflow, a powerful distributed GPU acceleration library, EvoX, is developed. First, EvoX supports ready-to-use GPU-accelerated computing, such that users can easily run EC algorithms on GPU(s) without any additional engineering work. Second, EvoX provides a ready-to-use distributed computing framework, such that users can easily deploy EC algorithms on the distributed machines at hand.

**-** We provide a user-friendly interface for both numerical benchmark tests and other challenging problems in real-world applications. First, EvoX provides a generalized Problem module to fully support running EC algorithms for challenging data-related tasks (e.g. neuroevolution) via GPU computing. Second, EvoX provides a tailored interface for seamless connections to complex environments (e.g. those in reinforcement learning) on top of a high-performance distributed computing framework.

The remainder of this paper is organized as follows. Section II provides the necessary background information. Section III presents the generalized EC workflow. Section IV and Section V describe the design and contents of EvoX, respectively. Section VI contains the experiments conducted on EvoX. Finally, Section VII concludes the paper and discusses our future work.

## II Background and Related Work

In this section, we first briefly overview some representative EC libraries; then we provide background knowledge of neuroevolution and evolutionary reinforcement learning; finally, we introduce the related techniques of GPU computing and distributed computing.

### _EC libraries in Python_

In the EC field, the Python programming language has emerged as a popular choice for implementing EC algorithms. This is due in part to the availability of powerful and easy-to-use EC libraries, such as DEAP (Distributed Evolutionary Algorithms in Python) [25], PyGAD (Python Genetic Algorithms and Differential Evolution Framework) [26], Pymoo (Python Multi-Objective Optimization) [27], and Pagmo (Parallel Global Multiobjective Optimizer) [28]. In this subsection, we review the features and capabilities of these libraries.

DEAP is a long-standing and feature-rich framework for implementing evolutionary algorithms in Python.
It offers support for a wide range of EC algorithms, including both single and multiple objective algorithms. DEAP also includes a broad range of built-in benchmark problems, making it easy to evaluate the performance of EC algorithms. DEAP is well-suited for rapid prototyping and testing of ideas and is a popular choice among researchers in the field of EC. Another prominent feature of DEAP is its support for parallelization. DEAP allows evaluations to be easily parallelized, making it possible to run them on multiple cores or even across multiple machines through scoop [29]. PyGAD is a library for implementing genetic algorithms in Python. It offers different types of crossover, mutation, and parent selection operators for implementing genetic algorithms. What makes PyGAD unique is its focus on machine learning tasks. PyGAD includes features and tools specifically designed for training artificial neural networks, making it a good choice for applying evolutionary computation (EC) in machine learning. Pymoo is a library that focuses on multi-objective optimization algorithms in Python. Its main strength lies in its comprehensive support for multi-objective optimization. Pymoo includes a wide range of benchmark problems and state-of-the-art multi-objective algorithms. Pymoo also includes many operators suitable for multi-objective algorithms, allowing users to easily customize the algorithms. Furthermore, Pymoo has a set of powerful and flexible features related to multi-objective optimization, such as visualization and decision-making. All these features building towards multi-objective optimization make Pymoo a suitable tool for the task. Pagmo is a C++ library for massively parallel optimization, with a Python binding called Pygmo. It is built around the generalized island model [30], which allows coarse-grained parallelization. It offers a variety of algorithms, benchmark problems, and migration policies, making it easy for users to implement parallelized algorithms. Additionally, it supports batch fitness evaluation, enabling users to perform parallel fitness evaluations using their own methods. Despite the attractive features introduced above, these existing libraries have a common deficiency: the lack of scalability. Potentially, both GPU computing and distributed computing are powerful tools for improving the scalability of EC, particularly in scenarios involving large amounts of data or complex calculations. However, none of these existing libraries supports GPU computing, while the support for distributed computing is either missing or inefficient. As a result, the lack of support for GPU computing or distributed computing in these libraries has largely limited their performance and applicability for certain types of optimization problems. Besides, the extensibility of these libraries towards more complex problems is also very limited, making it difficult for EC practitioners to get involved. In detail, the key features of these libraries in comparison with EvoX are summarized in Table I. ### _Neuroevolution_ Neuroevolution is a field focusing on using evolutionary computation (EC) algorithms to optimize artificial neural networks (ANNs). It has a long history of development and is facing some emerging challenges and opportunities [31]. In the field of neuroevolution, the use of EC algorithms for optimizing ANNs has gained significant attention due to its potential advantages over traditional gradient-based methods. 
Since EC algorithms can explore a much larger search space than gradient-based methods, they have better potential for discovering more diverse and novel solutions. Broadly speaking, neuroevolution is able to evolve various aspects of neural networks, such as the building blocks, architectures, weights, and even the training rules. Although neuroevolution was initially considered an alternative to backpropagation for optimizing the weights of small and fixed-topology ANNs, some attempts quickly turned to evolving network architectures as well [32, 33, 34]. With the booming development of deep learning, researchers are now paying increasing attention to the automatic design of deep neural networks (DNNs) via neuroevolution - the evolutionary neural architecture search (ENAS) [35] - which is particularly useful when facing complex scenarios involving multiple objectives to be optimized simultaneously (e.g. hardware-aware deployment of DNNs [36]). Despite the biologically plausible and technically attractive characteristics of neuroevolution, one of its main limitations is the computational complexity, which is particularly challenging when dealing with DNNs [37]. Intuitively, one way to improve the scalability of neuroevolution algorithms is to make full use of the computing power of GPUs and distributed computing systems, such that different candidate solutions (i.e. networks) in the population can be evaluated (i.e. trained) simultaneously. This would allow for more efficient and faster training of ANNs.

### _Evolutionary Reinforcement Learning_

Reinforcement learning (RL) is a powerful and widely-studied framework for learning and decision-making in complex, dynamic environments. At its core, RL is concerned with how an agent should act to maximize a reward signal over time. This requires the agent to learn a policy, or a mapping from states to actions, that allows it to take actions that lead to the most reward from the environment. The environment is typically modeled as a Markov decision process (MDP), consisting of:

* a set of states \(\mathcal{S}\) with an initial state distribution \(P(s_{0})\),
* a set of possible actions \(\mathcal{A}\),
* a reward function \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\),
* a transition function \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\to P(\mathcal{S})\),
* a discount factor \(\gamma\in[0,1]\).

At each time step \(t\), the agent chooses an action \(a_{t}\in\mathcal{A}\) based on the current state \(s_{t}\) and a policy \(\pi:\mathcal{S}\to P(\mathcal{A})\). The objective is to find the optimal policy \(\pi^{*}\) that maximizes the expected reward:

\[\pi^{*}=\operatorname*{argmax}_{\pi}\mathbb{E}_{P(s_{0}),\pi,\mathcal{T}}\left[\sum_{t=0}^{T-1}\gamma^{t}r_{t}\right], \tag{1}\]

where \(T\) is the length of the episode, \(a_{t}\sim\pi(s_{t})\), \(s_{t+1}\sim\mathcal{T}(\cdot|s_{t},a_{t})\), and \(r_{t}=\mathcal{R}(s_{t},a_{t})\). Evolutionary Reinforcement Learning (EvoRL) [38] specifically focuses on using EC algorithms to deal with various challenging optimization problems in RL, such as hyperparameter optimization [39], policy search [40], reward shaping [41], and exploration [42], among many others [43]. One of the key advantages of EvoRL is that it can explore a much larger search space than gradient-based RL methods such as Q-learning. This allows for the discovery of a wider range of potential policies, which can lead to better performance in terms of reward and other metrics.
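For illustration, the expected return in (1) can be estimated by Monte Carlo rollouts. The sketch below accumulates the discounted return of a single episode for a given policy in a Gym-style environment (the classic Gym step signature is assumed; env and policy here are placeholders, not part of EvoX):

```python
def episode_return(env, policy, gamma=0.99, max_steps=1000):
    """Accumulate the discounted return sum_t gamma^t * r_t from Eq. (1)."""
    s = env.reset()                   # s_0 ~ P(s_0)
    ret, discount = 0.0, 1.0
    for _ in range(max_steps):
        a = policy(s)                 # a_t ~ pi(s_t)
        s, r, done, _ = env.step(a)   # s_{t+1} ~ T(.|s_t, a_t), r_t = R(s_t, a_t)
        ret += discount * r
        discount *= gamma
        if done:
            break
    return ret
```

Averaging this quantity over many episodes approximates the expectation in (1) for a fixed policy.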
\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
\multirow{2}{*}{Name} & \multicolumn{2}{c}{System} & \multicolumn{2}{c}{Algorithms} & \multicolumn{3}{c}{Problems} \\
\cline{2-8}
 & GPU & Distributed & Single-objective & Multi-objective & Numerical & Neuroevolution & Reinforcement Learning \\
 & Computing & Computing & Algorithms & Algorithms & Benchmarks & Tasks & Tasks \\
\hline
DEAP & & & ✓ & ✓ & ✓ & ✓ & \\
PyGAD & & & ✓ & ✓ & ✓ & ✓ & \\
pymoo & & & ✓ & ✓ & ✓ & & \\
pagmo & & & ✓ & ✓ & ✓ & & \\
EvoX & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\
\hline \hline
\end{tabular}
\end{table} TABLE I: Summarized main features of EC libraries

In recent years, as tasks and environments have become increasingly complex, scalability has become a major bottleneck for EvoRL. On the one hand, due to its population-based nature, EvoRL can be computationally intensive, making it difficult to apply in real-time or real-world situations. On the other hand, the large number of parameters in the policy models can cause a severe curse of dimensionality for the EC algorithm when applied to RL tasks. To address these challenges, researchers have proposed a number of methods to improve the scalability of EvoRL. These methods include using parallel computing to distribute the computational workload across multiple machines or processors [44, 45], as well as using more efficient algorithms and strategies for evolving the population [46, 47]. However, the development of EvoRL is still in its infancy, and there is a lack of a high-performance EC library with a user-friendly interface for solving RL tasks.

### _JAX for GPU Computing_

GPU computing is a technology that utilizes GPUs to conduct general-purpose computing instead of CPUs. A GPU usually consists of thousands or even tens of thousands of cores, orders of magnitude more than a typical CPU. To match the computational capabilities of GPUs, their memory often offers far higher bandwidth than CPU memory. Moreover, to provide fast synchronization between different cores, GPUs usually have much larger and faster shared caches compared to CPUs. These properties make it possible to process data on GPUs with far higher parallelism than on CPUs. In the past decade, GPU computing has been one of the driving forces of deep learning. Libraries like PyTorch [48] have introduced the ability to utilize GPUs. Nonetheless, few works have been dedicated to accelerating EC algorithms via GPU computing. Since EC algorithms usually involve a chain of computationally cheap operations, they are often constrained by memory bandwidth more than by computing power. Hence, decreasing the number of memory accesses can drastically improve their performance when parallelized on GPUs. Recently, JAX has been released as a library offering a NumPy-like API for GPU-accelerated numerical computation [49]. With its just-in-time compilation features, JAX can run on multiple hardware backends, including both CPUs and GPUs, by optimizing Python functions. The compiler automatically fuses small operators, thus substantially saving memory bandwidth. Such features of JAX are particularly beneficial for the parallelization of EC algorithms. It is noteworthy that several pioneering initiatives have already been undertaken to enhance the performance of specific EC algorithms through the utilization of JAX.
Two representative examples are the EvoJAX library [50] and the evosax library [51], which have been developed to accelerate evolution strategies and are predominantly applied to neuroevolution tasks. ### _Ray for Distributed Computing_ Distributed computing allows for the collective power of multiple computers to be harnessed in order to solve complex problems that would be difficult or impossible to solve using a single computer. Since computers coordinate with each other by passing messages through the network, the communication cost in a distributed system usually has a huge impact on the overall scalability. In a distributed system, each computer (or node) has its own local memory that cannot be accessed by other computers. To solve a problem collaboratively, computers need to communicate with each other by message passing through the network. Ray is a popular framework for distributed computing in Python and has been shown to be well-suited for applications in machine learning and other scientific computing tasks [52]. As a user-friendly framework, Ray provides both actor-parallel and task-parallel programming abstraction, and the communication between actors and tasks will be handled automatically. One of the key features of Ray is its distributed scheduler, which is composed of both a global scheduler and per-node local schedulers. This design allows Ray to efficiently schedule tasks to run on the appropriate node, both locally and across the distributed system, providing improved scalability. With its scheduler, users can specify resource requirements for actors and tasks, and Ray will automatically place them on nodes with adequate resources. Such features will allow us to easily scale the EC algorithms across multiple machines. ### _Remarks_ EC has been shown to have promising potential in tackling complex tasks such as those found in neuroevolution and EvoRL. However, the lack of support for computing acceleration in existing EC libraries presents a challenge to further development in more advanced EC algorithms. To address this limitation, we have initiated the development of EvoX, a new library that improves the scalability of EC algorithms by leveraging the strengths of recently-developed high-performance computing tools: JAX and Ray. On the one hand, JAX is well-suited for GPU computing of EC algorithms due to its use of just-in-time compilation and support of CPU/GPU backends, which fuses operations and minimizes memory accesses during acceleration. On the other hand, Ray is a distributed framework that allows for the scheduling of computations across multiple machines and CPU/GPU resources. By combining the capabilities of both, EvoX is able to improve the scalability of EC and extend the applications towards larger and more complex problems. ## III Generalized EC Workflow Generally, distributed GPU acceleration of EC workflow may face two main issues: on the one hand, each component in an EC workflow may have its own way of parallelization; on the other hand, different components in an EC workflow must be synchronized in the distributed system. To address such issues, we propose a generalized EC workflow on top of an _ask-and-tell_ interface, which considers an EC workflow as an agent (i.e. an EC algorithm) which iteratively transitions through the states by performing ask and tell actions for problem-solving. 
Specifically, let \(\theta\) and \(\mathcal{D}\) be the hyperparameters defining the algorithm and the problem respectively. The algorithm can then be characterized by \(\mathcal{A}_{\theta}=\langle\theta,g^{\text{ask}},g^{\text{tell}}\rangle\) and the problem by \(\mathcal{P}_{\mathcal{D}}=\langle\mathcal{D},f\rangle\), where \(g^{\text{ask}}\) and \(g^{\text{tell}}\) are the ask and tell actions for generating a new population and updating the algorithm state. A single iteration at generation \(t\) is:

\[\mathbf{X}_{t},\,S_{t+1}^{\mathcal{A}_{\theta}} =g_{\theta}^{\text{ask}}(S_{t}^{\mathcal{A}_{\theta}}), \tag{2}\]
\[\mathbf{y}_{t},\,S_{t+1}^{\mathcal{P}_{\mathcal{D}}} =f_{\mathcal{D}}(S_{t}^{\mathcal{P}_{\mathcal{D}}},h(\mathbf{X}_{t})), \tag{3}\]
\[S_{t+1}^{\mathcal{A}_{\theta}} =g_{\theta}^{\text{tell}}(S_{t+1}^{\mathcal{A}_{\theta}},\mathbf{y}_{t}), \tag{4}\]

where \(\mathbf{X}_{t}\) and \(\mathbf{y}_{t}\) denote the population of candidate solutions and the corresponding fitness values at generation \(t\) respectively; \(S^{\mathcal{A}_{\theta}}\) and \(S^{\mathcal{P}_{\mathcal{D}}}\) are the states of the algorithm and the problem respectively; and \(h\) is the optional decoder function. A summary of notations is given in Table II.

Based on the formulation above, EvoX fully decouples algorithms and problems and leaves more flexibility to the workflow. On the one hand, neither \(g^{\text{ask}}\) nor \(g^{\text{tell}}\) calls \(f\) internally. On the other hand, \(f\) is ignorant of the implementation of the algorithm functions \(g^{\text{ask}}\) and \(g^{\text{tell}}\), such that the user can easily switch the problem to a validation/test phase via simple settings. The generalized workflow also allows each component to work with its own way of parallelization. Moreover, since \(S^{\mathcal{A}_{\theta}}\) and \(S^{\mathcal{P}_{\mathcal{D}}}\) capture the randomness by explicitly storing the pseudo-random number generator key inside, the states can be easily synchronized when running EC algorithms in a distributed system. Most importantly, the proposed generalized EC workflow strictly follows the paradigm of _functional programming_, which is intrinsically compatible with JAX-based implementations.

## IV Engineering Designs

On the basis of the generalized EC workflow formulated above, this section introduces the detailed engineering designs of EvoX.

### _Main Pipeline_

In GPU computing, the _tensor_ is the essential data structure for GPU acceleration. Hence, in EvoX, we view the pipeline of running an EC algorithm for problem-solving as an iterative procedure of processing the tensor of a population. As shown in Fig. 1, the Pipeline generally passes the tensor of the population \(\mathbf{X}_{t}\) (as well as the corresponding fitness values \(\mathbf{y}_{t}\)) through the Algorithm module and the Problem module, with the support of the optional Monitor and Decoder modules. The Decoder module is designed to transform the population \(\mathbf{X}_{t}\) encoded by an EC algorithm into decision vectors in the original problem space. For example, when training an ANN, the weights are typically represented by a set of tensors, one for each layer. However, EC algorithms often output a single, tightly packed tensor as the population. In this case, the Decoder can be used to decode the tightly packed tensor \(\mathbf{X}_{t}\) into a set of tensors representing the weights of an ANN, as shown in Figure 2.
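As a concrete illustration of this decoding step, the sketch below unpacks one flat parameter vector (a single row of \(\mathbf{X}_{t}\)) into per-layer weight and bias tensors of an MLP. The function name and layer sizes are our own illustrative choices, not EvoX's built-in API:

```python
import jax.numpy as jnp

def decode_mlp(flat_params, layer_sizes=(8, 16, 4)):
    """Decode one tightly packed parameter vector into a list of
    (weight, bias) tensors, one pair per layer."""
    params, offset = [], 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = flat_params[offset:offset + n_in * n_out].reshape(n_in, n_out)
        offset += n_in * n_out
        b = flat_params[offset:offset + n_out]
        offset += n_out
        params.append((w, b))
    return params

# Usage: for layer_sizes (8, 16, 4) the flat vector has 8*16+16 + 16*4+4 = 212
# entries; decode_mlp(jnp.arange(212.0)) yields [(8x16 w, 16 b), (16x4 w, 4 b)].
```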
It is important to note that the Decoder is an optional module, as it is not necessary for plain numerical optimization where \(\mathbf{X}_{t}\) is already defined in the original problem space. The Monitor is designed for observing intermediate results in each iteration via visualization tools. For example, users may observe the population \(\mathbf{X}_{t}\) to analyze how the algorithm behaves in a certain fitness landscape. Besides, users may also observe the fitness \(\mathbf{y}_{t}\) to check how the optimization process is progressing. Additionally, another functionality of the Monitor is to check whether the termination criterion (e.g. maximum number of iterations/evaluations) of the optimization process has been reached. In the following subsections, we elaborate on the other two modules, Algorithm and Problem, in more detail.

\begin{table}
\begin{tabular}{l l}
\hline
Notation & Description \\
\hline
\(t\) & The generation counter \\
\(\mathcal{A}_{\theta}\) & The algorithm parameterized by \(\theta\) \\
\(\mathcal{P}_{\mathcal{D}}\) & The problem parameterized by \(\mathcal{D}\) \\
\(S_{t}^{\mathcal{A}_{\theta}}\) & The state of \(\mathcal{A}_{\theta}\) at generation \(t\) \\
\(S_{t}^{\mathcal{P}_{\mathcal{D}}}\) & The state of \(\mathcal{P}_{\mathcal{D}}\) at generation \(t\) \\
\(f\) & The fitness function \\
\(h\) & The decoder \\
\(g^{\text{ask}}\) & The ask method of the algorithm, used to give out candidate solutions \\
\(g^{\text{tell}}\) & The tell method of the algorithm, used to update the state based on the fitness \\
\(\mathbf{X}_{t}\) & The candidate solutions at generation \(t\) \\
\(\mathbf{y}_{t}\) & The fitness values at generation \(t\) \\
\hline
\end{tabular}
\end{table} TABLE II: Summary of notations

Fig. 2: An illustration of how the decoder can be used in neuroevolution tasks. Here the decoder decodes \(\mathbf{X}_{t}\) into \(\mathbf{X}_{t}^{\prime}\), which represents a set of weights for neural networks.

### _Algorithm Module_

As described in Section III, Algorithm in EvoX is basically a class initialized with a set of hyperparameters \(\theta\), consisting of two methods - ask and tell - and maintaining a global state \(S^{\mathcal{A}_{\theta}}\), which is an independent object in EvoX. Additionally, Algorithm also has a setup method for generating the initial state \(S_{0}^{\mathcal{A}_{\theta}}\). As an illustrative example, we present the implementation of a vanilla evolution strategy in EvoX, as listed in Lst. 1. To start with, in lines 7 to 10, the __init__ method initializes three hyperparameters in the constructor of this class: dim is the problem dimension, pop_size is the population size, and topk indicates that the top \(k\) individuals of the population are considered as elites to be selected. In lines 12 to 19, the setup method initializes the internal state \(S_{0}^{\mathcal{A}_{\theta}}\). In this vanilla evolution strategy, we keep an independent normal distribution for each decision variable, such that the mean and standard deviation vectors are initialized independently. In addition, we record key in the state as the seed for the pseudo-random number generator. In lines 21 to 32, the ask method is defined in correspondence to \(g_{\theta}^{\text{ask}}\). In this method, the algorithm splits the key and samples a new candidate population according to the mean and standard deviation. Then a new state is generated by updating the key and adding the candidate population to it.
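As a companion to Lst. 1, the following minimal sketch captures the same structure (the class name is ours, line numbers will not match the cited ranges, and the exact EvoX base-class API may differ slightly):

```python
import jax
import jax.numpy as jnp
from evox import Algorithm, State


class VanillaES(Algorithm):
    def __init__(self, dim, pop_size, topk):
        super().__init__()
        self.dim = dim            # problem dimension
        self.pop_size = pop_size  # population size
        self.topk = topk          # number of elite individuals

    def setup(self, key):
        # Initial state S_0: one independent normal distribution per decision
        # variable, plus the PRNG key recorded explicitly in the state.
        return State(
            key=key,
            mean=jnp.zeros((self.dim,)),
            stdev=jnp.full((self.dim,), 1.0),
            population=jnp.empty((self.pop_size, self.dim)),
        )

    def ask(self, state):
        # Split the key and sample a new candidate population from the
        # current per-variable normal distributions.
        key, subkey = jax.random.split(state.key)
        noise = jax.random.normal(subkey, (self.pop_size, self.dim))
        population = state.mean + state.stdev * noise
        # New state: updated key plus the candidate population.
        return population, state.update(key=key, population=population)

    def tell(self, state, fitness):
        # Pick the top-k individuals by fitness ranking (minimization),
        # then adapt mean and stdev to those of the elite population.
        elite = state.population[jnp.argsort(fitness)[: self.topk]]
        return state.update(
            mean=jnp.mean(elite, axis=0),
            stdev=jnp.std(elite, axis=0),
        )
```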
In lines 34 to 45, the tell method of the algorithm is defined in correspondence to \(g_{\theta}^{\text{tell}}\). In this method, the algorithm picks the top \(k\) individuals in the candidate population according to their fitness ranking. Then, the mean and the standard deviation are adapted to the mean and the standard deviation of the elite population. Finally, in line 46, the newly generated candidate population is returned via the new_state.

### _Problem Module_

In contrast to the past, when EC algorithms were mostly tested on pre-configured numerical benchmark problems, optimization problems of today are becoming increasingly complex - usually involving data-related configurations. Therefore, in our design, Problem is parameterized by a dataset \(\mathcal{D}\) with an internal state \(S_{t}^{\mathcal{P}_{\mathcal{D}}}\). As illustrated in Fig. 3, in the case of ANN training, the dataset \(\mathcal{D}=\{\mathcal{D}_{trn},\mathcal{D}_{vld},\mathcal{D}_{tst},\ldots\}\) can consist of training data, validation data, and test data; correspondingly, the problem state \(S_{t}^{\mathcal{P}_{\mathcal{D}}}\) will record the choice of dataset together with other essential parameters such as the batch index. With such a design principle, Problem is easily extensible to support a wide spectrum of problems, ranging from numerical optimization (leaving \(\mathcal{D}\) and \(S_{t}^{\mathcal{P}_{\mathcal{D}}}\) empty) to other data-related tasks. In the following, we elaborate on three typical scenarios for defining problems using the tailored Problem module in EvoX.

Fig. 3: An illustrative example of how EvoX handles data-related fitness evaluations in a machine learning task via the Problem module. In this example, the problem \(\mathcal{P}_{\mathcal{D}}\) involves three datasets: training set \(\mathcal{D}_{trn}\), validation set \(\mathcal{D}_{vld}\), and test set \(\mathcal{D}_{tst}\). \(\mathbf{X}_{t}\) and \(S_{t}^{\mathcal{P}_{\mathcal{D}}}\) denote the population and problem state at generation \(t\), where \(S_{t}^{\mathcal{P}_{\mathcal{D}}}\) indicates which data samples (i.e. batch) to take at each iteration. Finally, the loss of the corresponding machine learning task is calculated and returned as the fitness values \(\mathbf{y}_{t}\).

#### Numerical Optimization

Since plain numerical optimization problems are often well-formulated functions with basic mathematical operations, the output \(\mathbf{y}_{t}\) is only determined by the input \(\mathbf{X}_{t}\). Thus, in EvoX, such problems can be implemented by simply leaving \(\mathcal{D}\) and \(S_{t}^{\mathcal{P}_{\mathcal{D}}}\) empty.

#### Neuroevolution

As introduced in Section II-B, a neuroevolution task usually involves the training of an ANN to fit a certain dataset. Since a forward pass of an ANN is expensive, in EvoX, we adopt the common workflow used in mini-batch gradient descent. First, the whole training dataset \(\mathcal{D}_{trn}\) is split into small batches of data \(\mathcal{B}_{1},\mathcal{B}_{2},\ldots\). Then, at each iteration, only one batch of data is used to calculate the fitness of each individual as \(\mathbf{y}_{t}^{(i)}=\frac{1}{n}\sum_{x\in\mathcal{B}_{k}}\mathcal{L}(x,\mathbf{X}_{t}^{\prime(i)})\), where \(\mathcal{L}(\cdot)\) is the loss function, \(i\) denotes the \(i^{\text{th}}\) individual (the \(i^{\text{th}}\) row in the column vector \(\mathbf{y}_{t}\)), \(k\) denotes the batch index as stored in \(S_{t}^{\mathcal{P}_{\mathcal{D}}}\), and \(n\) denotes the batch size.
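A minimal sketch of such a data-driven problem is given below: it evaluates each decoded network on the current mini-batch via jax.vmap and advances the batch index in the problem state. The evaluate signature follows the ask-tell formulation of Section III, though the class name and the exact EvoX base-class API are our own assumptions:

```python
import jax
from evox import Problem, State


class MiniBatchLoss(Problem):
    """Evaluate each decoded network on the current mini-batch B_k."""

    def __init__(self, batches, loss_fn):
        super().__init__()
        self.batches = batches   # pre-split mini-batches B_1, B_2, ...
        self.loss_fn = loss_fn   # L(batch, params) -> mean loss over the batch

    def setup(self, key):
        # The problem state only needs to track the batch index k.
        return State(batch_idx=0)

    def evaluate(self, state, pop_params):
        batch = self.batches[state.batch_idx % len(self.batches)]
        # One fitness value per individual: y_t^(i) = mean loss on batch k.
        fitness = jax.vmap(lambda p: self.loss_fn(batch, p))(pop_params)
        return fitness, state.update(batch_idx=state.batch_idx + 1)
```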
In practice, since large batch sizes may be intractable to calculate in a single forward pass, we also introduce a parameter called num_passes, which allows the loss of a batch to be calculated over multiple passes.

#### Reinforcement Learning

As introduced in Section II-C, a typical RL task aims to train an agent to maximize the total reward in a certain environment. To obtain the reward, it usually takes multiple steps for an agent to complete an episode by interacting with the environment. EvoX helps bridge the gap between EC and RL tasks via the tailored Problem module. Specifically, EvoX has the candidate population \(\mathbf{X}_{t}\) encode a set of parameters for the policy model; for each candidate solution, EvoX runs a complete episode using this policy and returns the sum of all rewards as the fitness value. It is worth noting that, since there can be multiple rewards in some RL environments, one may treat these multiple rewards just as in the case of multi-objective optimization. In practice, it can be computationally expensive to complete an episode to obtain the final reward, while some popular RL environments (e.g. OpenAI Gym [53]) merely provide a single-threaded interface. This can be a painful bottleneck for population-based EC algorithms, which require batches of fitness evaluations in each generation, substantially hindering the development of EvoRL. Hence, to speed up the fitness evaluation process for RL tasks, we design a Ray-based interface for running RL environments with parallel computing. As illustrated in Fig. 4, with our interface, the function evaluation process for RL tasks can be deployed in a parallel computing environment with multiple workers, where each worker is created on a CPU/GPU core for running the RL environment(s) in parallel.

### _Distributed Acceleration_

In a typical EC workflow, the main computational cost usually comes from the large number of fitness evaluations of the candidate solutions. Since the fitness evaluation of each candidate solution is intrinsically isolated, theoretically, an EC algorithm can be deployed on a distributed computing system to improve the concurrency of fitness evaluations. However, when scaling beyond the boundary of physical machines, the communication cost between different machines becomes another obstacle. Intuitively, one may attempt to achieve distributed fitness evaluations by running the algorithm on one node and sending the candidate solutions to the other nodes for evaluation. However, when it comes to complex tasks such as neuroevolution, each candidate solution may consist of millions (or even more) of decision variables, and thus the communication overhead of sending the candidate solutions to each node becomes prohibitively large. Hence, as the problem dimension increases, such an intuitive method suffers from sharply increasing communication costs between the nodes, leading to poor scalability. In order to accelerate the EC workflow by distributed computing in a scalable manner, we design a distributed pipeline on top of Ray, partially inspired by the work in [47]. As illustrated in Fig. 5, given \(n\) nodes (i.e. machines) in the distributed computing system, the distributed pipeline creates a copy of the standard pipeline (refer to Section IV-A) on each node; hence the pipeline on each node only takes care of \(\frac{1}{n}\) of the candidate solutions for fitness evaluations, running in an isolated and
concurrent manner as if the other nodes did not exist; finally, the fitness values \(\mathbf{y}_{t}^{\prime}\) obtained by each node are exchanged to generate the entire fitness vector \(\mathbf{y}_{t}\) to be passed to the next generation.

Fig. 4: An illustrative example of how EvoX handles an RL task as an optimization problem via the Problem module. \(\mathbf{X}_{t}^{\prime}\) and \(S_{t}^{\mathcal{P}_{\mathcal{D}}}\) denote the population of policy models and the problem state respectively. These policy models \(\mathbf{X}_{t}^{\prime}\) were decoded from the tensor of population \(\mathbf{X}_{t}\) by a decoder (as in Fig. 2). The controller will evenly distribute these policy models to a set of workers running in parallel. Each worker will run a complete episode in the RL environment (refer to Section II-C) to obtain the fitness \(y\) for each policy model in the population. Finally, the rewards collected from these workers are aggregated by the controller to obtain the final fitness values \(\mathbf{y}_{t}\).

Fig. 5: The task-parallel workflow using the distributed pipeline with \(n\) nodes. Each node runs a pipeline but only evaluates \(\frac{1}{n}\) of the population. These incomplete fitness values are denoted by \(\mathbf{y}_{t}^{\prime}\). Then all nodes exchange the incomplete fitness values with the other nodes to form the complete fitness \(\mathbf{y}_{t}\).

Another important issue to be considered is _synchronization_ - having the algorithm running on each pipeline copy perform synchronous behaviors, such that the fitness information is always exchangeable. To this end, we have the pipeline copies on each node initialized with the same random seed, such that the subsequent behaviors of the algorithm (including the randomness, which is generated from the same seed) will be naturally synchronous at each iteration \(t\). This synchronization method also guarantees that the same experiment will always end up with the same result regardless of the number of nodes in use, thus further strengthening the reproducibility of experiments conducted with EvoX.

## V Library Content

As summarized in Fig. 6, the content of EvoX mainly consists of three main components: the Algorithms component, the Operators component, and the Problems component. In the following, we introduce each component in detail. The Algorithms component includes all EC algorithms available in EvoX, consisting of two subcomponents: Single-objective and Multi-objective. The Single-objective subcomponent includes algorithms for single-objective optimization (PSO [54, 55], DE [56], CSO [13], etc.), while the Multi-objective subcomponent includes EC algorithms for multi-objective optimization (NSGA-II [57], MOEA/D [58], RVEA [59], etc.). The Problems component includes all the pre-defined problems in EvoX, consisting of two subcomponents: Numerical and Extended. The Numerical subcomponent includes all possible numerical benchmark problems, including not only basic ones for single-objective optimization (Sphere, Ackley [60], etc.), but also composite ones for multi-objective optimization (ZDT [61], DTLZ [62], etc.). The Extended subcomponent includes potential extended application problems, currently mainly related to neuroevolution and RL tasks. The Operators component includes commonly used operators in EC algorithms, consisting of two subcomponents: Selection and Reproduction.
The Selection subcomponent includes all possible selection operators that can be adopted for selecting candidate solutions (Tournament Selection, Random Selection, etc.). The Reproduction subcomponent includes all possible operators for reproducing new candidate solutions (Bit-Flip Mutation, SBX Crossover, etc.). Apart from the above components, EvoX also allows users to implement their own components. Thanks to the well-decoupled modular engineering design, users may replace any of the existing components with their own tailored one(s) while reusing all the other contents provided by EvoX.

## VI Experimental Study

To demonstrate the performance of EvoX, we conduct a series of experiments in this section. First, Section VI-A introduces the general experimental settings. Then, Section VI-B demonstrates the scalability performance brought about by GPU computing. Moreover, Section VI-C demonstrates the acceleration performance brought about by distributed computing. Finally, Section VI-D demonstrates the extensibility of EvoX towards complex RL tasks.

### _Experiment Settings_

The experiment in Section VI-B is carried out on a physical machine equipped with an Intel Core i9-10900X CPU @ 3.70GHz and a single NVIDIA RTX 3090 GPU. All the other experiments are conducted on a cluster with 8 nodes, where each node has 16 cores and 32 threads from an Intel Xeon Gold 6226R CPU @ 2.90GHz and a single NVIDIA RTX 3090 GPU. Specifically, the experiment in Section VI-C uses up to 6 nodes, while the experiment in Section VI-D only uses a single node. All experiments were repeated 11 times using different random seeds.

Fig. 6: Main library content of EvoX. The Algorithms component contains two subcomponents: Single-objective and Multi-objective. The Operators component contains two subcomponents: Selection and Reproduction. The Problems component currently contains two subcomponents: Numerical Benchmarks, and Extended Applications, which include tasks like neuroevolution and RL.

### _Scalability Test via GPU Computing_

To demonstrate the scalability of EvoX, we conduct benchmark experiments by running both single-objective and multi-objective algorithms on a single GPU device, in comparison with conventional CPU computing. The benchmark program is run with the GPU enabled and disabled, respectively. In the single-objective benchmarks, we run the classic PSO on the Ackley problem; in the multi-objective benchmarks, we run NSGA-II on ZDT1. In both cases, we first fix the population size to 128 and scale up the number of dimensions of the problem, and then fix the number of dimensions to 128 and scale up the population size. As shown in Fig. 7(a), when scaling up the number of problem dimensions with PSO, CPU computing and GPU computing have similar performance when the number of problem dimensions is very small, but as the number of problem dimensions becomes larger, the performance of GPU computing quickly surpasses CPU computing. When the number of problem dimensions is larger than 100,000, GPU computing is almost 100x more efficient in terms of time per iteration. As shown in Fig. 7(b), scaling up the population size in PSO shows similar observations. As shown in Fig. 7(c), when scaling up the number of problem dimensions with NSGA-II, we can also observe that GPU computing consistently outperforms CPU computing. As shown in Fig.
7(d), when it comes to scaling up the population size with NSGA-II, GPU computing also scales much better than CPU computing. In comparison with PSO, however, the overall computational cost of NSGA-II increases more sharply with the population size, mainly due to the algorithm's higher computational complexity.

### _Acceleration Test via Distributed Computing_

To assess the acceleration performance of EvoX, we conduct an experiment on the task of neuroevolution for image classification on multiple GPU devices. Specifically, we train a convolutional neural network on the CIFAR-10 dataset [63] using 1 to 6 GPU nodes and measure the total runtime. CIFAR-10 is an image classification dataset containing a total of 60,000 \(32\times 32\) color images. The architecture of the convolutional neural network is given in Table III, and ReLU [64] is used as the activation function between layers. For the algorithm module, we adopt PGPE [65] with ClipUp [66], and the population size is set to 300. On the problem module, we set the batch size to 3000. When conducting the experiment with \(n\) nodes, since each node only evaluates \(\frac{1}{n}\) of the candidate population, we also tune the num_passes parameter to \(\frac{30}{n}\) in order to fully utilize each GPU device. The experiment runs 1000 training iterations in total, with a validation phase every 100 iterations.

\begin{table}
\begin{tabular}{c l l l}
\hline \hline
Input Shape & Layer & Filter Shape & Strides \\
\hline
\(32\times 32\times 3\) & Conv & \(3\times 3\times 3\times 32\) & 1 \\
\(30\times 30\times 32\) & Max Pooling & \(2\times 2\) & 2 \\
\(15\times 15\times 32\) & Conv & \(3\times 3\times 32\times 32\) & 1 \\
\(13\times 13\times 32\) & Max Pooling & \(2\times 2\) & 2 \\
\(6\times 6\times 32\) & Conv & \(3\times 3\times 32\times 32\) & 1 \\
\(512\) & Fully Connected & \(512\times 64\) & — \\
\(64\) & Fully Connected & \(64\times 10\) & — \\
\hline \hline
\end{tabular}
\end{table} TABLE III: Architecture of the ANN adopted in the neuroevolution experiment.

Fig. 8(a) and Fig. 8(b) present the runtime and the performance with different numbers of GPU nodes respectively. The runtime includes both the training phase and the validation phase. It can be observed that the runtime decreases rapidly as the number of GPU nodes grows, achieving a near-linear acceleration rate. This observation is consistent with our expectation, as the distributed pipeline only requires the exchange of cheap fitness values among the workers and is thus communication-efficient.

Fig. 8: Results of the acceleration test. The number of GPU nodes ranges from 1 to 6. The same data is presented in two ways: in (a) the y-axis is the total runtime, and in (b) the y-axis is the performance, measured by the inverse of the runtime.

Fig. 7: Results of the scalability test. The dark lines represent the mean values over all runs, and the shaded regions are bounded by the standard deviations. Both the x-axis and the y-axis are in logarithmic scale.

### _Extensibility to RL Tasks_

To assess the usability of EvoX when extended to solving complex application problems, we conduct an experiment on two representative RL tasks, as shown in Fig. 9. We will demonstrate that EvoX can handle both tasks fluently and efficiently despite their challenging features.

* Bipedal Walker: In this RL task, the agent controls a 4-joint walker robot. The goal is to move forward without falling while applying as little torque as possible.
At any given moment, the agent observes 24 real values given by the sensors and controls the torque applied to the four motors of the robot, ranging from \(-1\) to \(1\). Specifically, the agent adopts a policy model of an MLP consisting of two hidden layers, each containing 64 neurons. To learn the policy model, there are a total of 6020 parameters to be optimized. Here, we adopt the CMA-ES algorithm as the optimizer, with detailed settings given in Table IV.

* ATARI Pong: In this RL task, the agent controls a paddle on the right side of the screen and tries to use it to hit a ball away from its own goal and into the opponent's goal. For decision making, the agent observes the raw frame of the game, which is a \(210\times 160\) colored image. To reduce the complexity, we first pre-process the image by converting the full-size colored image into a smaller gray-scale one (\(80\times 80\)). The agent employs a policy model of an MLP with 2 hidden layers, where the first and second hidden layers have 64 and 32 neurons respectively. The model takes the pre-processed image as input and outputs the probabilities of all available actions. To simplify the decision-making process, we always choose the action with the highest probability. To learn this policy model, there are a total of 411,942 parameters to be optimized. Here, we adopt PGPE as the optimizer, with detailed settings given in Table V.

As shown in Fig. 10, in both RL tasks, utilizing multiple processes results in a significant improvement in terms of computational time per iteration. It is worth noting that although both tasks are mainly executed via CPU computing, the entire algorithm workflow still runs on the GPU. This behavior is made possible by our design as described in Section IV, where the algorithm module and the problem module are decoupled, allowing CPU-intensive tasks that are not native to EvoX to achieve high parallelism. In addition to the improvement in computational time, the experimental results also demonstrate the potential of EC algorithms in tackling challenging RL tasks. The Bipedal Walker task poses a significant challenge in evolving a valid walking strategy. As shown in Fig. 11(a), there is a large variation during the later part of the optimization process. The reason is that the point at which an agent learns to walk marks the edge of a plateau in the optimization process. Before this point, the agent receives little to no reward, but once it starts moving forward, it quickly earns high rewards. The high variance thus reflects the fact that the algorithm sometimes discovers the walking strategy early on, leading to significantly better performance, while in some attempts it fails to discover a decent strategy, leading to poor performance. In the ATARI Pong task, despite the large number of trainable parameters (about 400K), the algorithm, as shown in Fig. 11(b), steadily improves the policy towards a promising solution, with little variance.
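To make the above evaluation procedure concrete, the sketch below rolls out one candidate policy in a Gym environment: the MLP forward pass, the bounded torque outputs, and the summed episode reward mirror the Bipedal Walker setup described above. The classic Gym step signature is assumed, and decode_mlp refers to the illustrative decoder sketched in Section IV-A (both are our own assumptions, not EvoX's API):

```python
import gym
import numpy as np
import jax.numpy as jnp

def evaluate_policy(flat_params, env_name="BipedalWalker-v3", max_steps=1600):
    """Fitness of one candidate solution: the total reward of one episode."""
    # Hypothetical decoder: 24 observations -> two hidden layers of 64 -> 4 torques.
    params = decode_mlp(flat_params, layer_sizes=(24, 64, 64, 4))
    env = gym.make(env_name)
    obs, total_reward = env.reset(), 0.0
    for _ in range(max_steps):
        x = jnp.asarray(obs)
        for w, b in params[:-1]:
            x = jnp.tanh(x @ w + b)               # hidden layers
        w, b = params[-1]
        action = np.asarray(jnp.tanh(x @ w + b))  # torques in [-1, 1]
        obs, reward, done, _ = env.step(action)   # classic Gym API
        total_reward += float(reward)
        if done:
            break
    return total_reward
```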
In summary, EvoX provides comprehensive support for extending EC algorithms to RL tasks, as well as other complex application problems. With little additional engineering work, practitioners can instantly enjoy the benefits of EvoX by connecting their problems via the seamless and friendly interface of the problem module.

\begin{table}
\begin{tabular}{l l}
\hline \hline
Parameter & Value \\
\hline
Population Size & 128 \\
Initial step size & 0.1 \\
\hline \hline
\end{tabular}
\end{table} TABLE IV: Parameter settings of CMA-ES

\begin{table}
\begin{tabular}{l l}
\hline \hline
Parameter & Value \\
\hline
Population Size & 128 \\
Learning rate for standard deviation & 0.011 \\
Adaptive Optimizer & Adam [67] \\
Fitness Shaping & Yes \\
Standard deviation max change & 0.2 \\
\hline \hline
\end{tabular}
\end{table} TABLE V: Parameter settings of PGPE

Fig. 9: In-game images of the two RL tasks. (a) Bipedal Walker: the agent receives the observation as a vector and controls the robot to move forward. (b) The ATARI game Pong: the agent observes the raw image of the game and controls the right paddle.

Fig. 10: Performance comparison of the two RL tasks with multiprocessing enabled and disabled, in terms of computational time per iteration. (a) In the Bipedal Walker task, enabling multiprocessing resulted in an almost \(6\times\) performance improvement. (b) In ATARI Pong, enabling multiprocessing resulted in an almost \(12\times\) performance improvement.

## VII Conclusion and Future Work

In this paper, we presented EvoX, a distributed GPU-accelerated EC library that significantly improves the scalability of the EC workflow. When using a single machine, we observed a two-orders-of-magnitude speed-up in certain settings. By running on multiple machines, we were able to scale the workflow even further. This advancement allows existing EC algorithms to solve problems with high-dimensional search spaces more efficiently. Our work also paves the way for future EC development, making it easier to develop more complex EC algorithms that can leverage increasing computing power. Furthermore, meta-learning algorithms could also benefit greatly from the increased computing power, as the process of meta-learning is known to be computationally demanding. Additionally, we simplify the process of applying EC algorithms to extended applications. With EvoX, EC algorithms can work with neuroevolution or RL tasks seamlessly, substantially reducing the barrier for researchers to tackle these problems. However, during the development of EvoX, we also identified certain limitations. First, many algorithms were not designed with parallelism in mind, making it difficult to accelerate them via advanced GPU computing techniques. Moreover, many extended applications are CPU-intensive and tend to be the most time-consuming part of the workflow, thus diminishing the impact of acceleration in the EC algorithms themselves. In the future, we plan to incorporate more parallelism within existing algorithms and integrate external tasks directly into our library to enable GPU computing in both algorithms and problems.

## Acknowledgement

We wish to thank Zhenyu Liang for his contributions to implementing the multi-objective algorithms.
We would also like to express our appreciation to Kebin Sun, Lishuang Wang, and Jiachun Li for their efforts in helping with the implementation and testing.
2306.06485
The Defense of Networked Targets in General Lotto games
Ensuring the security of networked systems is a significant problem, considering the susceptibility of modern infrastructures and technologies to adversarial interference. A central component of this problem is how defensive resources should be allocated to mitigate the severity of potential attacks on the system. In this paper, we consider this in the context of a General Lotto game, where a defender and an attacker deploy resources on the nodes of a network, and the objective is to secure as many links as possible. The defender secures a link only if it out-competes the attacker on both of its associated nodes. For bipartite networks, we completely characterize equilibrium payoffs and strategies for both the defender and the attacker. Surprisingly, the resulting payoffs are the same for any bipartite graph. On arbitrary network structures, we provide lower and upper bounds on the defender's max-min value. Notably, the equilibrium payoff from bipartite networks serves as the lower bound. These results suggest that more connected networks are easier to defend against attacks. We confirm these findings with simulations that compute deterministic allocation strategies on large random networks. This also highlights the importance of randomization in the equilibrium strategies.
Adel Aghajan, Keith Paarporn, Jason R. Marden
2023-06-10T16:45:56Z
http://arxiv.org/abs/2306.06485v1
# The Defense of Networked Targets in General Lotto games

###### Abstract

Ensuring the security of networked systems is a significant problem, considering the susceptibility of modern infrastructures and technologies to adversarial interference. A central component of this problem is how defensive resources should be allocated to mitigate the severity of potential attacks on the system. In this paper, we consider this in the context of a General Lotto game, where a defender and an attacker deploy resources on the nodes of a network, and the objective is to secure as many links as possible. The defender secures a link only if it out-competes the attacker on both of its associated nodes. For bipartite networks, we completely characterize equilibrium payoffs and strategies for both the defender and the attacker. Surprisingly, the resulting payoffs are the same for any bipartite graph. On arbitrary network structures, we provide lower and upper bounds on the defender's max-min value. Notably, the equilibrium payoff from bipartite networks serves as the lower bound. These results suggest that more connected networks are easier to defend against attacks. We confirm these findings with simulations that compute deterministic allocation strategies on large random networks. This also highlights the importance of randomization in the equilibrium strategies.

## I Introduction

Networks are ingrained in modern technological society, thanks to advances in computing, communication, and control. Critical infrastructures, transportation networks, and cyber-physical systems are a few among many examples of systems that operate through complex interconnections. While their distributed nature gives rise to operations at unprecedented scale and efficiency, it also introduces vulnerabilities to adversarial interference. A central component of ensuring the security of networked systems is the strategic allocation of limited resources to defend against potential attacks. For example, allocating firewall and malware detectors, ensuring secure state estimation in critical infrastructures, and deploying defensive assets are among many problems requiring the strategic allocation of limited resources [1, 3, 4, 5, 7, 9, 10, 11, 23, 27, 35, 37, 38].

In this paper, we focus on the competitive allocation of resources between an attacker and a defender of a network. We formulate such a setting in the context of a General Lotto game. The General Lotto game is a popular variant of the famous Colonel Blotto game, wherein two opponents simultaneously allocate their limited resources against each other in order to secure multiple valuable battlefields [6, 15, 17, 22, 24, 29, 31, 36]. Colonel Blotto games and General Lotto games have been studied for well over 100 years, where the primary line of research has focused on characterizing equilibrium strategies and payoffs. More recently, they have been utilized to model complex adversarial environments that are relevant to many applications of interest [2, 12, 16, 25, 33]. In its classic formulation, a player's objective is to accumulate as much value as possible by securing individual battlefields. While this model is descriptive of many types of applications, this classic objective fails to precisely capture many other important scenarios. In particular, the operation of electricity grids, cyber networks, oil pipelines, and logistics chains requires uninterrupted interaction and communication between collaborating network nodes.
In these applications, the defender's success in protecting such networked systems depends on securing certain _subsets_ of nodes, rather than securing as many individual nodes as possible. Ensuring the security of networked systems often requires the protection of certain graph characteristics, such as sub-networks or connected paths [2, 13, 30, 32, 33]. In this paper, we formulate a networked General Lotto game where a defender and an attacker allocate resources to the nodes of a network. The defender's objective is to preserve the functioning of as many edges in the network as possible - an edge is able to function if both of its endpoint nodes are under the control of the defender. Thus, in order to secure any given edge in the network, the defender is required to send more resources than the attacker to _both_ endpoint nodes. On the other hand, the attacker's objective is to disrupt the functioning of as many edges in the network as possible. As such, the attacker only needs to control _at least one_ of the endpoint nodes to disrupt the edge. The performance of each player is measured by the fraction of the total edges in the network that it has secured. These objectives highlight the inherent asymmetry that exists in attack-defense scenarios. Namely, the asymmetry favors attackers, as attacks are more easily amplified by the network's connectivity properties (e.g. spreading of malware, disruption of communications, etc.).

The networked setting vastly differs from the classical formulation, and consequently the resulting equilibrium strategies differ substantially as well. To illustrate, consider the figure below: There are three battlefields. In the classic setup, the objective is to win as many battlefields as possible, wherein each battlefield has a value of \(1/3\). For the allocation strategies shown, the blue player (defender) wins two battlefields and thus obtains a payoff of \(2/3\). However, for the same allocation strategies in the networked setup, the defender still wins both battlefields, but _does not_ attain any positive payoff. This is because it is necessary (but not sufficient) for the defender to win the center battlefield in order to secure either of the two edges (here, each is worth \(1/2\)). Thus, one expects the equilibrium (equivalently, optimal) strategies in such settings to depend on the network structure under which the battlefields are arranged. Indeed, one of the main goals of this paper is to derive equilibrium strategies in networked General Lotto games of the above form. Moreover, a central question we seek to address in this paper is how the performance of the attacker and defender may change depending on characteristics of the network structure. We further illustrate the intricacies that can arise in the simplified examples below.

### First illustrative examples

To generate some initial intuition, we consider the following simplified setup. The defender \(\mathcal{X}\) has a budget of \(X\in\mathbb{R}_{\geq 0}\) resources, and the attacker \(\mathcal{Y}\) has \(Y\in\mathbb{R}_{\geq 0}\) resources. Each must decide how to allocate their resources to the nodes of a network (suppose there are \(n\) nodes). The set of feasible allocations for \(\mathcal{X}\) is the set of vectors \(\{(x_{1},\ldots,x_{n})\in\mathbb{R}_{\geq 0}^{n}:\sum_{i=1}^{n}x_{i}=X\}\) (and similarly for \(\mathcal{Y}\)). Performance is measured as the fraction of edges that a player secures.
In order to secure an edge of the network, \(\mathcal{X}\) must win both endpoint nodes, whereas \(\mathcal{Y}\) only needs to win at least one. If they tie on a node, then we will assume the node is awarded to the attacker \(\mathcal{Y}\). Three different networks are shown in Figure 1, where each player has a budget of 6 resources. Here, we illustrate that the graph structures significantly impact the players' attainable performances. (a) Notice that in the star graph with six nodes (leftmost diagram), the attacker is guaranteed to secure the entire network by allocating resources only to the center node. This is because all edges are connected to the center. The defender here does not have enough resources to counter this strategy, and thus cannot secure a single edge. (b) Let us now consider the ring graph with six nodes (center diagram). For the same budgets, the attacker now cannot guarantee that it secures the entire network. However, by securing a single node, the attacker wins at least two edges, regardless of what the defender does. Thus the best payoff it can guarantee itself on this network is two edges out of six. (c) A similar analysis applies to the line graph with five nodes (rightmost diagram). The attacker here can only guarantee that it secures two edges out of four. Clearly, the players' performance guarantees in these simplified examples are shaped by the structure of the networks. This paper focuses on analyzing the _Network General Lotto_ game (Section II), wherein randomized allocations that satisfy the budget in expectation are permitted. Surprisingly, our analysis finds that in the Network General Lotto game, the performance guarantees _do not change_ across the three networks shown in Figure 1. That is, a player's performance in an equilibrium is _identical_ on the star, ring, and line networks. A summary of our main contributions is given below.

### Our Contributions

Among the primary contributions of this paper, we establish equilibrium payoffs and strategies for the Network General Lotto game in the class of bipartite networks (Theorem 3.1), which includes star graphs, rings with an even number of nodes, line graphs, tree graphs, and many others. Despite the variety of topologies that belong to the class of bipartite networks, the equilibrium payoffs for any bipartite network are _identical_ and depend only on the relative budgets of the players. Hence, the payoffs are independent of any other characteristics. However, the equilibrium allocation strategies are intimately linked to the specific network topology. Beyond bipartite networks, we identify analytical lower and upper bounds on the defender's security payoff guarantee on any network (Theorem 4.1). The lower bound coincides with the equilibrium payoff attained on any bipartite network. This suggests that bipartite networks are the "easiest" networks to attack and the "hardest" to defend. The defender can guarantee the lower-bound payoff by implementing its equilibrium strategy characterized for bipartite networks. We further highlight the dependence on network structure by considering scenarios where the defender can only use deterministic strategies, as in Figure 1. Here, we identify bipartite and complete graphs as the two extreme structures determining the range of the defender's effectiveness - on complete graphs, a positive payoff is ensured against the widest range of attacker budgets compared to arbitrary graphs.
On bipartite graphs, a positive payoff is ensured for the smallest range of attacker budgets (Proposition 6.1). These results are corroborated through numerical simulations on various random Erdos-Renyi graphs. Moreover, the simulations highlight the importance for the defender of implementing randomized strategies, as deterministic play suffers a significant performance degradation compared to the lower bound in Theorem 4.1.

### Related work

A wide array of recent literature studies resource allocation over networks using a variety of different formulations in order to study the security of networked systems [16, 32]. The impact of cyber attacks on dynamic networked systems is a central area of study in control and cyber-physical systems [5, 23, 28]. Game-theoretic approaches have focused on the problem of how a network of agents will invest in costly security protection resources [1, 18]. The work of [1] considers a behavioral attack graph model and determines pure-strategy equilibrium security investments among multiple defenders of a network. Another formulation considers resources that can dynamically be re-allocated via network links [8, 33]. The Colonel Blotto game and its variants are emerging as a flexible framework for studying complex adversarial interactions. As previously discussed, the most well-known results here are for settings where each player's objective is to accumulate as much value as possible by securing individual valuable battlefields [20]. More recently, alternate player objectives have become a central focus in the Colonel Blotto literature, where success depends on securing subsets of battlefields rather than securing individual battlefields [2, 14, 16, 20, 21, 32, 34]. In [21], a defender has a weakest-link objective associated with securing multiple networks, while the attacker has a best-shot objective associated with only securing a single network. Network Colonel Blotto games are considered in [16], where a defender's payoff is determined by whether it can preserve certain network characteristics such as its connectivity or average degree. This work primarily showcases computational methods to find approximate solutions in a model with integer-restricted allocations. In [32], pure strategy equilibria are found in a network formation Blotto game, where players succeed if they secure connected components of a graph. Our work contributes to this literature by featuring analytical characterizations of equilibrium mixed strategies in Network General Lotto games.

**Notation:** We denote \([n]=\{1,\ldots,n\}\) as the set of the first \(n\) natural numbers. For a subset \(B\subseteq[n]\) and a vector \(x\in\mathbb{R}^{n}\), we write \(x_{B}=(x_{d})_{d\in B}\in\mathbb{R}^{|B|}\). We denote the set of non-negative real numbers as \(\mathbb{R}_{+}\). We will use bold lettering to denote random variables, i.e. \(\mathbf{x}\sim F\) is a random variable with realization \(x\), drawn from some distribution \(F\).

## 2 Problem Formulation

Consider a graph \(G=(V,E)\), where \(V\triangleq[n]\) is the set of vertices (or nodes) and \(E\subseteq V\times V\) is the set of edges. We will use the terms "network" and "graph" interchangeably. In this paper, we consider networks containing at least one edge, i.e., \(E\neq\varnothing\). In a _Network General Lotto_ game over the graph \(G\), there are two opposing players \(\mathcal{X}\) and \(\mathcal{Y}\), each aiming to secure as many edges as possible by allocating resources to the nodes.
In order to secure an edge \(\{i,j\}\in E\), player \(\mathcal{X}\) is required to out-compete \(\mathcal{Y}\), i.e. send at least as many resources, on both nodes \(i,j\in V\). Player \(\mathcal{Y}\) secures the edge \(\{i,j\}\) if \(\mathcal{X}\) fails to secure it. Indeed, it is more difficult for \(\mathcal{X}\), whom we refer to as the defender, to secure edges than it is for \(\mathcal{Y}\), whom we refer to as the attacker. Formally, an allocation for player \(\mathcal{X}\) is a vector \(x=(x_{i})_{i\in[n]}\in\mathbb{R}_{+}^{n}\), and similarly a vector \(y\in\mathbb{R}_{+}^{n}\) for player \(\mathcal{Y}\). Player \(\mathcal{X}\) (player \(\mathcal{Y}\)) has a limited resource budget \(X\) (budget \(Y\)) to allocate in expectation. An admissible strategy for \(\mathcal{X}\) is any \(n\)-variate (cumulative) distribution function \(F_{\mathcal{X}}:\mathbb{R}_{+}^{n}\to[0,1]\) that belongs to \[\mathbb{F}(X)\triangleq\left\{F_{\mathcal{X}}:\mathbb{E}_{\mathbf{x}\sim F_{ \mathcal{X}}}\left[\sum_{i\in[n]}\mathbf{x}_{i}\right]\leq X\right\}. \tag{1}\] In words, player \(\mathcal{X}\) can implement any randomization of allocations as long as it does not exceed its budget in _expectation_. The admissible strategies belonging to (1) are the defining feature of General Lotto games relative to Colonel Blotto games [17, 22, 24]: in Blotto games, the distribution \(F_{\mathcal{X}}\) must not have support over any allocation that exceeds the budget [22, 29]. If \(x\) and \(y\) are allocations for players \(\mathcal{X}\) and \(\mathcal{Y}\), the payoff awarded to player \(\mathcal{X}\) is \[\pi_{\mathcal{X}}(x,y;G)\triangleq\frac{1}{|E|}\sum_{\{i,j\}\in E}1_{\{x_{i} \geq y_{i},x_{j}\geq y_{j}\}}, \tag{2}\] where \[1_{\{x_{i}\geq y_{i},x_{j}\geq y_{j}\}}\triangleq\begin{cases}1,&\text{if }x_{i} \geq y_{i},x_{j}\geq y_{j}\\ 0,&\text{otherwise}\end{cases}.\] In other words, player \(\mathcal{X}\) secures the edge \(\{i,j\}\) if it wins _both_ nodes \(i,j\). Conversely, player \(\mathcal{Y}\) secures the edge \(\{i,j\}\) if it wins at least one of the nodes \(i,j\). Therefore, the payoff awarded to player \(\mathcal{Y}\) is \(\pi_{\mathcal{Y}}(x,y)=1-\pi_{\mathcal{X}}(x,y)\), i.e. it is a constant-sum game. Note that, in contrast to the deterministic examples of Figure 1, ties in (2) are awarded to \(\mathcal{X}\); under the randomized strategies studied below, ties occur with probability zero, so the tie-breaking convention does not affect the analysis.

Figure 1: Three examples of a simplified setup where players are restricted to pure allocations. Here, we illustrate the max-min allocation strategy for the attacker (red) on three distinct networks – that is, the highest payoff the attacker can guarantee regardless of the defender's (blue) allocation. Here, the defender and attacker each have 6 resource units, and we assume ties are awarded to the attacker. (Left) The attacker can guarantee that it secures every edge in the star network simply by sending all resources to the center node. There is no strategy that the defender can use to secure a single edge. (Center) On the ring graph of six nodes, the attacker can only guarantee that it secures two out of six edges. (Right) In the line network of five nodes, the attacker can only guarantee that it secures two out of four edges. In these examples, the topology of the graph impacts the performance guarantees for the players.
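To make the payoff structure concrete, the following minimal sketch evaluates (2) for deterministic allocations; the Python encoding, function name, and example graph are our own illustration, not part of the model definition:

```python
import numpy as np

def defender_payoff(x, y, edges):
    """Payoff (2): fraction of edges {i, j} with x_i >= y_i and x_j >= y_j."""
    secured = sum(1 for (i, j) in edges if x[i] >= y[i] and x[j] >= y[j])
    return secured / len(edges)

# Line graph on three nodes, as in the three-battlefield example: the defender
# wins both outer nodes but loses the center, so it secures neither edge.
edges = [(0, 1), (1, 2)]
x = np.array([3.0, 0.0, 3.0])  # defender allocation, budget X = 6
y = np.array([0.0, 6.0, 0.0])  # attacker allocation, budget Y = 6
print(defender_payoff(x, y, edges))  # 0.0
```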
Given a strategy profile \((F_{\mathcal{X}},F_{\mathcal{Y}})\in\mathbb{F}(X)\times\mathbb{F}(Y)\), the _expected payoffs_ to each player are denoted as \[\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}};G) =\mathbb{E}_{\mathbf{x}\sim F_{\mathcal{X}},\mathbf{y}\sim F_{ \mathcal{Y}}}[\pi_{\mathcal{X}}(\mathbf{x},\mathbf{y};G)], \tag{3}\] \[\pi_{\mathcal{Y}}(F_{\mathcal{X}},F_{\mathcal{Y}};G) =\mathbb{E}_{\mathbf{x}\sim F_{\mathcal{X}},\mathbf{y}\sim F_{ \mathcal{Y}}}[\pi_{\mathcal{Y}}(\mathbf{x},\mathbf{y};G)].\] We will denote a Network General Lotto game over a graph \(G\) with budgets \(X,Y\) as the triple \((X,Y,G)\).

**Definition 2.1**: _An equilibrium of \((X,Y,G)\) is a strategy profile \((F_{\mathcal{X}}^{*},F_{\mathcal{Y}}^{*})\in\mathbb{F}(X)\times\mathbb{F}(Y)\) that satisfies_ \[\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}}^{*};G)\leq\pi_{\mathcal{X}} (F_{\mathcal{X}}^{*},F_{\mathcal{Y}}^{*};G)\leq\pi_{\mathcal{X}}(F_{\mathcal{ X}}^{*},F_{\mathcal{Y}};G) \tag{4}\] _for all \(F_{\mathcal{X}}\in\mathbb{F}(X)\) and \(F_{\mathcal{Y}}\in\mathbb{F}(Y)\)._

We will often omit the dependence of \(\pi_{\mathcal{X}}\) on the graph \(G\) when the context is clear.

## 3 Bipartite networks

In this section, we will restrict attention to the class of bipartite graphs. The definition of a bipartite graph is given below:

**Definition 3.1**: _A graph \(G=(V,E)\) is bipartite if there are two disjoint subsets of nodes \(B_{1},B_{2}\) such that \(B_{1}\cup B_{2}=V\), and no edges exist between nodes in the same subset._

Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles. Recall that the example networks from Figure 1, i.e. star, ring (even number of nodes), and line, were all bipartite. In the simplified setting from the examples, where only deterministic allocations were permitted, the players' performance guarantees heavily depended on the network's structure.

### Equilibrium characterizations

We present our main result below, which is an equilibrium characterization of the Network General Lotto game for any bipartite network.

**Theorem 3.1**: _Consider a Network General Lotto game \((X,Y,G)\), where \(G\) is bipartite. The equilibrium payoff to \(\mathcal{X}\) is given by_ \[\gamma(X,Y)\triangleq\begin{cases}1-\frac{Y}{X},&\text{if }X\geq 2Y\\ \frac{X}{4Y},&\text{if }X<2Y\end{cases}, \tag{5}\] _and the payoff to \(\mathcal{Y}\) is \(1-\gamma(X,Y)\). An equilibrium strategy profile \((F_{\mathcal{X}}^{*},F_{\mathcal{Y}}^{*})\) is given as follows. Let \(B_{1},B_{2}\) be the bipartite partition of \(G\), and let \(d_{i}\) denote the degree of node \(i\in V\)._

_If \(X<2Y\):_ \[F_{\mathcal{X}}^{*}(x) =1-\frac{X}{2Y}+\frac{X|E|}{4Y^{2}}\min\left\{\left\{\frac{x_{i}} {d_{i}}\right\}_{i\in V},\frac{2Y}{|E|}\right\} \tag{6}\] \[F_{\mathcal{Y}}^{*}(y) =\frac{|E|}{4Y}\sum_{k=1,2}\min\left\{\left\{\frac{y_{i}}{d_{i}} \right\}_{i\in B_{k}},\frac{2Y}{|E|}\right\}\]

_If \(X\geq 2Y\):_ \[F_{\mathcal{X}}^{*}(x) =\frac{|E|}{X}\min\left\{\left\{\frac{x_{i}}{d_{i}}\right\}_{i\in V},\frac{X}{|E|}\right\} \tag{7}\] \[F_{\mathcal{Y}}^{*}(y) =1-\frac{2Y}{X}+\frac{Y|E|}{X^{2}}\sum_{k=1,2}\min\left\{\left\{ \frac{y_{i}}{d_{i}}\right\}_{i\in B_{k}},\frac{X}{|E|}\right\}\]

The equilibrium payoff (5) is identical for any bipartite graph \(G\), and is hence independent of any other graph characteristics. It only depends on the relative budgets \(X\) and \(Y\). The equilibrium strategies \((F_{\mathcal{X}}^{*},F_{\mathcal{Y}}^{*})\), however, are strongly linked to the particular structure of the bipartite graph.
We elaborate on the equilibrium strategies specified by (6) and (7) below.

### Interpretation of equilibrium strategies

For the case \(X<2Y\) (6), the defender's strategy \(F_{\mathcal{X}}^{*}\in\mathbb{F}(X)\) randomizes over allocations as follows. With probability \(1-\frac{X}{2Y}\), no resources are allocated at all. With probability \(\frac{X}{2Y}\), a single sample \(u\sim\text{Unif}[0,4Y]\) is drawn, and the allocation is determined as \(x_{i}=\frac{d_{i}}{2|E|}\cdot u\) for each node \(i\in V\). In other words, \(\mathcal{X}\) allocates a total of \(u\) resources among the nodes proportionally to their degree centralities. The allocations to each node \(x_{i}\) are random, but correlated through the uniform sample \(u\). One can verify that this strategy is budget-feasible in expectation. An illustration is depicted in Figure 2 (left).

The attacker's strategy \(F_{\mathcal{Y}}^{*}\in\mathbb{F}(Y)\) randomizes over allocations as follows. With probability 1/2, it allocates resources only to nodes in the partition \(B_{k}\), \(k\in\{1,2\}\), in the following manner. A single sample \(u\sim\text{Unif}[0,2Y]\) is drawn. The allocation to each node \(i\in B_{k}\) is determined as \(y_{i}=\frac{d_{i}}{|E|}\cdot u\) and to each node \(i\in B_{-k}\) as \(y_{i}=0\). An illustration is depicted in Figure 2 (right).

A similar interpretation applies for the case \(X\geq 2Y\). The difference here is that \(\mathcal{X}\) is the "stronger" player, and never gives up (i.e. never allocates nothing). As the "weaker" player, \(\mathcal{Y}\) now gives up with a non-zero probability.

Revisiting the examples from Figure 1, we can begin to reason why the initial intuition would not apply to the full Network General Lotto game. Considering the star network, even if the attacker has more resources than the defender, and allocates only to the center node, the defender is able to counter this strategy because the admissible strategy space \(\mathbb{F}(X)\) enables it to randomize over allocations that exceed the attack on the center. Thus, the attacker itself needs to randomize its allocation to other nodes in the network as well.

## 4 Beyond Bipartite Graphs

In this section, we investigate the players' performance on graphs that are not bipartite. While we do not provide equilibrium characterizations for arbitrary graphs \(G\), we seek to provide bounds on the defender's optimal security payoff: \[S_{\mathcal{X}}^{*}(X,Y,G)\triangleq\max_{F_{\mathcal{X}}\in\mathbb{F}(X)}\min_{F_{ \mathcal{Y}}\in\mathbb{F}(Y)}\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}};G) \tag{8}\] In the result below, we establish lower and upper bounds on player \(\mathcal{X}\)'s optimal security payoff for arbitrary graphs.

**Theorem 4.1**: _Consider a Network General Lotto game \((X,Y,G)\), where \(G=(V,E)\) is any graph with \(n=|V|\geq 2\) nodes. Then_ \[\gamma(X,Y)\leq S_{\mathcal{X}}^{*}(X,Y,G)\leq\gamma_{n}(X,Y) \tag{9}\] _where \(\gamma(X,Y)\) is defined in (5), and_ \[\gamma_{n}(X,Y)\triangleq\begin{cases}1-\frac{n}{2(n-1)}\frac{Y}{X},&\text{if }X \geq\frac{n}{n-1}Y\\ \frac{n-1}{2n}\frac{X}{Y},&\text{if }X<\frac{n}{n-1}Y\end{cases} \tag{10}\] _for \(n=2,3,\ldots\)._

There are several interesting things to note in Theorem 4.1. Player \(\mathcal{X}\) can find a strategy \(F_{\mathcal{X}}\in\mathbb{F}(X)\) that guarantees itself a payoff of at least \(\gamma(X,Y)\), regardless of the graph \(G\) and \(\mathcal{Y}\)'s strategy.
The fact that \(\mathcal{X}\) can ensure a payoff of at least \(\gamma(X,Y)\) on any network suggests that bipartite graphs are the "most difficult" to defend. As such, Theorem 4.1 implies that the optimal performance guarantee cannot decrease by adding any amount of additional edges to an existing bipartite network. Thus, only the defender can benefit from adding edges to the network, e.g. by forming cliques or odd-length cycles. An illustration summarizing Theorems 3.1 and 4.1 is provided in Figure 3.

Fig. 3: Theorem 3.1 states \(\gamma(X,Y)\) (5) is the equilibrium payoff to the defender \(\mathcal{X}\) on any bipartite network (left and center networks). The right network is not bipartite – Theorem 4.1 states upper and lower bounds on the max-min value for \(\mathcal{X}\).

Fig. 2: Illustration of equilibrium strategies on bipartite networks (Theorem 3.1) in the regime \(X<2Y\). The case \(X\geq 2Y\) is similar. (Left) The defender's equilibrium strategy is to draw a sample \(u\) from the density function shown. Using a total of \(u\) resources, it distributes to each node in the network proportionally to their degree centralities, i.e. \(x_{i}=\frac{d_{i}}{2|E|}u\). Observe that with probability \(1-\frac{X}{2Y}\), no resources are allocated at all. (Right) The attacker's equilibrium strategy is to first choose one of the node partitions \(B_{k}\), \(k\in\{1,2\}\), with probability 1/2. It then draws a sample \(u\) from the uniform density function shown. Using a total of \(u\) resources, it distributes to each node \(i\) in \(B_{k}\) proportionally to their degree centralities, i.e. \(y_{i}=\frac{d_{i}}{|E|}u\). For each node \(i\notin B_{k}\), no resources are allocated. The payoffs that result from this strategy profile are identical for every bipartite network.

The upper bound of (9) indicates that \(\mathcal{Y}\) can find a strategy \(F_{\mathcal{Y}}\in\mathbb{F}(Y)\) that ensures \(\mathcal{X}\) cannot obtain a payoff that exceeds \(\gamma_{n}(X,Y)\). We informally describe such a strategy now and in Figure 4. A detailed analysis is provided in Section 5. For any graph \(G=(V,E)\), consider the collection of \(n\) vertex covers \(V_{i}=V\backslash\{i\}\). With probability \(1/n\), the strategy \(F_{\mathcal{Y}}\) selects the vertex cover \(V_{k}\), \(k\in V\), and allocates a total of \(u\sim\text{Unif}[0,2Y]\) resources among nodes \(i\in V_{k}\) proportionally according to \[y_{i}=\frac{u}{2|E|}\cdot\begin{cases}d_{i},&\text{if }i\notin\mathcal{N}_{k}\\ d_{i}+1,&\text{if }i\in\mathcal{N}_{k}\end{cases} \tag{11}\] The first entry above indicates that the share of resources sent to nodes not connected to \(k\) is proportional to their degree centralities. The second entry above places slightly more weight on nodes that are direct neighbors of \(k\). The motivation to add extra weight to these nodes is that each one uniquely covers the edge \(\{i,k\}\), \(i\in\mathcal{N}_{k}\). There is more redundancy for any other edge \(\{i,j\}\) in the graph, since both of its endpoints \(i,j\in V_{k}\) are able to cover it.

## 5 Analysis and Proofs

In this section, we highlight analytical techniques and provide the proofs for our main results.

### Correlated allocation strategies

Before proceeding with the proofs, we first define a class of randomized allocation strategies which we term _correlated allocation strategies_. This class of strategies serves as the basis for our analysis, and generalizes the randomized allocation strategies that were detailed in Sections 3 and 4.

**Definition 5.1**: _Consider a player \(\mathcal{Z}\in\{\mathcal{X},\mathcal{Y}\}\) with resource budget \(Z\), and a finite collection of subsets of nodes \(\mathcal{D}\subseteq 2^{V}\). A correlated allocation strategy is specified by a cumulative distribution function \(F_{\mathcal{Z}}\in\mathbb{F}(Z)\) for \(z\in\mathbb{R}_{+}^{|V|}\) of the form_ \[F_{\mathcal{Z}}(z)=1-\delta+\frac{\delta^{2}}{2Z}\sum_{D\in\mathcal{D}}p_{D}\min \left\{\left\{\frac{z_{i}}{w_{D,i}}\right\}_{i\in D},\frac{2Z}{\delta}\right\}, \tag{12}\] _for some \(\delta\in[0,1]\), probabilities \(\{p_{D}\}_{D\in\mathcal{D}}\) s.t. \(\sum_{D\in\mathcal{D}}p_{D}=1\), and positive weights \(\{w_{D,i}\}_{i\in D}\) for each \(D\in\mathcal{D}\) s.t. \(\sum_{i\in D}w_{D,i}=1\)._

A correlated allocation strategy can more intuitively be described as follows. To generate a sample allocation \(\mathbf{z}\sim F_{\mathcal{Z}}\), player \(\mathcal{Z}\) first randomly selects a subset of nodes \(D\) from the collection \(\mathcal{D}\) according to the probability vector \(\{p_{D}\}_{D\in\mathcal{D}}\). Player \(\mathcal{Z}\) then allocates resources only to nodes \(i\in D\) in the following fashion. With probability \(1-\delta\), no resources are allocated at all. With probability \(\delta\), a single sample from the uniform distribution on the interval \([0,\frac{2Z}{\delta}]\) is taken, i.e. \(\mathbf{u}\sim\text{Unif}\left[0,\frac{2Z}{\delta}\right]\). Then, the resources are allocated according to \(\mathbf{z}_{i}=w_{D,i}\cdot\mathbf{u}\) for \(i\in D\) and \(\mathbf{z}_{i}=0\) for \(i\notin D\). It follows that the strategy (12) is budget-feasible, i.e. \(\mathbb{E}_{\mathbf{z}\sim F_{\mathcal{Z}}}\left[\sum_{i=1}^{n}\mathbf{z}_{i }\right]=Z\). Thus, the random variable \(\mathbf{u}\) serves as a correlating device on the player's allocations to nodes in \(D\), where the amounts are proportional to the weights \(w_{D,i}\).

We identify the following structural property regarding best responses against correlated allocation strategies.

**Lemma 5.1**: _Consider a correlated allocation strategy \(F_{\mathcal{X}}\in\mathbb{F}(X)\) for player \(\mathcal{X}\) of the form (12). Then for any \(F_{\mathcal{Y}}\in\mathbb{F}(Y)\), there exists an \(F_{\mathcal{Y}}^{\prime}\in\mathbb{F}(Y)\) such that_

* \(\pi_{\mathcal{Y}}(F_{\mathcal{Y}}^{\prime},F_{\mathcal{X}})\geq\pi_{\mathcal{ Y}}(F_{\mathcal{Y}},F_{\mathcal{X}})\)_, and_
* _for all_ \(i\in V\) _s.t._ \(i\in D\) _for some_ \(D\in\mathcal{D}\)_, we have_ \[\mathbb{P}_{\mathbf{y}\sim F_{\mathcal{Y}}^{\prime}}\left(\mathbf{y}_{i}>\frac {2X\max_{D\in\mathcal{D}}\{w_{D,i}\}}{\delta}\right)=0. \tag{13}\]

_Here, we assume \(w_{D,i}=0\) if \(i\notin D\). An identical result holds with player indices reversed._

In words, a player prefers not to randomize over allocations outside of the support of the other player's correlated allocation strategy. This is because, given a distribution \(F_{\mathcal{Y}}\in\mathbb{F}(Y)\) such that \(\mathbb{P}_{\mathbf{y}\sim F_{\mathcal{Y}}}\left(\mathbf{y}_{i}>\frac{2X\max_ {D\in\mathcal{D}}\{w_{D,i}\}}{\delta}\right)>0\), we can use another distribution \(F_{\mathcal{Y}}^{\prime}\) such that \[F_{\mathcal{Y}}^{\prime}\left(y\right)=\begin{cases}F_{\mathcal{Y}}(y),&y<\frac{2X\max_{D\in \mathcal{D}}\{w_{D,i}\}}{\delta}\\ 1,&\text{otherwise}\end{cases},\] without losing any payoff.
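The sampling procedure in Definition 5.1 is straightforward to implement. The following is a minimal sketch (the NumPy encoding, the function name, and the data layout are our own illustrative choices, not part of the definition):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_correlated_allocation(n, Z, delta, subsets, probs, weights):
    """Draw one allocation z ~ F_Z from a correlated allocation strategy (12).

    subsets : list of node lists (the collection D)
    probs   : selection probabilities {p_D}, summing to 1
    weights : weights[k][i] = w_{D,i} for subset k, summing to 1 over i in D
    """
    z = np.zeros(n)
    k = rng.choice(len(subsets), p=probs)       # select D ~ {p_D}
    if rng.random() < delta:                    # with prob. 1 - delta: all zeros
        u = rng.uniform(0.0, 2.0 * Z / delta)   # correlating device u
        for i in subsets[k]:
            z[i] = weights[k][i] * u            # z_i = w_{D,i} * u
    return z

# Budget feasibility in expectation: E[sum_i z_i] = delta * (Z / delta) = Z.
```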
### Equilibrium characterizations on bipartite graphs

Consider a bipartite graph \(G=(V,E)\) with partition \(B_{1},B_{2}\), and let \(d_{i}\) be the degree of node \(i\in V\). Recall the profile \((F_{\mathcal{X}}^{*},F_{\mathcal{Y}}^{*})\) of correlated allocation strategies defined in (6) and (7). To reiterate,

* If \(X<2Y\): \(\mathcal{X}\) uses the correlated allocation strategy with \(\mathcal{D}=\{V\}\), \(\delta=\frac{X}{2Y}\), and \(w_{V,i}=\frac{d_{i}}{2|E|}\) for \(i\in V\). From (12), we recover \[F_{\mathcal{X}}^{*}(x)=1-\frac{X}{2Y}+\frac{X|E|}{4Y^{2}}\min\left\{\left\{ \frac{x_{i}}{d_{i}}\right\}_{i\in V},\frac{2Y}{|E|}\right\}. \tag{14}\] Player \(\mathcal{Y}\) uses the correlated allocation strategy with \(\mathcal{D}=\{B_{1},B_{2}\}\), \(p_{B_{k}}=1/2\) for \(k\in\{1,2\}\), \(\delta=1\), and \(w_{B_{k},i}=\frac{d_{i}}{|E|}\) for \(k\in\{1,2\}\) and \(i\in B_{k}\). From (12), we recover \[F_{\mathcal{Y}}^{*}(y)=\frac{|E|}{4Y}\sum_{k=1,2}\min\left\{\left\{\frac{y_{i} }{d_{i}}\right\}_{i\in B_{k}},\frac{2Y}{|E|}\right\}. \tag{15}\]
* If \(X\geq 2Y\): \(\mathcal{X}\) uses the correlated allocation strategy with \(\mathcal{D}=\{V\}\), \(\delta=1\), and \(w_{V,i}=\frac{d_{i}}{2|E|}\) for \(i\in V\). From (12), \[F_{\mathcal{X}}^{*}(x)=\frac{|E|}{X}\min\left\{\left\{\frac{x_{i}}{d_{i}} \right\}_{i\in V},\frac{X}{|E|}\right\}. \tag{16}\] Player \(\mathcal{Y}\) uses the correlated allocation strategy with \(\mathcal{D}=\{B_{1},B_{2}\}\), \(p_{B_{k}}=1/2\) for \(k\in\{1,2\}\), \(\delta=\frac{2Y}{X}\), and \(w_{B_{k},i}=\frac{d_{i}}{|E|}\) for \(k\in\{1,2\}\) and \(i\in B_{k}\). From (12), \[F_{\mathcal{Y}}^{*}(y)=1-\frac{2Y}{X}+\frac{Y|E|}{X^{2}}\sum_{k=1,2}\min\left\{ \left\{\frac{y_{i}}{d_{i}}\right\}_{i\in B_{k}},\frac{X}{|E|}\right\}. \tag{17}\]

We are now ready to prove Theorem 3.1.

Proof: To prove the theorem, we proceed with the following steps:

1. We prove that, for every \(F_{\mathcal{X}}\in\mathbb{F}(X)\), \(\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}}^{*})\leq\pi_{\mathcal{X}}^{*}(X,Y)\) for some constant \(\pi_{\mathcal{X}}^{*}(X,Y)\in\mathbb{R}_{+}\), which will turn out to be the equilibrium payoff of player \(\mathcal{X}\).
2. Then, we show \(\pi_{\mathcal{X}}(F_{\mathcal{X}}^{*},F_{\mathcal{Y}}^{*})=\pi_{\mathcal{X}}^{*}(X,Y)\).
3. Finally, we prove \(\pi_{\mathcal{Y}}(F_{\mathcal{X}}^{*},F_{\mathcal{Y}})\leq 1-\pi_{\mathcal{X}}^{*}(X,Y)\) for every \(F_{\mathcal{Y}}\in\mathbb{F}(Y)\).

Since the game is constant-sum, step 3 is equivalent to \(\pi_{\mathcal{X}}(F_{\mathcal{X}}^{*},F_{\mathcal{Y}})\geq\pi_{\mathcal{X}}^{*}(X,Y)\) for every \(F_{\mathcal{Y}}\in\mathbb{F}(Y)\). Therefore, from the definition of equilibrium (4), proving these steps shows that \((F_{\mathcal{X}}^{*},F_{\mathcal{Y}}^{*})\) is an equilibrium with equilibrium payoff \(\pi_{\mathcal{X}}^{*}(X,Y)\) for player \(\mathcal{X}\) and payoff \(1-\pi_{\mathcal{X}}^{*}(X,Y)\) for player \(\mathcal{Y}\).

We start the proof for the case \(X\geq 2Y\). For any distribution \(F_{\mathcal{X}}\in\mathbb{F}(X)\) for player \(\mathcal{X}\), we have \[\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}}^{*}) =\mathbb{E}_{F_{\mathcal{X}},F_{\mathcal{Y}}^{*}}[\pi_{\mathcal{X} }(\mathbf{x},\mathbf{y})] =\mathbb{E}_{F_{\mathcal{X}}}\left[\mathbb{E}_{F_{\mathcal{Y}}^{*}}[ \pi_{\mathcal{X}}(\mathbf{x},\mathbf{y})\mid\mathbf{x}]\right],\] which follows from the law of total expectation.

Figure 4: Illustration of the attacker's strategy on arbitrary graphs that places the upper bound \(\gamma_{n}\) on the defender's attainable payoff (Theorem 4.1). Resources are allocated proportionally according to (11). Here, \(u\sim\text{Unif}[0,2Y]\) is the randomization on total resources.
From the above equality and the payoff function for player \(\mathcal{X}\) (2), we have \[\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}}^{*}) =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{|E|}\sum_{\{i,j\} \in E}\mathbb{E}_{F_{\mathcal{Y}}^{*}}\left[1_{\{\mathbf{x}_{i}\geq\mathbf{y}_ {i},\mathbf{x}_{j}\geq\mathbf{y}_{j}\}}\Big{|}\mathbf{x}\right]\right] =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{|E|}\sum_{\{i,j\} \in E}\mathbb{E}_{F_{\mathcal{Y},\{i,j\}}^{*}}\left[1_{\{\mathbf{x}_{i}\geq \mathbf{x}_{i},\mathbf{x}_{j}\geq\mathbf{y}_{j}\}}\Big{|}\mathbf{x}\right] \right],\] which follows from the fact that \(1_{\{\mathbf{x}_{i}\geq\mathbf{y}_{i},\mathbf{x}_{j}\geq\mathbf{y}_{j}\}}\) does not depend on \(\mathbf{x}_{[n]\setminus\{i,j\}},\mathbf{y}_{[n]\setminus\{i,j\}}\). Here, we denote \(F_{\mathcal{Z},e}\), where \(\mathcal{Z}\in\{\mathcal{X},\mathcal{Y}\}\) and \(e=\{i,j\}\in E\), as the (two-dimensional) marginal distribution of \(F_{\mathcal{Z}}\) over the allocations to nodes \(i\) and \(j\). Therefore, from the above equality, we have \[\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}}^{*}) =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{|E|}\sum_{\{i,j\}\in E }\mathbb{P}(\mathbf{x}_{i}\geq\mathbf{y}_{i},\mathbf{x}_{j}\geq\mathbf{y}_{j}| \mathbf{x})\right] =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{|E|}\sum_{\{i,j\}\in E }F_{\mathcal{Y},\{i,j\}}^{*}(\mathbf{x}_{\{i,j\}})\right].\] From (17), the bi-variate marginal distribution w.r.t. the allocations to nodes \(i\) and \(j\) is \[F_{\mathcal{Y},\{i,j\}}^{*}(x_{\{i,j\}}) =F_{\mathcal{Y}}^{*}(x)\big{|}_{x_{k}=\infty,k\in[n]\setminus \{i,j\}} =1-\frac{2Y}{X}+\frac{Y|E|}{X^{2}}\left(\min\left\{\frac{x_{ i}}{d_{i}},\frac{X}{|E|}\right\}+\min\left\{\frac{x_{j}}{d_{j}},\frac{X}{|E|} \right\}\right) =1-\frac{2Y}{X}+\frac{Y|E|}{X^{2}}\left(\frac{x_{i}}{d_{i}}+ \frac{x_{j}}{d_{j}}\right).\] The last equality follows from Lemma 5.1. We then have \[\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}}^{*}) =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{|E|}\sum_{\{i,j\} \in E}\left(1-\frac{2Y}{X}+\frac{Y|E|}{X^{2}}\left(\frac{\mathbf{x}_{i}}{d_{i} }+\frac{\mathbf{x}_{j}}{d_{j}}\right)\right)\right] =\mathbb{E}_{F_{\mathcal{X}}}\left[1-\frac{2Y}{X}+\frac{Y}{ X^{2}}\sum_{\{i,j\}\in E}\left(\frac{\mathbf{x}_{i}}{d_{i}}+\frac{\mathbf{x}_{j}}{d_{ j}}\right)\right].\] Since \[\sum_{\{i,j\}\in E}\left(\frac{x_{i}}{d_{i}}+\frac{x_{j}}{d_{j}}\right)=\sum_{ i\in[n]}d_{i}\left(\frac{x_{i}}{d_{i}}\right)=\sum_{i\in[n]}x_{i}, \tag{18}\] the above equality implies \[\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}}^{*}) =\mathbb{E}_{F_{\mathcal{X}}}\left[1-\frac{2Y}{X}+\frac{Y}{X^{2}} \sum_{i=1}^{n}\mathbf{x}_{i}\right] \overset{(a)}{\leq}1-\frac{2Y}{X}+\frac{Y}{X} =1-\frac{Y}{X},\] where \((a)\) follows from \(\mathbb{E}_{F_{\mathcal{X}}}\left[\sum_{i=1}^{n}\mathbf{x}_{i}\right]\leq X\). Also, note that since \(\mathbb{E}_{F_{\mathcal{X}}^{*}}\left[\sum_{i=1}^{n}\mathbf{x}_{i}\right]=X\), we have \(\pi_{\mathcal{X}}(F_{\mathcal{X}}^{*},F_{\mathcal{Y}}^{*})=1-\frac{Y}{X}\).
Similarly, for any distribution \(F_{\mathcal{Y}}\in\mathbb{F}(Y)\) for player \(\mathcal{Y}\), we have \[\pi_{\mathcal{Y}}(F_{\mathcal{X}}^{*},F_{\mathcal{Y}}) =1-\mathbb{E}_{F_{\mathcal{X}}^{*},F_{\mathcal{Y}}}[\pi_{ \mathcal{X}}(\mathbf{x},\mathbf{y})] =1-\mathbb{E}_{F_{\mathcal{Y}}}\left[\mathbb{E}_{F_{ \mathcal{X}}^{*}}[\pi_{\mathcal{X}}(\mathbf{x},\mathbf{y})\mid\mathbf{y}]\right] =1-\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{1}{|E|}\sum_{\{i,j \}\in E}\mathbb{E}_{F_{\mathcal{X},\{i,j\}}^{*}}\left[1_{\{\mathbf{x}_{i}\geq \mathbf{y}_{i},\mathbf{x}_{j}\geq\mathbf{y}_{j}\}}\Big{|}\mathbf{y}\right]\right] =1-\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{1}{|E|}\sum_{\{i,j \}\in E}\mathbb{P}(\mathbf{x}_{i}\geq\mathbf{y}_{i},\mathbf{x}_{j}\geq \mathbf{y}_{j}|\mathbf{y})\right]. \tag{19}\] Moreover, we have \[\mathbb{P}(\mathbf{x}_{i}\geq\mathbf{y}_{i},\mathbf{x}_{j}\geq \mathbf{y}_{j}|\mathbf{y}) =1+F_{\mathcal{X},\{i,j\}}^{*}(\mathbf{y}_{i},\mathbf{y}_{j}) -F_{\mathcal{X},\{i\}}^{*}(\mathbf{y}_{i})-F_{\mathcal{X},\{j\}}^{*}(\mathbf{y}_{j}) \overset{(a)}{=}1+\frac{|E|}{X}\min\left\{\frac{\mathbf{y}_{i}}{d_ {i}},\frac{\mathbf{y}_{j}}{d_{j}}\right\} -\frac{|E|}{X}\frac{\mathbf{y}_{i}}{d_{i}}-\frac{|E|}{X}\frac{ \mathbf{y}_{j}}{d_{j}} =1-\frac{|E|}{X}\max\left\{\frac{\mathbf{y}_{i}}{d_{i}}, \frac{\mathbf{y}_{j}}{d_{j}}\right\},\] where \((a)\) follows from (16). Plugging the above equality into (19) gives \[\pi_{\mathcal{Y}}(F_{\mathcal{X}}^{*},F_{\mathcal{Y}}) =\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{1}{X}\sum_{\{i,j\}\in E }\max\left\{\frac{\mathbf{y}_{i}}{d_{i}},\frac{\mathbf{y}_{j}}{d_{j}}\right\}\right] \leq\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{1}{X}\sum_{\{i,j\} \in E}\left(\frac{\mathbf{y}_{i}}{d_{i}}+\frac{\mathbf{y}_{j}}{d_{j}}\right)\right] \overset{(a)}{=}\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{1}{X}\sum_ {i=1}^{n}\mathbf{y}_{i}\right] \overset{(b)}{\leq}\frac{Y}{X},\] where \((a)\) follows from the same argument as (18), and \((b)\) follows from \(\mathbb{E}_{F_{\mathcal{Y}}}\left[\sum_{i=1}^{n}\mathbf{y}_{i}\right]\leq Y\). The proof for the case \(X<2Y\) follows a similar idea to the case \(X\geq 2Y\). For completeness, we provide the main lines of the proof for the case \(X<2Y\).
For any \(F_{\mathcal{X}}\in\mathbb{F}(X)\) we have \[\pi_{\mathcal{X}} (F_{\mathcal{X}},F_{\mathcal{Y}}^{*}) =\mathbb{E}_{F_{\mathcal{X}}}\left[\mathbb{E}_{F_{\mathcal{Y}}^{*}}\left[ \frac{1}{|E|}\sum_{\{i,j\}\in E}1_{\{\mathbf{x}_{i}\geq\mathbf{y}_{i},\mathbf{x }_{j}\geq\mathbf{y}_{j}\}}\bigg{|}\mathbf{x}\right]\right] =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{|E|}\sum_{\{i,j\}\in E}\mathbb{ E}_{F_{\mathcal{Y},\{i,j\}}^{*}}\left[1_{\{\mathbf{x}_{i}\geq\mathbf{y}_{i}, \mathbf{x}_{j}\geq\mathbf{y}_{j}\}}\bigg{|}\mathbf{x}\right]\right] =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{|E|}\sum_{\{i,j\}\in E}F_{ \mathcal{Y},\{i,j\}}^{*}(\mathbf{x}_{\{i,j\}})\right] =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{|E|}\sum_{\{i,j\}\in E }\frac{|E|}{4Y}\left(\frac{\mathbf{x}_{i}}{d_{i}}+\frac{\mathbf{x}_{j}}{d_{j} }\right)\right] =\mathbb{E}_{F_{\mathcal{X}}}\left[\frac{1}{4Y}\sum_{i=1}^{n} \mathbf{x}_{i}\right]\leq\frac{X}{4Y},\] where the fourth equality uses Lemma 5.1 to evaluate the marginal of (15). For any \(F_{\mathcal{Y}}\in\mathbb{F}(Y)\), we have \[\pi_{\mathcal{Y}} (F_{\mathcal{X}}^{*},F_{\mathcal{Y}}) =1-\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{1}{|E|}\sum_{\{i,j\}\in E }\mathbb{E}_{F_{\mathcal{X},\{i,j\}}^{*}}\left[1_{\{\mathbf{x}_{i}\geq\mathbf{ y}_{i},\mathbf{x}_{j}\geq\mathbf{y}_{j}\}}\bigg{|}\mathbf{y}\right]\right] =1-\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{1}{|E|}\sum_{\{i,j\}\in E }\mathbb{P}(\mathbf{x}_{i}\geq\mathbf{y}_{i},\mathbf{x}_{j}\geq\mathbf{y}_{j}| \mathbf{y})\right] =1-\frac{X}{2Y}+\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{X}{4Y^{2} }\sum_{\{i,j\}\in E}\max\left\{\frac{\mathbf{y}_{i}}{d_{i}},\frac{\mathbf{y}_{j }}{d_{j}}\right\}\right] \leq 1-\frac{X}{2Y}+\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{X}{4Y^{2 }}\sum_{\{i,j\}\in E}\left(\frac{\mathbf{y}_{i}}{d_{i}}+\frac{\mathbf{y}_{j}}{ d_{j}}\right)\right] =1-\frac{X}{2Y}+\mathbb{E}_{F_{\mathcal{Y}}}\left[\frac{X}{4Y^{2} }\sum_{i=1}^{n}\mathbf{y}_{i}\right] \leq 1-\frac{X}{4Y}.\]

### Security bounds for general graphs

Next, we provide the proof for the lower and upper bounds on the defender's security value \(S_{\mathcal{X}}^{*}\) detailed in Theorem 4.1.

Proof of Theorem 4.1: To establish the lower bound \(\gamma(X,Y)\leq S_{\mathcal{X}}^{*}(X,Y,G)\) (9), consider an arbitrary graph \(G\) and suppose player \(\mathcal{X}\) allocates according to the strategy (16) if \(X\geq 2Y\), and according to the strategy (14) if \(X<2Y\). For \(X\geq 2Y\), we have \[\max_{F_{\mathcal{X}}\in\mathbb{F}(X)}\min_{F_{\mathcal{Y}}\in \mathbb{F}(Y)}\pi_{\mathcal{X}}(F_{\mathcal{X}},F_{\mathcal{Y}}) \geq\min_{F_{\mathcal{Y}}\in\mathbb{F}(Y)}\pi_{\mathcal{X}}(F_{ \mathcal{X}}^{*},F_{\mathcal{Y}}) =\min_{F_{\mathcal{Y}}\in\mathbb{F}(Y)}1-\pi_{\mathcal{Y}}(F_{ \mathcal{X}}^{*},F_{\mathcal{Y}}) \geq 1-\frac{Y}{X}=\gamma(X,Y),\] where the last inequality follows from the same argument as in the proof of Theorem 3.1. The proof for the case \(X<2Y\) is similar.

To establish the upper bound \(S_{\mathcal{X}}^{*}(X,Y,G)\leq\gamma_{n}(X,Y)\) (9), we consider the following correlated allocation strategy \(G_{\mathcal{Y}}\) for player \(\mathcal{Y}\). Here, \(\mathcal{D}\) is the collection of \(n\) vertex covers \(\{V_{k}\}_{k\in[n]}\), where we define the \(k\)th vertex cover as \(V_{k}=V\backslash\{k\}\in\mathcal{D}\) for each \(k\in V\). In words, the \(k\)th vertex cover \(V_{k}\) is the set of all nodes except node \(k\). We assume each vertex cover is chosen with equal probability \(1/n\). For simplicity, let \(w_{V_{k},i}\triangleq w_{k,i}\). We set the weights \(w_{k,i}\) as follows.
For any \(k\in V\) and \(i\in V\backslash\{k\}\), \[w_{k,i}=\begin{cases}\frac{d_{i}}{2|E|},&\text{if }k\notin\mathcal{N}_{i}\\ \frac{d_{i}+1}{2|E|},&\text{if }k\in\mathcal{N}_{i}\end{cases}, \tag{20}\] where \(\mathcal{N}_{i}\) is the neighbor set of node \(i\) for \(i\in V\). In words, the weight of node \(i\) in the vertex cover \(V_{k}\) reflects how many edges it covers. For each neighbor \(j\) already in \(V_{k}\), node \(i\) accumulates a weight of \(\frac{1}{2|E|}\), since the edge \(\{i,j\}\) can also be covered by \(j\). If \(k\) is a neighbor, then \(i\) accumulates a weight of \(\frac{1}{|E|}\) for the edge \(\{i,k\}\), which only \(i\) can cover. It then follows that the sum of weights in any vertex cover is normalized, i.e. \(w_{k}\triangleq\sum_{i\neq k}w_{k,i}=1\). An example illustration of this strategy was provided in Figure 4. From (12), the strategy \(G_{\mathcal{Y}}\) is explicitly written as \[G_{\mathcal{Y}}(y)=1-\delta+\frac{\delta^{2}}{2Yn}\sum_{k=1}^{n}\min\left\{ \left\{\frac{y_{i}}{w_{k,i}}\right\}_{i\in V_{k}},\frac{2Y}{\delta}\right\}, \tag{21}\] where \(\delta\in[0,1]\) is a tunable parameter. The bi-variate marginal distribution over \(i,j\in V\) is then given by \[G_{\mathcal{Y},\{i,j\}}\left(y_{\{i,j\}}\right)=1-\delta+\frac{\delta^{2}}{2Yn }\Bigg{(}\min\left\{\frac{y_{i}}{w_{j,i}},\frac{2Y}{\delta}\right\}+\min\left\{ \frac{y_{j}}{w_{i,j}},\frac{2Y}{\delta}\right\}+\sum_{k\neq i,j}\min\left\{ \frac{y_{i}}{w_{k,i}},\frac{y_{j}}{w_{k,j}},\frac{2Y}{\delta}\right\}\Bigg{)}. \tag{22}\] For any \(F_{\mathcal{X}}\in\mathbb{F}(X)\) with the property of Lemma 5.1, i.e. \(\mathbb{P}_{F_{\mathcal{X}}}\left(\mathbf{x}_{i}>\frac{2Y\max_{k}\{w_{k,i}\}}{ \delta}\right)=0\), the expected payoff \(\pi_{\mathcal{X}}(F_{\mathcal{X}},G_{\mathcal{Y}})\) to \(\mathcal{X}\) satisfies \[\pi_{\mathcal{X}}(F_{\mathcal{X}},G_{\mathcal{Y}}) =\frac{1}{|E|}\mathbb{E}_{F_{\mathcal{X}}}\left[\sum_{\{i,j\} \in E}G_{\mathcal{Y},\{i,j\}}(\mathbf{x}_{\{i,j\}})\right] \leq 1-\delta+\frac{\delta^{2}}{2Yn|E|}\mathbb{E}_{F_{\mathcal{X}}} \left[A_{1}(\mathbf{x})+A_{2}(\mathbf{x})\right], \tag{23}\] where \[A_{1}(x) \triangleq\sum_{\{i,j\}\in E}\sum_{k\neq i,j}\min\left\{\frac{x_{i}}{w_ {k,i}},\frac{x_{j}}{w_{k,j}}\right\},\qquad A_{2}(x) \triangleq\sum_{\{i,j\}\in E}\left(\frac{x_{i}}{w_{j,i}}+\frac{x_{j}} {w_{i,j}}\right), \tag{24}\] and the inequality in (23) follows from dropping the caps \(\frac{2Y}{\delta}\) inside the minima in (22). Due to \(\min\{a,b\}\leq\frac{a+b}{2}\), we have \[A_{1}(x) \leq\frac{1}{2}\sum_{\{i,j\}\in E}\sum_{k\neq i,j}\left(\frac{x_{i} }{w_{k,i}}+\frac{x_{j}}{w_{k,j}}\right) =\frac{1}{2}\sum_{\{i,j\}\in E}\left(x_{i}\left(\sum_{k\neq i,j} \frac{1}{w_{k,i}}\right)+x_{j}\left(\sum_{k\neq i,j}\frac{1}{w_{k,j}}\right) \right). \tag{25}\] For each \(\{i,j\}\in E\), we calculate \[\sum_{k\neq i,j}\frac{1}{w_{k,i}} =\sum_{k\in\mathcal{N}_{i}\setminus\{j\}}\frac{1}{w_{k,i}}+\sum_{ k\notin\mathcal{N}_{i}}\frac{1}{w_{k,i}} =\frac{2|E|(d_{i}-1)}{d_{i}+1}+\frac{2|E|(n-(d_{i}+1))}{d_{i}} =2|E|\left(\frac{n-1}{d_{i}}-\frac{2}{d_{i}+1}\right)\triangleq \Gamma_{i}, \tag{26}\] where we have used (20). We then obtain \[A_{1}(x) \leq\frac{1}{2}\sum_{\{i,j\}\in E}\left(x_{i}\Gamma_{i}+x_{j}\Gamma_{j} \right) =|E|\sum_{i\in V}x_{i}d_{i}\left(\frac{n-1}{d_{i}}-\frac{2}{d_{i} +1}\right). \tag{27}\] We can also express the second term from (24) as \[A_{2}(x) =\sum_{i\in V}x_{i}\sum_{j\in\mathcal{N}_{i}}\frac{1}{w_{j,i}} =|E|\sum_{i\in V}x_{i}\frac{2d_{i}}{d_{i}+1}, \tag{28}\] which is derived using (20).
We thus obtain the following upper bound on the payoff for player \(\mathcal{X}\): \[\pi_{\mathcal{X}}(F_{\mathcal{X}},G_{\mathcal{Y}})\leq 1-\delta+\frac {\delta^{2}}{2Yn|E|}\mathbb{E}_{F_{\mathcal{X}}}\left[A_{1}(\mathbf{x})+A_{2} (\mathbf{x})\right] \leq 1-\delta+\frac{\delta^{2}}{2Yn}\mathbb{E}_{F_{\mathcal{X}}} \biggl{[}\sum_{i\in V}\mathbf{x}_{i}d_{i}\biggl{(}\frac{n-1}{d_{i}}-\frac{2}{d _{i}+1}+\frac{2}{d_{i}+1}\biggr{)}\biggr{]} =1-\delta+\frac{\delta^{2}(n-1)}{2Yn}\mathbb{E}_{F_{\mathcal{X}} }\left[\sum_{i\in V}\mathbf{x}_{i}\right] \leq 1-\delta+\frac{\delta^{2}(n-1)}{2n}\frac{X}{Y}. \tag{29}\] Recall that the variable \(\delta\in[0,1]\) is a parameter that player \(\mathcal{Y}\) can tune. The optimal \(\delta^{*}\) that minimizes the upper bound above is given by \[\delta^{*}=\begin{cases}\frac{nY}{(n-1)X},&\text{if }X\geq\frac{n}{n-1}Y\\ 1,&\text{if }X<\frac{n}{n-1}Y\end{cases}. \tag{30}\] Therefore, player \(\mathcal{Y}\), using \(G_{\mathcal{Y}}\) with \(\delta=\delta^{*}\), can ensure that player \(\mathcal{X}\) obtains a payoff no greater than \[\gamma_{n}(X,Y)=\begin{cases}1-\frac{nY}{2(n-1)X},&\text{if }X\geq\frac{n}{n-1}Y\\ \frac{n-1}{2n}\frac{X}{Y},&\text{if }X<\frac{n}{n-1}Y\end{cases}. \tag{31}\]
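The closed forms (30) and (31) can be sanity-checked numerically by minimizing the upper bound \(1-\delta+\frac{\delta^{2}(n-1)}{2n}\frac{X}{Y}\) over a fine grid of \(\delta\) values; the minimal sketch below (the test budgets and the tolerance are our own illustrative choices) does exactly that:

```python
import numpy as np

def gamma_n(X, Y, n):
    """Closed-form minimum (31) of the upper bound (29) over delta in [0, 1]."""
    if X >= n / (n - 1) * Y:
        return 1 - n * Y / (2 * (n - 1) * X)
    return (n - 1) / (2 * n) * X / Y

for X, Y, n in [(6.0, 2.0, 10), (2.0, 3.0, 10), (1.0, 1.0, 4)]:
    deltas = np.linspace(0.0, 1.0, 200001)          # grid over the tunable delta
    bound = 1 - deltas + deltas**2 * (n - 1) * X / (2 * n * Y)
    assert np.isclose(bound.min(), gamma_n(X, Y, n), atol=1e-6)
```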
## 6 Performance of deterministic strategies

In this section, we highlight the importance of randomized allocation strategies by considering scenarios where the defender (\(\mathcal{X}\)) does not have the ability to randomize its allocation, i.e. it must select a deterministic strategy \[x\in\Delta(X)\triangleq\left\{x^{\prime}\in\mathbb{R}_{+}^{n}:\sum_{i=1}^{n}x _{i}^{\prime}\leq X\right\}. \tag{32}\] Recall that the performance lower bound \(\gamma(X,Y)\) from Theorem 4.1 is achieved through the randomized strategies (14) and (16). While this lower bound has no explicit dependence on the network structure \(G\), we will highlight the dependence of payoff guarantees on network structure when restricted to deterministic strategies through numerical studies. We first present some general properties regarding conditions on graph structure and opponent budget \(Y\) for which \(\mathcal{X}\) can guarantee a positive payoff. We then define a heuristic class of deterministic strategies for \(\mathcal{X}\) and evaluate its performance via numerical simulations on random graphs. These studies highlight both the importance of randomized allocations and the impact of different graph structures on performance.

### General properties of deterministic strategies

The following result illustrates some general properties regarding deterministic strategies for \(\mathcal{X}\) on structured networks.

**Proposition 6.1**: _Let_ \[S_{\mathcal{X}}^{d}(X,Y,G)\triangleq\max_{x\in\Delta(X)}\min_{y\in\Delta(Y)} \pi_{\mathcal{X}}(x,y;G)\] _be the security value for \(\mathcal{X}\) under deterministic strategies on any graph \(G\). We have_

* _(a) If_ \(G\) _is bipartite,_ \(S_{\mathcal{X}}^{d}(X,Y,G)>0\) _iff_ \(Y\leq\frac{X}{2}\)_._
* _(b) If_ \(G\) _is a complete graph,_ \(S_{\mathcal{X}}^{d}(X,Y,G)>0\) _iff_ \(Y\leq\frac{n-1}{n}X\)_._
* _(c)_ \(S_{\mathcal{X}}^{d}(X,Y,G)=0\) _if_ \(Y>\frac{n-1}{n}X\)_._
* _(d)_ \(S_{\mathcal{X}}^{d}(X,Y,G)>0\) _if_ \(Y\leq X/2\)_._

Proposition 6.1 identifies bipartite and complete graphs as the two extreme network structures determining a defender's effectiveness. Indeed, bipartite networks provide positive payoff guarantees on the smallest range of parameters, i.e. \(Y\in[0,X/2]\) (property (a)). Complete networks provide positive payoff guarantees on the widest range of parameters, i.e. for any \(Y\in[0,\frac{n-1}{n}X]\) (property (b)). The range of parameters where this guarantee holds for an arbitrary graph \(G\) must lie somewhere in between, i.e. \(Y\in[0,fX]\) for some \(\frac{1}{2}\leq f\leq\frac{n-1}{n}\) (properties (c) and (d)). We now prove the proposition.

First, we prove parts (c) and (d). To prove part (c), define \(A_{i}=[n]\setminus\{i\}\) for \(i\in[n]\), and note that each \(A_{i}\) is a vertex cover. Therefore, if \(Y>X_{i}\triangleq\sum_{j\in A_{i}}x_{j}\) for some \(i\in[n]\), then by assigning \(y_{j}=x_{j}+\frac{Y-X_{i}}{n-1}\) for all \(j\in A_{i}\), player \(\mathcal{Y}\) wins all vertices in \(A_{i}\), and hence wins all the edges. On the other hand, we have \[\sum_{i=1}^{n}\sum_{j\in A_{i}}x_{j}=\sum_{j=1}^{n}(n-1)x_{j}=(n-1)X.\] This implies \[\min_{i\in[n]}\left\{\sum_{j\in A_{i}}x_{j}\right\}\leq\frac{n-1}{n}X,\] which completes the proof that \(S_{\mathcal{X}}^{d}(X,Y,G)=0\) if \(Y>\frac{n-1}{n}X\).

To prove part (d), suppose that there is an edge between nodes \(i\) and \(j\), and let the defender allocate \(x_{i}=x_{j}=\frac{X}{2}\). Then, in order for player \(\mathcal{Y}\) to win this edge, \(Y\) must be strictly greater than \(\frac{X}{2}\).

For part (a), note that the "if" part is implied by part (d). To prove the other direction, let \(B_{1}\) and \(B_{2}\) be the vertex sets of the two parts of the bipartite graph \(G\), i.e., \(B_{1}\cup B_{2}=V\), and there is no edge between any two vertices in \(B_{1}\) or \(B_{2}\). Since \(\sum_{j\in B_{1}}x_{j}+\sum_{j\in B_{2}}x_{j}=X\), we have \[\min\left\{\sum_{j\in B_{1}}x_{j},\sum_{j\in B_{2}}x_{j}\right\}\leq\frac{X}{2}.\] Therefore, if \(Y>\frac{X}{2}\), player \(\mathcal{Y}\) can win all vertices in \(B_{1}\) or \(B_{2}\), and hence win all the edges.

For part (b), note that the "only if" part is implied by part (c). To prove the other direction, we note that every vertex cover in a complete graph has at least \(n-1\) vertices. If player \(\mathcal{X}\) assigns its budget uniformly, i.e., \(x_{i}=\frac{X}{n}\) for \(i\in[n]\), then \(Y\) has to be strictly greater than \(\frac{n-1}{n}X\) in order for player \(\mathcal{Y}\) to win all edges.

In the following, we will quantify and compare the performance of a certain class of deterministic strategies on various networks through simulations. These experiments corroborate the findings of Proposition 6.1.

### Numerical simulations

On arbitrary networks, we evaluate the performance of the following deterministic strategy for \(\mathcal{X}\): \[x_{i}^{(d)}\triangleq\frac{d_{i}}{2|E|}X,\quad\forall i\in V, \tag{33}\] where \(d_{i}\) is the degree of node \(i\). In words, the amount of resources \(\mathcal{X}\) allocates to any node is proportional to its degree centrality1.

Footnote 1: This deterministic strategy is inspired by its randomized counterpart (16), which is optimal for bipartite graphs.

In response, player \(\mathcal{Y}\) then selects a _deterministic_ strategy \(y\in\Delta(Y)\). The best-response problem for \(\mathcal{Y}\), i.e., \(\max_{y\in\Delta(Y)}\pi_{\mathcal{Y}}(x,y)\) for a given \(x\in\Delta(X)\), is known as a _budgeted maximum coverage_ problem, which is NP-hard. To evaluate the performance of a defender that implements \(x^{(d)}\), we adapt a greedy algorithm from [19] to approximate \(\mathcal{Y}\)'s best-response, which we detail in Algorithm 1. This gives a performance factor2 of \(1-1/\sqrt{e}\).

Footnote 2: Other methods can improve the guarantee to \(1-1/e\) [19]. We do not utilize these, however, because they come with a much higher computational cost than Algorithm 1, especially on large scale networks.
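Algorithm 1 itself is not reproduced here; the sketch below conveys the greedy idea just described, adapted in spirit from the budgeted-maximum-coverage heuristic of [19]: the attacker repeatedly buys the node with the best ratio of newly covered edges to cost, and finally compares the outcome against the best affordable single node. The out-bidding margin `EPS`, the function names, and the tie-breaking are our own assumptions and need not match Algorithm 1 exactly:

```python
EPS = 1e-9  # margin by which the attacker out-bids x_i to win node i

def edges_covered(S, edges):
    return {e for e in edges if e[0] in S or e[1] in S}

def greedy_attack(x, edges, Y):
    """Approximate attacker best response: a set of nodes to out-bid (cost
    x_i + EPS each) and the resulting fraction of edges secured by Y."""
    S, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for i in range(len(x)):
            cost = x[i] + EPS
            if i in S or spent + cost > Y:
                continue
            gain = len(edges_covered(S | {i}, edges)) - len(edges_covered(S, edges))
            if gain / cost > best_ratio:
                best, best_ratio = i, gain / cost
        if best is None:
            break
        S.add(best)
        spent += x[best] + EPS
    # As in [19], also consider the best single affordable node.
    singles = [{i} for i in range(len(x)) if x[i] + EPS <= Y]
    if singles:
        s = max(singles, key=lambda s: len(edges_covered(s, edges)))
        if len(edges_covered(s, edges)) > len(edges_covered(S, edges)):
            S = s
    return S, len(edges_covered(S, edges)) / len(edges)
```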
We denote the resulting payoff to \(\mathcal{X}\) as \(u_{\mathcal{X}}^{(d)}\triangleq\pi_{\mathcal{X}}(x^{(d)},y(x^{(d)}))\), where \(y(x^{(d)})\in\Delta(Y)\) is the response generated from Algorithm 1. Figure 5 shows numerical simulations characterizing the performance ratio \(u_{\mathcal{X}}^{(d)}/\gamma(X,Y)\) on Erdos-Renyi (ER) random networks, where \(\gamma(X,Y)\) is the lower bound (5) on the payoff that \(\mathcal{X}\) can ensure on any graph using randomized strategies (Theorem 4.1). Hence, \(\gamma(X,Y)\) serves as a benchmark comparison.

Figure 5: Simulations showing the deterministic-to-randomized performance ratio \(u_{\mathcal{X}}^{(d)}/\gamma(X,Y)\) on a variety of networks. We fix \(n=100\) for all graphs considered. For ER random graphs, we average the performance over 100 independent samples at each value of \(Y\). Here, \(p\) is the independent link formation probability. Deterministic strategies perform better on more densely connected networks. In all simulations, we fix \(X=2\).

There are two main conclusions from these experiments. First, \(x^{(d)}\) performs better on graphs with higher edge density. The performance is worst on star, ring, and line networks, improves on ER random networks for higher link parameter \(p\in[0,1]\), and performs best on the complete network (Figure 5, right). These numerical findings are consistent with the results established in Proposition 6.1. Second, deterministic strategies perform significantly worse relative to randomized strategies. Considering that we use the lower bound \(\gamma(X,Y)\) as the benchmark comparison, and the greedy response is sub-optimal for \(\mathcal{Y}\), the performance ratio \(u_{\mathcal{X}}^{(d)}/\gamma(X,Y)\) serves as an upper bound on the performance of \(x^{(d)}\). Specifically, it is an upper bound on the ratio wherein \(\mathcal{Y}\) implements a best response and the denominator is taken as the actual max-min value \(S_{\mathcal{X}}^{*}\) for \(\mathcal{X}\), defined in (8).

**Remark 6.1**: _The strategy \(x^{(d)}\) is a heuristic strategy for \(\mathcal{X}\) used for the purpose of making performance comparisons on arbitrary networks via numerical simulations. It is not proven here that \(x^{(d)}\) is optimal among deterministic strategies, i.e. one that solves \(\max_{x\in\Delta(X)}\min_{y\in\Delta(Y)}\pi_{\mathcal{X}}(x,y)\). Such a problem is difficult to solve, since the inner minimization alone is NP-hard._

## 7 Conclusion and Future Research

In the context of General Lotto games, we studied a competitive resource allocation scenario between an attacker and defender of a network. The objective for each player is to secure as many edges of the network as possible, where the defender is required to win both endpoint nodes in order to secure an edge. We completely characterized equilibrium payoffs and strategies for both players when the network is bipartite. On arbitrary networks, we provided lower and upper bounds on the defender's max-min performance. To further demonstrate the impact of network topology, we then considered a defender restricted to deterministic strategies. We identified bipartite and complete graphs as the two extreme structures that determine the defender's effectiveness. These findings were corroborated through numerical simulations. It is of interest to extend our equilibrium results to larger classes of networks.
In Section 6, we saw that the performance of deterministic strategies is highly dependent on the network structure. We would like to identify the salient network characteristics, e.g. edge density, clustering coefficients, etc., that contribute to the performance.
2310.14767
Predicting COVID-19 Infections Using Multi-layer Centrality Measures in Population-scale Networks
Understanding the spread of SARS-CoV-2 has been one of the most pressing problems of the recent past. Network models present a potent approach to studying such spreading phenomena because of their ability to represent complex social interactions. While previous studies have shown that network centrality measures are generally able to identify influential spreaders in a susceptible population, it is not yet known if they can also be used to predict infection risks. However, information about infection risks at the individual level is vital for the design of targeted interventions. Here, we use large-scale administrative data from the Netherlands to study whether centrality measures can predict the risk and timing of infections with COVID-19-like diseases. We investigate this issue leveraging the framework of multi-layer networks, which accounts for interactions taking place in different contexts, such as workplaces, households and schools. In epidemic models simulated on real-world network data from over one million individuals, we find that existing centrality measures offer good predictions of relative infection risks, and are correlated with the timing of individual infections. We however find no association between centrality measures and real SARS-CoV-2 test data, which indicates that population-scale network data alone cannot aid predictions of virus transmission.
Christine Hedde-von Westernhagen, Javier Garcia-Bernardo, Ayoub Bagheri
2023-10-23T10:07:28Z
http://arxiv.org/abs/2310.14767v2
# Predicting COVID-19 Infections Using Multi-layer Centrality Measures in Population-scale Networks

###### Abstract

Understanding the spread of SARS-CoV-2 has been one of the most pressing problems of the recent past. Network models present a potent approach to studying such spreading phenomena because of their ability to represent complex social interactions. While previous studies have shown that network centrality measures are generally able to identify influential spreaders in a susceptible population, it is not yet known if they can also be used to predict infection risks. However, information about infection risks at the individual level is vital for the design of targeted interventions. Here, we use large-scale administrative data from the Netherlands to study whether centrality measures can predict the risk and timing of infections with COVID-19-like diseases. We investigate this issue leveraging the framework of multi-layer networks, which accounts for interactions taking place in different contexts, such as workplaces, households and schools. In epidemic models simulated on real-world network data from over one million individuals, we find that existing centrality measures offer good predictions of relative infection risks, and are correlated with the timing of individual infections. We however find no association between centrality measures and real SARS-CoV-2 test data, which indicates that population-scale network data alone cannot aid predictions of virus transmission.

## 1 Introduction

Since the onset of the COVID-19 pandemic in late 2019, a myriad of studies has attempted to adequately model how the virus spreads within and across populations in order to predict outbreaks and assess interventions (see [1] for an overview). Like for all epidemic diseases, models of COVID-19 have to take into account how members of susceptible populations are interconnected. To this end, it is important to consider that human interaction takes place in multiple contexts simultaneously. One way of accommodating this structure is the use of multi-layer network models, where each layer represents a different type of interaction [2]. The centrality of a node within the network has been shown to be a good predictor of its spreading capacity, i.e., the number of adjacent nodes it infects. This has been studied for networks representing a single type of interaction [3, 4] [5, ch. 10.3], as well as for multi-layer networks where interactions can be of multiple types, modelled by the individual layers [6]. While the identification of influential spreaders in a network allows for preventive measures to attenuate large infection events, the risk of infection for an individual cannot directly be inferred from this. However, the individual infection risk is of substantial interest for epidemic scenarios. It can facilitate government interventions specifically targeted at high-risk groups, or allow citizens to receive personalized risk assessments, and thus contributes to the mitigation of the overall epidemic scenario. In the context of SARS-CoV-2, several studies have already employed multi-layer networks in modelling the spread of the virus to predict outbreak size [7, 8, 9], but none of them have used the framework to predict individual infections, and none have used large-scale registry data on individuals.
This study aims at filling this gap by using network data from over one million individuals to answer the research question: _How well can multi-layer centrality measures predict the risk and timing of individual infections with epidemic diseases like COVID-19?_ In recent years, definitions have been developed for multi-layer versions of centralities, such as PageRank [10], Eigenvector [11], and Betweenness [12]. [13] then showed that node rankings based on the respective single-layer and multi-layer centralities can differ substantially. Differences between the measures can be attributed to the increased structural complexity of multi-layer networks, resulting in spreading dynamics that cannot be observed in single-layer networks [14, 15]. To assess the performance of multi-layer centrality measures in predicting infections, we made use of registry micro-data of the whole Dutch population. Administrative population networks offer a novel opportunity to researchers studying social processes, in that they do not suffer from common drawbacks of studies based on surveys or digital trace data [16, 17]. Tapping this source of information thus presents another contribution of this study. Specifically, we used the registry data on family, household, school, and work relationships. On the basis of this network, we simulated the infection process of a COVID-19-like epidemic. A summary of our methodological procedure is given in Figure 1. The results of the prediction tasks indicate that multi-layer centrality measures can provide a good assessment of a node's relative infection risk. We also find a moderate predictive performance of certain centrality measures concerning the exact time point of infection. However, across all prediction tasks, the multi-layer measures perform slightly worse than their single-layer counterparts. Applying the measures to PCR-test data of actual COVID-19 infections showed no explanatory power, demonstrating that predicting infections at the individual level in this epidemic scenario cannot be done based on administrative network data alone. The paper proceeds as follows: Section 2 provides a theoretical background on the role of node centrality in spreading processes, and introduces concepts and notation of multi-layer networks. Section 3 describes the data used in this study in more detail. In Section 4, we outline the methodological procedure to answer the research question. Section 5 presents and discusses the results of the analyses. We close the paper with a conclusion and possible opportunities for future research in Section 6.

## 2 Theoretical Background

### The role of network centrality in spreading processes

Centrality measures are an integral part of characterizing nodes within networks, especially in the social sciences, where the concept originated in the late 1940s [18]. Each of the measures developed since then intends to capture the relevance of a node in a theoretically distinct way. When investigating spreading processes in a network of individuals, the outcomes of interest are often aggregate results of the process, such as the final size of the spread, the rate at which it occurred, or the spreading capacity of each individual. These outcomes are indispensable in understanding the scale of spreading phenomena. Accordingly, the role of centrality measures in applications of disease, information, or behaviour spreading has mostly been to detect individuals who influence changes in those target variables [3, 4, 5, 6].
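To fix ideas, a minimal discrete-time susceptible-infected-recovered (SIR) simulation of the kind used in such correlation studies might look as follows. All parameter values, the toy Erdos-Renyi graph, and the spreading-capacity proxy are purely illustrative assumptions on our part, and do not correspond to the cited studies or to the model used later in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_outbreak_size(adj, seed, beta=0.1, gamma=0.05, steps=500):
    """Discrete-time SIR from a single seed node; returns the final outbreak size."""
    n = len(adj)
    state = np.zeros(n, dtype=int)  # 0 = S, 1 = I, 2 = R
    state[seed] = 1
    for _ in range(steps):
        infected = np.flatnonzero(state == 1)
        if infected.size == 0:
            break
        for i in infected:
            for j in adj[i]:
                if state[j] == 0 and rng.random() < beta:
                    state[j] = 1  # infection along an edge
            if rng.random() < gamma:
                state[i] = 2      # recovery
    return int(np.count_nonzero(state > 0))

# Toy random graph; correlate each node's mean outbreak size with its degree.
n, p = 200, 0.03
A = np.triu(rng.random((n, n)) < p, 1)
A = A | A.T
adj = [np.flatnonzero(A[i]) for i in range(n)]
sizes = [np.mean([sir_outbreak_size(adj, s) for _ in range(20)]) for s in range(n)]
print(np.corrcoef(A.sum(axis=1), sizes)[0, 1])
```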
Figure 1: Schematic plot of the methodological procedure based on a four-layer network of families, households, schools, and workplaces. The network contains the same three individuals as nodes that are present in each layer. Ties within layers are displayed as bold lines, ties between layers as dashed lines. Dashed inter-layer ties are categorical couplings, i.e., every node is tied to all of its representations in the other layers. A detailed description of all analysis steps shown on the right-hand side of the figure can be found in Section 4.

Considering single-layer network representations, correlation-based analyses from several studies [3, 4, 5, 6] suggest that most types of existing centrality measures are associated with final outbreak sizes as well as the spreading caused by individual nodes. However, those findings are highly dependent on the type and size of the network, as well as the type of spreading process which was investigated. A comprehensive study conducted by [4] indicates that for epidemic spreading, Degree centrality usually shows the largest correlation with final outbreak size. [3] also find evidence for the influence of PageRank and Katz centrality on this outcome. That random-walk-based measures like PageRank perform well is further corroborated by [5, ch. 10.3]. For the case of multi-layer networks, research on centrality and spreading outcomes is more limited. [6] demonstrated that, generally, measures based only on the direct neighborhood of a node are more predictive of its spreading capacity than measures like PageRank or Betweenness that also incorporate links at longer distances. While previous studies give a notion about which centrality measures are generally useful in explaining spreading processes, the spreading capacity of a node does not directly translate to its risk of becoming infected. Research on this topic is very limited, with a notable exception being [19], who find roughly equal performance of Degree, Betweenness, and Closeness centrality in predicting infection probabilities. However, the study only looked at a small single-layer network, in which the centrality measures were highly correlated. Clearly, there is no straightforward guidance on how to relate centrality with COVID-19 infections in a multi-layer network. We therefore look at three prominent measures which have been shown to be important in some of the findings presented above: Degree, Eigenvector, and PageRank centrality. This leaves only measures based on calculating the shortest path between all pairs of nodes (e.g., Betweenness and Closeness) unconsidered in this paper, since it was infeasible to compute them on a network of the size presented here. Before addressing the mathematical definitions of the studied centrality measures, the concept of multi-layer networks and their notation is introduced.

### Multi-layer networks and tensorial notation

Multi-layer networks have emerged simultaneously from a variety of research areas, each with the aim to model real-world systems composed of relationships which may differ in their type, strength, and direction. Depending on the restrictions imposed on the network, one can distinguish a broad range of subvariants of the most general multi-layer network. Examples of such restrictions are the ordering of the layers, the presence or absence of nodes in specific layers, the presence of edges between layers, or the weighting of certain edges.
[2] have introduced a unifying mathematical framework for multi-layer networks and provide a thorough introduction to their properties and applications. A concise way of notating multi-layer networks is to represent them as tensors. Following the notation of [13], the _adjacency tensor_ of rank 4 is defined as \(M_{j\beta}^{i\alpha}\), which captures the link of node \(i\) in any layer \(\alpha\) to node \(j\) in any layer \(\beta\). This formulation makes it possible to explicitly consider multiple types of connections between nodes, which would otherwise get lost in the aggregation of the layers, but are vital to the dynamical properties of the network [20, 21].

In this study, a _multiplex network_ is employed to model interpersonal relationships. This variant of a multi-layer network contains the same set of nodes in each layer; therefore it is also commonly referred to as a node-aligned multi-layer network. Furthermore, it does not allow for ties between different nodes in different layers, but only between a node in one layer and its copy in all other layers, so-called categorical couplings. In terms of the adjacency tensor \(M\), this property means that \(M_{j\beta}^{i\alpha}=0\) unless \(\alpha=\beta\) or \(i=j\). All edges in the network are undirected, representing symmetric relationships. A schematic plot of the network in the context of this study is given in Figure 1.

### Centrality in multi-layer networks

Considering the long tradition of centrality measures in network analysis, attempts have been made to establish the concept of centrality in the relatively new area of multi-layer networks. Using a tensor representation as introduced above, [13, Supplementary Note 3] show how some commonly used single-layer measures can be generalized to obtain analogous multi-layer measures. In the following, we provide the equations for the multi-layer centrality measures investigated in this study. Throughout, we follow the notation of [13] with slight deviations. For more details on the mathematical formulation of multi-layer networks and their properties see also [22].

**Degree Centrality**. The most commonly inspected property of a node is its degree, i.e., the number of incoming or outgoing ties of a node. Using Einstein notation for conciseness, the multi-layer degree centrality (or 'multidegree') of a node \(i\) across all layers can be defined as \[\kappa_{i}=K_{i\alpha}u^{\alpha}, \tag{1}\] where \(K_{i\alpha}\) is the degree of node \(i\) in layer \(\alpha\) and \(u^{\alpha}\) is a vector of ones used to contract the layer index, i.e., to sum up the degree over all layers. In a multiplex network as used here, this measure is analogous to the strength of a collapsed single-layer network because of the exclusively categorical couplings between layers.

**Eigenvector Centrality**. Another prominent measure of node centrality quantifies a node's importance based on the node's ties and the number of its neighbours' ties. In the domain of multi-layer networks this is achieved by finding the eigentensor \(\Theta_{i\alpha}\) that satisfies \[M^{i\alpha}_{j\beta}\Theta_{i\alpha}=\lambda_{1}\Theta_{j\beta}, \tag{2}\] where \(\lambda_{1}\) is the leading eigenvalue of \(M^{i\alpha}_{j\beta}\). Solving this eigenvalue problem and contracting the layers as in the case of Degree centrality gives the overall centrality for each node: \[\theta_{i}=\Theta_{i\alpha}u^{\alpha}. \tag{3}\]

**PageRank Centrality**.
PageRank centrality, originally developed as part of Google's search algorithm [23], is based on a random walk on the network with transition probabilities \[R^{i\alpha}_{j\beta}=rT^{i\alpha}_{j\beta}+\frac{(1-r)}{NL}u^{i\alpha}_{j\beta}, \tag{4}\] where \(r\) is the rate of jumping to a neighboring node and \(1-r\) is the rate of the walker being teleported to any other node in the network. The transition tensor is given by \(T^{i\alpha}_{j\beta}=M^{i\gamma}_{j\beta}\tilde{D}^{i\alpha}_{i\gamma}\), with \(\tilde{D}^{i\alpha}_{j\beta}\) being the inverse of the non-zero entries of the strength tensor of the network; that is, \(\tilde{D}\) normalizes the adjacency tensor \(M\). The number of nodes and layers are given by \(N\) and \(L\), respectively, and \(u^{i\alpha}_{j\beta}\) is a rank-4 tensor of ones. Finally, the PageRank centrality of node \(i\) is given by \[\omega_{i}=\Omega_{i\alpha}u^{\alpha}, \tag{5}\] with \(\Omega_{i\alpha}\) being the eigentensor of the transition tensor \(R^{i\alpha}_{j\beta}\).

## 3 Data

### Construction of the network dataset

National registry data provided by _Statistics Netherlands_ (CBS) was used to construct a multi-layer network based on the Dutch population. The data provides direct information on different kinds of relationships between individuals registered as residents in the Netherlands. This allowed us to assemble an undirected multiplex network that consists of four layers, namely families, households, schools, and workplaces. Each node within a layer represents a person, and by definition of the multiplex structure, each layer contains the same set of nodes. Unlike other approaches to obtaining social network datasets, such as surveys or digital trace data, this source does not suffer from the common problems of non-response bias, selection bias, or social desirability effects. However, it should be pointed out that ties in this network do not measure social interactions directly, but only represent formal relations between individuals. While this relationship definition does not explicitly consider informal ties such as friendships, prior research has shown that most of a person's close relationships originate from, and change with, the contexts of family, school, or work [24, 25]. Ties in our data thus plausibly indicate "a highly increased probability that two individuals interact socially" [17, 146].

While country-level population data is a highly valuable resource, it also drives up computational demands. We therefore conducted analyses on a regional subset, including all primary school students from the Dutch capital city Amsterdam. By the beginning of 2020, the municipality (Dutch: gemeente) of Amsterdam housed around 5 percent of the Dutch population [26]. Initiating the network construction at the school level was motivated by this context's role as a bridge between different households in potential disease transmission [27]. While the same argument could be made for work relationships, some workplaces in this dataset encompass more than a hundred employees, so the contact probability of work colleagues is on average lower than between members of a school year. All students within the selected schools became nodes in the final network, where members of the same school year are completely interconnected. For the family relations of the network, we added all parents and full siblings of the school students as nodes. The household layer added nodes living in the same household as the students, regardless of their family relationship.
We then retrieved information on the colleagues of all nodes included so far, and added them as new nodes to the network. Ties were established between people working at the same workplace address. Data on individual COVID-19 infections based on PCR-test results of SARS-CoV-2 was also provided by CBS, and could be linked to the network data. The coverage of the SARS-CoV-2 data ranges from June 2020 until September 2021, which led to the decision to also base the network dataset on administrative records from 2020.

### Network characteristics

After assembling the dataset from the registry data, we arrived at a network consisting of about 1.6M nodes, with ca. 200K ties in the family layer, 273K ties in the household layer, 1.4M in the school layer, and 58.9M ties in the workplace layer. Table 1 summarizes further network characteristics. Comparing the network sizes across layers, one can clearly observe the dominance of work relationships, which is also present in the complete population data, and results from people being employed at large companies. In assembling the original population dataset, the number of co-worker ties in such companies was limited to a maximum of 100, which were sampled based on living in the same (or a close-by) location [28, 7-8]. This procedure resulted partly in directed relationships, i.e., \(i\) being the co-worker of \(j\) but not the other way around. We reconstructed these reciprocities to obtain an undirected network, which led to some nodes having a degree \(>\) 100. Accordingly, the Degree distribution of the work layer shows large variation and is rather left-skewed. The distributions in the household, family, and school layers are much less skewed than in the work layer, but the school layer also exhibits a high amount of variation. The Degree distribution of the aggregate network is shown in Supplementary Figure S2 and strongly resembles that of the complete population as shown in [17, Figure 1].

The global clustering coefficient of 1 in the household and school layers is caused by the design that all members of one household or school year, respectively, are completely connected. For the other layers and the aggregate network, there is also a considerable degree of clustering present, meaning that in about 70 percent of the cases where two people are linked to a third person, these two people are also connected. The clustering within families is not perfect since half-sibling relationships are not included, allowing two sampled students who share a parent to not be connected themselves. The sampling method for the work layer discussed above also leads to imperfect clustering in that layer. While, by design of the dataset construction, the aggregate network consists of one giant component, the individual layers are fragmented into many individual components. These components represent individual families, households, school years, and workplaces. However, there is some bridging between family components due to the shared parents of the half-siblings mentioned above. The share of nodes belonging to the giant component within a layer ranges from 0.01 percent in the family layer to 0.93 percent in the workplace layer. Overall, the individual network layers and the aggregate network show quite distinct characteristics. This observation has also received a detailed discussion with respect to the complete population data in [16].
In the context of this study, the layer heterogeneity highlights how aggregation can discard a lot of structural variation that may be vital to spreading behaviour.

\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & & & & & & \multicolumn{5}{c}{Degree Distribution} \\ Layer & Nodes & Ties & Clustering & Components & \% giant comp. & Pctl. 5 & Mean & Median & Pctl. 95 & SD \\ \hline Family & 166,393 & 199,781 & 0.76 & 41235 & 0.01 & 1 & 2.40 & 2 & 4 & 1.07 \\ Household & 164,013 & 273,344 & 1 & 41627 & 0.05 & 1 & 3.33 & 3 & 5 & 2.07 \\ School & 58,079 & 1,386,609 & 1 & 1566 & 0.27 & 16 & 47.75 & 45 & 91 & 24.08 \\ Work & 1,446,817 & 58,860,777 & 0.70 & 13022 & 0.93 & 3 & 81.37 & 100 & 164 & 54.90 \\ \hline Aggregate & 1,570,812 & 60,538,563 & 0.70 & 1 & 100 & 3 & 77.08 & 81 & 163 & 55.43 \\ \hline \hline \end{tabular} * _Note:_ Instead of minima and maxima, the lower and upper five percentiles are given for the degree distribution so as not to disclose information at the individual level. \end{table} Table 1: Descriptive network measures by layer and for the aggregate network.

## 4 Methods

### Epidemic modeling

In order to obtain the outcome of interest for this study--individual infections--we first simulated a COVID-19-like epidemic on the network data. As commonly employed in epidemiology, we used a SIR model (see, e.g., [15, Section 3]), which captures the infection states in a population over time across three groups: **S**usceptible, **I**nfected, and **R**ecovered, where recovered individuals are permanently removed from the susceptible population. The model can be expressed as a set of differential equations: \[\frac{dS}{dt}=-\beta IS, \tag{6}\] \[\frac{dI}{dt}=\beta IS-\gamma I, \tag{7}\] \[\frac{dR}{dt}=\gamma I, \tag{8}\] where \(\beta\) indicates the infection rate, i.e., the transition rate \(S\to I\), and \(\gamma\) the recovery rate, i.e., the transition rate \(I\to R\). The state variables \(S,I,R\) define the total number of individuals in the respective state, together summing up to the population size \(N\).

While, traditionally, the model assumes a homogeneous mixing of the population, when applied to a network it is evaluated at each individual node based on the current state of its neighbors. This represents a more realistic scenario than the possibility of getting in contact with any individual of the population at any time. Instead of a fixed infection rate \(\beta\), the multi-layer framework furthermore allowed each layer to be assigned its own infection rate \(\tau_{l}\), with \(l\in\{family,household,school,work\}\). These infection rates were then applied in generating new infections at weekly time steps \(t\). The values for \(\tau_{l}\) were based on the so-called secondary attack rate. This rate expresses the share of infections originating from an initial infection among the number of possibly infected individuals in a group over a fixed period of time, usually one to two weeks. Based on the literature on secondary attack rates of SARS-CoV-2 in household contexts, \(\tau_{household}\) was set to \(0.20\) [29, 30, 31]. This means that within one week after one household member got infected, 20 percent of the household members would be infected. The same studies suggest that infections originating from children are less common, and \(\tau_{school}\) was therefore set to \(0.10\). Since family relations in this network do not necessarily imply living in the same household, \(\tau_{family}=0.15\) was set somewhat lower than \(\tau_{household}\).
Finally, infections in workplaces were assumed to be on average the least likely, expressed by \(\tau_{work}=0.05\). To arrive at a vector of individual infection probabilities for the nodes in a layer \(l\) in week \(t\), based on the previously infected direct neighbors, we used the Reed-Frost (or chain-binomial) model [32, 33]: \[\boldsymbol{\tau}^{\prime}_{lt}=1-(1-\tau_{l})^{\boldsymbol{\Gamma}_{t-1}}, \tag{9}\] where \(\boldsymbol{\Gamma}_{t-1}\) is a vector with the number of infected neighbors of each node, and the power is taken elementwise. For example, a node with two infected household neighbors has an infection probability of \(1-(1-0.20)^{2}=0.36\) in the following week. Note that if a node became infected in one layer, it also changed its infection status in all other layers. This also implies that the infection probability was higher for individuals with ties in multiple layers. The recovery time \(\gamma\) did not differ by layer and was sampled for each individual from a Weibull distribution with shape \(\eta=1\) and scale \(\lambda=5\) on a scale of days [34]. Since the model was evaluated at weekly intervals, the recovery status \(R_{it}\in\{0,1\}\) of node \(i\) was changed to \(R_{it}=1\) if \(\sum_{t=1}^{T}I_{i,t-1}\times 7>\gamma_{i}\), i.e., if the number of days infected was greater than the recovery time.

In the following analyses, the model was run \(k=100\) times with differing random seed nodes from which the spread started. We set the number of seed nodes to 10 per simulation, since it is realistic to assume that the virus entered the Dutch population via more than a single infected person. This also decreased the dependence of the epidemic process on the position of the seed nodes in the network. It should be noted that the model was constructed for a setting in the early phases of the COVID-19 pandemic, without the possibility of vaccination or consideration of the differing infectiousness of virus subvariants. It was also outside the scope of this study to account for the effects of interventions like mobility restrictions. Descriptive results of the simulations are given in Supplementary Table S1 and Supplementary Figure S1.

### Prediction of infections using centrality measures

Before relating the centrality measures to the timing of infections, we inspected the correlations between the different measures. This enabled a first insight into qualitative differences between the measures and allowed us to identify potential problems of multicollinearity in the subsequent analyses. Since the distributions of the measures were not normal, we used the non-parametric Spearman's \(\rho\) of the centrality ranks instead of Pearson moment correlations [35]. We also show univariate distributions of all centrality measures in Supplementary Figures S2 through S4.

In a next step, we predicted individual infection _risk_ over time based on a person's centrality. For this task, we used Cox proportional hazards models, regressing time-dependent infection risks on different combinations of centrality measures. The model is commonly expressed as \[h_{i}(t|x_{i})=h_{0}(t)e^{\beta_{1}x_{i1}+\beta_{2}x_{i2}+\dots}, \tag{10}\] where \(h_{i}(t|x_{i})\) is the so-called hazard rate of an individual conditional on its values for the predictor variables \(x_{i1},x_{i2},...\). This can be interpreted as the instantaneous risk of an individual experiencing an event--here: infection--given that they did not experience it up until this point. The baseline hazard \(h_{0}(t)\) is the hazard function for a person whose predictor variables are all zero.
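A minimal sketch of this model-fitting step, using the Python lifelines library as one standard implementation (the data frame below is synthetic stand-in data, and the column names are illustrative, not those of the registry dataset):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "degree": rng.poisson(5, n).astype(float),  # stand-in centralities
    "eigenvector": rng.random(n),
    "pagerank": rng.random(n),
    "week": rng.integers(1, 30, n),      # week of infection or end of follow-up
    "infected": rng.integers(0, 2, n),   # 1 = infected, 0 = right-censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="infected")
print(cph.concordance_index_)  # plain in-sample C-index; note that the paper
                               # reports a censoring-weighted variant [37]
```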
The coefficients \(\beta_{1},\beta_{2},...\) were estimated using the partial maximum likelihood method [36], which takes into account the number of right-censored observations--i.e., individuals for which no event could be observed in the given time. The models included the introduced centrality measures as predictors, in combinations of terms up to third-order polynomials. Across-variable interactions were also considered, resulting in 25 models in total. The exact predictor combinations are displayed together with the results in Figure 2. The models were run separately for single- and multi-layer versions of the centrality measures to allow for comparisons between those. We evaluated the performance of the models using a variant of the Concordance index (C-index), which is weighted to account for censored observations as proposed by [37]. The C-index expresses the degree to which the estimated individual risks align with the order in which infections actually occurred. The index ranges from 0, meaning the risks are completely inverse to the order of infections, to 1, indicating perfect alignment of infection risk and order of infections. We retrieved C-index values for each of the \(k\) simulations, which were then averaged after applying a Fisher \(z\)-transformation [38]. This transformation was desirable since sampling distributions of correlation-like measures are not normal. The transformation rescales the estimates such that they approximate a normal distribution, which then allows the construction of appropriate confidence intervals.

After investigating infection risks, we obtained Spearman's rank correlations \(\rho\) of the _time point_ of infection of a node and its centrality value for the respective multi-layer and single-layer measures. This analysis was analogous to the procedure of most studies investigating the relationship of centrality measures and outbreak size or spreading capacity. The correlation coefficient for one simulation \(k\) of the epidemic was defined as \[\rho_{k}=\frac{cov(R(TTI_{k}),R(Cent))}{\sigma_{R(TTI_{k})}\sigma_{R(Cent)}}, \tag{11}\] with \(cov(R(TTI_{k}),R(Cent))\) being the covariance of the _ranks_ of the time to infection and the respective centrality, and \(\sigma\) being their standard deviations. To arrive at an overall estimate of \(\rho\), we again retrieved correlation coefficients for each of the \(k\) simulations, which were then averaged after being rescaled to normality as mentioned for the C-index.

Lastly, we used XGBoost (Extreme gradient boosting) models to predict the time point of individual infections based on the centrality measures. XGBoost is a tree-based method which enables memory-efficient identification of complex data patterns, and it has been successfully employed in various machine-learning tasks [39]. This modeling approach estimates the actual time until an individual gets infected, as opposed to the infection risk at any point in time, as achieved by the Cox models. It also goes beyond the introduced rank correlations in that the dependent variable captures quantifiable time intervals between infections instead of just their ordering, and in that multiple predictors are considered simultaneously. Since the algorithm computes estimates based on ensembles of many different decision trees, it comes at the drawback of lower interpretability than, e.g., parametric regression methods.
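A rough sketch of this prediction setup, on synthetic stand-in data (the hyperparameters below are placeholders, not the tuned values reported in Supplementary Table S2):

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((5000, 3))                    # degree, eigenvector, PageRank
y = rng.integers(1, 30, 5000).astype(float)  # week of infection (synthetic)

# Train on 10 percent of observations, test on the remaining 90 percent,
# as described in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.10,
                                          random_state=0)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(r2_score(y_te, pred), mean_squared_error(y_te, pred) ** 0.5)  # R^2, RMSE

# Gain-based variable importance, as reported in Figures 4 and 5.
print(model.get_booster().get_score(importance_type="gain"))
```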
Within each of the \(k=100\) datasets resulting from the epidemic simulations, the algorithm was trained on 10 percent of observations and then tested on the remaining 90 percent. Hyperparameters were selected using 10-fold cross-validation within one of the datasets. Final hyperparameter settings are shown in Supplementary Table S2. Model performance was assessed using \(R^{2}\) and root-mean-square error (RMSE) values, averaged over the test partitions of the 100 simulated datasets. The importance of individual predictors was evaluated using Cover, Frequency, and Gain values, again averaged over all simulated datasets. Gain expresses the reduction in prediction error achieved by splitting the data on a certain variable relative to the other variables, summed up over all trees. Frequency can be regarded as a cruder version of Gain, and Cover captures the relative number of observations that were assigned to a leaf after splitting on a certain variable as compared to the other variables.

Apart from using the XGBoost algorithm with the simulated infection data, we also applied it to infections derived from positive PCR-test results of our sample. Additionally, we augmented these models with information about age, postcode area (first two digits), and whether a person is of Dutch origin, to see how much these variables could improve predictions compared to models relying only on centrality measures. Again, model performance was evaluated using \(R^{2}\) and RMSE values. Descriptive statistics of the personal characteristics and PCR-test data are presented in Supplementary Table S4 and Supplementary Figures S6 and S7.

### Ethical statement

Ethical approval for the use of the data was obtained from the Ethical Review Board of the Faculty of Social and Behavioural Sciences of Utrecht University on November 8, 2022 (case numbers 22-1886, 22-1887, 22-1888). Informed consent of the participants has been obtained by the parties responsible for collection and provision of the data, which are Statistics Netherlands (CBS) and the National Institute for Public Health and the Environment (RIVM). In accordance with the Statistics Netherlands Act ("Wet op het Centraal bureau voor de statistiek") and the General Data Protection Regulation of the European Union, the data is protected by strict privacy regulations ensuring that no information about individual persons is disclosed. All methods were carried out in accordance with relevant guidelines and regulations.

## 5 Results

### Correlations between centrality measures

The results of the preliminary correlation analysis revealed interesting insights into the association between both multi- and single-layer centralities (Table 2). Across both network types, a similar pattern emerged: PageRank centrality and Eigenvector centrality showed the smallest correlation (\(\rho_{multi}=0.20,\rho_{single}=0.33\)), followed by medium-sized correlations of Degree and Eigenvector centrality (\(\rho_{multi}=0.47,\rho_{single}=0.42\)). This is likely due to Eigenvector centrality allocating most of the centrality to a few highly connected nodes, resulting in little variation within the measure. The correlation of Degree and PageRank was in both cases substantial, with \(\rho_{multi}=0.79\) and \(\rho_{single}=0.84\). Looking at the correlations between the two centrality types (lower left of Table 2) revealed that the respective counterparts of PageRank are very similar to one another, with \(\rho=0.96\).
The multi- and single-layer versions of Eigenvector centrality were also notably correlated, at \(\rho=0.64\). Since Degree centrality in the multi-layer definition corresponds to the weighted Degree in the single-layer representation, those measures were perfectly correlated. Similarly, correlations of Degree centrality with any of the other measures did not differ depending on using the multi- or single-layer representation. These results suggest that there might not be a great benefit in using the multi-layer versions of centrality, since they seem to measure similar properties to the single-layer versions. The following multivariate analyses confirm this suspicion.

\begin{table} \begin{tabular}{c|l|l l l l l l} \hline \hline & & \multicolumn{3}{c|}{Multi-layer} & \multicolumn{3}{c}{Single-layer} \\ \cline{3-8} & & Degree & Eigenvector & PageRank & Degree & Eigenvector & PageRank \\ \hline \multirow{3}{*}{Multi-layer} & Degree & 1.00 & & & & & \\ & Eigenvector & 0.47 & 1.00 & & & & \\ & PageRank & 0.79 & 0.20 & 1.00 & & & \\ \hline \multirow{3}{*}{Single-layer} & Degree & 1.00 & 0.47 & 0.79 & 1.00 & & \\ & Eigenvector & 0.42 & 0.64 & 0.29 & 0.42 & 1.00 & \\ \cline{1-1} & PageRank & 0.84 & 0.25 & 0.96 & 0.84 & 0.33 & 1.00 \\ \hline \hline \end{tabular} \end{table} Table 2: Rank correlations between the different centrality measures, measured by Spearman's \(\rho\).

### Centrality measures correlate strongly with the _risk_ of infection

We used Cox proportional-hazards models (Section 4.2) to examine whether centrality measures can predict infection risks over time, taking into account possible multivariate associations between the measures. C-index values of the models, used to assess their predictive performance, are presented in Figure 2. Since confidence intervals were extremely narrow across all models, they are included only in Supplementary Table S2. This shows that there was little dependence of the infection process on the seed nodes of the epidemic.

We find a strong association of up to \(C=0.77\) between the centrality of a node and its relative risk of infection. This means that, when choosing two random individuals in the network, the models could correctly predict who gets infected first in the epidemic in 77 percent of the cases. The risk of infection is mostly dependent on Eigenvector centrality. Even the simplest model including only linear Eigenvector centrality (Model 4) achieved a C-index of 0.77. Adding higher-order terms of any centrality did not lead to notable improvements of predictions for either multi-layer or single-layer models. Models including only PageRank centrality, or additionally also Eigenvector centrality, fared worst, with a predictive performance ranging from \(C=0.62\) to \(C=0.66\). While the ordering of centrality measures in terms of performance is consistent across multi- and single-layer models--Eigenvector \(>\) Degree \(>\) PageRank--the single-layer measures generally performed slightly better than the multi-layer ones.

While the good performance in risk predictions across most of the measures is a promising result, these risks only express the relative ordering of individuals in getting infected and might be of limited use in practice. In contrast, the following results uncover the extent to which the timing of the infections could be predicted accurately.
### Centrality measures correlate moderately with _time_ to infection

#### 5.3.1 Bivariate rank correlations

Turning from infection risks to their timing, we first inspected the rank correlation between centrality measures and the time to infection in the epidemic model. All types of centralities showed a negative Spearman's \(\rho\) (Table 3)--i.e., nodes with higher centralities become infected earlier. For both multi- and single-layer measures, Eigenvector centrality had the largest association, with \(\rho_{multi}=-0.45\) and \(\rho_{single}=-0.58\), respectively. Degree centrality came second with \(\rho_{multi}=\rho_{single}=-0.24\). PageRank centrality followed closely after, with \(\rho_{multi}=-0.15\) and \(\rho_{single}=-0.19\). These results show no support for multi-layer measures being better predictors of infection timing than their single-layer versions--but rather the opposite. Furthermore, while a relationship between the respective centralities and the time to infection could be detected in both network structures, the correlations are far lower than what has previously been observed for outbreak size or spreading capacity of a node, where \(\rho\) often ranges between 0.7 and 0.9 (see the references in Section 2.1). This is an important finding, as it could indicate that the goals of possible interventions--preventing spread or preventing infections--cannot necessarily be achieved by the same means. It also hints at how the type and size of network considered in investigating spreading outcomes may lead to substantially different insights about node properties like centrality.

Figure 2: Concordance indices for Cox proportional hazards models by type and order of included centrality measure. An \(x\) denotes terms composed of a single variable, \(i\) denotes two- and three-way interactions between multiple variables. Since the estimates' confidence intervals resulting from the simulations were very narrow, they were omitted from this figure and included in Supplementary Table S2 instead.

#### 5.3.2 XGBoost models with simulated infection data

The highly flexible XGBoost algorithm allows modeling the complex patterns underlying the relationship of centrality measures and time to infection in our data. The distributions of \(R^{2}\) and RMSE values from the 100 simulations for models including multi-layer or single-layer centrality measures are displayed in Figure 3. The plot shows that the single-layer centrality measures achieved a slightly better performance in predicting the timing of infections. On average, the single-layer models performed at \(R^{2}=0.40,RMSE=2.25\), compared to \(R^{2}=0.33,RMSE=2.38\) when using the multi-layer centralities. Both centrality types exhibit similarly normally-shaped distributions of their performance metrics, showing that the results are robust to changes in the epidemic seed. To assess how the individual centrality measures contributed to model performance, Figures 4 and 5 depict distributions of three variable importance metrics (Cover, Frequency, and Gain; see Methods) across simulations. In line with the results of the previous analyses, Eigenvector centrality contributed most to the performance of the XGBoost models, whereas PageRank, and most notably Degree centrality, did not add much to the predictions. This finding applies to both multi-layer and single-layer measures. Results of the same prediction task using linear regression are provided in Supplementary Figure S5.
Polynomial combinations of all centrality measures were required to achieve a maximum performance of \(R^{2}=0.10,RMSE=2.76\), which lies notably below the results obtained from the XGBoost approach.

#### 5.3.3 XGBoost models with real infection data and additional predictors

Using positive PCR-test results of our sample observations, we applied the XGBoost algorithm again to inspect the extent to which the centrality measures can be used to predict infection timing in actual COVID-19 infection data. However, neither using the model parameters trained on the simulated data nor training another set of models on the PCR-test data achieved any notable prediction performance, for either the multi- or the single-layer measures (\(R^{2}=0.002,RMSE=14.86\)). This indicates that administrative network data is not sufficient to capture the actual infection process, which relies on in-person contacts.

\begin{table} \begin{tabular}{c|l|c c} \hline \hline & & Mean & 95\% CI \\ \hline \multirow{3}{*}{Multi-layer} & Degree & -0.24 & [-0.242,-0.239] \\ & Eigenvector & -0.45 & [-0.455,-0.448] \\ & PageRank & -0.15 & [-0.156,-0.154] \\ \hline \multirow{3}{*}{Single-layer} & Degree & -0.24 & [-0.242,-0.239] \\ & Eigenvector & -0.58 & [-0.585,-0.576] \\ \cline{1-1} & PageRank & -0.19 & [-0.187,-0.185] \\ \hline \hline \end{tabular} \end{table} Table 3: Spearman's \(\rho\) of centrality measures and time to infection averaged across epidemic simulations.

Figure 3: Distributions of \(R^{2}\) and RMSE values across 100 epidemic simulations for XGBoost models including multi-layer or single-layer centrality measures.

We thus explored how other personal characteristics could potentially improve predictions. We added age, the first two digits of the postcode area, and a variable indicating whether a person is of Dutch origin to the XGBoost models with the centrality measures. This led to a small increase in model performance, to the same extent for both centrality types (\(R^{2}=0.02,RMSE=14.67\)). While this increase is still of no practical relevance, it should be noted that age was the most important predictor in these models, with a Gain value of 0.43. The relationship between age and time until infection also becomes apparent when comparing the infection process of different age groups, as shown in Supplementary Figures S6 and S7.

## 6 Conclusion

In this study, we set out to investigate the predictive ability of centrality measures regarding the risk and timing of infections with a COVID-19-like disease. Drawing on large-scale registry data, we found that, among the investigated measures, Eigenvector centrality was best suited to predict the risk and timing of individual infections. Relative infection risks, especially, could be identified well, even in the most parsimonious models. In practice, this would allow identifying individuals belonging to groups at high risk of infection solely based on their network position, and could potentially enable targeted policies. Regarding infection timing, predictive performance proved to be more limited, even when using a powerful machine learning algorithm like XGBoost. The estimated time of infection deviated on average by around 2.3 weeks from the actual time point under the simulated epidemic scenario. Considering that the majority of simulated infections occurred within about 10 weeks (see Supplementary Figure S1), this only gives a rough indication of a person's infection timing.
Across all analyses, findings did not provide evidence for the multi-layer versions of centrality measures being better predictors of individual infections than their single-layer definitions. In fact, the opposite result emerged, with single-layer centrality measures being somewhat more powerful predictors. This raises the question of how far the more complex multi-layer representation of administrative social networks can contribute to a better understanding of epidemic infection scenarios.

Our analysis using positive PCR-tests indicates that registry data is not well suited to estimate infection risk at the individual level. While the discrepancy between real SARS-CoV-2 test data and the simulation could lie in the specification of the epidemic model, it likely resulted from the administrative data not reflecting social contacts accurately enough. Future research should investigate this issue more closely before conducting similar analyses, possibly combining the registry data with data based on contact tracing. Provided that a more representative contact network can be established, centrality measures present an opportunity to augment models of infection prediction. Combining them with other individual characteristics related to infection risk, e.g., age or occupation, might result in a more powerful tool to inform targeted government interventions or provide guidelines for individuals. While we only worked with a subset of the population data, the network characteristics, such as the Degree distribution, closely resembled those of the complete population network as discussed in [17]. Results from this study can thus likely be transferred to a population-based analysis.

Figure 4: Distributions of variable importance as measured by Cover, Frequency, and Gain across 100 epidemic simulations for models including multi-layer centrality measures.

Our analysis has a number of limitations that open up additional avenues for future research. Given the size of the data--ca. 1.6M nodes and 61M ties in total--and the increased computational power needed for working with multi-layer networks, we were only able to use centrality measures based on random walks or a node's direct neighborhood. However, measures based on shortest paths, like Betweenness or Closeness, could offer new insights. Calculating these measures would require the use of approximate methods. Another immediate opportunity would be to test the presented measures in varying epidemic scenarios. This could be done by varying the parameters of the SIR model used here, as well as by considering other frameworks of epidemic disease modeling, e.g., allowing for additional infection states. Similarly, the same models could be applied to different regions or countries for which comparable data is available. An extension to a time-dependent approach is another possibility. This would allow accounting for changing spreading dynamics caused by interventions, vaccination, or other time-dependent factors. Apart from other applications and modeling approaches, more fundamental work is needed to understand why certain measures perform better or worse than others. For the simulated epidemics, the higher performance of Eigenvector centrality can partly be attributed to the explicit inclusion of an individual's direct neighborhood in generating new infections. Still, it would be valuable to derive exactly how the centrality measures relate to epidemic outcomes.
2310.18175
Effect of interfacial Dzyaloshinskii-Moriya interaction in spin dynamics of an Antiferromagnet coupled Ferromagnetic double-barrier Magnetic Tunnel Junction
In this work, we have studied the spin dynamics of a synthetic Antiferromagnet (SAFM)$|$Heavy Metal (HM)$|$Ferromagnet (FM) double-barrier magnetic tunnel junction (MTJ) in the presence of Ruderman-Kittel-Kasuya-Yosida interaction (RKKYI), interfacial Dzyaloshinskii-Moriya interaction (iDMI), N\'eel field and Spin-Orbit Coupling (SOC) with different Spin Transfer Torque (STT). We employ the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation to investigate the AFM dynamics of the proposed system. We found that the system exhibits a transition from regular to damped oscillations with the increase in strength of STT for systems with weaker iDMI than RKKYI, while it displays sustained oscillations for systems having the same order of iDMI and RKKYI. On the other hand, the iDMI-dominating system exhibits self-similar but aperiodic patterns in the absence of the N\'eel field. In the presence of the N\'eel field, the RKKYI-dominating systems exhibit chaotic oscillations for low STT but display sustained oscillations under moderate STT. Our results suggest that the decay time of oscillations can be controlled via SOC. The system can work as an oscillator for low SOC but displays nonlinear characteristics with the rise in SOC for systems having weaker iDMI than RKKYI, while opposite characteristics are noticed for iDMI-dominating systems. We found periodic oscillations under a low external magnetic field in RKKYI-dominating systems, while a moderate field is necessary for sustained oscillations in iDMI-dominating systems. Moreover, the system exhibits saddle-node bifurcation and chaos under a moderate N\'eel field and SOC with suitable iDMI and RKKYI. In addition, our results indicate that the magnon lifetime can be enhanced by increasing the strength of iDMI for both optical and acoustic modes.
Reeta Devi, Nimisha Dutta, Arindam Boruah, Saumen Acharjee
2023-10-27T14:43:15Z
http://arxiv.org/abs/2310.18175v1
Effect of interfacial Dzyaloshinskii-Moriya interaction in spin dynamics of an Antiferromagnet coupled Ferromagnetic double-barrier Magnetic Tunnel Junction

###### Abstract

In this work, we have studied the spin dynamics of a synthetic Antiferromagnet (SAFM)|Heavy Metal (HM)|Ferromagnet (FM) double-barrier magnetic tunnel junction (MTJ) in the presence of Ruderman-Kittel-Kasuya-Yosida interaction (RKKYI), interfacial Dzyaloshinskii-Moriya interaction (iDMI), Néel field and Spin-Orbit Coupling (SOC) with different Spin Transfer Torque (STT). We employ the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation to investigate the AFM dynamics of the proposed system. We found that the system exhibits a transition from regular to damped oscillations with the increase in strength of STT for systems with weaker iDMI than RKKYI, while it displays sustained oscillations for systems having the same order of iDMI and RKKYI. On the other hand, the iDMI-dominating system exhibits self-similar but aperiodic patterns in the absence of the Néel field. In the presence of the Néel field, the RKKYI-dominating systems exhibit chaotic oscillations for low STT but display sustained oscillations under moderate STT. Our results suggest that the decay time of oscillations can be controlled via SOC. The system can work as an oscillator for low SOC but displays nonlinear characteristics with the rise in SOC for systems having weaker iDMI than RKKYI, while opposite characteristics are noticed for iDMI-dominating systems. We found periodic oscillations under a low external magnetic field in RKKYI-dominating systems, while a moderate field is necessary for sustained oscillations in iDMI-dominating systems. Moreover, the system exhibits saddle-node bifurcation and chaos under a moderate Néel field and SOC with suitable iDMI and RKKYI. In addition, our results indicate that the magnon lifetime can be enhanced by increasing the strength of iDMI for both optical and acoustic modes.

pacs: 72.25.Dc, 72.25.-b, 75.78.-n, 75.75.-c, 85.75.-d

## I Introduction

Recently, there has been a resurgence of interest in Antiferromagnets (AFM) within the field of spintronics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. This renewed attention is due to the unique characteristics of AFMs, such as their high resonance frequency in the terahertz range [2; 3], the absence of stray magnetic fields [4], and their remarkable stability under magnetic fields [5; 6]. Consequently, devices based on AFMs offer the potential for faster operation compared to traditional ferromagnetic (FM) devices, making them promising candidates for applications in data storage and information processing [11]. Moreover, the recent discovery of electrical switching in AFM-based devices via Spin-Orbit Torque (SOT) demonstrates that AFMs can be manipulated electrically in similar ways to their FM counterparts [12; 13; 14]. This discovery has sparked significant research interest in AFM spintronics [15; 16; 17; 18; 19]. With the further discovery of Giant Magnetoresistance (GMR) [20] and Spin Transfer Torque (STT) [21; 22; 23] in magnetic tunnel junctions (MTJ), AFM-based heterostructures and magnetic random access memories (MRAM) received a significant boost as materials for future technology. A typical MTJ usually consists of a tunnel barrier between two ferromagnetic layers, which act as pinned and free layers. However, such configurations face issues with thermal stability below 40 nanometers [24].
To overcome this, researchers have turned to double-interface MTJs, which involve placing a heavy metal (HM) layer between two ferromagnetic layers [24; 25; 26; 27; 28; 29; 30; 31]. This FM|HM|FM configuration offers better thermal stability and plays a crucial role in enhancing spin-orbit coupling (SOC) and also in the generation of the Ruderman-Kittel-Kasuya-Yosida interaction (RKKYI) [29; 30; 31]. It is to be noted that the RKKYI ferromagnetically couples the magnetizations of the two layers, causing them to behave like identical layers [24; 25; 26]. Additionally, the lack of inversion symmetry in these systems can also generate an anti-symmetric interfacial Dzyaloshinskii-Moriya interaction (iDMI), which chirally couples the spins [32; 33; 34; 35; 36; 37]. Moreover, the emergence of iDMI can also be triggered via the strong SOC of the HM layer, and it hence plays a significant role in the formation of magnetic textures, such as chiral domains [38], magnetic skyrmions [39; 40], and Néel-type domain walls [41]. Recent studies suggest that the RKKYI counteracts the adverse effects of iDMI in STT-induced switching [24; 25; 26]. Consequently, it is important to comprehend how SOC, RKKYI, and iDMI collectively influence the STT-induced spin dynamics of AFM-based MTJs. AFM-based MTJs require low STT to switch between different resistance states [42; 43] and also have improved thermal stability [31; 44]. Thus, these devices are more energy-efficient and suitable for high-temperature operation. Moreover, this feature also enables higher data storage density in MRAM [31; 45]. Apart from that, AFM-based MTJs have potential applications in spin-transfer oscillators for microwave signal generation due to their low STT and stability [46]. Numerous studies have been done to investigate the mechanism behind SOT-induced Néel vector switching and to gain a better understanding of the AFM dynamics in various hybrid structures considering Néel SOT [47; 48; 49; 50; 51; 52; 53]. Efforts have been made
2303.00602
Numerical Simulations of a Spin Dynamics Model Based on a Path Integral Approach
Inspired by path integral molecular dynamics, we build a spin model, in terms of spin coherent states, from which we can compute the quantum expectation values of a spin in a constant magnetic field, at finite temperature. This formulation facilitates the description of a discrete quantum spin system in terms of a continuous classical model and recasts the quantum spin effects within the framework of path integrals in a double $1/s$ and $\hbar s$ expansion, where $s$ is the magnitude of the spin. In particular, it allows for a much more direct path to the low- and high-temperature limits of the quantum system and to the definition of effective classical Hamiltonians that describe both thermal and quantum fluctuations. In this formalism, the quantum properties of the spins emerge as an effective anisotropy. We use atomistic spin dynamics to sample the path integral, calculate thermodynamic observables and show that our effective classical models can reproduce the thermal expectation values of the quantum system within temperature ranges relevant for studying magnetic ordering.
Thomas Nussle, Stam Nicolis, Joseph Barker
2023-03-01T15:51:32Z
http://arxiv.org/abs/2303.00602v3
# Numerical Simulations of a Spin Dynamics Model Based on a Path Integral Approach

###### Abstract

Inspired by path integral molecular dynamics, we build a spin model, in terms of spin coherent states, from which we can compute the quantum expectation values of a spin in a constant magnetic field. This formulation facilitates the description of a discrete quantum spin system in terms of a continuous classical model and recasts the quantum spin effects within the framework of path integrals. In particular, it allows for a much more direct path to the low- and high-temperature limits and to the definition of effective classical Hamiltonians. In this formalism, the quantum properties of the spins are described through an effective anisotropy. To check this, we solve the effective classical model using atomistic spin dynamics, calculate thermodynamic observables, and show that our effective classical models can reproduce accurate quantum expectation values within the relevant temperature ranges.

## I Introduction

Spin models of magnetic materials are usually either quantum or classical in terms of the elementary building blocks on which they are based. In quantum spin models, the spin states belong to the quantum space of states that includes all linear superpositions of the eigenstates of \(\hat{S}_{z}\) and the spin variables are quantum operators, whereas in classical spin models, the 'spins' are actually magnetic moments of fixed length. Even though there are semi-classical models which describe quantum models in terms of at least "partially classical" systems, the computed quantities are, in the end, classical objects.

Quantum models allow accurate calculation of both thermodynamics and dynamics, which intrinsically include purely quantum effects such as entanglement and quantum fluctuations. However, the size of systems that can be studied is often limited to tens or hundreds of spins due to the large computational cost, as solving quantum problems exactly amounts to diagonalization of larger and larger matrices, and even approximation schemes thereof suffer from scaling issues. Numerical methods, such as quantum Monte Carlo (QMC), allow calculations of very large quantum spin systems (hundreds of thousands of spins) with very high accuracy. However, there is no access to dynamical quantities, as QMC is intrinsically a description of thermodynamics, where time is absent. Other quantum methods which do provide access to real-time dynamics cannot provide results for such large systems. Additionally, fundamental issues also arise, such as the 'sign problem' in the case of antiferromagnets, since the Hubbard-Stratonovich transformation leads to an effective Hamiltonian that is not Hermitian although the evolution operator is unitary [1].

Classical spin models are frequently used to study the dynamics and thermodynamics of magnetic materials, helping to interpret experiments at "high" temperatures, where quantum effects, such as entanglement, can be neglected. The computational cost is relatively low, and the formalism is easy to parallelize, leading to routine simulations of the dynamics of hundreds of thousands or even millions of spins. While these classical models give a good qualitative description of the magnetic dynamics, issues arise at lower temperatures, where the assumption of classical Boltzmann statistics is no longer appropriate.
The magnon Debye temperature tends to be very high and of the same size as the magnetic ordering temperature, so the 'low-temperature' regime may cover most of the temperature range of magnetic ordering [2; 3]. Recent efforts have been made to introduce ad hoc corrections to classical spin models to produce results that more closely resemble quantum models and to better agree with experimental measurements [4; 5; 6; 7; 8]. However, these approaches are incapable of including quantum effects, such as tunneling between macroscopic states or zero-point fluctuations. These quantum effects are becoming relevant at larger length scales and higher temperatures, for example, with the measurement of the motion of domain walls induced by quantum domain fluctuations in Cr up to 40 K [9]. Thus, what is still lacking is a dynamical quantum model whose accuracy can bridge the gap between a fully quantum simulation of a few atoms and an effective classical model and that enables simulations scalable to the size of spintronic device components of millions of spins.

Here, we describe a bridge between quantum and classical spin models by employing a path integral formalism for spin dynamics. This is inspired by path integral molecular dynamics [10], where the efficiency of classical molecular dynamics is used to calculate quantum properties, by establishing the appropriate evolution equations to move in the phase space of the quantum system and thus sample configurations therein [11]. However, how to take into account spin degrees of freedom and sample the corresponding phase space is by no means obvious. First attempts to do so [12], in particular for molecular magnets [13], express the spin degrees of freedom in terms of equivalent, though fictitious, position and momentum variables and use the known molecular dynamics formalism in this guise. Hence, these involve mapping the spin Hamiltonian to a particle Hamiltonian. This makes the interpretation of the results in terms of classical magnetic moments, the actual experimental observable, much less straightforward, and this mapping is difficult to build for more complex spin interactions. However, the real problem which we must overcome is that the space of positions and momenta is flat, while the space spanned by the spin degrees of freedom is curved. It is this problem that is solved by using the basis of spin coherent states [12]. While spin coherent states have been used in some quantum methods [14], these methods incur a non-trivial cost for large systems and are not well suited for extracting information on the individual (classical) spin components. We note that spin coherent states have also recently been used in methods to derive/rederive equations of motion for magnetization dynamics [15].

In this Article we consider the simplest nontrivial spin system: a single spin in an external magnetic field, described by the Zeeman Hamiltonian. We develop a formalism which uses the spin coherent states and the operators that act on them to solve some exact cases, and we compare the results obtained to numerical calculations performed with classical atomistic spin dynamics methods, augmented with a field which represents the quantized nature of the spins. We demonstrate that this formalism can take into account the quantum effects of the spin, across a broad range of temperatures, with deviations appearing only at "very low" temperatures, as expected by intuition.
## II From the classical spin states to the spin coherent states

In molecular dynamics, the dynamical variables of the quantum system take values in a flat space. This makes the application of path integrals using classical positions and momenta relatively straightforward. For spin systems, the dynamical variables, the components of spin, take values in a curved space and can only take discrete values due to the discrete spectrum of the spin Hamiltonian, whose eigenstates are \[\left\{\left|s,m\right\rangle\right\},\quad m\in\llbracket-s,s\rrbracket, \tag{1}\] where \(s\) is the principal quantum number and \(m\) labels all different possible states with this given spin \(s\). For example, with \(s=2\) there are \(2s+1=5\) eigenstates: \[\left\{\left|2,-2\right\rangle,\left|2,-1\right\rangle,\left|2,0\right\rangle,\left|2,1\right\rangle,\left|2,2\right\rangle\right\}. \tag{2}\] However, all possible states of a quantum system of spin \(s=2\) are linear combinations of these five states, i.e., they are described as \[\left|\psi\right\rangle=c_{-2}\left|2,-2\right\rangle+c_{-1}\left|2,-1\right\rangle+c_{0}\left|2,0\right\rangle+c_{1}\left|2,1\right\rangle+c_{2}\left|2,2\right\rangle \tag{3}\] The normalization of these states implies that the coefficients satisfy the constraint \[\left|c_{-2}\right|^{2}+\left|c_{-1}\right|^{2}+\left|c_{0}\right|^{2}+\left|c_{1}\right|^{2}+\left|c_{2}\right|^{2}=1, \tag{4}\] which defines a point on the unit sphere in ten dimensions, but the property that five phases can be modded out reduces this to a five-dimensional manifold. The real challenge is to sample this space efficiently. The partition function of this quantum spin system is the volume of this five-dimensional manifold, which is finite: \[\begin{split}&\mathcal{Z}=\int d^{2}c_{-2}d^{2}c_{-1}d^{2}c_{0}d^{2}c_{1}d^{2}c_{2}\\ &\delta(\left|c_{-2}\right|^{2}+\left|c_{-1}\right|^{2}+\left|c_{0}\right|^{2}+\left|c_{1}\right|^{2}+\left|c_{2}\right|^{2}-1).\end{split} \tag{5}\] Upon coupling the magnetic moment to a thermal bath, the partition function takes the form \[\begin{split}&\mathcal{Z}=\int d\psi\langle\psi|e^{-\beta H}|\psi\rangle=\\ &\int d^{2}c_{-2}d^{2}c_{-1}d^{2}c_{0}d^{2}c_{1}d^{2}c_{2}\\ &\delta(\left|c_{-2}\right|^{2}+\left|c_{-1}\right|^{2}+\left|c_{0}\right|^{2}+\left|c_{1}\right|^{2}+\left|c_{2}\right|^{2}-1)\,e^{-\beta H(c)},\end{split} \tag{6}\] with \(\beta=1/(k_{B}T)\), where \(k_{B}=1.381\times 10^{-23}\) J/K is the Boltzmann constant and \(T\) is the temperature in Kelvin.

From Eq. (6) it is not obvious how the dynamical behavior of the quantum system, defined over the full manifold, goes over to that of a classical system, localized on the five states \(\left\{\left|2,-2\right\rangle,\left|2,-1\right\rangle,\left|2,0\right\rangle,\left|2,1\right\rangle,\left|2,2\right\rangle\right\},\) in the "classical limit" and how this can be defined. This requires a careful discussion of what we mean by a 'quantum' system. On the one hand, we have the discrete basis of the eigenstates of the Hamiltonian, but on the other hand, we have the quantum superposition of states which leads to a continuous manifold of possible quantum states. Here, we emphasize that we are dealing with classical measurements of quantum systems, which means that the outcome of any single measurement can only be an eigenstate of our Hamiltonian, which is labeled by an integer for spin systems.
The prototype of this situation is the experiment by Stern and Gerlach [16], where, even though the possible quantum states of the electron can belong to a superposition, \[\left|\psi\right\rangle=a\left|\uparrow\right\rangle+b\left|\downarrow\right\rangle, \tag{7}\] such that \(|a|^{2}+|b|^{2}=1\), the outcome of the measurement of the experiment is either \(\left|\uparrow\right\rangle\) or \(\left|\downarrow\right\rangle\). This is in contrast to a classical measurement of the projection along the \(z\)-axis of a classical magnetic moment, for which a single measurement could take any value between \(+\mu_{s}\) and \(-\mu_{s}\), where \(\mu_{s}\) is the total magnetic moment. Thus, if our Hamiltonian is a function of \(\hat{S}_{z}\) only, then the partition function corresponding to the classical measurement of said quantum system is given as a sum over the eigenstates of this Hamiltonian, rather than an integral over the quantum manifold of states, \[\mathcal{Z}\equiv\mathrm{Tr}(e^{-\beta\hat{\mathcal{H}}})=\sum_{m=-s}^{s}\left\langle s,m\right|e^{-\beta\hat{\mathcal{H}}[\hat{S}_{z}]}\left|s,m\right\rangle. \tag{8}\] One way to sample the partition function over the quantum space of states is to recast the system in terms of the so-called spin coherent states [17]. Indeed, not only do the spin coherent states form a continuous basis for the spin system, enabling a mapping onto the continuous description in terms of a unit vector living on a sphere, but it has also been shown that their behavior is close to the classical limit [18]. Thus, they enable us to efficiently sample the manifold of quantum states, in a way that can offer hints as to the properties of the classical limit. The spin coherent states have previously been used to study fundamental aspects such as emerging supersymmetry in spin systems [19], semiclassical transition probabilities [20], and energy gap computations within mean-field quantum perturbation theory [21]. To use the spin coherent states, we work as follows: for a given quantum spin number \(s,\) we set \[\left|p\right\rangle\equiv\left|s,s-p\right\rangle, \tag{9}\] where \(p\in\{0,1,\ldots,2s-1,2s\}\), using the labeling introduced above, and we define the spin coherent states \(\left|z\right\rangle,\) labeled by a complex number \(z,\) by the action of the lowering operator [22], \(\hat{S}_{-}=\hat{S}_{x}-i\hat{S}_{y}\), as \[\left|z\right\rangle\equiv\left(1+\left|z\right|^{2}\right)^{-s}\mathrm{e}^{z\hat{S}_{-}/\hbar}\left|0\right\rangle, \tag{10}\] where the \(1/\hbar\) factor is a bookkeeping device needed to keep the exponential dimensionless. The action of \(\hat{S}_{+}\), \(\hat{S}_{-}\) and \(\hat{S}_{z}\) on \(\left|p\right\rangle\) produces \[\hat{S}_{-}\left|p\right\rangle =\hbar\sqrt{(2s-p)(p+1)}\left|p+1\right\rangle \tag{11}\] \[\hat{S}_{+}\left|p\right\rangle =\hbar\sqrt{p(2s-p+1)}\left|p-1\right\rangle\] \[\hat{S}_{z}\left|p\right\rangle =\hbar(s-p)\left|p\right\rangle.\] The expression in (10) is equivalent to \[\left|z\right\rangle\equiv\left(1+\left|z\right|^{2}\right)^{-s}\sum_{p=0}^{2s}\binom{2s}{p}^{1/2}\,z^{p}\left|p\right\rangle, \tag{12}\] which, as we shall see, is more convenient for computing the action of spin operators on the spin coherent states.
In this basis, we can write the partition function (8) as an integral over the complex label \(z\) for the spin coherent states as \[\mathcal{Z}=\int d\mu(z)\left\langle z\right|e^{-\beta\hat{\mathcal{H}}}\left|z\right\rangle \tag{13}\] where the measure must be properly normalized as \(\int d\mu(z)\left|z\right\rangle\left\langle z\right|=1\). In this case \[d\mu(z)=\frac{2s+1}{\pi}\frac{dz}{\left(1+\left|z\right|^{2}\right)^{2}}. \tag{14}\] To study the quantum system close to the classical limit, we must calculate the matrix elements of \(\hat{S}_{z}\) and its powers on the states \(\left|z\right\rangle\). The first two powers are \[\left\langle z\right|\hat{S}_{z}\left|z\right\rangle =\hbar s\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}} \tag{15}\] \[\left\langle z\right|\hat{S}_{z}^{2}\left|z\right\rangle =\left(\hbar s\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}}\right)^{2}+2\hbar^{2}s\frac{\left|z\right|^{2}}{(1+\left|z\right|^{2})^{2}}. \tag{16}\] In general, it can be shown that the higher-order terms are all of the form \[\left\langle z\right|\hat{S}_{z}^{k}\left|z\right\rangle=\left(\hbar s\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}}\right)^{k}+\text{noncommutative terms}. \tag{17}\] The first term is the leading term in the classical limit. Noncommutative terms occur because \(\hat{S}_{z}\) and \(\hat{S}_{\pm}\) do not commute. The second term in (16) is an example, but there is no general closed expression for the correction of higher-order powers. These noncommutative terms describe the contribution of the curvature of the sphere of quantum states, essentially the difference in the trajectory between states on a flat surface compared to a curved surface. However, the noncommutative terms are always of the same order in \(\hbar\) as the leading term. Thus neglecting the noncommutative terms does not simply correspond to the semi-classical \(\hbar\) expansion and needs to be justified differently. We remark that by setting \(\text{s}\equiv\hbar s\), equations (15) and (16) can be written as \[\left\langle z\right|\hat{S}_{z}\left|z\right\rangle =\hbar s\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}}=\text{ s}\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}} \tag{18}\] \[\left\langle z\right|\hat{S}_{z}^{2}\left|z\right\rangle =\left(\hbar s\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}}\right)^{2}+2\hbar^{2}s\frac{\left|z\right|^{2}}{(1+\left|z\right|^{2})^{2}}=\] \[\text{s}^{2}\left\{\left(\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}}\right)^{2}+2\frac{1}{s}\frac{\left|z\right|^{2}}{(1+\left|z\right|^{2})^{2}}\right\},\] which highlights the property that the noncommutative terms, which are sensitive to the curvature of the manifold of spin superpositions, are of higher order in a \(1/s\) expansion, and that the operators that have a sensible large-spin, i.e. semi-classical, limit are \(\hat{S}_{z}^{k}/\text{s}^{k}\). Indeed, this limit entails taking \(\hbar\to 0,\) \(s\rightarrow\infty\) while keeping the product \(\text{s}\equiv\hbar s\) fixed. In our case, which terms are to be neglected will depend on both this first expansion and the \(\beta\) series expansion of \(e^{-\beta\hat{\mathcal{H}}}\) in the partition function. Since the second expansion is in powers of \(\beta\), it is a high-temperature expansion.
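The matrix elements (15)-(17) are straightforward to check numerically. The following is a minimal sketch in Python (the language of the code released with this paper), assuming \(\hbar=1\) and representing the spin operators as matrices in the \(\left|p\right\rangle\) basis of Eq. (9); the function names are ours, for illustration only.

```python
import numpy as np
from scipy.special import comb

def spin_ops(s):
    """Matrices of S_z and S_-/S_+ in the |p> = |s, s-p> basis of Eq. (9),
    with hbar = 1, implementing the ladder actions of Eq. (11)."""
    dim = int(round(2 * s)) + 1
    p = np.arange(dim)
    Sz = np.diag(s - p)
    Sm = np.zeros((dim, dim))
    # S_- |p> = sqrt((2s - p)(p + 1)) |p + 1>
    Sm[p[:-1] + 1, p[:-1]] = np.sqrt((2 * s - p[:-1]) * (p[:-1] + 1))
    return Sz, Sm, Sm.T   # S_+ is the transpose of S_- (real entries)

def coherent_state(z, s):
    """Spin coherent state |z> from Eq. (12), as a coefficient vector."""
    p = np.arange(int(round(2 * s)) + 1)
    return np.sqrt(comb(2 * s, p)) * z**p / (1 + abs(z) ** 2) ** s

s, z = 2, 0.3 + 0.4j
Sz, Sm, Sp = spin_ops(s)
psi = coherent_state(z, s)
x = abs(z) ** 2
nz = (1 - x) / (1 + x)
assert np.isclose(np.vdot(psi, psi).real, 1.0)                # normalization
assert np.isclose(np.vdot(psi, Sz @ psi).real, s * nz)        # Eq. (15)
assert np.isclose(np.vdot(psi, Sz @ Sz @ psi).real,
                  (s * nz) ** 2 + 2 * s * x / (1 + x) ** 2)   # Eq. (16)
```

The same matrices can be reused to evaluate any \(\left\langle z\right|\hat{S}_{z}^{k}\left|z\right\rangle\) and hence the noncommutative corrections to arbitrary order.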
However, it is not a Taylor expansion around a given value; thus, higher orders of \(\beta\) will improve the temperature range and convergence towards the quantum solution, and technically going to an infinite order in \(\beta\) yields the exact quantum solution. When ignoring the noncommutative terms, we can rewrite the first term on the right-hand side of (17) as an exponential series \[\sum_{k=0}^{\infty}\frac{1}{k!}\left\langle z\right|\hat{S}_{z}^{k}\left|z\right\rangle\approx\exp\left(\hbar s\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}}\right), \tag{19}\] and this will be shown later to yield the classical limit. We now define the Hamiltonian for a single spin (whose electromagnetic properties will be described by its \(g\)-factor) in an applied magnetic field that is constant along the \(z\)-direction, \[\hat{\mathcal{H}}=-\frac{g\mu_{\text{B}}}{\hbar}\hat{S}_{z}B_{z}. \tag{20}\] For the electron, \(g\approx 2.002=\left|g_{\text{e}}\right|\) is the absolute value of the electron \(g\)-factor, \(\mu_{\text{B}}=9.274\times 10^{-24}\) J/T is the Bohr magneton, \(\hbar=1.05457182\times 10^{-34}\) J\(\cdot\)s is the reduced Planck constant and \(B_{z}\) is the applied magnetic field in Tesla. Choosing a fixed field direction (which can always be taken to be along \(z\)) simplifies the calculation by reducing the noncommutativity as we work with the exponential of operators. To compute the partition function, we again express the exponential as a series \[\exp\left(-\beta\hat{\mathcal{H}}\right)=\sum_{k=0}^{\infty}\frac{1}{k!}\left(\beta\frac{g\mu_{\text{B}}}{\hbar}\hat{S}_{z}B_{z}\right)^{k}, \tag{21}\] and compute the matrix elements \(\left\langle z\right|\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle\), which, using equation (19), can be approximated by \[\left\langle z\right|\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle\approx\sum\limits_{k=0}^{\infty}\frac{1}{k!}\left(\beta\frac{g\mu_{\mathrm{B}}}{\hbar}\right)^{k}\left(\hbar s\frac{1-|z|^{2}}{1+|z|^{2}}\right)^{k}B_{z}^{k}. \tag{22}\] Thus, the matrix elements take the simple form \[\left\langle z\right|\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle\approx\exp\left(\beta g\mu_{\mathrm{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}\right). \tag{23}\] The complex value \(z\) (and its conjugate \(\bar{z}\)) can then be mapped onto a unit 2-sphere by defining a unit _spin coherent state vector_[23], \(\mathbf{n}\), with components \[n_{x} =\frac{z+\bar{z}}{1+|z|^{2}} \tag{24}\] \[n_{y} =-i\frac{z-\bar{z}}{1+|z|^{2}}\] \[n_{z} =\frac{1-|z|^{2}}{1+|z|^{2}},\] and using this we can rewrite the matrix elements (23) as \[\left\langle z\right|\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle\approx\exp\left(\beta g\mu_{\mathrm{B}}B_{z}sn_{z}\right). \tag{25}\] This leads immediately to the definition of an equivalent classical Hamiltonian \[\mathcal{H}_{\text{eff}}=-g\mu_{\mathrm{B}}B_{z}sn_{z}=-\mu_{s}\mathbf{B}\cdot\mathbf{S}, \tag{26}\] where we identify \(\mathbf{S}=\mathbf{n}\) as the classical spin vector (magnetic moment) with length \(\mu_{s}=sg\mu_{\mathrm{B}}\). This recovers the classical precession of a magnetic moment in a magnetic field. Therefore, dropping the noncommutative terms yields the expected classical limit of this quantum system. We emphasize that _all_ the powers of \(\hat{S}_{z}^{k}\) are needed to recover the classical limit; only the noncommutative terms have been dropped.
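As a concrete check of the approximation (23), one can compare the exact matrix element \(\left\langle z\right|\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle\), evaluated as a finite sum over the \(\left|p\right\rangle\) basis, with the classical-limit exponential. A hedged sketch (\(\hbar=1\) for \(\hat{S}_{z}\), SI values otherwise; the chosen \(z\), \(s\), and \(T\) are arbitrary test values):

```python
import numpy as np
from scipy.special import comb

kB, muB, g = 1.380649e-23, 9.274e-24, 2.002   # SI values
s, Bz, T = 2.0, 1.0, 2.0                      # illustrative test values
beta, b = 1 / (kB * T), g * muB * Bz

z = 0.5 + 0.2j
x = abs(z) ** 2
nz = (1 - x) / (1 + x)

# Exact matrix element <z|exp(-beta H)|z>, as a finite sum over |p> (hbar = 1)
p = np.arange(int(2 * s) + 1)
exact = np.sum(comb(2 * s, p) * x**p * np.exp(beta * b * (s - p))) / (1 + x) ** (2 * s)

# Classical-limit form of Eq. (23): all noncommutative terms dropped
approx = np.exp(beta * b * s * nz)

print(exact, approx)
```

The discrepancy between the two numbers is the accumulated effect of the noncommutative terms and shrinks as \(s\) grows.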
As we go to the large-spin limit, since the radius of the sphere is proportional to \(1/s\), it becomes smaller and smaller, which justifies neglecting these terms. The vector \(\mathbf{n}\) defined by the spin coherent states plays the role of the spin unit vector, which is commonly used in classical Heisenberg spin models. Thus, not only does the spin coherent state basis provide us with a continuous (integral) description of the quantum system, but it also yields a straightforward interpretation of the quantum system (described by its states and operators) in terms of the continuous classical system (described by the magnetization vector). We shall now use the partition function in the spin coherent state basis to compute expectation values for the quantum spin Hamiltonian, close to the classical limit, by performing an expansion in increasing orders of \(\beta\). We shall then compare these results to direct numerical calculations.

## II Partition function and expectation values

The expectation value of an operator \(\hat{O}\) for the discrete quantum spin system is \[\left\langle\hat{O}\right\rangle=\frac{\sum\limits_{m=-s}^{s}\left\langle s,m\right|\hat{O}\exp(-\beta\hat{\mathcal{H}})\left|s,m\right\rangle}{\sum\limits_{m=-s}^{s}\left\langle s,m\right|\exp(-\beta\hat{\mathcal{H}})\left|s,m\right\rangle}, \tag{27}\] where the denominator is the partition function (8). In the spin coherent state basis, the expectation value is expressed in terms of integrals, rather than sums, _viz._ \[\left\langle\hat{O}\right\rangle=\frac{\int d\mu(z)\left\langle z\right|\hat{O}\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle}{\int d\mu(z)\left\langle z\right|\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle}. \tag{28}\] As mentioned above, the spin coherent states are not eigenstates of \(\hat{S}_{z}\), making the exponentiation more subtle. The action of the exponential of \(\hat{S}_{z}\) on \(\left|s,m\right\rangle\) simply yields the exponentiation of the eigenvalue \[e^{\hat{S}_{z}/\hbar}\left|s,m\right\rangle=e^{m}\left|s,m\right\rangle; \tag{29}\] but in the spin coherent state basis, we cannot exactly compute the action and must resort to approximations such as the \(1/s\) expansion and the high- and low-temperature expansions. We proceed by calculating the expectation value \(\left\langle\hat{S}_{z}\right\rangle\) as a function of temperature with the Zeeman Hamiltonian (20). This is known to be qualitatively different for classical and quantum spin models due to spin quantisation [24]. The expectation value \(\left\langle\hat{S}_{z}\right\rangle\) can be identified with the magnetization induced by an external field (in the limit when the exchange interaction can be neglected). The exact quantum expectation value, calculated from the discrete basis, where the action of \(\hat{S}_{z}\left|s,m\right\rangle=\hbar m\left|s,m\right\rangle\), gives \[\left\langle\hat{S}_{z}\right\rangle=\frac{\sum\limits_{m=-s}^{s}\hbar m\exp(\beta g\mu_{\mathrm{B}}mB_{z})}{\sum\limits_{m=-s}^{s}\exp(\beta g\mu_{\mathrm{B}}mB_{z})}. 
\tag{30}\] The expectation value in the classical limit is calculated with the spin coherent states using equation (28) and the approximation in equation (23) which neglects the terms proportional to powers of \(1/s\), yielding \[\left\langle\hat{S}_{z}\right\rangle\approx\hbar s\frac{\int dz\frac{1-|z|^{2}}{(1+|z|^{2})^{3}}\exp\left(\beta g\mu_{\mathrm{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}\right)}{\int dz\frac{1}{(1+|z|^{2})^{2}}\exp\left(\beta g\mu_{\mathrm{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}\right)}. \tag{31}\] Using these expressions for the discrete quantum model (30) and the classical limit of the spin coherent state (31), we plot the expectation value \(\left\langle\hat{S}_{z}\right\rangle\) as a function of temperature in Figure 1. Neglecting the terms due to the non-commutativity of \(\hat{S}_{z}\) and \(\hat{S}_{\pm}\), i.e. working to leading order in the \(1/s\) expansion, means the representation by the spin coherent states produces the classical limit (blue solid line), as expected, with an immediate decay of the spin alignment with the external field as soon as the temperature is non-zero. Equation (31) is, in fact, identical to the expectation value \(\left\langle S_{z}\right\rangle\) of a classical spin, as is expected from Ehrenfest's theorem, a useful sanity check (see Appendix A). In the quantum case (red solid line) the expectation value remains almost flat at low temperatures and displays a slower characteristic decay around the zero-temperature value, along with an initial inflection point that is expected on general grounds [7]. These characteristic differences between quantum and classical models of single spins are well known and well studied. Of practical interest is that we can obtain an _intermediate_ approximation for the quantum expectation value by retaining terms related to the commutation of operators. This pulls quantum features into the classical model in a rigorous manner. To do this, the exponential functions in the spin coherent state expectation value (28) must be expanded as a series in \(\beta\), \[\exp\left(\beta\frac{g\mu_{\mathrm{B}}B_{z}}{\hbar}\hat{S}_{z}\right)\approx 1+\beta\frac{g\mu_{\mathrm{B}}B_{z}}{\hbar}\hat{S}_{z}+\frac{1}{2}\left(\beta\frac{g\mu_{\mathrm{B}}B_{z}}{\hbar}\hat{S}_{z}\right)^{2}+\ldots \tag{32}\] Higher-order terms beyond \(\hat{S}_{z}\) contain the effects of the noncommutativity of operators, as seen in (16), and we now include these terms as we evaluate the expectation value. We calculate \(\left\langle\hat{S}_{z}\right\rangle\) in the spin coherent state basis for increasing orders in the \(\beta\) expansion, which includes the terms due to the noncommutativity of \(\hat{S}_{z}\) to higher orders. The results are shown with dashed lines in Figure 1. '1 correction term' includes \(\hat{S}_{z}^{2}\), '2 correction terms' includes \(\hat{S}_{z}^{3}\), and so on. We see that including even the first noncommuting term in this expansion yields a solution that is already significantly different from the classical result and close to the quantum solution at temperatures of the order of \(1\) K and above. The agreement improves as the temperature increases, as expected for an expansion in powers of \(\beta\). Evaluating to higher orders in \(\beta\) causes the expectation value to converge more quickly to the quantum solution (Figure 1), thus producing a continuous description of the discrete quantum system, which is one of our main objectives. For _very low_ temperatures, close to \(0\) K, the approximation as a power series in \(\beta\) breaks down and diverges because \(\beta\) is the inverse of the temperature.
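The two baseline curves of Figure 1 can be reproduced in a few lines. A sketch, using the reduction of the classical-limit integral (31) to the Langevin function (shown in Appendix A); the function names are illustrative:

```python
import numpy as np

kB, muB, g = 1.380649e-23, 9.274e-24, 2.002
Bz, s = 1.0, 0.5

def Sz_quantum(T):
    """Exact discrete sum, Eq. (30), in units of hbar."""
    beta = 1 / (kB * T)
    m = np.arange(-s, s + 1)
    w = np.exp(beta * g * muB * m * Bz)
    return np.sum(m * w) / np.sum(w)

def Sz_classical(T):
    """Classical limit, Eq. (31): the z-integral reduces to the Langevin
    function of a = beta g muB Bz s in the variable n_z."""
    a = g * muB * Bz * s / (kB * T)
    return s * (1 / np.tanh(a) - 1 / a)

for T in (0.5, 1.0, 5.0):
    print(T, Sz_quantum(T), Sz_classical(T))
```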
We emphasize, however, that already at first order in \(\beta\), this semi-classical model accurately captures the salient features of the thermal spin statistics of the quantum system at temperatures of the order of \(1\) K. Next, we build a numerical sampling technique for this path integral based on classical spin dynamics.

Figure 1: Expectation value \(\left\langle\hat{S}_{z}\right\rangle\) for spin \(s=1/2\) as a function of temperature. Red solid line - the exact quantum solution in the discrete spin basis \(\left|s,m\right\rangle\) from Eq. (30). Blue solid line - the classical limit of the spin coherent state basis from Eq. (31). Dashed lines are successive corrections to the partition function to include noncommutative terms such as appear in Eq. (16). The applied field is \(B_{z}=1\) T for all figures.

## III Effective Hamiltonian and atomistic spin dynamics

### Low-temperature expansion of the matrix elements

Building a classical Hamiltonian dynamics model to emulate a quantum system, expressed in the spin coherent states basis, requires finding an effective classical Hamiltonian \(\mathcal{H}_{\text{eff}}\) which approximates \(\left\langle z\right|\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle\) as \(\exp(-\beta\mathcal{H}_{\text{eff}})\). By finding such an approximate expression, we recast the quantum system with partition function (8) into an effective classical system with partition function \[\begin{split}\mathcal{Z}&=\int d\mu(z)\left\langle z\right|\exp(-\beta\hat{\mathcal{H}})\left|z\right\rangle\\ &\approx\int d\tilde{\mu}(z)\exp(-\beta\mathcal{H}_{\text{eff}}),\end{split} \tag{33}\] where \(\mathcal{H}_{\text{eff}}\) yields the same expectation values as for the quantum case and \(\tilde{\mu}(z)\) describes a potentially enlarged, higher-dimensional, phase space, as is the case in path integral molecular dynamics approaches [25]. We consider the partition function with the first commutation correction (16), and seek an expression such that \[\begin{split}\exp\left(-\beta\mathcal{H}_{\text{eff}}\right)&\approx\exp\left(\beta g\mu_{\text{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}\right)\\ &\quad+\left(\beta g\mu_{\text{B}}B_{z}\right)^{2}\frac{s|z|^{2}}{\left(1+|z|^{2}\right)^{2}},\end{split} \tag{34}\] where the first term on the right-hand side is the classical limit and the second term is the first noncommutation term which appears on the right-hand side of (16). We ignore all higher-order noncommutation terms in \(\left\langle z\right|\hat{S}_{z}^{k}\left|z\right\rangle\) beyond \(k=2\). This is the same level of approximation used in '1 correction term' in Fig. 1. As a first and very coarse approximation (for more details, see appendix B) we take \[\mathcal{H}_{\text{eff}}^{\text{low-T}}=-g\mu_{\text{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}+g\mu_{\text{B}}B_{z}\frac{\sqrt{2s}|z|}{1+|z|^{2}}, \tag{35}\] which, written in terms of the spin coherent state vector \(\mathbf{n}\), is \[\mathcal{H}_{\text{eff}}^{\text{low-T}}=-g\mu_{\text{B}}B_{z}sn_{z}+\tfrac{1}{2}g\mu_{\text{B}}B_{z}\sqrt{2s}\sqrt{1-n_{z}^{2}}. \tag{36}\] The first term is again the purely classical Zeeman Hamiltonian (26). The second term arises due to the quantization of spin and energetically favors the spin to align with the quantization axis (\(z\)). It has a form similar to magnetocrystalline anisotropy, but its origin is the quantum behavior of the spin rather than any physical interaction. We will refer to this term as \(\mathcal{H}_{\text{Qeff}}\).
To calculate the classical expectation values of this effective Hamiltonian, we use the techniques of atomistic spin dynamics (ASD) [26; 27; 28; 29; 30]. This is usually used to model the dynamics of localized spin magnetic moments \(\mathbf{\mu}=\mu_{s}\mathbf{S}\) where \(\mathbf{S}\) is a unit vector and \(\mu_{s}=gs\mu_{B}\) is the size of the spin magnetic moment. The moments interact with a local effective magnetic field \(\mathbf{B}_{\text{eff}}\) obtained from a Hamiltonian \(\mathcal{H}_{\text{eff}}\) that encodes the different magnetic interactions of the system. Here we will retain our use of the vector \(\mathbf{n}\) rather than \(\mathbf{S}\) to emphasize that we are solving the dynamics of the spin coherent state vector rather than making an _a priori_ assumption of classical spin magnetic moments. Calculations of the thermodynamic quantities of classical spins can be performed with ASD or Monte Carlo calculations, but ASD is trivial to parallelize across large ensembles of spins, allowing efficient calculation as well as the ability to calculate real-time dynamics. The classical spin dynamics is described by the Landau-Lifshitz-Gilbert (LLG) equation of motion \[\dot{\mathbf{n}}=-\frac{\gamma}{1+\alpha^{2}}\left(\mathbf{n}\times\mathbf{B}_{\text{eff}}+\alpha\mathbf{n}\times\left(\mathbf{n}\times\mathbf{B}_{\text{eff}}\right)\right), \tag{37}\] where \(\gamma\) is the gyromagnetic ratio in \(\text{rad}\cdot\text{s}^{-1}\cdot\text{T}^{-1}\), \(\alpha\) is a dimensionless damping parameter, and the effective field \(\mathbf{B}_{\text{eff}}\) in Tesla is calculated as \[\mathbf{B}_{\text{eff}}=-\frac{1}{\mu_{s}}\mathbf{\nabla}_{\mathbf{n}}\mathcal{H}. \tag{38}\] Thus, the field from our effective Hamiltonian (36) is \[\mathbf{B}_{\text{eff}}^{\text{low-T}}=B_{z}\mathbf{e}_{z}+\frac{\sqrt{2}}{2\sqrt{s}}B_{z}\frac{n_{z}}{\sqrt{n_{x}^{2}+n_{y}^{2}}}\mathbf{e}_{z}, \tag{39}\] where \(\mathbf{e}_{z}\) is the unit vector along \(z\). This expression is apparently singular for \(n_{z}=\pm 1\); this singularity simply indicates that the magnetic field does not have any effect on a moment that is aligned with it; we realize, indeed, that such an initial condition, which must be treated separately, is very improbable at any finite temperature. Temperature is included in the formalism by adding a stochastic field \(\mathbf{B}_{\text{eff}}\rightarrow\mathbf{B}_{\text{eff}}+\mathbf{\eta}\) that turns the Landau-Lifshitz-Gilbert equation of motion (37) into a Langevin equation. The stochastic field is defined through the fluctuation-dissipation theorem, which in the classical case requires \(\mathbf{\eta}\) to be a white noise with the properties \[\begin{split}\langle\eta_{i}(t)\rangle&=0\\ \langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle&=\frac{2\alpha\delta_{ij}\delta(t-t^{\prime})}{\beta\mu_{s}\gamma},\end{split} \tag{40}\] where \(i,j\) are Cartesian components. Recently, stochastic fields using the quantum fluctuation-dissipation theorem have been used, enforcing a Bose-Einstein statistical distribution for the noise [2]. This assumes that the relevant thermally occupied objects in this case are magnons, which should obey Planck statistics. Here, our work differs in that the quantum nature of the spin will be included directly in the effective field without making any assumption about the statistical distribution. We numerically integrate the LLG equation (37) using a symplectic integration scheme [31] with a timestep of \(0.05\) ps.
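For concreteness, a minimal sketch of such a stochastic LLG integrator with the low-temperature effective field (39) is given below. We use a Heun predictor-corrector step rather than the symplectic scheme of [31], and the parameter values (damping, run lengths) are illustrative assumptions, not the production settings.

```python
import numpy as np

rng = np.random.default_rng(0)
kB, muB, g, gamma = 1.380649e-23, 9.274e-24, 2.002, 1.761e11  # SI units
s, Bz, T, alpha = 0.5, 1.0, 1.0, 0.5                          # illustrative
mu_s = g * s * muB
dt = 5e-14                                                    # 0.05 ps, as in the text

def field(n):
    """Effective field of Eq. (39); the small epsilon guards the n_z = +/-1
    singularity discussed in the text."""
    bq = np.sqrt(2) / (2 * np.sqrt(s)) * Bz * n[2] / (np.hypot(n[0], n[1]) + 1e-12)
    return np.array([0.0, 0.0, Bz + bq])

def rhs(n, B):
    """Right-hand side of the LLG equation (37)."""
    return -gamma / (1 + alpha**2) * (np.cross(n, B) + alpha * np.cross(n, np.cross(n, B)))

# White-noise amplitude per step from the fluctuation-dissipation relation (40)
sigma = np.sqrt(2 * alpha * kB * T / (mu_s * gamma * dt))

n, acc = np.array([1.0, 0.0, 0.0]), 0.0
nburn, nsteps = 50_000, 200_000
for step in range(nsteps):
    xi = sigma * rng.standard_normal(3)       # same noise in both Heun stages
    B = field(n) + xi
    npred = n + dt * rhs(n, B)
    npred /= np.linalg.norm(npred)
    n = n + 0.5 * dt * (rhs(n, B) + rhs(npred, field(npred) + xi))
    n /= np.linalg.norm(n)                    # keep |n| = 1
    if step >= nburn:
        acc += n[2]
print("<n_z> ~", acc / (nsteps - nburn))      # estimator of Eq. (41) with N_s = 1
```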
The expectation values from the numerical method are calculated as averages over time and multiple realizations of the stochastic dynamics \[\langle S_{z}\rangle=\frac{1}{N_{s}}\frac{1}{N_{t}}\sum_{i=1}^{N_{s}}\sum_{t=1}^{N_{t}}n_{i,z}(t), \tag{41}\] where \(N_{s}\) is the number of independent spin trajectories and \(N_{t}\) is the number of time samples. The average in time is taken after an equilibration period where the system relaxes from the initial state to a thermalized state. The simulations performed here equilibrate within a few nanoseconds; therefore, we started the averaging procedure after an equilibration period of \(5\) ns. The averaging time is \(15\) ns and \(N_{s}=20\). From the effective Hamiltonian (35), we compute the expectation values for \(\hat{S}_{z}\) from the approximate partition function \[\langle\hat{S}_{z}\rangle\approx\frac{\int d\mu(z)\hbar\text{s}\frac{1-|z|^{2}}{1+|z|^{2}}\exp(-\beta\mathcal{H}_{\text{eff}})}{\int d\mu(z)\exp(-\beta\mathcal{H}_{\text{eff}})}, \tag{42}\] and compare these to the results we obtain from atomistic simulations of the same system. The results for different values of the principal quantum number \(s=1/2,\ 2,\ 5\) are shown in Figure 2. All three models, classical, quantum and the effective Hamiltonian (42), converge to the same values in the high-temperature limit. In Figure 2a for \(s=1/2\) the effective model only has small corrections to the classical model and the overall behavior is not close to the quantum solution. Only the slope at zero temperature shows any of the quantum behavior, with a small inflection point. This is a feature which several effective models have attempted to force artificially on the studied spin systems to reproduce the experimental behavior for magnetization curves [32]. However, our model does not impose any hypotheses on the system and has no fitting parameters. The additional computational cost of making the classical system more closely resemble its quantum avatar is minimal, requiring only the addition of a field that amounts to an effective anisotropy. Although this coarse approximation scheme provides results that are closer to the quantum results, there is no way to systematically improve it. For each higher-order commutation correction we must again try to derive an \(\mathcal{H}_{\text{eff}}\) ad hoc that satisfies equation (33). Therefore, we continue by developing a more systematic method for which computing the expectation values to higher orders of accuracy is straightforward.

### High-temperature spin coherent states expansion

The effective model in the previous section, produced by approximating the integrand of the partition function by an exponential, is very coarse but yields some quantum corrections at a very low computational cost. We now improve on this to try to recover a behavior more similar to the expansion of the partition function in Figure 1. We do this by including higher-order noncommutative terms in the expansion of \(\exp(-\beta\hat{\mathcal{H}})\) (21) in a more systematic way. We return to the partition function (13) and, similar to the path-integral molecular dynamics approaches, introduce the resolution of unity as \[\sum_{p=0}^{2s}\ket{p}\bra{p}=1, \tag{43}\] in the \(\ket{s,m}\) basis, in which \(\hat{S}_{z}\) is diagonal, resulting in \[\mathcal{Z}=\int\sum_{p=0}^{2s}d\mu(z)\bra{z}e^{\frac{\beta g\mu_{B}B_{z}}{\hbar}\hat{S}_{z}}\ket{p}\left\langle p|z\right\rangle. 
\tag{44}\] Using the definition of \(\ket{z}\) and the action of \(\hat{S}_{z}\) on \(\ket{p}\) we find \[\mathcal{Z}=\int d\mu(z)\left[e^{-\beta g\mu_{B}sB_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+|z|^{2}}{1+|z|^{2}}\right)^{2s}\right], \tag{45}\] for which we need to rewrite the integrand \[F[\beta,z]\equiv e^{-\beta g\mu_{B}sB_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+|z|^{2}}{1+|z|^{2}}\right)^{2s}, \tag{46}\] as a single exponential of the form \(F[\beta,z]\equiv\exp(-\beta\mathcal{H}_{\text{eff}})\) to identify a Hamiltonian from which to construct an effective model. Through a series of identities (see appendix C), we can write \[\begin{split} F[\beta,z]&=\exp\left\{2s\left[\ln(2)+\ln\left(\frac{|z|}{1+|z|^{2}}\right)\right.\right.\\ &\left.\left.+\ln\left(\cosh\left(\frac{\beta g\mu_{B}B_{z}}{2}-\ln\left(|z|\right)\right)\right)\right]\right\}.\end{split} \tag{47}\] We then approximate (47) with a Taylor expansion for \(\beta\to 0\). Thus, in the high-temperature limit (which we later find extends down to rather low temperatures), \[\begin{split}\ln(F[\beta,z])&\approx\frac{\left(1-|z|^{2}\right)\beta g\mu_{\text{B}}sB_{z}}{1+|z|^{2}}+\frac{|z|^{2}\beta^{2}\left(g\mu_{B}\right)^{2}sB_{z}^{2}}{\left(1+|z|^{2}\right)^{2}}\\ &-\frac{|z|^{2}\left(1-|z|^{2}\right)\beta^{3}\left(g\mu_{B}\right)^{3}sB_{z}^{3}}{3\left(1+|z|^{2}\right)^{3}}+\mathcal{O}(\beta^{4}).\end{split} \tag{48}\] Mapping to the spin coherent state vector components using \((1-|z|^{2})/(1+|z|^{2})=n_{z}\) and \(|z|^{2}/(1+|z|^{2})^{2}=(1-n_{z}^{2})/4\), we can write a temperature-dependent effective Hamiltonian: \[\begin{split}\mathcal{H}_{\text{eff}}^{\text{high-T}}\approx&-g\mu_{\text{B}}sB_{z}n_{z}-\frac{1}{4}\beta\left(g\mu_{B}\right)^{2}sB_{z}^{2}(1-n_{z}^{2})\\ &+\frac{1}{12}\beta^{2}\left(g\mu_{B}\right)^{3}sB_{z}^{3}n_{z}(1-n_{z}^{2}).\end{split} \tag{49}\] From the temperature-dependent Hamiltonian (49) and the definition of the effective field (38), we derive \[\mathbf{B}_{\text{eff}}^{\text{high-T}}=B_{z}-\frac{1}{2}\beta g\mu_{B}B_{z}^{2}n_{z}-\frac{1}{12}\beta^{2}\left(g\mu_{B}\right)^{2}B_{z}^{3}(1-3n_{z}^{2}). \tag{50}\] We use this effective field in numerical atomistic simulations and compare with the expectation values computed directly from the partition function (42) and the relevant terms, according to the order of the approximation, of the effective Hamiltonian (49). The results are shown in Figure 3.

Figure 2: Expectation value for \(\hat{S}_{z}\) as a function of temperature for the classical limit (green solid curve), quantum solution (red solid curve) and effective model (blue solid curve) from partition function. Equivalent results from enhanced atomistic spin dynamics simulation for classical limit (purple dashed curve) and effective model (orange dashed curve). (a) Top pane \(s=1/2\), (b) middle pane \(s=2\) and (c) bottom pane \(s=5\).

When we include only the first correction for the effective field, namely the first and second terms on the right-hand side of (49), then, contrary to the previous section (Figure 2), the low-temperature limit is far from both classical and quantum solutions. However, around \(1\) K, the results become very close to the quantum solution and converge to be almost identical as the temperature increases. Including higher-order terms (for example, using all the terms in (50)) we see that although at low temperatures the model moves further away from the quantum solution, the rate of convergence towards it is much faster.
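To see how good the truncation (49) is at a given temperature, one can compare it against the exact effective Hamiltonian \(-\ln F[\beta,z]/\beta\) built from the integrand (46). A sketch, with `b` standing for \(g\mu_{B}B_{z}\) and illustrative test values:

```python
import numpy as np

kB, muB, g = 1.380649e-23, 9.274e-24, 2.002
s, Bz, T = 2.0, 1.0, 2.0
beta, b = 1 / (kB * T), g * muB * Bz

def H_exact(z):
    """-(1/beta) ln F[beta, z], with F the exact integrand of Eq. (46)."""
    x = abs(z) ** 2
    return -(1 / beta) * (-beta * b * s + 2 * s * np.log((np.exp(beta * b) + x) / (1 + x)))

def H_highT(z):
    """Truncated temperature-dependent Hamiltonian, Eq. (49)."""
    x = abs(z) ** 2
    nz = (1 - x) / (1 + x)
    return (-b * s * nz
            - 0.25 * beta * b**2 * s * (1 - nz**2)
            + beta**2 * b**3 * s * nz * (1 - nz**2) / 12)

for z in (0.2, 0.5 + 0.5j, 2.0):
    print(z, H_exact(z), H_highT(z))   # differ only at O(beta^3) in ln F
```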
For the first correction, once close to the quantum solution, it takes a while before both curves are indistinguishable, and this happens much more quickly when including the second term (see the inset of Figure 3). As our approximation is computed to higher orders, the convergence becomes faster. We note that there is no reason why this high-temperature expansion should become valid at much lower temperatures as we go to higher orders. The second drawback that we have to deal with is that these expectation curves have to be normalized in order for the atomistic simulations to overlap with the direct computation from the partition function. Indeed, when we compute the expectation value for \(\langle\hat{S}_{z}\rangle\) we should be using an expression of the form of Eq. (28), \[\langle\hat{S}_{z}\rangle\approx\frac{\int d\mu(z)\left\langle z\right|\hat{S}_{z}\exp\left(\frac{\beta g\mu_{B}}{\hbar}B_{z}\hat{S}_{z}\right)\left|z\right\rangle}{\int d\mu(z)e^{-\beta\mu_{s}B_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+\left|z\right|^{2}}{1+\left|z\right|^{2}}\right)^{2s}}, \tag{51}\] but instead (see appendix D), we define \[\langle\hat{S}_{z}\rangle_{\text{app}}\equiv\frac{\int d\mu(z)\hbar s\frac{1-\left|z\right|^{2}}{1+\left|z\right|^{2}}e^{-\beta\mu_{s}B_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+\left|z\right|^{2}}{1+\left|z\right|^{2}}\right)^{2s}}{\int d\mu(z)e^{-\beta\mu_{s}B_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+\left|z\right|^{2}}{1+\left|z\right|^{2}}\right)^{2s}}. \tag{52}\] We know that in the quantum case given by Eq. (30), \(\langle\hat{S}_{z}\rangle_{\text{quantum}}\) goes to \(s\) as \(\beta\rightarrow\infty\). We can show that in the same limit, for Eq. (52), we have \[\langle\hat{S}_{z}\rangle_{\text{app}}\xrightarrow[\beta\rightarrow\infty]{}\frac{s^{2}}{s+1}, \tag{53}\] hence our expectation values need to be normalized by this factor to yield the correct results (see appendix D for more details). In summary, using this approximation scheme, we can compute expectation values for the quantum system from an equivalent classical atomistic simulation where the quantum nature of the system is represented by a _temperature-dependent_ effective field. In contrast to the previous section (III.1), these then need to be properly rescaled. However, we can compute a closed expression for this rescaling factor, which once again depends only on the principal quantum spin number \(s\). Once this step is carried out, the results are almost identical to the fully quantum expectation values for high enough temperatures, which are of the order of \(1\) K for the single spin in a magnetic field studied here. The low-temperature behavior of this scheme is not as well behaved as in Section III.1, which is not surprising, as this is a high-temperature expansion (see Appendix E).

## IV Conclusion

In this Article, we have built an effective, classical, dynamical model for quantum spin systems from a path integral approach inspired by path integral molecular dynamics, in the simplest case of a single spin of arbitrary size in a constant magnetic field described by a Zeeman Hamiltonian. While path integral models of spin have a long history and have been investigated in fundamental contexts such as supersymmetry or, more closely related to our work, molecular magnets, a systematic approach bridging the gap from small-size fully quantum simulations to large-scale dynamical simulations with quantum features has been lacking. Our work here is a first step in this direction.
We have started by expressing the partition function for spin systems in the spin coherent state basis to obtain a continuous description in terms of an integral rather than a sum, to make the connection to classical spin dynamics. This allows the use of highly efficient atomistic spin dynamics simulations for quantum spin systems and makes the connection between the quantum system, defined by its states and the Hamiltonian operator, and classical spin dynamics more explicit.

Figure 3: Expectation value for \(\hat{S}_{z}\) for \(s=2\) as a function of temperature for classical limit (green solid curve) and quantum solution (red solid curve) and effective model with the first correction (light blue solid curve) and second correction (dark blue solid curve) from partition function. Equivalent results from enhanced atomistic spin dynamics simulation for classical limit (purple dashed curve) and second effective model with first correction (orange dashed curve) and second correction (yellow dashed curve).

We then proceeded to expand the relevant matrix elements of the partition function in powers of \(\beta\) to compute the expectation values of \(\hat{S}_{z}\) directly from the partition function and from atomistic spin dynamics. Here, we have seen that in this first approximation this could be done very simply and efficiently by adding an anisotropic effective field, which could be directly inferred from the quantum spin number of the system. For small spin values, we have seen that the improvement is quite small but increases with the spin. Of course, spin \(s=1/2\) represents the most extreme limit of spin quantization. As the magnitude of the spin increases to \(s=2\) and \(s=5\) (Fig. 2b,c) the corrections in the effective model take the system closer to the quantum solution. Many magnetic materials of practical relevance have \(s\) in the range \(3/2\) to \(7/2\), so having an improved quantum description for these larger spin values is already very useful. We also investigated a different method of approximating the integrand of the partition function by an exponential, by allowing the effective Hamiltonian of the system to be explicitly temperature-dependent, yielding a temperature-dependent effective field for describing in this way the quantum nature of the system. This method proved to be more accurate for higher temperatures, above \(1\) K, than the low-temperature expansion, but with the drawback that the expectation values computed using this method require renormalization. However, this renormalization factor has a closed general expression that depends only on the quantum spin number \(s\) of the system. The next step we aim to investigate is the more general case of a time-dependent magnetic field. This introduces more noncommutativity issues with the operators \(\hat{S}_{x}\), \(\hat{S}_{y}\) and \(\hat{S}_{\pm}\). Beyond this, more complex Hamiltonians, including the exchange interaction and magnetocrystalline anisotropy in a quantum fashion, will allow the large-scale calculation of the thermodynamics of magnetic materials including quantum effects with a relatively low computational cost. In the present case of a constant magnetic field and for a single spin, we have seen that, in contrast to path integral methods for molecular dynamics, we did not need to introduce copies of the spin interacting with one another. We do not expect this to hold for more complex Hamiltonians.
## Data access

Python code and output data to reproduce all results and figures reported in this paper are openly available from the Zenodo repository: _Sources for: Numerical Simulations of a Spin Dynamics Model Based on a Path Integral Approach._ [https://doi.org/10.5281/zenodo.76889723](https://doi.org/10.5281/zenodo.76889723). The repository contains:

* Python code to generate analytic equations derived herein.
* Python code to perform enhanced atomistic spin dynamics calculations with the quantum effective fields.
* Python scripts to reproduce all figures.

The software and data are available under the terms of the MIT License.

## Author contributions

Thomas Nussle: conceptualization, methodology, investigation, software, writing - original draft. Stam Nicolis: methodology, writing - review and editing. Joseph Barker: conceptualization, methodology, software, data curation, writing - review and editing, funding acquisition.

## Acknowledgments

This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/V037935/1]. JB acknowledges funding from a Royal Society University Research Fellowship. The authors thank A. Sylla, F. Labey and T. Raujouan for very insightful mathematical discussions, as well as J. Hodrien and A. Coleman from the University of Leeds Research Computing team for their help with optimizing the Python code on which this work relies.

## Appendix A Correspondence of the spin coherent states with the classical limit

Here we show that the observable \(\langle\hat{S}_{z}\rangle\) from the spin coherent states with the commutators neglected (i.e. in the classical limit (31)) is identical to \(\langle S_{z}\rangle\) calculated from the classical Heisenberg model. For a classical Heisenberg spin with Hamiltonian \[\mathcal{H}=-\mu_{s}\mathbf{B}\cdot\mathbf{S}, \tag{A1}\] where \(\mathbf{S}\) lives on the unit sphere, the partition function is \[\mathcal{Z}=\int d\mathbf{S}\delta(\mathbf{S}^{2}-1)e^{-\beta\mathcal{H}}=\int d\mathbf{S}\delta(\mathbf{S}^{2}-1)e^{\beta\mu_{s}\mathbf{B}\cdot\mathbf{S}}, \tag{A2}\] for which the expectation value of the \(z\)-component of \(\mathbf{S}\) is given by \[\langle S_{z}\rangle=\frac{\int d\mathbf{S}\delta(\mathbf{S}^{2}-1)S_{z}e^{\beta\mu_{s}\mathbf{B}\cdot\mathbf{S}}}{\int d\mathbf{S}\delta(\mathbf{S}^{2}-1)e^{\beta\mu_{s}\mathbf{B}\cdot\mathbf{S}}}. \tag{A3}\] If the external field is constant along the \(z\)-direction then we have \[\langle S_{z}\rangle=\frac{\int dS_{z}S_{z}e^{\beta\mu_{s}B_{z}S_{z}}}{\int dS_{z}e^{\beta\mu_{s}B_{z}S_{z}}}, \tag{A4}\] as the integrals over \(S_{x}\) and \(S_{y}\) in the numerator and denominator cancel each other out. Comparing this to \(\langle\hat{S}_{z}\rangle\) for the spin coherent state (31) and using \(n_{z}=(1-|z|^{2})/(1+|z|^{2})\) and \(\mu_{s}S_{z}=gs\mu_{\text{B}}n_{z}\), we see that (A4) and (31) are identical up to a factor of \(\hbar\), as the classical spin vector has no units, whereas the quantum expectation value of \(\langle\hat{S}_{z}\rangle\) is in units of \(\hbar\).
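A quick numerical check of this equivalence: since the measure in (A4) is uniform in \(S_{z}\) on \([-1,1]\), the expectation value is the Langevin function \(\coth(a)-1/a\) with \(a=\beta\mu_{s}B_{z}\). A sketch with an arbitrary test value of \(a\):

```python
import numpy as np
from scipy.integrate import quad

a = 1.7   # a = beta * mu_s * B_z, arbitrary dimensionless test value

# <S_z> from Eq. (A4): uniform measure in S_z on [-1, 1]
num = quad(lambda u: u * np.exp(a * u), -1, 1)[0]
den = quad(lambda u: np.exp(a * u), -1, 1)[0]

# Closed form: the Langevin function coth(a) - 1/a
assert np.isclose(num / den, 1 / np.tanh(a) - 1 / a)
```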
## Appendix B Coarse approximation method

We expand the operator exponential series (21) up to second order in \(\beta\), \[\begin{split}\exp(-\beta\hat{\mathcal{H}})&\approx 1+\beta g\mu_{\text{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}\\ &+\beta^{2}\left(g\mu_{\text{B}}B_{z}\right)^{2}\frac{s|z|^{2}}{\left(1+|z|^{2}\right)^{2}}\\ &+\frac{1}{2}\left(\beta g\mu_{\text{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}\right)^{2}.\end{split} \tag{B1}\] We can show that by taking \[\mathcal{H}_{\text{eff}}=-g\mu_{\text{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}+g\mu_{\text{B}}B_{z}\frac{\sqrt{2s}|z|}{1+|z|^{2}}, \tag{B2}\] and expanding the effective classical exponential up to the same order in \(\beta\), we get \[\begin{split}&\exp(-\beta\mathcal{H}_{\text{eff}})\\ &\approx 1+\beta g\mu_{\text{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}+\beta^{2}\left(g\mu_{\text{B}}B_{z}\right)^{2}\frac{s|z|^{2}}{\left(1+|z|^{2}\right)^{2}}\\ &+\frac{1}{2}\left(\beta g\mu_{\text{B}}B_{z}s\frac{1-|z|^{2}}{1+|z|^{2}}\right)^{2}\\ &-\beta g\mu_{\text{B}}B_{z}\frac{\sqrt{2s}|z|}{1+|z|^{2}}-\left(\beta g\mu_{\text{B}}B_{z}\right)^{2}\frac{s\sqrt{2s}|z|\left(1-|z|^{2}\right)}{\left(1+|z|^{2}\right)^{2}}.\end{split} \tag{B3}\] This is where our approximation becomes more qualitative than quantitative. Indeed, the fifth and sixth terms on the right-hand side of (B3) are not present in (B1), even though they are not of higher order in \(\beta\). However, we have taken advantage of the freedom of choice for the sign of the extra term in the effective Hamiltonian (second term on the right-hand side of (B2)), as the correction (third term on the right-hand side of (B1)) comes from the square term in the exponential series. Taking the correction (second term on the right-hand side of (B2)) to be negative implies that \[\exp\left(-\beta g\mu_{\text{B}}B_{z}\frac{\sqrt{2s}|z|}{1+|z|^{2}}\right)\in[0;1], \tag{B4}\] or in terms of the spin coherent state vector \[\exp\left(-\beta\tfrac{1}{2}g\mu_{\text{B}}B_{z}\sqrt{2s}\sqrt{1-n_{z}^{2}}\right)\in[0;1], \tag{B5}\] which means that our expectation value remains close to the classical expectation value, especially for lower temperatures where the spin preferentially aligns with the \(z\)-axis. Although this constitutes quite a coarse approximation, it is definitely a relevant primer for understanding the subtleties of the path integral spin dynamics method.

## Appendix C High temperature model exponential form

Starting from (46), we rewrite \[\begin{split}\left(\frac{e^{\beta g\mu_{B}B_{z}}+|z|^{2}}{1+|z|^{2}}\right)^{2s}&=\left(\frac{e^{\beta g\mu_{B}B_{z}}+e^{2\ln(|z|)}}{e^{\ln(1+|z|^{2})}}\right)^{2s}\\ &=\left(\frac{e^{\frac{\beta g\mu_{B}B_{z}}{2}+\ln(|z|)}\left(e^{\frac{\beta g\mu_{B}B_{z}}{2}-\ln(|z|)}+e^{-\frac{\beta g\mu_{B}B_{z}}{2}+\ln(|z|)}\right)}{e^{\ln(1+|z|^{2})}}\right)^{2s}\\ &=\left(\frac{e^{\frac{\beta g\mu_{B}B_{z}}{2}+\ln(|z|)}\,2\cosh\left(\frac{\beta g\mu_{B}B_{z}}{2}-\ln(|z|)\right)}{e^{\ln(1+|z|^{2})}}\right)^{2s}\\ &=\left(e^{\frac{\beta g\mu_{B}B_{z}}{2}+\ln\left(\frac{|z|}{1+|z|^{2}}\right)+\ln\left(2\cosh\left(\frac{\beta g\mu_{B}B_{z}}{2}-\ln(|z|)\right)\right)}\right)^{2s},\end{split} \tag{C1}\] hence (46) can be rewritten as \[F[\beta,z]=e^{2s\left(\ln(2)+\ln\left(\frac{|z|}{1+|z|^{2}}\right)+\ln\left(\cosh\left(\frac{\beta g\mu_{B}B_{z}}{2}-\ln(|z|)\right)\right)\right)}. \tag{C2}\]
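The identity (C2) is easy to verify numerically before trusting the Taylor expansion built on it. A sketch, writing `beta_b` for \(\beta g\mu_{B}B_{z}\) and `z` for \(|z|\) (both dimensionless test values):

```python
import numpy as np

# Check that the integrand F of Eq. (46) equals the cosh form of Eq. (C2)
beta_b, s = 0.8, 2.0
for z in (0.3, 1.0, 2.5):
    x = z**2
    F_direct = np.exp(-beta_b * s) * ((np.exp(beta_b) + x) / (1 + x)) ** (2 * s)
    F_cosh = np.exp(2 * s * (np.log(2) + np.log(z / (1 + x))
                             + np.log(np.cosh(beta_b / 2 - np.log(z)))))
    assert np.isclose(F_direct, F_cosh)
```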
## Appendix D High-temperature model normalization

We approximate \[\begin{split}&\langle z|\,\hat{S}_{z}\exp\left(\frac{\beta g\mu_{B}}{\hbar}B_{z}\hat{S}_{z}\right)|z\rangle\\ &\approx\langle z|\,\hat{S}_{z}\ket{z}\bra{z}\exp\left(\frac{\beta g\mu_{B}}{\hbar}B_{z}\hat{S}_{z}\right)|z\rangle\\ &=\hbar s\frac{1-|z|^{2}}{1+|z|^{2}}e^{-\beta g\mu_{B}sB_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+|z|^{2}}{1+|z|^{2}}\right)^{2s},\end{split} \tag{D1}\] as our approximation scheme for the partition function aims to move from a quantum description in terms of states and operators to a classical description \[\langle z|\exp\left(\frac{\beta g\mu_{B}}{\hbar}B_{z}\hat{S}_{z}\right)|z\rangle\approx\exp\left(-\beta\mathcal{H}\right). \tag{D2}\] Within this approximation, we can rewrite \[\begin{split}&\frac{\int d\mu(z)\,\langle z|\,\hat{S}_{z}\exp\left(\frac{\beta g\mu_{B}}{\hbar}B_{z}\hat{S}_{z}\right)|z\rangle}{\int d\mu(z)e^{-\beta g\mu_{B}sB_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+|z|^{2}}{1+|z|^{2}}\right)^{2s}}\\ &\equiv\frac{\int d\mu(z)\hbar s\frac{1-|z|^{2}}{1+|z|^{2}}e^{-\beta g\mu_{B}sB_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+|z|^{2}}{1+|z|^{2}}\right)^{2s}}{\int d\mu(z)e^{-\beta g\mu_{B}sB_{z}}\left(\frac{e^{\beta g\mu_{B}B_{z}}+|z|^{2}}{1+|z|^{2}}\right)^{2s}},\end{split} \tag{D3}\] which is the expression we use for our averages, as it corresponds to the same approximation as the atomistic model, as proven by the exact overlap of both the averages computed from the partition function (52) and the atomistic average over time and the number of realizations (41). Of particular interest is that the ratio \[\frac{\left\langle\hat{S}_{z}\right\rangle_{\text{app}}}{\left\langle\hat{S}_{z}\right\rangle_{\text{quantum}}}\xrightarrow[\beta\to\infty]{}\frac{s}{s+1} \tag{D4}\] reminds us of the fact that the eigenvalues of \(\hat{\mathbf{S}}^{2}\) are \(\hbar^{2}s(s+1)\), as in \[\hat{\mathbf{S}}^{2}\left|s,m\right\rangle=\hbar^{2}s(s+1)\left|s,m\right\rangle, \tag{D5}\] rather than simply \(\hbar^{2}s^{2}\). Indeed, in the classical limit \(s\to\infty\) we recover \[s(s+1)\xrightarrow[s\to\infty]{}s^{2}. \tag{D6}\] We would like to emphasize that this required normalization factor is identical for both the results of the atomistic simulations (41) and the results from the approximate partition function (52). The expectation values for \(\left\langle\hat{S}_{z}\right\rangle_{\text{app}}\) with and without normalization are given in Figure 4, along with the appropriate quantum solution. This is very important for more general applications of this model, as it means that the normalization of the curves does not require an additional fitting parameter of any kind but is analytically computable and has a general, closed expression.

## Appendix E Higher order correction for the high-temperature model

As mentioned in section III.2, our method can technically carry this approximation scheme to any order in the noncommutative terms numerically, without requiring these corrections to be computed by hand. But as it relies on a Taylor expansion around the high-temperature limit \(\beta\to 0\), there is a limit to how low in temperature we can provide accurate results. Indeed, there is no reason for this high-temperature expansion to converge to the quantum solution for temperatures around \(0\) K. This is shown in Figure 5.
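The limit (D4) can likewise be checked numerically: because the measure \(d\mu(z)\) is uniform in \(n_{z}\), the average (52) reduces to a one-dimensional integral. A sketch with illustrative parameters, confirming that \(\left\langle\hat{S}_{z}\right\rangle_{\text{app}}\) approaches \(s^{2}/(s+1)\) as \(T\to 0\):

```python
import numpy as np
from scipy.integrate import quad

kB, muB, g = 1.380649e-23, 9.274e-24, 2.002
s, Bz = 2.0, 1.0

def Sz_app(T):
    """Eq. (52) reduced to a 1d integral over n_z = u (hbar = 1)."""
    beta = 1 / (kB * T)
    b = g * muB * Bz
    def F(u):
        # Integrand of Eq. (46) rewritten in terms of u = n_z:
        # algebraically identical, and numerically stable at u = -1
        return np.exp(-beta * b * s) * ((np.exp(beta * b) * (1 + u) + (1 - u)) / 2) ** (2 * s)
    num = quad(lambda u: s * u * F(u), -1, 1)[0]
    den = quad(F, -1, 1)[0]
    return num / den

for T in (5.0, 1.0, 0.2, 0.05):
    print(T, Sz_app(T), "->", s**2 / (s + 1))
```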
2301.01365
James Webb Space Telescope: data, problems, and resolution
It is argued that the data presented by the Hubble Space Telescope and the James Webb Space Telescope, which seem to be at odds with canonical big bang cosmology, find a simple explanation if galaxy formation is seeded by massive primordial black holes (PBHs), as anticipated in 1993 (A. Dolgov and J. Silk, hereafter DS). The statement that galaxy formation might be seeded by PBHs is now being rediscovered in several works. The log-normal mass spectrum of PBHs predicted by DS agrees very well with astronomical data. An abundant BH population of the Galaxy, with masses of the order of tens of solar masses, is predicted. The extended mass spectrum of PBHs, together with their possible clustering, allows them to make a 100% contribution to the cosmological dark matter. Another prediction of the DS mechanism, a noticeable amount of antimatter in the Milky Way, also seems to be confirmed by the data.
A. D. Dolgov
2023-01-03T21:44:13Z
http://arxiv.org/abs/2301.01365v3
# James Webb Space Telescope: data, problems, and resolution ###### Abstract It is argued that the data presented by the Hubble Space Telescope and the James Webb Space Telescope, which seem to be at odds with canonical big bang cosmology, find a simple explanation if galaxy formation is seeded by massive primordial black holes (PBHs), as anticipated in 1993 (A. Dolgov and J. Silk, hereafter DS). The statement that galaxy formation might be seeded by PBHs is now being rediscovered in several works. The log-normal mass spectrum of PBHs predicted by DS agrees very well with astronomical data. An abundant BH population of the Galaxy, with masses of the order of tens of solar masses, is predicted. The extended mass spectrum of PBHs, together with their possible clustering, allows them to make a 100% contribution to the cosmological dark matter. Another prediction of the DS mechanism, a noticeable amount of antimatter in the Milky Way, also seems to be confirmed by the data.

## 1 Introduction

Observations of the last decades by several astronomical instruments, especially by the Atacama Large Millimeter/submillimeter Array (ALMA), the Hubble Space Telescope (HST), and very recently by the James Webb Space Telescope (JWST), revealed strong tension between the data and the accepted cosmological model. It was discovered that the early universe, at redshifts \(z\sim 10\) and an age of a few hundred million years, was densely populated by all kinds of astronomical objects: galaxies, quasars (alias supermassive black holes), gamma-bursters, and supernovae; in addition, it happened to be extremely dusty. Equally puzzling problems arose from the observational data on the present-day universe. A discussion of the problems that appeared in the early as well as in the contemporary universe, at the state of the art as it existed 5 years ago, can be found in the review [3]. But since that time a much larger number of surprising phenomena has accumulated, thanks to new and more precise astronomical instruments. A possible and, as it seems, simple and natural solution to all these problems was suggested, long before they struck the community: the proposal that the universe is abundantly populated with primordial black holes. A new mechanism of PBH creation was worked out in our paper [1], allowing for the formation of highly massive primordial black holes (PBHs). This mechanism was further elaborated in ref. [2]. A very simple log-normal mass spectrum of PBHs was predicted that, as later verified, agrees very well with observational data. The abundant PBH population in a very wide mass interval can eliminate the tension between theory and observations, in particular because supermassive PBHs could seed galaxy formation, as was envisaged in refs. [1, 2]. In addition to high-mass PBH formation, the mechanism of refs. [1, 2] could lead to a noticeable antimatter population of galaxies. In particular, antimatter, including antistars, may exist in the Milky Way. This exciting prediction seems to be confirmed by recent studies. **Outline of the talk.** 1. Recent problems discovered by HST and JWST. 2. Earlier established cosmological problems. 3. PBH solution of new and old problems. 4. Antimatter in the Milky Way: antistars, antinuclei, positrons. 5. Log-normal mass spectrum of PBHs: comparison to observations. 6. Black dark matter. 7. Gravitational waves and PBH. 8. Basics of the mechanism of PBH and antimatter creation. 
## 2 Very young galaxies observed by JWST (and HST) Discoveries made by JWST over the last several months, in the continuous infrared (micron) range, created almost a panic among traditional cosmologists and astrophysicists. It was observed that the rather young universe, with an age of 200-300 million years, contained a large array of bright galaxies [4]-[12], which simply could not be there according to the accepted faith or, better to say, the canonical cosmological model. As is stated in the JWST publications: "an unexpectedly large density (stellar mass density \(\varrho_{*}\gtrsim 10^{6}M_{\odot}\) Mpc\({}^{-3}\)) of massive galaxies (stellar masses \(M_{*}\geq 10^{10.5}M_{\odot}\)) are discovered at extremely high redshifts \(z\gtrsim 10\)." Two galaxies with record redshifts, according to the Cosmic Evolution Early Release Science (CEERS) data, have \(z=14.3\pm 0.4\), corresponding to the universe age \(t_{U}=264\) Myr, and \(z=16.7\), even younger, at \(t_{U}=235\) Myr. The early JWST data were taken somewhat sceptically because of the lack of direct measurements of the galactic spectra, which is impossible when working in the micron infrared continuum. Measurement of the galactic redshifts by spectral line identification was strongly desirable. An important confirmation of the existence of galaxies in such a young universe came from the **spectroscopic** observation of the most distant galaxy discovered by HST, at redshift \(11.58\pm 0.05\) [13]. During the last few weeks several publications appeared with spectroscopic identifications of faraway galaxies. In ref. [14] the data from JWST NIRCam 9-band near-infrared imaging of the luminous \(z=10.6\) galaxy GN-z11 from the JWST Advanced Deep Extragalactic Survey are presented. The authors concluded that the determined spectral energy distribution is entirely consistent with the expected form of a high-redshift galaxy. In the publication [15] of the same date, the spectroscopy of GN-z11, the most luminous candidate \(z>10\) Lyman break galaxy in the GOODS-North field, is presented. A redshift of \(z=10.603\) is derived (somewhat lower than previous determinations) based on multiple emission lines in low- and medium-resolution spectra over \(0.8-5.3\,\mu\)m. Spatially extended Lyman-\(\alpha\) emission is observed. The NIRSpec spectroscopy confirms that GN-z11 is a remarkable galaxy with extreme properties seen 430 Myr after the Big Bang. ALMA has confirmed the age of one of the most distant JWST-identified galaxies, GHZ2/GLASS-z12, equal to 367 million years after the Big Bang [16], by deep spectroscopic observations of the spectral emission line associated with ionized oxygen near the galaxy. Spectroscopic observations confirm the JWST early galaxy discoveries beyond any doubt. ### Seeding galaxy formation by PBH According to the standard approach, the supermassive black holes (SMBHs) in galactic centres are formed by an accretion mechanism after the galaxies were created. In papers [1, 2] the validity of the opposite scenario was conjectured, namely, that SMBHs were formed first and subsequently seeded galaxy formation. The hypothesis advocated in these works explains the presence of SMBHs in all large and several small galaxies accessible to observation, and resolves the problem of the very early existence of the galaxies observed by HST and JWST. The idea, advocated in this talk and suggested in our earlier papers [1, 2], that galaxies are seeded by massive black holes seems to gain more and more support in recent publications. 
Observation of supermassive black holes (SMBHs) inside galaxies also indirectly confirms the idea of seeding galaxy formation by SMBHs. This statement was rediscovered in ref. [17]: "...we show that the observed massive galaxy candidates can be explained with lower SFE than required in \(\Lambda\)CDM, if structure formation is accelerated by massive (\(\gtrsim 10^{9}\) M\({}_{\odot}\)) primordial black holes that enhance primordial density fluctuations." Very recently, in December 2022, another paper appeared on the possibility of SMBH impact on JWST-galaxy formation [18]. According to ref. [19], six very well developed galaxies are observed, including one galaxy with a possible stellar mass of \(\sim 10^{11}M_{\odot}\), at the redshifts \(7.4\lesssim z\lesssim 9.1\), corresponding to 500-700 Myr after the Big Bang. These galaxies are too massive to have been created in such an early universe. According to the existing science, it is impossible to create such well-developed galaxies in this very short time. The authors even draw the conclusion: "Maybe they are **supermassive black holes of the kind never seen before**. That might mean a revision of our understanding of black holes." This statement perfectly agrees with the point of view advocated in this talk, that massive **primordial** black holes populate the early universe and seed galaxy formation. Recently an ultra-massive QSO at \(z=6.853\) was observed by ALMA [20]: "VIRCam and IRAC photometry perhaps suggests that COS-87259 is an extremely massive reionization-era galaxy with \(M_{*}=1.7\times 10^{11}M_{\odot}\). Such a very high AGN luminosity suggests that this object is powered by \(\sim 1.6\times 10^{9}M_{\odot}\) black hole if accreting near the Eddington limit." This looks nearly impossible, but if there is a primordial supermassive black hole, it could easily seed such a monstrous galaxy and quasar. In paper [21], published already after the Conference was over, the discovery of an accreting supermassive black hole at \(z=8.679\) is announced in a galaxy previously observed via a Ly\(\alpha\)-break by Hubble and with a Ly\(\alpha\) redshift from Keck. The mass of the black hole is \(\log(M_{BH}/M_{\odot})=6.95\pm 0.37\), and it is estimated to be accreting at 1.2 (\(\pm 0.5\)) times the Eddington limit. According to the authors, this presently highest-redshift AGN discovery is used to place constraints on black hole seeding models, and they find that either super-Eddington accretion from stellar seeds or Eddington accretion from massive black hole seeds is required to form this object by the observed epoch. In the paper [22] the seeding of galaxies is even indicated in the title of the work. The authors suggest that relatively light PBHs, with masses of about \(50M_{\odot}\), gaining mass through super-Eddington accretion within the dark matter halo, can explain the observations of massive galaxies at redshifts \(z\geq 6.5\) by JWST. However, it seems that supermassive PBH formation is much less cumbersome. ## 3 Comment on LSS formation According to the canonical theory of large-scale structure (LSS) formation, the density contrast \(\Delta\equiv\delta\varrho/\varrho\) started to rise at the onset of the matter-dominated stage at \(z=10^{4}\). After that \(\Delta\) evolved as the cosmological scale factor. 
Since initially \(\Delta_{in}\lesssim 10^{-4}\), by the present time it may reach unity, and after that fast LSS formation takes place (violent relaxation - strong rising of the gravitational field of the inhomogeneity), leading to the observed highly inhomogeneous universe at the galactic and galaxy cluster scales.

In a simple way the process of structure formation can be understood as follows. The velocity of the Hubble runaway at distance \(r\) is \(v_{H}=Hr\), and the virial velocity in the gravitational field of the inhomogeneity is

\[v_{grav}^{2}=\frac{4\pi r^{2}}{3m_{Pl}^{2}}\,\delta\varrho \tag{1}\]

Using \(H^{2}=(8\pi\varrho)/(3m_{Pl}^{2})\) we find \(v_{grav}\geq v_{H}\) if \(\Delta\gtrsim 1\). The probability of such a huge density fluctuation for a flat spectrum of perturbations is quite low. There are two effects operating in the same direction. Firstly, the available time is constrained by the universe age, which is essentially equal to \(t_{U}=1/H\) and is quite short. In addition, a large value of \(H\) means that the expansion is very fast. That strongly suppresses the efficiency of the structure formation.

## 4 Problems preceding JWST

Similar serious problems have been known already for several years. The Hubble Space Telescope discovered that the early universe, at \(z=6-7\), was too densely populated with quasars, alias SMBHs, supernovae, and gamma-bursters, and happened to be very dusty. No understanding is found in conventional cosmology of how all these creatures were born in such a short time. Moreover, a great lot of phenomena in the present day universe, \(\sim 15\) billion years old, are also in strong tension with the conventional cosmological expectations. HST sees the universe up to \(z=6-7\), but accidentally a galaxy at \(z\approx 12\) has been discovered, for which both Hubble and Webb are in good agreement, as we have already mentioned in the previous section. Still, despite the earlier discoveries by HST, only after the publications of the JWST data did the astronomical establishment become seriously worried.

To summarise: the observational data of the last decades present more and more evidence indicating the existence of objects contradicting conventional astrophysics and cosmology, both in the present day and in the quite young universe. Rephrasing Marcellus from "The Tragedy of Hamlet, Prince of Denmark", we can say: "Something is rotten in the state of the Universe". However, all the problems can be neatly solved if the universe is sufficiently densely populated by primordial black holes.

## 5 BH types by formation mechanisms

There are three known types of BHs depending upon the mechanism of their creation:

1. Astrophysical black holes. These BHs are created by the collapse of a star which has exhausted its nuclear fuel. The expected masses should start immediately above the neutron star mass, i.e. about \(3M_{\odot}\), but lie noticeably below \(100M_{\odot}\). Instead we observe that the BH mass spectrum in the galaxy has a maximum at \(M\approx 8M_{\odot}\) with the width \(\sim(1-2)M_{\odot}\). The result is somewhat unexpected, but an explanation in the conventional astrophysical framework is not excluded. Recently LIGO/Virgo discovered BHs with masses close to \(100M_{\odot}\). Their astrophysical origin was considered to be completely impossible. Now some, though quite exotic, formation mechanisms have been suggested.

2. Accretion created BHs. Such BHs are formed by the accretion of matter onto a mass excess in galactic centres.
It is known that at the centre of any large galaxy there exists a supermassive black hole (SMBH), with mass varying from a few million \(M_{\odot}\) (e.g., the Milky Way) up to almost a hundred billion \(M_{\odot}\). However, the conventional accretion mechanisms are not efficient enough to create such monsters during the universe life-time, \(t_{U}\approx 14.6\) Gyr. At least a 10-fold longer time is necessary (some references can be found in [3]), to say nothing of SMBHs in the 10 times younger universe.

3. Primordial black holes (PBH). PBHs are supposed to be formed in the very early universe during the pre-stellar epoch, i.e. prior to star formation. The idea of primordial black holes and a possible mechanism of their creation were pioneered by Zeldovich and Novikov [23]. According to their idea, the density contrast in the early universe inside a bubble with radius essentially equal to the cosmological horizon might accidentally happen to be large, \(\delta\varrho/\varrho\approx 1\); then that piece of volume would be inside its gravitational radius, i.e. it became a PBH that decoupled from the cosmological expansion. The mechanism was elaborated later by Hawking [24], and by Carr and Hawking [25].

## 6 BH types by masses

Rather arbitrarily, black holes are separated into groups depending on their mass:

1. Supermassive black holes (SMBH): \(M=(10^{6}-10^{10})M_{\odot}\).
2. Intermediate mass black holes (IMBH): \(M=(10^{2}-10^{5})M_{\odot}\).
3. Solar mass black holes: masses from a fraction of \(M_{\odot}\) up to \(100M_{\odot}\).
4. There can also be very light black holes, not yet observed, with masses in the region \(\sim 10^{20}\) g; they might be the "particles" of the cosmological dark matter.

The origin of most of these BHs is unclear, except maybe for the BHs with masses of a few solar masses, which may be astrophysical. Extremely unexpected was the very high abundance of IMBHs, which have been appearing in huge numbers during the last several years. The assumption that (almost) all these black holes in the universe are primordial, except possibly the very light ones, strongly reduces or even eliminates the tension between their observed abundances and the possible mechanisms of their formation.

## 7 Problems of the contemporary universe. Summary.

1. SMBHs in all large galaxies. The universe age is too short for their formation through the commonly accepted accretion mechanism.

2. Several SMBHs are found in very small galaxies and even in (almost) empty space, where not only the time duration but also the amount of material is insufficient. An interesting recent observation was made by the Hobby-Eberly Telescope at Texas's McDonald Observatory, suggesting the presence of a black hole with a mass of about 17 billion \(M_{\odot}\), equivalent to 14% of the total stellar mass of the galaxy. Usually the mass of the central BH is about 0.1% of the galaxy mass. This SMBH was observed through the analysis of the motions of the stars near the center of the galaxy. Fresh evidence appeared recently [26] indicating a supermassive BH with the mass \(3\times 10^{6}M_{\odot}\) in the dwarf galaxy Leo 1. Much more new data appeared practically today [27]: six dwarf galaxies are identified that have X-ray AGN. They are presumably powered by SMBHs of \(M>10^{7}M_{\odot}\). It is not excluded that such SMBHs, not hosted by a large galaxy, might have been pushed out of large galaxies in the process of galaxy collisions. Such catastrophic events may even create plenty of wandering single supermassive black holes.
However, taking into account the large number of such exotic cases, it seems much more natural that all SMBHs in small galaxies are primordial. Simply they were unlucky not to acquire their own large galaxy, since there was not enough matter around to build large galaxies.

3. A predicted abundant population of BHs with masses \(\sim 10M_{\odot}\) in the Galaxy, not yet observed.

4. The origin and properties of the sources of the observed gravitational waves encounter considerable difficulties if one tries to explain them assuming astrophysical formation of the black hole binaries emitting the gravitational radiation.

5. IMBHs, with \(M\sim(10^{3}-10^{5})M_{\odot}\), are unexpectedly discovered in dwarfs and globular clusters. Their origin is unclear, if they are not primordial.

6. Invisible Massive Astrophysical Compact Halo Objects (MACHOs): non-luminous objects with masses \(\sim 0.5M_{\odot}\) observed through microlensing. It is unknown what they are and how they were created.

7. The existence of very unusual stars in the Galaxy, among which there are too fast moving stars and stars with unusual chemistry. Moreover, too old stars are found. Many of them look older than the Galaxy, and maybe one is even older than the universe (sic!?).

The assumption that the black holes mentioned in the list above are primordial eliminates all these problems. The mechanism of PBH formation suggested in papers [1, 2] also predicts the existence of the unusual stars mentioned in point 7.

## 8 Observations of black holes

The ancient point of view is that BHs are objects with so strong a gravitational field that nothing can escape it. According to Michell (1784), there may be bodies for which the second cosmic velocity is larger than the speed of light. They do not shine and do not reflect light, i.e. they are absolutely dark, invisible. However, the truth is quite the opposite: black holes are very well seen. Light BHs can emit all kinds of radiation through Hawking evaporation (though nobody has yet seen it). The most powerful sources of radiation in the universe are SMBHs - quasars, point-like objects shining as thousands of galaxies.

The methods of BH observation include:

1. Central mass estimates through the analysis of stellar motion around the supposed BH, as e.g. the discovery of the BH in the center of the Milky Way.
2. Distortion of star motion due to an invisible point-like gravitating body.
3. Gravitational lensing (MACHOs and some other BHs).
4. Electromagnetic radiation from the accreting matter; it is the mechanism of the quasar central engine, but much smaller BHs are also observed that way.

However, all these methods only allow one to establish that there is a large mass inside a small volume. We need theory to proceed further and to conclude that there should be a black hole inside. But the following method is free from this restriction:

5. Registration of gravitational waves from coalescing double systems of black holes. The data directly show that these are exactly coalescences of two BHs. This is the first test of General Relativity for strong fields and the first observational proof of the existence of the Schwarzschild solution.

## 9 PBH and inflation

The mechanism suggested in refs. [1, 2] introduced some new features which were later explored in a series of subsequent works. The scenarios proposed there are heavily based on the Affleck-Dine [28] model of baryogenesis, which permits the creation of very interesting features of the PBH population, or of some other macroscopic compact objects; see below, sec. 15.
In paper [1] an inflationary mechanism was first applied to PBH formation. It allowed the creation of PBHs with huge masses, much larger than those in the previously studied models. A year later inflationary creation of PBHs was explored in ref. [29], soon after that in ref. [30], and two years later in [31]. Nowadays there is an avalanche of papers on inflationary formation of PBHs. However, except for predicting large masses of PBHs, the models do not have much predictive power, because the mass spectra of the created PBHs are quite complicated and strongly parameter dependent. No simple analytic expressions have been presented. The only exception is the mechanism of refs. [1, 2], which predicts an extremely simple log-normal mass spectrum of PBHs:

\[\frac{dN}{dM}=\mu^{2}\exp{[-\gamma\ln^{2}(M/M_{0})]}. \tag{2}\]

The central mass value can be calculated theoretically [32]: \(M_{0}\sim 10M_{\odot}\). It is equal to the horizon mass at the QCD phase transition from the phase of free quark-gluon plasma to the confinement phase. To be more precise, the horizon mass is approximately equal to \(10M_{\odot}\) for a cosmic plasma with vanishingly small chemical potential \(\mu\). In our case \(\mu\) is supposed to be large, of the order of the plasma temperature. Correspondingly, the phase transition was probably delayed and the horizon mass could be somewhat bigger.

An impressive feature of the log-normal mass spectrum with the predicted value of \(M_{0}\) is that it is the only known spectrum tested by "experiment", in very good agreement with the observed densities of black holes in all mass intervals: from solar mass BHs, up to black holes with intermediate masses, and further up to supermassive black holes. In particular, the mechanism developed in [1, 2] allows one to explain the presence of SMBHs in all large and several small galaxies accessible to observation. For very massive BHs an account should be taken of the mass rise due to later matter accretion. Especially impressive is the confirmation of the model by the chirp mass distribution of the binaries measured by LIGO/Virgo, which is discussed in section 12.

## 10 Black Dark Matter

The first suggestion that PBHs might be the dark matter "particles" was made by S. Hawking in 1971 [24] and later by G. Chapline in 1975 [33], who noticed that low mass PBHs might be abundant in the present-day universe with a density comparable to the density of dark matter. A scale independent spectrum of cosmological perturbations was assumed. That led to the flat mass spectrum in the log interval:

\[dN=N_{0}(dM/M) \tag{3}\]

with maximum mass \(M_{max}\lesssim 10^{22}\) g, which hits the allowed mass range. The next paper on PBHs as dark matter belongs to A. Dolgov and J. Silk (Mar 13, 1992) [1], which predicted much larger PBH masses. It was the first paper where inflation was applied to PBH formation, so PBH masses as high as \(10^{6}M_{\odot}\), and even higher, can be created. The simple log-normal mass spectrum of PBHs was predicted.

The constraints on the cosmological mass density of black holes are reviewed in two papers [34, 35] for monochromatic and extended (in particular log-normal) mass spectra of PBHs. As was mentioned by B. Carr in 2019, all limits are model dependent and have caveats. The summary plot of the PBH density limits is presented in Fig. 1.

Figure 1: Constraints on \(f(M)\) for a monochromatic mass function. There are four mass windows (A, B, C, D) in which PBHs could have an appreciable density.

The arguments presented in ref.
[36] permit one to weaken the constraints on the PBH density in the mass range \((30-100)M_{\odot}\) and to reopen the door for dark matter in the form of the PBHs registered by LIGO. The point is that PBHs were treated as point Schwarzschild masses, while the more careful analysis in an expanding universe presented in this work leads to a time-dependent mass. This implies a stricter set of conditions for a black hole binary to form and means that black holes coalesce much more quickly than was previously calculated, namely well before the LIGO/Virgo observed mergers. The observed binaries are those coalescing within galactic halos, with a merger rate consistent with data. This opens the door for dark matter in the form of LIGO-mass PBHs.

The bounds presented in [34, 35] for the intermediate mass black holes were also criticised in ref. [37]. The most questionable step in this chain of arguments is the use of overly simplified accretion models. The same accretion models were applied to X-ray observations from the supermassive black holes M87 and Sgr \(A^{*}\). The comparison of these two SMBHs with intermediate mass MACHOs suggests that the latter could, after all, provide a significant constituent of all the dark matter.

One more argument in favour of an allowed large cosmological density of PBHs is based on the possibility that PBHs can form clusters, as argued in ref. [38]. Dynamical interactions in PBH clusters offer an additional channel for orbital energy dissipation, increase the merging rate of PBH binaries, and so the constraints on the cosmological fraction of DM made of these black holes, \(f_{PBH}\), that were obtained by assuming a homogeneous PBH space distribution, can be weaker. A recent analysis performed in [39], based on the PBH formation models [40] and [41], shows that even \(f_{PBH}=0.1-1\) is not excluded.1

Footnote 1: I thank K. Postnov for indicating these references.

So the strong bounds on the cosmological density of PBHs presented in the literature should be taken with a grain of salt.

## 11 Gravitational waves from BH binaries

There is general agreement between several groups that the gravitational waves discovered by the LIGO/Virgo interferometers originated from PBH binaries. We discuss this issue here following our paper [42]. There are three problems which indicate that the sources of GWs are most naturally primordial black holes:

1. Origin of heavy BHs (with masses \(\sim 30M_{\odot}\)). To form such heavy BHs, the progenitors should have \(M>100M_{\odot}\) and a low metal abundance to avoid too much mass loss during the evolution. Such heavy stars might be present in young star-forming galaxies, but they are not observed in the necessary amount. Recently a much more striking problem emerged because of the observation of BHs with \(M\sim 100M_{\odot}\). Formation of such black holes in the process of stellar collapse was considered to be strictly forbidden. Some exotic mechanisms might possibly be allowed, such as e.g. BH formation in the process of collapse of a supermassive star heated by dark matter annihilation inside it [43]. On the other hand, primordial black holes with the masses observed by LIGO may be easily created with sufficient density.

2. Formation of BH binaries from the original stellar binaries. Stellar binaries are formed from common interstellar gas clouds and are quite frequent in galaxies. If a BH is created through stellar collapse, a small non-sphericity results in a huge velocity of the BH, and the binary is destroyed.
The probability of BH formation from Pop III stars and subsequent formation of BH binaries with tens of \(M_{\odot}\) is estimated to be small. The problem of binary formation is simply solved if the observed sources of GWs are binaries of primordial black holes. They were at rest in the comoving volume; when inside the horizon, they were gravitationally attracted and might lose energy due to dynamical friction in the early universe. The probability for them to become gravitationally bound is significant. The conventional astrophysical scenario is not excluded but is less natural.

3. Low spins of the coalescing BHs. Low values of the BH spins are observed in GW150914 and in almost all (except for three) other events. This strongly constrains astrophysical BH formation from close binary systems. Astrophysical BHs are expected to have considerable angular momentum; nevertheless, the dynamical formation of double massive low-spin BHs in dense stellar clusters is not excluded, though difficult. On the other hand, PBHs practically do not rotate, because vorticity perturbations in the early universe are vanishingly small. Still, individual PBHs forming a binary initially rotating on an elliptic orbit could gain collinear spins of about 0.1-0.3, rising with the PBH masses and eccentricity [44, 45]. This result is in agreement with the GW170729 LIGO event, produced by the binary with masses \(50M_{\odot}\) and \(30M_{\odot}\), and with GW190521.

To summarise: each of the mentioned problems may be solved in the conventional frameworks, but it looks much simpler to assume that the LIGO/Virgo sources are primordial.

## 12 Chirp mass distribution

Two rotating gravitationally bound massive bodies are known to emit gravitational waves, as discussed in the previous section. In the quasi-stationary inspiral regime, the radius of the orbit and the rotation frequency are approximately constant, and the GW frequency is twice the rotation frequency. The luminosity of the GW radiation is:

\[L=\frac{32}{5}\,m_{Pl}^{2}\left(\frac{M_{c}\,\omega_{orb}}{m_{Pl}^{2}}\right)^{10/3}\,, \tag{4}\]

where \(M_{1}\), \(M_{2}\) are the masses of the two bodies in the binary system and \(M_{c}\) is the so called chirp mass:

\[M_{c}=\frac{(M_{1}\,M_{2})^{3/5}}{(M_{1}+M_{2})^{1/5}}\,, \tag{5}\]

and

\[\omega_{orb}^{2}=\frac{M_{1}+M_{2}}{m_{Pl}^{2}R^{3}}\,. \tag{6}\]

In ref. [46] the available data on the chirp mass distribution of the black holes in the coalescing binaries in the O1-O3 LIGO/Virgo runs are analyzed and compared with theoretical expectations based on the hypothesis that these black holes are primordial with a log-normal mass spectrum. The inferred best-fit mass spectrum parameters, \(M_{0}=17M_{\odot}\) and \(\gamma=0.9\), fall within the theoretically expected range and show excellent agreement with observations. In contrast, binary black hole formation based on massive binary star evolution requires additional adjustments to reproduce the observed chirp mass distribution. The results are presented in Figs. 2 and 3. So we can conclude that PBHs with a log-normal mass spectrum perfectly fit the data, while astrophysical BHs seem to be disfavoured.
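As a toy illustration of this comparison (a minimal sketch, not the analysis of ref. [46] itself), one can draw binary component masses from the log-normal spectrum of eq. (2) and build the chirp-mass distribution of eq. (5); the independent pairing of the two components and the Gaussian sampling in \(\ln M\) are simplifying assumptions:

```python
import numpy as np

# Toy sketch: sample component masses from a log-normal spectrum
# dN/dM ~ exp[-gamma ln^2(M/M0)] (Eq. (2)) and compute chirp masses (Eq. (5)).
# Assumptions: independent pairing of M1, M2 and Gaussian sampling in ln M;
# the best-fit values M0 = 17 Msun, gamma = 0.9 are taken from the text.
rng = np.random.default_rng(1)
M0, gamma = 17.0, 0.9
sigma = np.sqrt(1.0 / (2.0 * gamma))      # width of the Gaussian in ln(M/M0)

n = 200_000
M1 = M0 * np.exp(sigma * rng.standard_normal(n))
M2 = M0 * np.exp(sigma * rng.standard_normal(n))

Mc = (M1 * M2) ** 0.6 / (M1 + M2) ** 0.2  # chirp mass, Eq. (5)

print(f"median chirp mass: {np.median(Mc):.1f} Msun")
print("5%-95% interval:", np.round(np.quantile(Mc, [0.05, 0.95]), 1))
```

The resulting cumulative distribution can then be compared with the empirical distribution function of the observed chirp masses, which is essentially the Kolmogorov-Smirnov comparison shown in Fig. 2.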
New data on GW observations, analysed by K. A. Postnov in his talk at the XXXIV International Workshop on High Energy Physics "From Quarks to Galaxies: Elucidating Dark Sides", are depicted in Fig. 4. In this talk an approximate fitting of the observed chirp-mass distribution in the O1-O3 LVK GW compact binary coalescences (from the GWTC-3 catalog) by two independent PBH populations with initial log-normal mass distributions, \(M_{0}^{(1)}=5M_{\odot}\) and \(M_{0}^{(2)}=30M_{\odot}\), is also presented; see Fig. 5.

Figure 2: Model distribution \(F_{PBH}(<M)\) with parameters \(M_{0}\) and \(\gamma\) for the two best Kolmogorov-Smirnov tests. EDF = empirical distribution function.

Figure 3: Cumulative distributions \(F(<M)\) for several astrophysical models of binary BH coalescences.

Figure 4: New data on GW observations, from K. Postnov's talk.

Figure 5: Approximation of the observed chirp-mass distribution in the O1-O3 LVK GW compact binary coalescences (from the GWTC-3 catalog) by two independent PBH populations, from K. Postnov's talk.

In Fig. 6 one can see the same approximation, but by the simplest model of astrophysical BH formation from the collapse of the CO-core of a massive star with the standard common-envelope parameter, taking into account the evolution of the star-formation rate in the universe with redshift, plus a population of PBHs with a log-normal initial mass distribution with \(M_{0}^{(2)}=33M_{\odot}\). The picture looks more complicated than the earlier one described at the beginning of this section. One possible interpretation is that there are two populations of PBHs, each with a log-normal mass spectrum but with different values of \(M_{0}\). The model can be modified to allow that, and it was even discussed earlier in the author's papers, but aesthetically it looks not so nice. Another option is that there is a mixture of primordial and astrophysical black holes observed by LIGO/Virgo, and even a possibility that some binaries consist of a pair of one primordial and one astrophysical black hole. Hopefully future data will help to resolve all the controversies.

## 13 Dwarfs and globular clusters

The large number of intermediate mass black holes that were discovered during the last decade hardly fits the narrow framework of the standard cosmological model. However, if they are primordial, with the parameters of the log-normal mass spectrum determined above, their number is just what is necessary to explain the data and, in particular, to understand the mechanism of the origin of dwarf galaxies and globular clusters, which is not well understood in conventional cosmology.

Recently a possible discovery of an SMBH in the dwarf galaxy Leo 1 was announced [26]. Such an SMBH surely could not be created by galactic matter accretion onto the galaxy centre. Much more new data appeared practically today [27]: six dwarf galaxies are identified that have X-ray AGN, powered by SMBHs of \(M>10^{7}M_{\odot}\). Most probably these dwarfs were seeded by primordial supermassive black holes, in accordance with ref. [47]. As argued in ref. [47], IMBHs with masses of a few thousand solar masses, or higher, can seed the formation of globular clusters (GCs) and dwarf galaxies. In the last several years such IMBHs inside GCs have been observed, confirming this suggestion. For example, a BH with the mass \(M\sim 10^{5}M_{\odot}\) was discovered just recently in the dwarf galaxy SDSS J1521+1404. The astrophysical origin of IMBHs encounters serious problems for all mass values, but nevertheless a huge number of them are discovered with all possible masses. On the contrary, the model of PBH formation described above excellently resolves all the inconsistencies.
Figure 6: The simplest model of astrophysical BH formation from the collapse of the CO-core of a massive star, from K. Postnov's talk.

## 14 Intermediate summary and antimatter in the Galaxy

The mechanism of PBH formation suggested in refs. [1, 2] neatly cures all the problems related to the observed population of the universe at high redshifts, as well as of the present day universe. The predicted log-normal spectrum of PBHs (the only one existing in the literature) is tested and confirmed by the observations. The predicted existence of IMBHs in globular clusters is confirmed. So the model works great. Thus the seemingly crazy by-product of refs. [1, 2], namely the prediction of antimatter in the Galaxy, can come true as well. Probably it is indeed the case. A surprisingly huge flux of cosmic positrons, of He-antinuclei, and possibly even a population of antistars seem to be observed.

### Anti-evidence: cosmic positrons

The observation of the intense 0.511 MeV line (see refs. [48, 49, 50] and earlier references therein) presents a strong proof of an abundant positron population in the Galaxy. In the central region of the Galaxy electron-positron annihilation proceeds at a surprisingly high rate, creating the flux:

\[\Phi_{\rm 511\;keV}=1.07\pm 0.03\cdot 10^{-3}\;{\rm photons\;cm^{-2}\,s^{-1}}. \tag{7}\]

The width of the line is about 3 keV. It proves that the annihilation takes place at rest. The emission mostly comes from the Galactic bulge and, at a much lower level, from the disk. The source of the 0.511 MeV line in the Galactic bulge even got the name "Great Annihilator". Until recently the commonly accepted explanation was that the \(e^{+}\) are created in the strong magnetic fields of pulsars, but the recent results of AMS probably exclude this mechanism, since the spectra of \(\bar{p}\) and \(e^{+}\) at high energies are identical [51, 52]. This means that their origin is the same, and since antiprotons are not created in the magnetic fields of pulsars, the conclusion might follow that positrons are not produced by pulsars either. However, this might be true only for positrons with very high energies, at or above a tera-electronvolt.

### Anti-evidence: cosmic antinuclei

A striking result reported by the AMS team is the registration of anti-helium-3 and anti-helium-4 nuclei. Namely, in 2018 AMS-02 announced the possible observation of six \(\overline{He}^{3}\) and two \(\overline{He}^{4}\) [51]. In 2022 already 7 \(\overline{D}\) (\(E\lesssim 15\) GeV) and 9 \(\overline{He}\) (\(E\sim 50\) GeV) were observed. A surprisingly high fraction \(\overline{He}/He\sim 10^{-9}\) was registered [52, 53]. It is not excluded that the flux of anti-helium is even much higher, because low energy \(\overline{He}\) may escape registration in AMS.

Secondary production of different antinuclei in cosmic rays was estimated in ref. [54]. According to this work, anti-deuterium could be most efficiently produced in the collisions \(\bar{p}\,p\) or \(\bar{p}\,He\), which can create a flux \(\sim 10^{-7}\,{\rm m^{-2}\,s^{-1}\,sr^{-1}\,(GeV/nucleon)^{-1}}\), i.e. 5 orders of magnitude below the observed flux of antiprotons. Antihelium could be created in similar reactions, and the fluxes of \(\overline{He}^{3}\) and \(\overline{He}^{4}\) that could be created in cosmic rays would respectively be 4 and 8 orders of magnitude smaller than the flux of anti-D. After the AMS announcements of observations of anti-\(He^{4}\) there appeared theoretical attempts to create anti-\(He^{4}\) through dark matter annihilation. This possibility does not look natural.
Moreover, DM annihilation would presumably create a strong cosmic ray flux of other particles, which is not observed. In accordance with our model, the observed antinuclei can be signatures of primordial antimatter. However, if they were synthesised in standard big bang anti-nucleosynthesis (anti-BBN), one would naturally expect the same abundances of light elements as those created by the canonical BBN. According to the latter, the abundances of deuterium and helium-3 are much smaller than that of helium-4, approximately by 4 orders of magnitude, while the relative fractions of the observed antinuclei are approximately equal. There might be some astrophysical explanation of this anomaly, or it is related to the fact that in our model antimatter is created in bubbles with an unusually high baryon-to-photon ratio \(\beta\). In the canonical BBN \(\beta\sim 10^{-9}\), while in our case it may be as large as unity. However, if \(\beta\sim 1\) there is no primordial D. On the other hand, in our scenario the formation of primordial elements takes place inside non-expanding compact stellar-like objects with fixed temperature. If the temperature is sufficiently high, this so-called BBN may stop before abundant He formation and end with almost equal abundances of D and He. One can see that by looking at the abundances of light elements as a function of temperature. If it is so, antistars may have equal amounts of \(\overline{D}\) and \(\overline{He}\).

### Anti-evidence: antistars in the Galaxy

Almost two years ago a sensational announcement was made [55] about the possible discovery of 14 antistars in our Galaxy, the Milky Way. Quoting the authors: "We identify in the catalog 14 antistar candidates not associated with any objects belonging to established gamma-ray source classes and with a spectrum compatible with baryon-antibaryon annihilation". The map of the observed anti-sources is presented in Fig. 7.

Figure 7: Positions and energy flux in the 100 MeV - 100 GeV range of antistar candidates selected in 4FGL-DR2, in Galactic coordinates. The background image shows the Fermi 5-year all-sky photon counts above 1 GeV.

An additional possible method for antistar detection in the Galaxy or in its halo has been proposed in ref. [56]. In astrophysically plausible cases of the interaction of neutral atmospheres or winds from antistars with ionised interstellar gas, the hadronic annihilation will be preceded by the formation of excited \(p\bar{p}\) and \(He\bar{p}\) atoms. These atoms rapidly cascade down to low levels prior to annihilation, giving rise to a series of narrow lines which can be associated with the hadronic annihilation gamma-ray emission. The most significant are the L (3p-2p) 1.73 keV line (yield more than 90%) from \(p\bar{p}\) atoms, and the M (4-3) 4.86 keV (yield \(\sim\) 60%) and L (3-2) 11.13 keV (yield about 25%) lines from \(He^{4}\bar{p}\) atoms. These lines can be probed in dedicated observations by the forthcoming sensitive X-ray spectroscopic missions XRISM and Athena and in wide-field X-ray surveys like the SRG/eROSITA all-sky survey.

Bounds on the possible density of antistars in the Galaxy were studied in several papers [57, 58, 59]. It was shown that the restrictions are rather mild, and an abundant density of compact antistars in the universe, even in the Galaxy, does not violate existing observations. The reason is that the annihilation proceeds in a thin surface layer with a very short depth, of the order of the proton mean free path in the dense stellar medium.
On the other hand, if there were disperse antimatter clouds, the annihilation would be by far more efficient; if so, the anticlouds did not survive to our time, though they might have existed in the early universe. Very impressive would be a star-antistar collision, which may even be a quasi-periodic process of star-antistar direct contact, an explosion forcing them apart, and a return to each other by gravitational attraction, etc.

## 15 PBH and anti-creation mechanism

The model of PBH and antimatter creation of refs. [1, 2] is essentially based on the supersymmetry (SUSY) motivated baryogenesis proposed by Affleck and Dine (AD) [28], though the full extent of SUSY is not necessary. SUSY predicts the existence of a scalar field \(\chi\) (or several such fields) with non-zero baryon number, \(B\neq 0\). Another important feature of the scenario is the existence of flat directions in the self-potential of \(\chi\), i.e. the directions along which the potential does not rise. Simple examples of such potentials with flat directions are the following: for the quartic self-interaction

\[U_{\lambda}(\chi)=\lambda|\chi|^{4}\left(1-\cos 4\theta\right) \tag{8}\]

and for the quadratic mass-like term:

\[U_{m}(\chi)=m^{2}|\chi|^{2}[1-\cos(2\theta+2\alpha)] \tag{9}\]

where \(\chi=|\chi|\exp(i\theta)\) and \(m=|m|e^{i\alpha}\). If \(\alpha\neq 0\), C and CP are broken. In GUT SUSY baryonic number is naturally non-conserved. In our toy model this is described by the non-invariance of \(U(\chi)\) with respect to the phase rotation \(\chi\rightarrow e^{i\theta}\chi\).

In the course of inflation \(\chi\) quantum-fluctuates along the flat directions with increasing amplitude, due to the quantum instability of massless fields at the De Sitter stage [60, 61]. Thus, taking into account that the wave length of the fluctuations exponentially rises, \(\chi\) could effectively acquire a large classical value. When inflation is over and the symmetry maintaining the flat directions breaks down, \(\chi\) starts to evolve towards the equilibrium point, \(\chi=0\), according to the equation of Newtonian mechanics with a liquid friction (Hubble friction) term:

\[\ddot{\chi}+3H\dot{\chi}+U^{\prime}(\chi)=0. \tag{10}\]

Due to quantum fluctuations orthogonal to the flat directions, \(\chi\) obtains momentum in the direction orthogonal to the valley. This is how the baryonic number of \(\chi\) is generated:

\[B_{\chi}=\dot{\theta}|\chi|^{2}, \tag{11}\]

\(B\) is analogous to the mechanical angular momentum in the two dimensional complex plane \([\Re e\,\chi,\Im m\,\chi]\). Decays of \(\chi\) into quarks and antiquarks are supposed to conserve baryonic number and transform the baryonic number of \(\chi\) into the baryonic number of quarks. In this process a huge cosmological baryon asymmetry can be generated, much larger than the observed one, \(\beta\approx 10^{-9}\).

If \(m\neq 0\), the angular momentum, \(B\), could be generated due to the possibly different directions of the quartic and quadratic valleys at low \(\chi\). In this case the orthogonal quantum fluctuations are unnecessary. If the CP-odd phase \(\alpha\) is small but non-vanishing, both baryonic and antibaryonic domains might be formed, with possible dominance of one of them.

In the model of refs. [1, 2] the AD scenario of baryogenesis was essentially modified by the addition of an interaction of the Affleck-Dine field \(\chi\) with the inflaton, \(\Phi\).
The interaction potential is taken in the form:

\[U=g|\chi|^{2}(\Phi-\Phi_{1})^{2}+\lambda|\chi|^{4}\,\ln\left(\frac{|\chi|^{2}}{\sigma^{2}}\right)+(\lambda_{1}\chi^{4}+h.c.)+(m^{2}\chi^{2}+h.c.). \tag{12}\]

The first term in this expression is a new type of interaction potential between \(\Phi\) and \(\chi\). \(\Phi_{1}\) is a constant: a value which the inflaton field \(\Phi\) passes in the process of inflation. The remaining duration of inflation after \(\Phi\) passes \(\Phi_{1}\) should secure a number of e-foldings of about 30-40, to allow for the formation of sufficiently massive PBHs. Though the interaction between \(\chi\) and \(\Phi\) looks rather artificial, it is not so: this is the general renormalizable coupling of two scalar fields. The second term in eq. (12) is the Coleman-Weinberg potential [62], which arises as a result of one-loop quantum corrections to the \(\lambda|\chi|^{4}\) interaction. The remaining two terms are the toy-model ones describing the flat directions. The constants \(\lambda_{1}\) and \(m\) are, generally speaking, complex. This may lead to C and CP violation. However, the charge symmetry would be broken only if the relative phase of \(\lambda_{1}\) and \(m\) is nonzero. Coupling of \(\chi\) to fermions (quarks) can break C and CP as well.

When \(\Phi>\Phi_{1}\), potential (12) has a deep minimum near \(\chi=0\), and \(\chi\) classically stays there. In the course of inflation \(\Phi\) drops down and at some moment reaches \(\Phi_{1}\). So the barrier disappears and the window to the flat direction opens. During this period, when \(\Phi\) stays close to \(\Phi_{1}\), the field \(\chi\) starts to diffuse away from the old minimum, according to the quantum diffusion equation derived by Starobinsky, which we have generalised to a complex field \(\chi\). At some stage, for sufficiently large \(\chi\), the diffusion turns into classical motion.

If the window to the flat direction, when \(\Phi\approx\Phi_{1}\), is open only during a relatively short period, cosmologically small but possibly astronomically large bubbles with high \(\beta\) could be created, occupying a small fraction of the universe volume. Indeed, when \(\Phi\) passes the value \(\Phi_{1}\) sufficiently far, the old minimum at \(\chi=0\) reappears and \(\chi\) goes back to zero. While \(\chi\) is large it propagates along the flat direction of the quartic potential. When finally \(\chi\) becomes small, it starts to feel the quadratic potential, and in the process of motion from the quartic to the quadratic potential \(\chi\) acquires a large angular momentum, that is, a large baryonic number. If the probability for \(\chi\) to reach a large value is not too big, cosmologically small but possibly astrophysically large bubbles with high baryon density would be formed, while the rest of the universe has the normal \(\beta\approx 6\cdot 10^{-10}\), created by small \(\chi\) or by some other mechanism of cosmological baryogenesis.

Initially, large isocurvature perturbations were generated in the chemical content of massless quarks, while the density perturbations stayed practically zero. Density perturbations are generated rather late, after the QCD phase transition, when massless quarks turn into heavy nucleons with masses of about 1 GeV, much larger than the temperature of the phase transition, \(T_{qcd}\sim 100\) MeV. The emerging universe looks like a piece of Swiss cheese, where the holes are high baryonic density objects occupying a minor fraction of the universe volume.
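A minimal numerical sketch of the rotation mechanism of eqs. (10)-(11) may be helpful here. It uses only the toy flat-direction terms (8)-(9), not the full potential (12), and all parameter values and the schematic Hubble rate are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy integration of Eq. (10) for the complex AD field chi with the
# flat-direction potentials (8)-(9). In terms of chi, the potential reads
# U = lam*(|chi|^4 - Re chi^4) + |m|^2|chi|^2 - Re(m^2 chi^2),
# so dU/dchi* = 2*lam*|chi|^2*chi - 2*lam*conj(chi)^3
#               + |m|^2*chi - conj(m)^2*conj(chi).
lam = 1.0
m = 0.2 * np.exp(1j * 0.3)            # complex mass; CP-odd phase alpha = 0.3

def rhs(t, y):
    chi, dchi = y
    H = 1.0 / (1.0 + t)               # schematic post-inflationary Hubble rate
    dU = (2 * lam * abs(chi)**2 * chi - 2 * lam * np.conj(chi)**3
          + abs(m)**2 * chi - np.conj(m)**2 * np.conj(chi))
    return [dchi, -3 * H * dchi - dU]

# chi starts large and real, i.e. on the quartic flat direction (theta = 0):
sol = solve_ivp(rhs, (0.0, 200.0), [2.0 + 0.0j, 0.0 + 0.0j], max_step=0.05)
chi, dchi = sol.y

# Baryonic charge B_chi = theta_dot |chi|^2 = Im(conj(chi)*chi_dot), Eq. (11):
B = np.imag(np.conj(chi) * dchi)
print("charge generated by the misaligned quadratic valley:", B[-1])
```

With \(\alpha\neq 0\) the quadratic valley is tilted with respect to the quartic one, so even a field starting at rest on the real axis acquires a nonzero \(\dot{\theta}\), i.e. a baryonic charge; the high-\(\beta\) bubbles of the full scenario correspond to the regions where \(\chi\) reached large values through the open window.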
These high-B bubbles (HBB) mostly turn into primordial black holes. This mechanism of massive PBH formation is quite different from all other known ones. The foundation of PBH creation is built during inflation by making large isocurvature fluctuations at relatively small scales, with practically vanishing density perturbations, which appear only much later. The mass spectrum of PBHs reflects the distribution of the bubbles by size during inflation. Its log-normal form is a general feature of diffusion processes.

## 16 Summary of the results and conclusion

\(\bullet\) A great number of outstanding problems of the canonical cosmology can be nicely resolved if the universe is populated by primordial black holes with masses in the interval from the solar mass up to supermassive BHs with masses of the order of a billion solar masses.

\(\bullet\) An inverted mechanism of galaxy formation is proposed, where first an SMBH was formed and later it seeded the galaxy formation by gravitational attraction of the surrounding matter. Thus the existence of SMBHs in almost empty space can be understood.

\(\bullet\) The early galaxies observed by HST and JWST, when the universe was only several hundred million years old, could be created if seeded by SMBHs.

\(\bullet\) The existence of a noticeable number of galaxies with masses that are too small to allow for the creation of the observed SMBHs inside them can be explained if these SMBHs are primordial.

\(\bullet\) Creation of SMBHs in large contemporary galaxies by the conventional accretion mechanism demands a time larger than the universe age. The problem disappears if the central SMBH is primordial.

\(\bullet\) Observations of several dwarf galaxies with SMBHs in their centres confirmed our prediction made several years ago.

\(\bullet\) The theoretically predicted log-normal mass spectrum of PBHs is verified by the chirp mass distribution of the gravitational waves observed by LIGO/Virgo. The agreement between observation and theory is impressively good.

\(\bullet\) PBHs formed according to our scenario explain the peculiar features of the sources of the GWs observed by LIGO/Virgo, e.g. the existence of BHs with \(M=100M_{\odot}\).

\(\bullet\) The density of the intermediate mass black holes (IMBH), \(M=(10^{2}-10^{5})M_{\odot}\), agrees well with their primordial origin. The assumption of astrophysical formation of IMBHs encounters serious problems.

\(\bullet\) The extremely old stars predicted by the model seem to exist; even the existence of the "older than the universe" star can be explained, because its old age is mimicked by its unusual initial chemistry. The model also predicts too fast moving stars, which are also observed.

\(\bullet\) A natural consequence of the suggested model of PBH creation is a noticeable population of our Galaxy by antimatter. This striking consequence seems to be confirmed by recent observations.

## Acknowledgement

The work is supported by the RSF grant 22-12-00103
2310.08366
Constraints on the velocity of gravitational waves from NANOGrav 15-year data set
General relativity predicts that gravitational waves propagate at the speed of light. Although ground-based gravitational-wave detectors have successfully constrained the velocity of gravitational waves in the high-frequency range, extending this constraint to the lower frequency range remains a challenge. In this work, we utilize the deviations in the overlap reduction function for a gravitational-wave background within pulsar timing arrays to investigate the velocity of gravitational waves in the nanohertz frequency band. By analyzing the NANOGrav 15-year data set, we obtain a well-constrained lower bound for the velocity of gravitational waves that $v \gtrsim 0.87\,c$, where $c$ is the speed of light.
Yan-Chen Bi, Yu-Mei Wu, Zu-Cheng Chen, Qing-Guo Huang
2023-10-12T14:38:09Z
http://arxiv.org/abs/2310.08366v1
# Constraints on the velocity of gravitational waves from NANOGrav 15-year data set

###### Abstract

General relativity predicts that gravitational waves propagate at the speed of light. Although ground-based gravitational-wave detectors have successfully constrained the velocity of gravitational waves in the high-frequency range, extending this constraint to the lower frequency range remains a challenge. In this work, we utilize the deviations in the overlap reduction function for a gravitational-wave background within pulsar timing arrays to investigate the velocity of gravitational waves in the nanohertz frequency band. By analyzing the NANOGrav 15-year data set, we obtain a well-constrained lower bound for the velocity of gravitational waves that \(v\gtrsim 0.87\,c\), where \(c\) is the speed of light.

## I Introduction

General relativity (GR) predicts three significant characteristics of gravitational waves (GWs): propagation at the speed of light, two tensor polarization modes, and quadrupole radiation. While extensive research has been conducted on the latter two characteristics [1; 2; 3; 4; 5; 6], studies often tend to focus on scenarios involving a non-zero graviton mass when it comes to propagation [7; 8], thereby overlooking a generic modification of the velocity of GWs itself.

Ground-based detectors, such as LIGO, Virgo and KAGRA, have been observing deterministic GW signals at high frequencies (Hz \(\sim\) kHz) from the final mergers of compact binary systems [9]. These observations have significantly advanced our understanding of gravity [10; 11; 12; 13]. Notably, the event GW170817 has constrained the propagation velocity of GWs as \(|1-v|\lesssim 10^{-15}\) at the frequency of \(f\sim 100\)Hz [10; 11]. However, the velocity constraint at high frequencies may not necessarily apply to the lower frequency range. Therefore, it is essential to scrutinize the constraints on the velocity from a lower frequency band, which is accessible by pulsar timing arrays (PTAs).

PTAs are optimal for detecting the stochastic gravitational-wave background (SGWB) at nHz by monitoring the times of arrival (TOAs) of radio pulses emitted by a set of millisecond pulsars over decades. Recently, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) [6; 14], the European PTA (EPTA) together with the Indian PTA (InPTA) [15; 16], the Parkes PTA (PPTA) [17; 18], and the Chinese PTA (CPTA) [19] have announced evidence for a stochastic signal consistent with the Hellings-Downs correlations [20], pointing to the SGWB origin of this signal.

The SGWB serves as a valuable tool for revealing variations in the phase velocity of GWs. These variations, predicted by several modified gravity theories [21; 22; 23; 24], can impact the overlap reduction function (ORF) in PTAs [25], providing an effective diagnostic for deviations from GR. Previous attempts to constrain the velocity using the SGWB [26; 4] have been flawed, as they only fit the spatial correlations while disregarding the information provided by the GW energy density. In this work, we conduct a comprehensive investigation by considering both the spatial correlations and the energy density spectrum of the SGWB.

In this paper, we utilize the NANOGrav 15-year data set to impose constraints on the velocity of GWs via the investigation of the SGWB. It is worth noting that we do not delve into the distinction between phase velocity and group velocity [25; 27]. Our analysis uncovers a novel constraint on the GW velocity.
This constraint is robust for lower values but appears weaker at higher values. To be more precise, the posterior sharply truncates when the velocity is subluminal, while it remains relatively flat when the velocity is superluminal. This outcome suggests that the available data can only discern a lower limit for the velocity of GWs. Throughout this paper, we employ geometric units with \(c=G=\hbar=1\).

The rest of the paper is organized as follows. In Sec. II, we review the ORF as a function of GW velocity for an SGWB. In Sec. III, we describe the data and methodology for the analyses. Finally, in Sec. IV, we present the results and discuss their implications.

## II Overlap reduction function

We now briefly review the calculation of the ORF when GWs propagate at a constant speed \(v\). We adopt a parameterized dispersion relation as

\[\omega=vk, \tag{1}\]

where \(\omega\) is the angular frequency, and \(k\) is the wave number. It's worth noting that, in this expression, both the phase velocity and the group velocity are identical and equal to \(v\), thus avoiding any confusion between the two. After introducing this relationship, the mode function of the GW plane wave is given by

\[h_{ij}\left(t-\frac{1}{v}\hat{k}\cdot\vec{x}\right)=\int dfh_{ij}\left(f,\frac{1}{v}\hat{k}\right)e^{i2\pi f\left(t-\frac{1}{v}\hat{k}\cdot\vec{x}\right)}, \tag{2}\]

where the velocity \(v\) encodes the deviation from GR. When setting \(v=1\), it reduces to the GR case.

An SGWB causes delays in each pulsar's TOAs (in other words, timing residuals) in a characteristic spatially correlated way. The corresponding timing-residual cross power spectral density between any two pulsars, \(a\) and \(b\), can be modeled by a power-law form

\[S_{ab}(f)=\Gamma_{ab}\frac{A_{\rm GWB}^{2}}{12\pi^{2}}\left(\frac{f}{f_{\rm yr}}\right)^{-\gamma}f_{\rm yr}^{-3}, \tag{3}\]

where \(A_{\rm GWB}\) is the amplitude of the SGWB at the reference frequency \(f_{\rm yr}=1/\)year, \(\gamma\) is the spectral index of the SGWB, and \(\Gamma_{ab}\) is the ORF that describes the average correlations between pulsars \(a\) and \(b\) in the array as a function of the angular separation between them. Note that only the tensor mode is considered throughout this work. The most general ORF between two pulsars \(a\) and \(b\) can usually be expressed as

\[\Gamma_{ab}(f,\xi)=\beta\int d\hat{k}\sum_{A=+,\times}R_{a}^{A}(f,\hat{k})R_{b}^{A*}(f,\hat{k}), \tag{4}\]

where \(\beta\) is the normalization factor. The quantity \(R_{a}^{A}(f,\hat{k})\) represents the detector response function for a timing residual measurement. It pertains to a detector with length \(L_{a}\) (namely the distance from the pulsar \(a\) to the Earth), sensitive to a plane GW with polarization \(A\), propagation direction \(\hat{k}\), and frequency \(f\). It can be described as

\[R_{a}^{A}(f,\hat{k})=\frac{1}{i2\pi f}\frac{\hat{p}_{a}^{i}\hat{p}_{a}^{j}e_{ij}^{A}(\hat{k})}{2(1+\frac{1}{v}\hat{k}\cdot\hat{p}_{a})}\left(1-e^{-i2\pi fL_{a}(1+\frac{\hat{k}\cdot\hat{p}_{a}}{v})}\right), \tag{5}\]

where \(\hat{p}_{a}^{i}\) is the direction to the pulsar \(a\). A more sophisticated approach to expressing the ORF is to decompose it into spherical harmonics, in the same way as is traditionally applied to the analysis of the cosmic microwave background [28; 29].
In this manner, the ORF is expressed as [25; 27]

\[\Gamma_{ab}(f,\xi)=\beta\sum_{l=2}^{\infty}(2l+1)\frac{2(l-2)!}{(l+2)!}|c_{l}(f)|^{2}P_{l}(\cos\xi_{ab}), \tag{6}\]

where \(P_{l}(\cos\xi)\) is the Legendre polynomial and the coefficient \(c_{l}(f)\) is written as [25]

\[c_{l}(f)=2i(l+1)\int_{-1}^{1}dx\,e^{-i\pi fL(1+x/v)}\frac{\sin(\pi fL(1+x/v))}{(1+x/v)}\left(P_{l}(x)(-l+(2+l)x^{2})-2xP_{l+1}(x)\right), \tag{7}\]

where \(L\) stands for the typical distance of pulsars and the quantity \(fL\) is set to 100 [30]. Following [25], we can safely ignore the exponential factor when \(v\geq 1\), while keeping it in the opposite case.

## III Data and Methodology

The NANOGrav 15-year data set includes observations of 68 pulsars, of which 67 pulsars have an observational timespan over 3 years and have been used for the SGWB search [14]. All of these pulsars collectively generate 2211 pairs. To reduce computation cost, we have precalculated the ORFs varying with \(v\) at these pair separations, by interpolating the ORF as a two-dimensional function of the velocity \(v\) and the pair separation \(\xi\).

Figure 1: ORF for the SGWB as a function of the angular separation \(\xi\) with different GW velocities \(v\). Note that we normalize the ORF such that at \(\xi=0\) the value is chosen to be 0.5. For the case with subluminal phase velocity, the ORF tends to diverge at \(\xi=0\); therefore we choose an arbitrary normalization for comparison.

Besides the SGWB signal characterised by the ORF obtained above, several other effects also contribute to the TOAs, such as the measurement uncertainties of the timing, the irregularities of the pulsar's motion, and so on [6]. In practice, these effects should be analysed all together within the timing residuals,

\[\delta t=M\epsilon+\delta t_{\text{WN}}+\delta t_{\text{RN}}+\delta t_{\text{SGWB}}, \tag{8}\]

where \(M\) is the design matrix and \(\epsilon\) is an offset vector of timing model parameters. Here, \(\delta t_{\text{WN}}\) is the white noise term that accounts for the measurement uncertainty of the instruments, described by the three parameters "EFAC", "EQUAD" and "ECORR" [31]. Besides, \(\delta t_{\text{RN}}\) represents the red noise term from the intrinsic noise of the pulsar, modeled as a power law with amplitude \(A_{\text{RN}}\) and index \(\gamma_{\text{RN}}\) [31; 32],

\[S(f)=\frac{A_{\text{RN}}^{2}}{12\pi^{2}}\left(\frac{f}{f_{\text{yr}}}\right)^{-\gamma_{\text{RN}}}f_{\text{yr}}^{-3}. \tag{9}\]

The correlations between different TOAs, \((t_{i},t_{j})\), are calculated using the Wiener-Khinchin theorem [31], resulting in the covariance matrix elements

\[C_{ij}^{\text{RN}}=\int dfS(f)\cos(2\pi f(t_{i}-t_{j})). \tag{10}\]

In practice, we employ the "Fourier-sum" method to model both the red noise and the SGWB signal, utilizing Fourier bases \(F\) and their associated amplitudes \(a\), which are related to the spectral density Eq. (9) [33]. Following [6; 34], we use frequencies \(f_{i}=i/T\) with the observational timespan \(T=16.03\)yr, and set \(i=1-30\) for the red noise and \(i=1-14\) for the SGWB signal. To enhance computational efficiency, the stochastic processes are typically assumed to be Gaussian and stationary [35]. The log likelihood is evaluated as

\[\ln L(\delta t|\Theta)=-\frac{1}{2}\left[r^{T}C^{-1}r+\ln\det(2\pi C)\right], \tag{11}\]

where \(r=\delta t-Fa-M\epsilon\) and \(C=\langle rr^{T}\rangle\) is the total covariance matrix.
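As an illustration of Eqs. (9)-(11) (a minimal sketch, not the Enterprise implementation), the red-noise covariance and the Gaussian log-likelihood can be evaluated as follows; the discretization of the integral over frequency bins of width \(1/T\) mirrors the Fourier-sum binning described above:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

F_YR = 1.0 / (365.25 * 86400.0)        # reference frequency 1/yr in Hz

def red_noise_cov(toas, A, gamma, nbins=30):
    """Covariance of Eqs. (9)-(10), discretizing the integral over f_i = i/T."""
    T = toas.max() - toas.min()
    f = np.arange(1, nbins + 1) / T
    S = A**2 / (12 * np.pi**2) * (f / F_YR) ** (-gamma) * F_YR ** (-3)
    dt = toas[:, None] - toas[None, :]
    # Riemann sum over frequency bins of width 1/T:
    return np.einsum("k,ijk->ij", S / T, np.cos(2 * np.pi * f * dt[..., None]))

def log_likelihood(r, C):
    """Gaussian log-likelihood of Eq. (11), via a Cholesky factorization."""
    cf = cho_factor(C, lower=True)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))   # ln det C
    return -0.5 * (r @ cho_solve(cf, r) + logdet + len(r) * np.log(2 * np.pi))
```

In the actual analysis the low-frequency processes are instead represented by the rank-reduced Fourier bases \(F\) and amplitudes \(a\), which is what makes the full 67-pulsar computation tractable.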
Following the Bayesian inference approach adopted by [6], the posterior is given as

\[P(\Theta|\delta\mathbf{t})\propto L(\delta\mathbf{t}|\Theta)\pi(\Theta), \tag{12}\]

where \(\pi(\Theta)\) is the prior probability distribution. The parameters and their prior distributions needed for the analyses are listed in Table 1. All the aforementioned analyses rely on the JPL Solar System Ephemeris (SSE) DE440 [36]. We utilize the PINT timing software [37] to determine the design matrix \(M\) for the timing model, employ the Enterprise package [38] to compute the likelihood \(L(\delta t|\Theta)\) by marginalizing over the timing model offset parameters \(\epsilon\), and utilize the PTMCMCSampler [39] package to conduct Markov Chain Monte Carlo (MCMC) sampling for constraining the velocity of the SGWB.

When conducting the analysis, we initiate noise analyses by solely considering white and red noise for each individual pulsar. Subsequently, we aggregate all 67 pulsars into a whole PTA, fix the white noise parameters to their maximum-likelihood values estimated from the single pulsar noise MCMC chains, and allow the red noise parameters to vary simultaneously with the SGWB signal parameters. In a signal search among all the pulsars, fixing the white noise parameters has negligible impact on the results [40], but can efficiently reduce the computational cost.

Figure 2: The posterior distribution for the velocity of GWs.

## IV Result and Discussion

As previously discussed, the ORF of an SGWB exhibits variations as the velocity of GWs changes. In this work, we derive constraints on the velocity by analyzing these variations. The posterior distribution of the velocity is depicted in Fig. 2, which has been smoothed using the kernel density estimation (KDE) method. For this analysis, we employ the Gaussian function as the kernel function, with a bandwidth set to \(0.09\). Additionally, we implement boundary correction [41, 42] for the KDE using the mirroring method.

The posterior of the velocity exhibits a clear lower limit and flattens for velocities larger than the speed of light. As there is not a well-established method for estimating the confidence level (CL) in this particular scenario, we propose a reasonable approach. Specifically, the posterior displays a peak at \(\log_{10}v_{\text{peak}}\sim 0.127\). Assuming the left side of the peak approximately follows a Gaussian distribution, we use the \(1/e^{2}\) height width to represent the \(2\sigma\) CL. This method yields a lower bound of \(\log_{10}v\gtrsim-0.059\), or equivalently, \(v\gtrsim 0.87\).

The posterior distribution of \(v\) is consistent with the variation of the ORF with \(v\) in Fig. 1. Due to the significant differences in the ORF in the subluminal case, a natural lower bound can be determined. However, the relatively flat posterior for velocities greater than \(1\) indicates that distinguishing the superluminal case from the normal luminal one using the currently detected SGWB remains challenging. Furthermore, a massive gravity with a non-zero graviton mass seems to correspond to our superluminal velocity case [4, 26]. However, the dispersion relation \(\omega=\sqrt{m^{2}+|k|^{2}}\) is not equivalent to the dispersion relation we used. Therefore, our approach allows for the exploration of possibilities beyond the commonly assumed massive gravity when introducing variations in the dispersion relation. Its capacity to encompass both the superluminal and subluminal cases also makes our approach unique and generic.
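A minimal sketch of the kind of boundary-corrected KDE and \(1/e^{2}\)-width bound described above (not the authors' code; note that scipy's `bw_method` is a scale factor relative to the sample standard deviation, not the absolute bandwidth quoted in the text):

```python
import numpy as np
from scipy.stats import gaussian_kde

def mirrored_kde(samples, lo, bw=0.09):
    """KDE on [lo, inf) with boundary correction by mirroring about the edge."""
    mirrored = np.concatenate([samples, 2 * lo - samples])
    kde = gaussian_kde(mirrored, bw_method=bw)
    grid = np.linspace(lo, samples.max(), 1000)
    return grid, 2 * kde(grid)        # factor 2 restores the normalization

def lower_bound_1e2(grid, pdf):
    """Left-side bound where the posterior falls to 1/e^2 of its peak height."""
    peak = np.argmax(pdf)
    below = np.where(pdf[: peak + 1] < pdf[peak] / np.e**2)[0]
    return grid[below[-1]] if below.size else grid[0]

# Usage on MCMC samples of log10(v), with a hypothetical prior edge at -1:
# grid, pdf = mirrored_kde(log10v_samples, lo=-1.0)
# print("2-sigma-like lower bound:", lower_bound_1e2(grid, pdf))
```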
## Acknowledgements

We acknowledge the use of the HPC Cluster of ITP-CAS. QGH is supported by grants from NSFC (Grant Nos. 12250010, 11975019, 11991052, 12047503), the Key Research Program of Frontier Sciences, CAS (Grant No. ZDBS-LY-7009), and the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB15). ZCC is supported by the National Natural Science Foundation of China (Grant No. 12247176) and the China Postdoctoral Science Foundation Fellowship No. 2022M710429.
2301.11176
A simple model for pink noise from amplitude modulations
We propose a simple model for the origin of pink noise (or 1/f fluctuation) based on the beat of cooperative waves. These cooperative waves arise spontaneously in a system with synchronization, resonance, and infrared divergence. Many cooperative waves with close frequencies can produce signals of arbitrary small frequencies from a system of small size. This beat mechanism can be understood as amplitude modulation. The pink noise can appear after the demodulation process, which produces a variety of pink noise in many fields. The pink noise thus formed from the beat has nothing to do with dissipation or long-time memory. We also suggest new ways of looking at pink noise in shallow earthquakes, solar flares, and stellar activities.
Masahiro Morikawa, Akika Nakamichi
2023-01-26T15:33:19Z
http://arxiv.org/abs/2301.11176v1
# A simple model for pink noise from amplitude modulations ###### Abstract We propose a simple model for the origin of pink noise (or 1/f fluctuation) based on the beat of cooperative waves. These cooperative waves arise spontaneously in a system with synchronization, resonance, and infrared divergence. Many cooperative waves with close frequencies can produce signals of arbitrarily small frequencies from a system of small size. This beat mechanism can be understood as amplitude modulation. The pink noise can appear after the demodulation process, which produces a variety of pink noise in many fields. The pink noise thus formed from the beat has nothing to do with dissipation or long-time memory. We also suggest new ways of looking at pink noise in shallow earthquakes, solar flares, and stellar activities. ## 1 Introduction Pink noise is ubiquitous. This noise is characterized by the power-law behavior in the very low-frequency region of the power spectrum density (PSD) with power \(-\alpha\) (\(0.5<\alpha<1.5\)). This noise is also known as 1/f fluctuation or flicker noise. Since the first discovery of pink noise in a vacuum tube current [1], the same noise has been observed in many systems: semiconductors, thin metals, biomembranes, crystal oscillators, very long-term temperature variations, the loudness of orchestral music, fluctuations in the Earth's rotation speed, fluctuations in the intensity of cosmic rays, heartbeats, postural control, magnetoencephalography and electroencephalography in the brain, etc. [2, 3]. There have been many discussions about the origin of pink noise [2, 3, 4], but there seems to be no clear conclusion. Many models have been proposed that give rise to pink noise, but no universal mechanism has been discovered. Since pink noise is ubiquitous, the mechanism should be simple enough. However, all attempts to apply the basic concepts and techniques of standard statistical mechanics seem to have encountered conflicts and disputes. People have therefore tended to consider more fundamental concepts that would rewrite the theory of standard statistical mechanics. A typical mechanism for producing arbitrarily low-frequency fluctuations would be the wave beat, or amplitude modulation, of the primary high-frequency fluctuations. This amplitude modulation can yield pink noise if the frequencies are concentrated in a small range; the secondary beat wave can then have much lower frequencies. One of the authors has already proposed this mechanism for the pink noise of sounds and music [5]. Furthermore, this concentration should be cooperative and systematic in order to form the power-law PSD. We propose at least three types of cooperative systems that can produce pink noise. They are a) synchronization (section three), b) resonance (section four), and c) infrared (IR) divergence (section five). If the pink noise is an amplitude modulation, a demodulation mechanism must also exist. This is because the modulated data as a whole contain only high-frequency information, while the data after demodulation can explicitly show the low-frequency information, including the pink noise. The demodulation mechanism can be intrinsic to the system, or it can be prepared in the measurement procedure. The many possible demodulation mechanisms make the pink noise phenomena diverse: taking the square of the original signal, rectification, thresholding, etc.
For example, when the electrical current or voltage exceeds a threshold in a biological body, ignition occurs and produces spikes in the nerve cells. Thus any pink noise in the electric current is transferred to the nerve signal. We begin our discussion in section two, listing crucial clues to the origin of pink noise, all of which point to the possibility that pink noise is amplitude modulation. We then propose three mechanisms that lead to the modulation. In section three, we discuss the most typical mechanism, synchronization. We show that a) exponential synchronization yields a power index of \(-1\), and power-law synchronization yields a power index slightly different from \(-1\). In section four, b) resonance also yields pink noise, since the concentration of the excited eigenmodes around the fiducial frequency is systematically approximated by the exponential function in the relevant domain. In section five, c) infrared divergence in bremsstrahlung can give pink noise. In section six, we discuss the robustness of pink noise and several demodulation mechanisms that yield a variety of pink noise. In the final conclusion, section seven, we summarize our proposal and possible verifications based on the points presented in section two. We also summarize our future prospects for amplitude modulation in a variety of systems. ## 2 Some crucial clues for pink noise We will now list some crucial clues to the origin of the pink noise. This process is quite important, because it can clarify which principles of statistical mechanics are useful, and which are not, for describing the pink noise. 1. Wave: Systems that exhibit pink noise are often waves: sound waves, electric current, air-fluid, liquid flow, etc. Waves can interfere with each other. Thus the interference of waves can be a clue to the origin of pink noise. 2. Small system and seemingly long memory: It is bizarre that an ultra-low frequency signal can come from a very small system. As an extreme example [6], semiconductor films of 2.5 nm layers give observable pink noise. A small semiconductor can have pink noise down to \(10^{-7}\) Hz [7], and voltage fluctuations through a semiconductor show pink noise from about 1 Hz down to \(10^{-6.3}\) Hz [8]. These remarkably low frequencies sound almost impossible for ordinary small systems. In this context, if the Wiener-Khinchin theorem \(S(\omega)=\int_{-\infty}^{\infty}d\tau\,\langle x(t)x(t-\tau)\rangle e^{-2\pi i\omega\tau}\) were correct, then the strong low-frequency signal in \(S(\omega)\) of the pink noise would necessarily indicate a non-vanishing long-time correlation \(\langle x(t)x(t-\tau)\rangle\). Therefore, the Wiener-Khinchin theorem may not hold for pink noise. 3. Apparent lack of a lower cutoff in the PSD: It is often discussed that the pink noise does not seem to have an explicit lower cutoff in the PSD determined by any physics governing the system. Therefore, the system exhibiting pink noise may not be in a stationary state, and it may be useless to base discussions on the stationarity of the system. 4. Independence from dissipation: It is remarkable that the pink noise appears even in the Hamiltonian mean-field (HMF) model, which is a strictly conservative system [9] and has nothing to do with dissipation. Thus the usual fluctuation-dissipation theorem of the type \(\left\langle\delta x^{2}\right\rangle\propto RkT\) may not hold for the pink noise (\(R\) is the electric resistance and \(kT\) is the temperature).
5. Square of the original signal: When deriving the pink noise, it is often the case that the original time sequence is squared prior to the PSD analysis. For example, in the case of music [10], the sound wave data are always squared for the PSD; the authors claim that these squared data represent the loudness. Similarly, in the case of the HMF model [9], the authors always take the square of the original variables in order to obtain the pink noise. In both cases, the original data before squaring do not show any pink noise. In the case of the electric current, this procedure is not manifest, although the seminal paper [1] emphasizes the square of the voltage, \(V^{2}\), for the PSD. From the above five clues, we speculate that the beats of many synchronized waves may be the origin of 1/f noise. A simple superposition of two waves \(\sin(\omega t+\lambda t)+\sin(\omega t-\lambda t)=2\cos(\lambda t)\sin(\omega t)\) with \(\omega\gg\lambda>0\) has no low-frequency component around \(\lambda\) in the PSD. On the other hand, the square of the superposed wave above has a low-frequency signal, _i.e._, the beats, around \(2\lambda\) in its PSD (a minimal numerical check of this is sketched at the end of this section). Incidentally, it is sometimes confusing that the wave beat is "audible" although the PSD of the original superposition of the two waves does not show the corresponding low-frequency signal. The above argument reminds us of a typical musical instrument, the Theremin [11], which uses the wave beat. By mixing the high-frequency signals of 1000 kHz and 999.560 kHz generated by an electric circuit, a low-frequency signal of 440 Hz can be extracted as audible sound. The latter frequency can be varied slightly by the player's hand, antenna distance, and capacitance, to produce the desired frequency signal. Thus amplitude modulation can produce arbitrarily low-frequency signals within a small-size system. The modulated signal has no intrinsic memory and has nothing to do with dissipation. Another familiar device is the AM radio, which clearly shows the wave beat or amplitude modulation (AM). By using 526.5 kHz to 1606.5 kHz radio waves, a low-frequency audible signal is extracted. In this case, the rectification (demodulation) process is essential for obtaining audible low-frequency signals. This demodulation process is also essential for the pink noise in our proposal. In later sections, we will see a variety of pink noise arising from the many ways to demodulate. The above five points will also serve as an elementary verification of our proposal. This will be discussed in later sections. There appear to be several causes of the wave beat that forms pink noise, but the concentration of the wave frequencies is the essence of the low-frequency signals. We will now focus on such causes separately in the following sections: a) cooperative waves, b) resonance, and c) infrared divergence.
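The two-wave observation above can be checked numerically in a few lines. This is a minimal sketch under assumed, illustrative parameters (carrier 50 Hz, beat 1 Hz): the spectrum of the raw superposition has no power near the beat frequency, while the spectrum of the squared signal does.

```python
import numpy as np

# Superpose two close frequencies; the low-frequency beat shows up in the
# spectrum of the *squared* signal only (illustrative parameters).
fs, T = 1000.0, 100.0                  # sampling rate [Hz] and duration [s]
t = np.arange(0.0, T, 1.0 / fs)
f0, lam = 50.0, 0.5                    # carrier and half beat frequency
phi = np.sin(2 * np.pi * (f0 + lam) * t) + np.sin(2 * np.pi * (f0 - lam) * t)

def spectrum(x):
    X = np.fft.rfft(x - x.mean())      # remove DC before transforming
    return np.fft.rfftfreq(len(x), 1.0 / fs), np.abs(X) ** 2

f, p_raw = spectrum(phi)               # lines at f0 +- lam only
f, p_sq = spectrum(phi ** 2)           # a clear line at the beat frequency 2*lam
low = (f > 0) & (f < 5)
print("low-frequency power, raw vs squared:", p_raw[low].max(), p_sq[low].max())
```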
## 3 Beats from cooperative waves In this section, we will analyze the cause of the wave beat, especially when the frequencies of the waves spontaneously approach each other. We consider cooperative systems that exhibit this behavior. ### Exponential Approach The most typical type of synchronization would be the exponential approach, such as in the case of the Kuramoto model [12], \(\omega=e^{-\lambda t}\), where \(\omega\) is the frequency, \(\lambda\) is the approach speed, and \(t\) is the time. Then the frequency distribution function \(P(\omega)\) and the time distribution function \(p(t)\) are related to each other by \(P(\omega)d\omega=p(t)dt\). If we assume the stationarity of the fluctuation, we set \(p(t)\equiv p=const\). Then, \[P(\omega)=p(t)|d\omega/dt|^{-1}=p\lambda^{-1}\omega^{-1}\propto\omega^{-1}. \tag{1}\] It is interesting that the exponential function gives the power index exactly \(-1\). The observed beat is the interference of the pair of frequency distributions above, and the beat frequency \(\Delta\omega\) has its probability distribution function \(Q(\Delta\omega)\) as \[Q(\Delta\omega) =\int_{\omega_{1}}^{\omega_{2}}d\omega P(\omega+\Delta\omega)P(\omega)\] \[=\frac{p^{2}}{\lambda^{2}\Delta\omega}\ln\left[\frac{\omega_{2} \left(\omega_{1}+\Delta\omega\right)}{\omega_{1}\left(\omega_{2}+\Delta\omega \right)}\right] \tag{2}\] which again is proportional to \(\left(\Delta\omega\right)^{-1}\) with a small modification factor of \(\ln[\ldots\Delta\omega]\). The detailed form of \(Q(\Delta\omega)\) depends on the boundaries of the integration domain \(\omega_{1}<\Delta\omega<\omega_{2}\). Typical examples are shown in Fig. 1. The pink noise is robust, and the frequency distribution is directly reflected in the PSD of the waves at those frequencies, \[\phi\left(t\right)=\sum_{i}\sin\left(2\pi\omega(1+ce^{-r_{i}})t\right), \tag{3}\] where \(\omega\) is a fiducial frequency, \(c\) is a mixing constant, \(r_{i}\) is a random variable in some range for each sinusoid, and \(i\) runs from \(1\) to some upper limit. This is demonstrated in Fig. 2, where the PSD of \(\phi^{2}\) is shown. The pink noise is robust, and the randomization of each phase of the sine waves does not change the PSD except that the power index is slightly reduced, as shown in Fig. 3. It is essential that the square of the signal, \(\phi^{2}\), does show pink noise in the PSD as in Fig. 2, while the original signal \(\phi\) itself does not show any feature in the low-frequency region, as shown in Fig. 4. This fact manifestly demonstrates that the pink noise comes from the wave beat. Figure 2: The PSD of \(\phi^{2}\) is shown with \(\omega=10\), \(c=0.2\), and \(r\) a random field in the range [0,30]. 1000 sine waves are superimposed according to Eq. 3. The power index can change by up to about 0.1 for each run. This PSD shows the pink noise of index -1 for four decades. Figure 3: Same as Fig. 2, but the sine waves are superimposed with a random phase \(\theta_{i}\) for each: \(\sin{(2\pi\omega(1+ce^{-r_{i}})t+\theta_{i})}\). The power index drops a bit to \(-0.7\), but this PSD shows the robustness of the pink noise from the wave beat. Figure 4: Same as Fig. 2, but this is the PSD for the original signal \(\phi\). Pink noise never appears in this case, indicating that the noise arises from the wave beat.
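A sketch of this demonstration, under the parameters quoted in the Fig. 2 caption, might look as follows. The sampling rate, record length, and fit band are our own illustrative choices; the fitted slope should come out near \(-1\), as in Fig. 2.

```python
import numpy as np

# Superpose 1000 sinusoids with frequencies omega*(1 + c*exp(-r_i)), Eq. (3),
# then estimate the low-frequency PSD slope of the squared signal.
rng = np.random.default_rng(1)
fs, n = 50.0, 2 ** 16                 # sampling rate [Hz] and record length
t = np.arange(n) / fs
omega, c = 10.0, 0.2                  # fiducial frequency and mixing constant
phi = np.zeros(n)
for r in rng.uniform(0.0, 30.0, size=1000):
    phi += np.sin(2 * np.pi * omega * (1 + c * np.exp(-r)) * t)

x = phi ** 2                          # demodulation by squaring
spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
f = np.fft.rfftfreq(n, 1.0 / fs)

band = (f > 2e-3) & (f < 0.5)         # low-frequency decades below the beat cutoff
slope = np.polyfit(np.log10(f[band]), np.log10(spec[band]), 1)[0]
print(f"fitted log-log PSD slope: {slope:.2f}")   # expected near -1
```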
### Power Approach Another popular type of synchronization would be the power approach \(\omega=t^{-\alpha}\). Repeating the same calculations as above, we obtain the frequency distribution function as \[P(\omega)=\underbrace{p(t)}_{p\text{ const.}}|d\omega/dt|^{-1}=c\omega^{-\beta} \tag{4}\] where \(c\equiv p\alpha^{-1}\), \(\beta\equiv\left(1+\frac{1}{\alpha}\right)\). The probability distribution function \(Q(\Delta\omega)\) of the beat frequency \(\Delta\omega\) is given by \[Q(\Delta\omega)=\int_{\omega_{1}}^{\omega_{2}}d\omega P(\omega+\Delta\omega)P(\omega) \tag{5}\] Then, \[Q(\Delta\omega) =\frac{1}{\Delta\omega\left(1-\beta\right)} \tag{6}\] \[\left[c^{2}\omega^{1-\beta}(\Delta\omega+\omega)^{1-\beta}{}_{2}F_{1}\left(1,2-2\beta;2-\beta;-\frac{\omega}{\Delta\omega}\right)\right]_{\omega_{1}}^{\omega_{2}}\] \[\propto\Delta\omega^{-1-(2/\alpha)},\] if we expand with respect to small \(\omega_{1}\) and small \(\Delta\omega\). The exponent is less than \(-1\) for \(\alpha>0\) and greater than \(-1\) for \(\alpha<0\), but the fiducial power is \(-1\). Typical examples are shown in Fig. 5. A typical wave signal can be constructed as before, \[\phi=\sum_{i}\sin\left(2\pi\omega(1+cr_{i}^{-\alpha})t\right), \tag{7}\] and the PSDs of \(\phi^{2}\) are shown in Fig. 6 for \(\alpha=3\) and in Fig. 7 for \(\alpha=-3\). Figure 6: The PSD is shown for \(\phi^{2}\) with \(\alpha=3\), \(\omega=440\), \(c=0.3\), and \(r_{i}\) a random field in the range [0,20]. 200 sine waves are superimposed according to Eq. 7. This PSD shows the pink noise of index -1.3 for four decades. Figure 7: The PSD is shown for \(\phi^{2}\) with \(\alpha=-3\), \(\omega=440\), \(c=0.01\), and \(r_{i}\) a random field in the range [0,1]. 200 sine waves are superimposed according to Eq. 7. This PSD shows the pink noise of index -1 for three decades. Although the above demonstrations are typical simple models of the cooperative waves, the frequencies are fixed. However, it is also possible to consider dynamical cooperative systems with time-dependent frequencies, and they often show pink noise: macroscopic coupled spin models [13] and the Hamiltonian mean-field model [9]. Since the discussion of these is beyond the scope of this paper, we will cover them in a separate paper soon. ## 4 Beats from Resonance We now consider the resonance, which produces the spontaneous concentration of frequencies and the wave beats. When a system with the intrinsic eigenfrequency \(\Omega\) is stimulated (repeatedly), it emits the wave mode of the frequency \(\Omega\) as well as those close to \(\Omega\). Resonance thus ensures the concentration of frequencies in a small range. Since these frequencies are close to each other, the waves of these frequencies beat and produce a signal in the low-frequency region. Suppose a typical case of the resonance characterized by the resonance curve, the Cauchy distribution \[R[\omega]=\frac{1}{\left(\frac{\kappa}{2}\right)^{2}+\left(\omega-\Omega\right)^{2}}, \tag{8}\] where \(\Omega\) is the resonance frequency and \(\kappa\) characterizes the sharpness of the resonance. We interpret this function \(R[\omega]\) as proportional to the number of \(\omega\)-modes in the resonator. Then the frequency distribution function \(P(\omega)\) is given by the inverse function of \(R[\omega]\), as \[\omega=R^{-1}[t]=\frac{\sqrt{-t\left(\kappa^{2}t-4\right)}}{2t}+\Omega, \tag{9}\] where we have chosen the upper half of the inverse of \(R[\omega]\), since the lower half is symmetric to the upper half.
It is possible to make a naive approximation of Eq. 9 by the exponential function \(\omega=Ae^{-Bt}\), where the constants \(A,B\) are determined at the inflection point of Eq. 9, as shown in Fig. 8. Figure 8: Demonstration of Eq. 9 in the log-linear graph. The function \(\omega(t)\) can be approximated by the exponential function (red straight line) with the same inclination at the inflection point of \(\omega(t)\), especially in the large-\(t\) range that is relevant for the low-frequency beats. We already know that this exponential function gives the exact pink noise of slope \(-1\) in the PSD. This is demonstrated in Fig. 9, where the PSD is plotted for the square \(\phi(t)^{2}\) of the time sequence \(\phi(t)\) generated by \[\phi(t)=\sum_{i}\sin\left(2\pi R^{-1}\left(r_{i}\right)t\right). \tag{10}\] Figure 9: PSD of the time sequence \(\phi(t)^{2}\) generated by Eq. (10) with \(\kappa=0.1\), \(\Omega=10\), and the domain of the random field \(r_{i}\) being \([0,10]\). We have superimposed 100 sine waves, and this PSD shows the approximate power law of index \(-1.2\). However, a fully systematic analysis is not easy. Using the relation \(P(\omega)d\omega=p(t)dt\) with \(p(t)\equiv p=const\), we obtain the frequency distribution function \(P(\omega)\) as \[P(\omega)=p|d\omega/dt|^{-1}=\frac{32p(\omega-\Omega)}{\left(\kappa^{2}+4(\omega-\Omega)^{2}\right)^{2}}, \tag{11}\] which cannot be reduced to a single power form if \(\kappa\) is finite. Further complications arise from an actual resonant system, which has complicated overtones and multiple eigenfrequencies that systematically contribute to the pink noise. A fully systematic derivation of pink noise for each concrete resonant system requires further investigation. Since this is beyond the scope of this paper, we do not discuss it further here, but it will be analyzed in a separate paper soon. ## 5 Beats from IR divergence We now consider the third cause of the spontaneous concentration of frequencies: the infrared divergence. This class of systems exhibiting pink noise is quite diverse, but can be reduced to a system composed of electrons and photons described by electrodynamics. In this context, a quantum origin of pink noise was once proposed by using quantum interference [14, 15]. It claims that the scattered electron state, after emission of a photon of frequency \(\omega\), and the unscattered electron state interfere with each other to produce a beat of frequency \(\omega\). However, this theory has been criticized [16, 17], mainly because the quantum interference does not really occur; the scattered and non-scattered states are orthogonal to each other and have no chance to interfere. Even the introduction of the coherent state basis does not work. Incidentally, some other criticisms are not valid. The essence of the pink noise is not the quantum interference, but the back-reaction of the emission of massless particles on the classical current, together with the classical wave beat interference. In this paper, we focus on such a classical description of electromagnetism. In a semiconductor, the electric current can be classical beyond the scale of the free-streaming length, about 10 nm, which is several tens of the lattice size. When the system size is about 1 mm, there are \(10^{10}\) classical current elements. When the electron meets any impurity, it changes its momentum from \(p^{i}\) to \(p^{f}\), emitting a photon of momentum \(p^{i}-p^{f}\).
Starting with the classical current, \[j_{\mu}(x)=-\frac{ie}{(2\pi)^{4}}\int d^{4}k\,e^{-ik\cdot x}\left(\frac{p_{\mu}^{i}}{p^{i}\cdot k}-\frac{p_{\mu}^{f}}{p^{f}\cdot k}\right), \tag{12}\] the number of emitted photons is given by \[dN=e^{2}\left|\frac{\epsilon\cdot p^{i}}{k\cdot p^{i}}-\frac{\epsilon\cdot p^{f}}{k\cdot p^{f}}\right|^{2}\frac{d^{3}k}{2(2\pi)^{3}k^{0}}, \tag{13}\] which is IR divergent [18]; here \(\epsilon\) is the polarization vector. We assume that the average classical electric current has a fiducial frequency \(\Omega\), which is determined by the applied voltage and the conductivity before the interaction with the impurities. Each scattering with the impurities emits light of energy \(\omega\) and exerts a back-reaction on the current, shifting its energy by \(\omega\) with probability proportional to \(1/\omega\) (bremsstrahlung). Then the original current cascades into the superposition of an enormous number of local currents with frequencies \(\Omega-\omega_{i},i=1,2,\ldots,N\). Then each pair of these currents makes beats with all the possible differences \((\Omega-\omega_{i})-(\Omega-\omega_{j})=\omega_{j}-\omega_{i},1\leq i,j\leq N\). This process is the same as the previous case of the exponential approach in section 3.1, and many classical currents with slightly different frequencies interfere to give the wave beat as in Eq. (2) and thus the pink noise as in Fig. 2. In any case, quantum interference is not the essence of the origin of pink noise; rather, the classical synchronized waves are crucial. In this context, the coherent dressed state formalism for QED was developed to cancel the infrared divergence associated with the massless photon. This theory is well summarized in [19, 20], although most authors assume (semi-)classical background currents ab initio, and the classical degrees of freedom are not correctly derived. The derivation of the classical degrees of freedom in QED is possible in the closed-time-path formalism of the effective action associated with an unstable state. The IR divergence of the theory requires the separation of the classical statistical kernel from the complex effective action. Then the Langevin equation with classical noise is derived from the effective action and can describe the classical evolution of currents [21]. This formalism requires a more systematic discussion than we can give here. However, we will report on this theory, including the classical-quantum interference, in a separate paper. ## 6 Discussions So far, we have proposed three kinds of origins of the synchronizing waves, which give systematic beats and produce pink noise. Since the pink noise is generated by the wave beat or the amplitude modulation, some demodulation process is required for observation. This demodulation process may be a) an intrinsic mechanism associated with the system or b) an operational process associated with the data reduction for the PSD. In either case, the demodulation process provides robustness and a variety of pink noise. This section is devoted to showing some examples of such robustness and variety; a numerical sketch of a few of these cases follows the list. 1. _fiducial_: The fiducial signal is the one discussed in section 3.1, with the same parameters as Fig. 2: \(\omega=10\), \(c=0.2\), and \(r_{i}\) a random field in the range [0,30].
There, \(10^{3}\) sinusoids are superimposed according to Eq. 3. The squared signal \(\phi^{2}\) shows a clear pink noise of slope \(-1.0\), as in Fig. 2. 2. threshold for \(\phi^{2}\): We set the new data to zero for the \(\phi^{2}\) data that are _smaller than the mean_ and leave the other data _as they are_. The PSD shows pink noise with slope -1.0, almost no change from the fiducial case. This case may apply to the nerve system, where only a voltage greater than some threshold can produce a spike signal. 3. on-off threshold for \(\phi^{2}\): We set the new data to zero for the \(\phi^{2}\) data that are _less than the mean_ and set the other data _to 1_. The PSD shows pink noise with a slope of -0.94. 4. on-off inverse threshold for \(\phi^{2}\): This is _the opposite of case 3_. We set the value 1 for the \(\phi^{2}\) data that are smaller than the mean and set the other \(\phi^{2}\) data to 0. The PSD shows pink noise with a slope of -0.94, exactly the same as in case 3. 5. threshold for the original _data \(\phi\)_: We set the new data to zero for the \(\phi\) data that are smaller than the mean and leave the other data as they are. The PSD shows pink noise with a slope of -0.98. 6. rectification of the original data \(\phi\): We set the new data _to zero for the \(\phi\) data that are negative_ and leave the other data as they are. The PSD shows pink noise with a slope of -1.2. This may apply to some electric circuits containing transistors, diodes, and vacuum tubes. 7. sequence of locally averaged \(\phi^{2}\): We divide the entire time sequence of \(\phi\) into \(10^{3}\) _segments_ and apply _a quadratic average in each_ segment. The PSD shows pink noise with a slope of -1.1. This is the data treatment in the original experiment [1]. 8. sequence of locally averaged \(\phi\): _Same as case 7, but we apply a simple_ average in each segment. The PSD shows NO pink noise at all, and the power is positive, +0.8. 9. coarse time resolution for \(\phi^{2}\): We _reduce the number of sample points_ to half of the original. The PSD shows approximately pink noise with a slope of -1.1. 10. fewer superimposed waves: We _reduce the number of superimposed waves_ from the fiducial \(10^{3}\) to 10. The PSD shows NO pink noise. 11. more superimposed waves: We _increase the number of superimposed waves_ from the fiducial \(10^{3}\) to \(10^{4}\). The PSD shows pink noise with a power of -0.94. 12. longer time sequence: We extend the _time sequence from the fiducial \(10^{4}\) to \(10^{5}\)_. The PSD shows pink noise with a slope of -1.0; the same as before, but with the power law extended by a decade. 13. multiple fiducial frequencies: We change the fiducial frequency from the _original single value to 5 values, randomly selected from 0 to 20_. The PSD shows pink noise with a slope of -1.5. As examined above, there are multiple demodulation processes. They are classified as a) system-intrinsic and b) operational in the data reduction, although the classification is not exclusive. Examples of a) are thresholding and rectification: cases 3, 4, 5, 6. Examples of b) are data squaring: cases 1, 2, 7. Cases 9, 11, 12, 13 show some robustness of pink noise.
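A few of these demodulation variants can be compared in one short script. This is a minimal sketch, rebuilding the fiducial signal of section 3.1 with our own illustrative sampling choices; the fitted slopes should land near the values quoted in the list above for the demodulated cases, while the undemodulated \(\phi\) shows no pink noise (cf. Fig. 4).

```python
import numpy as np

# Fiducial signal of section 3.1 (Eq. 3): omega=10, c=0.2, r in [0, 30].
rng = np.random.default_rng(1)
fs, n = 50.0, 2 ** 16
t = np.arange(n) / fs
phi = sum(np.sin(2 * np.pi * 10.0 * (1 + 0.2 * np.exp(-r)) * t)
          for r in rng.uniform(0.0, 30.0, size=1000))

def slope(x, fmin=2e-3, fmax=0.5):
    """Fitted log-log PSD slope of a time series over [fmin, fmax]."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (f > fmin) & (f < fmax)
    return np.polyfit(np.log10(f[band]), np.log10(spec[band]), 1)[0]

sq = phi ** 2
variants = {
    "case 1, squared":              sq,
    "case 2, threshold on phi^2":   np.where(sq > sq.mean(), sq, 0.0),
    "case 3, on-off threshold":     (sq > sq.mean()).astype(float),
    "case 6, rectified phi":        np.where(phi > 0, phi, 0.0),
    "undemodulated phi (Fig. 4)":   phi,
}
for name, x in variants.items():
    print(f"{name:30s} slope ~ {slope(x):+.2f}")
```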
## 7 Conclusions and prospects We have discussed the origin of pink noise from the beat of cooperative waves. We have examined three possible causes for this cooperative effect: synchronization, resonance, and IR divergence. There may be more mechanisms. We point out the verifiability/falsifiability of our model based on the five crucial observations for the pink noise in section 2. 1. Wave: The wave is essential for producing the beat and amplitude modulation. The wave may be hidden inside the system, and the data may be obtained after it passes through a threshold. If we cannot find a coherent wave in the system, our model cannot be applied. 2. Small system and apparent long memory: Although the amplitude-modulated fluctuation, the primary fluctuation, may obey the Wiener-Khinchin theorem, the demodulated fluctuation, the secondary fluctuation, does not obey the theorem, because the secondary fluctuation does not appear in the PSD before any demodulation process. If the Wiener-Khinchin theorem is found to hold for pink noise, our model cannot be applied. 3. Apparent lack of a lower cutoff in the PSD: The beat of the cooperative waves, or the amplitude modulation, can yield an infinitely low-frequency signal from inside a finite system within the observational constraints. Therefore, if an intrinsic lower-cutoff frequency is found in the pink noise, our model cannot be applied. 4. Independence from dissipation: The beat of the cooperative waves, or the amplitude modulation, is a secondary fluctuation caused by wave synthesis. Therefore, dissipation may destroy the pink noise, because it may cancel the fragile wave beats. 5. Square of the original signal (necessity of the demodulation process): The amplitude modulation needs some demodulation process for observation. The primary fluctuations before the demodulation do not appear in the PSD. Our model for pink noise predicts that the demodulation process is either a) intrinsic to the system or b) operational in the data reduction. If the demodulation is found in the system with pink noise, and the pink noise disappears when the demodulation process is removed, then our model is strongly favored. Although we have proposed a basic model of pink noise, we still have many problems in elaborating the present formalism. Some of them have already been described in appropriate places with the keyword 'separate paper'. They are dynamical cooperative systems, actual resonant systems, and systems with IR divergence. Among them, we summarize the possibly resonant systems in Table 1. The list in Table 1 is tentative and incomplete. It will be completed in our future publications, including the verification of our simple pink noise model. ## Acknowledgments We would like to acknowledge many valuable discussions with the members of the Lunch-Time Remote Discussion Meeting, with the members of the Department of Physics, Ochanomizu University, and with Manaya Matsui and Izumi Uesaka at Kyoto-Sangyo University.
2310.07055
A categorical view of varieties and equations
We present a common framework to study varieties in great generality from a categorical point of view. The main application of this study is in the setting of algebraic categories, where we introduce Birkhoff varieties, which are essentially subvarieties of algebraic categories, and we obtain a generalization of Birkhoff's variety theorem. In particular, we show that Birkhoff varieties are coreflexive equalizers. The key to this generalization is a more general concept of equation for subvarieties of algebraic categories. In order to obtain our characterization of Birkhoff varieties, we study inserters over algebraic categories, where we generalize some well-known results on algebras for finitary endofunctors over $Set$. By duality, we obtain a characterization of cosubvarieties of coalgebraic categories. Surprisingly, these cosubvarieties turn out to be varieties according to our theory of varieties.
Jose Avila
2023-10-10T22:36:13Z
http://arxiv.org/abs/2310.07055v1
# A categorical view of varieties and equations ###### Abstract. We present a common framework to study varieties in great generality from a categorical point of view. The main application of this study is in the setting of algebraic categories, where we introduce Birkhoff varieties, which are essentially subvarieties of algebraic categories, and we obtain a generalization of Birkhoff's variety theorem. In particular, we show that Birkhoff varieties are coreflexive equalizers. The key to this generalization is a more general concept of equation for subvarieties of algebraic categories. In order to obtain our characterization of Birkhoff varieties, we study inserters over algebraic categories, where we generalize some well-known results on algebras for finitary endofunctors over _Set_. By duality, we obtain a characterization of cosubvarieties of coalgebraic categories. Surprisingly, these cosubvarieties turn out to be varieties according to our theory of varieties. _2020 Mathematics Subject Classification:_ 03C05, 08Bxx, 08C05, 18A05, 18A20, and 18C05. _Keywords:_ Varieties, equations, categories of algebras, inserters, and Birkhoff's variety theorem. The author is grateful to Guillermo Ortiz and Sergio Troncoso for their useful comments and suggestions. There are many classes of well-known varieties. We give special attention to varieties of algebras. In universal algebra, varieties are equationally presentable collections of (one- or many-sorted) algebras. In the case of one-sorted algebras, varieties are characterized by Birkhoff's variety theorem [10] as classes of algebras closed under homomorphic images, subalgebras, and products, also called HSP classes of algebras. A significant step in generalizing these algebraic structures is in the context of algebraic categories. Algebraic theories and their algebras were defined by F. W. Lawvere in his doctoral dissertation [12]. An excellent modern account of algebraic theories, and general algebra, is given in [1]. The following characterizations of algebraic categories, see [1, Theorem 6.9], correspond to generalizations of the concepts introduced by Lawvere. Let \(\mathcal{A}\) be a locally small category. Then the following conditions are equivalent: * \(\mathcal{A}\) is algebraic, i.e., equivalent to \(Alg\,\mathcal{T}\) for some algebraic theory \(\mathcal{T}\). * \(\mathcal{A}\) is cocomplete and has a set \(\mathcal{G}\) of perfectly presentable objects such that every object of \(\mathcal{A}\) is a sifted colimit of objects of \(\mathcal{G}\). * \(\mathcal{A}\) is cocomplete and has a strong generator \(\mathcal{G}\) consisting of perfectly presentable objects. Recall that an algebraic theory is a small category with finite products. We denote by \(\mathbf{AlgTh}\) the category of algebraic theories. The morphisms between algebraic theories are finite-product-preserving functors. Likewise, we denote by \(\mathbf{AlgCat}\) the category of algebraic categories, with algebraic functors as morphisms, where a functor between algebraic categories is algebraic if it preserves limits and sifted colimits. Algebraic categories have many notable properties. They are locally finitely presentable; in particular, they are complete, wellpowered, and cowellpowered. They have regular factorizations, and regular epimorphisms in such categories are stable under pullbacks and products. The perfectly presentable (also called strongly presentable) objects of an algebraic category are precisely the finitely presentable regular projective objects.
In algebraic categories there is also a Birkhoff variety theorem, see [1, Theorem 10.22]. The subvarieties of an algebraic category \(\mathcal{A}\) are precisely the full subcategories of \(\mathcal{A}\) closed under regular quotients, subalgebras, products, and directed unions. The assumption of closure under directed unions cannot be omitted; see [1, Example 10.23] for a counterexample in the case \(\mathcal{A}=Set^{\mathbb{N}}\). However, subvarieties are defined in the first place by equations, but equations are defined only in algebraic categories of the form \(Alg\,\mathcal{T}\). Here, we present an alternative definition of equation which does not depend on the presentation of \(\mathcal{A}\) and corresponds to the theory presented in Section 2. This leads us to introduce Birkhoff varieties and Lawvere covarieties, which are defined by Birkhoff and Lawvere equations, respectively. A Birkhoff equation, see Definition 5.3, is an equation \(P\approx Q\) in \(\mathbf{AlgCat}\) such that there exists an algebraic, faithful, conservative, and amnestic functor \(U\) such that \(UP=UQ\). On the other hand, a Lawvere equation, see Definition 5.7, is an equation \(P\approx Q\) in \(\mathbf{AlgTh}\) such that there exists a morphism of theories \(U\) surjective on objects such that \(PU=QU\). Our main results are the following characterizations. Characterization of Birkhoff varieties 1.1.: _Let \(F:\mathcal{B}\to\mathcal{A}\) be an algebraic functor. Then the following conditions are equivalent:_ * \(F\) _is a Birkhoff variety, i.e., a general solution of some system of Birkhoff equations._ * \(F\) _is isomorphic to the inclusion_ \(\mathcal{V}\hookrightarrow\mathcal{A}\)_, for some subvariety_ \(\mathcal{V}\) _of_ \(\mathcal{A}\)_._ * \(F\) _is a coreflexive equalizer of some coreflexive Birkhoff equation._ Characterization of Lawvere covarieties 1.2.: _Let \(M:\mathcal{T}\to\mathcal{Q}\) be a morphism of algebraic theories. Then the following conditions are equivalent:_ * \(M\) _is a Lawvere covariety, i.e., a general cosolution of some cosystem of Lawvere equations._ * \(M\) _is isomorphic to the canonical morphism_ \(\mathcal{T}\to\mathcal{T}/\sim\)_, for some congruence_ \(\sim\) _(in the sense of_ [1, Definition 10.4]_) on_ \(\mathcal{T}\)_._ * \(M\) _is a reflexive coequalizer of some reflexive Lawvere equation._ Moreover, we prove that every system of Birkhoff equations (resp. every cosystem of Lawvere equations) has a general solution (resp. a general cosolution). Also, we give another characterization of Birkhoff varieties in terms of Lawvere covarieties, see Theorem 5.13. These similarities between Birkhoff varieties and Lawvere covarieties are to be expected, since **AlgCat** and **AlgTh** are almost dual to each other, see [1, Theorem 9.15]. In order to obtain our characterization of Birkhoff varieties, we study inserters over algebraic categories. Inserters have been studied in [1] for the quasicategory of all categories and all functors, and more generally in [10] for 2-categories. We consider them only in the **Cat** setting, for **Cat** the non-locally small category of all locally small categories and all functors. Inserters are a straightforward generalization of algebras and coalgebras for endofunctors. Both algebras and coalgebras are very well known, and they are dual to each other. Algebras (for an endofunctor) model common algebraic structures like one-sorted algebras, and coalgebras model structures like deterministic automata.
In particular, algebras for finitary endofunctors over \(Set\) have been studied in [1, Chapter 12]. The results there have generalizations, as noted at the beginning and the end of that chapter, for locally finitely presentable and algebraic categories. We recover some of these results for inserters over algebraic categories in Section 4. The key idea is to give conditions under which inserters are concretely isomorphic to algebras or coalgebras, see Theorem 4.2. Our characterization of Birkhoff varieties is easily generalizable to coalgebraic categories. We just state this result. A concise definition of the category **CoAlgCat** of coalgebraic categories is the following: a locally small category \(\mathcal{A}\) is coalgebraic if and only if \(\mathcal{A}^{\mathrm{op}}\) is algebraic, and a functor \(F:\mathcal{A}\to\mathcal{B}\) between coalgebraic categories is coalgebraic if and only if \(F^{\mathrm{op}}:\mathcal{A}^{\mathrm{op}}\to\mathcal{B}^{\mathrm{op}}\) is algebraic. Observe that the opposite functor \[F:\mathcal{A}\to\mathcal{B}\quad\longmapsto\quad F^{\mathrm{op}}:\mathcal{A}^{\mathrm{op}}\to\mathcal{B}^{\mathrm{op}}\] is an isomorphism between **CoAlgCat** and **AlgCat**. Let \(\mathcal{V}\) be a full subcategory of a coalgebraic category \(\mathcal{A}\); we call \(\mathcal{V}\) a cosubvariety of \(\mathcal{A}\) if \(\mathcal{V}^{\mathrm{op}}\) is a subvariety of \(\mathcal{A}^{\mathrm{op}}\). A co-Birkhoff equation is an equation \(P\approx Q\) in **CoAlgCat** such that there exists a coalgebraic, faithful, conservative, and amnestic functor \(U\) such that \(UP=UQ\). Thus, the dual of Characterization of Birkhoff varieties 1.1 is: Characterization of co-Birkhoff varieties 1.3.: _Let \(F:\mathcal{B}\to\mathcal{A}\) be a coalgebraic functor. Then the following conditions are equivalent:_ * \(F\) _is a co-Birkhoff variety, i.e., a general solution of some system of co-Birkhoff equations._ * \(F^{\mathrm{op}}\) _is a Birkhoff variety._ * \(F\) _is isomorphic to the inclusion_ \(\mathcal{V}\hookrightarrow\mathcal{A}\)_, for some cosubvariety_ \(\mathcal{V}\) _of_ \(\mathcal{A}\)_._ * \(F\) _is a coreflexive equalizer of some coreflexive co-Birkhoff equation._ In summary, what we have actually done in Characterization of Birkhoff varieties 1.1 is to characterize, in a precise sense (see Inverse main problem of varieties 2.13 and Proposition 2.14), the class of all equations in the context of **AlgCat** which define subvarieties of algebraic categories. This gives us a generalization of Birkhoff's variety theorem [1, Theorem 10.22]. From this, we get a generalization of the dual of Birkhoff's variety theorem given in [1]. _Remark 1.4_.: Our definition of covariety differs from the usual definition found in the literature, see for example [1, 2]. What those papers call covarieties is what we call cosubvarieties. We define covarieties, see Definition 2.17, as the categorical dual of varieties, see Definition 2.3. Surprisingly, covarieties in the sense of [1, 2] turn out to be varieties according to our definition of varieties, in the setting of coalgebraic categories. This is due to Characterization of co-Birkhoff varieties 1.3. _Remark 1.5_.: In no way is this note a complete account of varieties in their utmost generality. We mention many other examples, see Section 3, like varieties for polynomial equations or differential equations. Of course, there are still many open questions to be solved; we list some of these in the last section. ## 2. Abstract equations and varieties
In this section we consider a (not necessarily locally small) category \(\mathcal{C}\). **Definition 2.1**.: An **equation** is a pair \(f,g\) of parallel arrows, i.e., a pair of morphisms with the same source and target, which we denote by \(f\approx g\). The **domain** of \(f\approx g\) is the common domain of \(f\) and \(g\). A **system of equations** is a non-empty set of equations with the same domain. **Definition 2.2**.: Let \(E\) be a system of equations defined on \(A\). A **solution** of (the equations in) \(E\) is an arrow \(a\) with target \(A\) such that \(fa=ga\) for all equations \(f\approx g\) in \(E\). We write \(a\models E\) if \(a\) is a solution of \(E\). More generally, if \(S\) is a set of solutions of \(E\), we write \(S\models E\). **Definition 2.3**.: A **general solution** of a system of equations \(E\) is a solution \(v\) of \(E\) with the property that for each solution \(a\) of \(E\) there exists a unique morphism \(\tilde{a}\) such that \(a=v\tilde{a}\). We write \(v\underrightarrow{\overline{\mathrm{g.s}}}\;E\) to denote that \(v\) is a general solution of \(E\). A **variety (morphism)** is a general solution of some system of equations. It is clear that every variety is a monomorphism, and every regular monomorphism is a variety. Note that general solutions of a system of equations \(E\) correspond to limits of the diagram in \(\mathcal{C}\) defined by \(E\). A general solution of \(E\) gives a unique representation of the set of all its solutions. Given morphisms \(f,g\), we write \(f\leq g\) if and only if \(f=gh\) for some \(h\). We say \(f\) and \(g\) are isomorphic if there exists an isomorphism \(h\) such that \(f=gh\). In particular, if \(f\) and \(g\) are monomorphisms, then \(f\) and \(g\) are isomorphic if and only if \(f\leq g\) and \(g\leq f\). Similarly, if \(E\) and \(K\) are systems of equations with the same domain, we write \(E\implies K\) if every solution of \(E\) is a solution of \(K\), and we say \(E\) is equivalent to \(K\) if \(E\) and \(K\) have the same solutions, i.e., \(E\implies K\) and \(K\implies E\). Given two classes \(\Sigma,\Theta\) of equations, we say \(\Sigma\) and \(\Theta\) are equivalent if each equation in \(\Sigma\) is equivalent to some equation in \(\Theta\), and each equation in \(\Theta\) is equivalent to some equation in \(\Sigma\). _Remark 2.4_.: It is clear that there is only one general solution for a given system of equations, up to isomorphism. Arrows act on equations by right composition, and this action extends to an action on systems of equations. Explicitly, let \(E\) be a system of equations with domain \(A\) and let \(g:B\to A\) be an arrow with target \(A\); then \(Eg=\{pg\approx qg\mid p\approx q\in E\}\). Let \(\Sigma\) be a class of equations. A \(\Sigma\)-variety is a general solution to some system of \(\Sigma\)-equations. We write \(a\models_{\Sigma}E\) if \(a\models E\) and \(E\) is a system of \(\Sigma\)-equations. **Proposition 2.5**.: The following statements hold: 1. Let \(E\) and \(K\) be systems of equations with the same domain and suppose that \(v\underrightarrow{\overline{\mathrm{g.s}}}\;E\) and \(w\underrightarrow{\overline{\mathrm{g.s}}}\;K\). Then \(E\implies K\) is equivalent to \(v\leq w\). 2. Let \(E\) be a system of equations with domain \(A\) and \(g:B\to A\). Then \(a\models Eg\) if and only if \(ga\models E\). 3. Let \(E\) and \(K\) be systems of equations with domain \(A\) and \(g:B\to A\). If \(E\implies K\) then \(Eg\implies Kg\). 4.
Varieties are strict monomorphisms. In particular, varieties are extremal monomorphisms. Proof.: It is clear. Proposition 2.6.: Suppose \(\Sigma\) is closed under the right action by arrows. Then: 1. \(\Sigma\)-varieties are closed under intersections and products, and are stable under pullbacks. 2. If \(vw\) is a \(\Sigma\)-variety and \(v\) is a monomorphism, then \(w\) is also a \(\Sigma\)-variety. Proof.: 1. Let \(\{v_{\gamma}\}_{\gamma\in\Gamma}\) be a set of \(\Sigma\)-varieties with the same target. Let \(v_{\gamma}\) be a general solution of a system \(E_{\gamma}\) of \(\Sigma\)-equations; thus, if \(v\) is an intersection of \(\{v_{\gamma}\}_{\gamma\in\Gamma}\), then \(v\) is a general solution of the system of \(\Sigma\)-equations \(\bigcup_{\gamma\in\Gamma}E_{\gamma}\). Proposition 2.10.: \(\mathcal{C}\) is complete w.r.t. systems of equations if any of the following conditions holds: 1. For every object \(A\), the class of equations defined on \(A\) is small, up to equivalence. 2. \(\mathcal{C}\) is complete w.r.t. varieties and has equalizers. 3. \(\mathcal{C}\) has coproducts and cokernel pairs. 4. \(\mathcal{C}\) has cokernel pairs and colimits for diagrams of shape \(P\), where \(P\) is a poset which has only two minimal elements, and every other element of \(P\) is maximal and greater than the two minimal elements of \(P\). Proof.: Let \(S=\{f_{i}:A_{i}\to A\}_{i\in I}\) be a family of morphisms with target \(A\). Then: 1. Let \(\Sigma\) be the class of all equations \(p\approx q\) defined on \(A\) such that \(S\models p\approx q\). By hypothesis we find a subset \(E\) of \(\Sigma\) such that every equation in \(\Sigma\) is equivalent to some equation in \(E\). It follows easily that \(E\) is generated by \(S\). 2. Let \(v\) be a variety generated by \(S\), with \(v\underrightarrow{\overline{\mathrm{g.s}}}\;E\). Therefore, it follows from the existence of equalizers that \(E\) is generated by \(S\). 3. Let \(B=\coprod_{i\in I}A_{i}\) with coprojections \(\mu_{i}:A_{i}\to B\). By the universal property of \(B\) we get \(f:B\to A\) such that \(f\mu_{i}=f_{i}\) for all \(i\in I\). Let \(p,q\) be a cokernel pair of \(f\). Therefore, we have that \(p\approx q\) is generated by \(S\). (For instance, in \(Set\) the general solution of the cokernel-pair equation \(p\approx q\) of a map \(f\) is the inclusion of the image of \(f\), since in \(Set\) the equalizer of the cokernel pair of \(f\) is its image.) 4. For each \(i\in I\), let \(p_{i},q_{i}:A\to B_{i}\) be a cokernel pair of \(f_{i}\). By hypothesis we find morphisms \(p,q:A\to B\) and \(r_{i}:B_{i}\to B\) such that \(r_{i}p_{i}=p\) and \(r_{i}q_{i}=q\) for all \(i\in I\), with the following universal property: if \(p^{\prime},q^{\prime}:A\to B^{\prime}\) and \(r_{i}^{\prime}:B_{i}\to B^{\prime}\) are morphisms such that \(r_{i}^{\prime}p_{i}=p^{\prime}\) and \(r_{i}^{\prime}q_{i}=q^{\prime}\) for all \(i\in I\), then there exists a unique morphism \(t:B\to B^{\prime}\) such that \(tr_{i}=r_{i}^{\prime}\). Therefore, we have that \(p\approx q\) is generated by \(S\). Proposition 2.11.: \(\mathcal{C}\) is complete w.r.t. varieties if any of the following conditions holds: 1. \(\mathcal{C}\) is wellpowered w.r.t. varieties and has intersections. 2. \(\mathcal{C}\) is complete w.r.t. systems of equations and every system of equations has a general solution. Proof.: Let \(S=\{f_{i}:A_{i}\to A\}_{i\in I}\) be a family of morphisms with target \(A\). Then: 1.
Let \(\mathcal{V}\) be the class of all varieties \(v\) such that \(f_{i}\leq v\) for all \(i\in I\). Since \(\mathcal{C}\) is wellpowered w.r.t. varieties, we find a subset \(\mathcal{V}_{0}\) of \(\mathcal{V}\) such that every variety in \(\mathcal{V}\) is isomorphic to some variety in \(\mathcal{V}_{0}\). Let \(v\) be an intersection of the morphisms in \(\mathcal{V}_{0}\). Therefore, we have that \(v\) is a variety generated by \(S\). 2. Let \(E\) be a system of equations generated by \(S\). Let \(v\) be a general solution of \(E\). Therefore, we have that \(v\) is generated by \(S\). Main problem of varieties 2.12. _Given a class \(\Sigma\) of equations, to characterize all \(\Sigma\)-varieties._ Inverse main problem of varieties 2.13. _Given a class \(\mathcal{V}\) of varieties, to find a class \(\Sigma\) of equations such that every variety in \(\mathcal{V}\) is a general solution to some system of \(\Sigma\)-equations, and every system of \(\Sigma\)-equations has a general solution in \(\mathcal{V}\)._ The importance of the main problem is evident, since solving it means solving all possible systems of \(\Sigma\)-equations, and, after all, mathematics is pretty much about solving equations. The above general results give some tools to answer this problem in particular cases. For example, for varieties of one-sorted algebras, Birkhoff's variety theorem is the best solution in this sense. The inverse main problem is also of great interest. Let us elaborate a little more. Note that there is not necessarily a unique solution to the inverse main problem (for a given class of varieties), and this is not a problem at all! Suppose that \(v\) is a general solution of a pair of systems \(E_{1},E_{2}\); it could be the case that from \(E_{1}\) we can find some properties of \(v\) which are not as easily found from \(E_{2}\). Therefore, the more we know about which systems of equations have \(v\) as a general solution, the better. Ideally, the finest solution to the inverse main problem is one from which we can get a complete description of the varieties of interest. That is what physics is about, i.e., finding good equations which model (physical) phenomena. More generally, this corresponds to inverse problems for ordinary or partial differential equations. The class of all \(\Sigma\)-varieties is closed under isomorphisms. Now let \(\mathcal{V}\) be a non-empty class of varieties. Note that a solution to the inverse main problem for \(\mathcal{V}\) does not necessarily exist; indeed, if a solution exists, then some regular monomorphism in \(\mathcal{V}\) must exist. In particular, for a category with equalizers, there exists some solution (to the inverse main problem for \(\mathcal{V}\)) if and only if there exists some class of regular monomorphisms \(\mathcal{W}\) such that the morphisms in \(\mathcal{V}\) are precisely (up to isomorphism) intersections of morphisms in \(\mathcal{W}\). If there exists some solution, then there exists the largest one: just take the union over the class of all these solutions. Moreover, the following proposition characterizes the largest solution: Proposition 2.14.: Let \(\mathcal{V}\) be a class of varieties. We have that any two solutions to the inverse main problem for \(\mathcal{V}\) are equivalent. Therefore, a solution to the inverse main problem for \(\mathcal{V}\) is the largest one if and only if it is closed under equivalences. Proof.: It follows straightforwardly from the definitions.
We should not expect to solve both the main problem and its inverse in their utmost generality. Each context has its particular difficulties. As a general rule, we should give a correct setting in which to find proper answers to these types of questions; what we mean is to find a correct choice of the morphisms between the objects and varieties of interest. In the following section we show some examples which illustrate the concepts introduced before. A more elaborate example is given in Section 5 for subvarieties of algebraic categories. To end this section, we consider the duals of some of the previous concepts, which we are going to use in Section 5 w.r.t. Lawvere covarieties. Of course, all the propositions given above have their respective duals. For example, the dual of Proposition 2.7 (ii) tells us that covarieties are regular epimorphisms if \(\mathcal{C}\) has coproducts or kernel pairs. Note that the concept of equation is self-dual. Definition 2.15.: A **cosystem of equations** is a non-empty set of equations with the same target. Definition 2.16.: Let \(E\) be a cosystem of equations with target \(A\). A **cosolution** of (the equations in) \(E\) is an arrow \(a\) with source \(A\) such that \(af=ag\) for all equations \(f\approx g\) in \(E\). Definition 2.17.: A **general cosolution** of a cosystem of equations \(E\) is a cosolution \(v\) of \(E\) with the property that for each cosolution \(a\) of \(E\) there exists a unique morphism \(\tilde{a}\) such that \(a=\tilde{a}v\). A **covariety (morphism)** is a general cosolution of some cosystem of equations. Similarly, we order covarieties and cosystems of equations, and define a left action of arrows on cosystems of equations. In any case, the context makes clear in which sense we are comparing arrows, varieties, covarieties, and so on. ## 3. Examples Example 3.1.: Consider an open interval of real numbers \((a,b)\) and let \(\textbf{Functional Spaces}(a,b)\) be the category of real vector spaces of continuous functions from \((a,b)\) to \(\mathbb{R}\), with linear transformations as morphisms. We are going to consider linear homogeneous differential equations of order \(n\) defined on \(C^{(n)}(a,b)\), i.e., equations \(L\approx 0\), where \(L:C^{(n)}(a,b)\to C(a,b)\) is a linear operator of the form \(L(y)=y^{(n)}+p_{1}y^{(n-1)}+\cdots+p_{n}y\), with \(p_{1},\ldots,p_{n}\) continuous. The Wronskian criterion determines the linear subspaces of \(C^{(n)}(a,b)\) which are general solutions to some linear homogeneous differential equation of order \(n\). Given \(n\) functions \(f_{1},\ldots,f_{n}\) in \(C^{(n)}(a,b)\), define their Wronskian as \[W(f_{1},\ldots,f_{n})=\begin{vmatrix}f_{1}&f_{2}&\cdots&f_{n}\\ f_{1}^{\prime}&f_{2}^{\prime}&\cdots&f_{n}^{\prime}\\ \vdots&\vdots&\ddots&\vdots\\ f_{1}^{(n-1)}&f_{2}^{(n-1)}&\cdots&f_{n}^{(n-1)}\end{vmatrix}\] Let \(V\) be a linear subspace of \(C^{(n)}(a,b)\) of dimension \(n\). Note that if there is some basis \(\{y_{1},\ldots,y_{n}\}\) of \(V\) such that \(W(y_{1},\ldots,y_{n})\) is never zero on \((a,b)\), then the Wronskian of any basis of \(V\) is never zero on \((a,b)\). In this case we have that every linear transformation \(T:V\to C(a,b)\) is of the form \(T(y)=y^{(n)}+p_{1}y^{(n-1)}+\cdots+p_{n}y\) for all \(y\in V\), for some \(p_{1},\ldots,p_{n}\) continuous; moreover, this representation of \(T\) is unique. In particular, this shows that the inclusion transformation from \(V\) into \(C^{(n)}(a,b)\) is a general solution to some linear homogeneous equation of order \(n\).
The reciprocal of the last implication is also true, i.e., if the inclusion transformation from \(V\) into \(C^{(n)}(a,b)\) is a general solution to some (actually, a unique) linear homogeneous equation of order \(n\), then the Wronskian of any basis of \(V\) is never zero on \((a,b)\). Example 3.2.: Let \(\mathcal{R}\) be a subcategory of \(Set\) with only one object \(R\). A \(\mathcal{R}\)-variety is a subset \(V\) of \(R\) such there exists a family \(\{f_{\gamma},g_{\gamma}:R\to R\}_{\gamma\in\Gamma}\) of pairs of \(\mathcal{R}\)-morphisms such that \[V=\{x\in R\mid f_{\gamma}(x)=g_{\gamma}(x)\;\text{for all}\;\gamma\in\Gamma\}.\] We denote by \(\mathbf{Var}(\mathcal{R})\) to the category of \(\mathcal{R}\)-varieties. A morphism in \(\mathbf{Var}(\mathcal{R})\) from \(V\) to \(W\) is a function \(f:V\to W\) which is the restriction of some \(\mathcal{R}\)-morphism \(\varphi:R\to R\), i.e., the following diagram commutes Observe that \(\mathcal{R}\) and \(\mathbf{Var}(\mathcal{R})\) are concrete categories over \(Set\), and there is a concrete functor from \(\mathcal{R}\) to \(\mathbf{Var}(\mathcal{R})\). Also, every inclusion between \(\mathcal{R}\)-varieties is a variety in \(\mathbf{Var}(\mathcal{R})\), and every system of equations in \(\mathbf{Var}(\mathcal{R})\) has a general solution in \(\mathbf{Var}(\mathcal{R})\), which is an inclusion of \(\mathcal{R}\)-varieties. In particular, let \(R\) be an integral domain with a derivative map, i.e., for each \(f\in R\), the derivative of \(f\), denoted by \(f^{\prime}\), is an element of \(R\) and this map satisfies Leibniz's product rule \((fg)^{\prime}=f^{\prime}g+fg^{\prime}\). Let \(F\) be the quotient field of \(R\), therefore, there is a unique extension of the derivative map from \(R\) to \(F\). An element \(r\in F\) is called a constant if \(r^{\prime}=0\), thus \(0\) and \(1\) are constants. We assume that every constant belongs to \(R\). We define the set of differential operator over \(R\) as the set of functions over \(R\) generated under sums, products, and compositions by constant maps, the identity map, and the derivative map. Note that differential operators over \(R\) are closed under composition, and the identity map on \(R\) is a differential operator. Therefore, \(R\) with the differential operators define a category \(\mathcal{R}\), which is a subcategory of \(Set\). For each \(n\geq 1\) let \(T_{n}\) be a differential operator over \(R\). Define \(V_{n}\subset R\) such that \(f\in V_{n}\) if there exist constants \(0\). We have \(V_{n}\) is \(\mathcal{R}\)-variety, in fact, \(V_{n}=\{f\in R\mid W(T_{1}(f),T_{2}(f),\ldots,T_{n}(f))=0\}\), where \(W\) is the Wronskian map. As a corollary of the above we have the following non-trivial fact \[W(T_{1}(f),T_{2}(f),\ldots,T_{n}(f))=0\quad\implies\quad W(T_{1}(f),T_{2}(f), \ldots,T_{n}(f),T_{n+1}(f))=0.\] There is two main applications of this example: * Let \(p\) be a complex number and define \(F\) as the set of all germs at \(p\) defined by functions \(\mathbb{C}\to\mathbb{C}\) holomorphic at \(p\). Recall the germ at \(p\) of a function \(f:\mathbb{C}\to\mathbb{C}\) is the equivalence class of all functions \(g:\mathbb{C}\to\mathbb{C}\) which are locally equal to \(f\) at \(p\). It is clear \(F\) is a field under usual sums and products, with complex differentiation as a derivative map on \(F\). The varieties defined in this context account as local solutions to differential equations at \(p\), for complex value functions of complex variable. 
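The recurrence criterion in the last item above can be checked symbolically in a concrete instance. This is a quick sketch using `sympy` (our choice, not the paper's), with the Fibonacci generating function \(f=1/(1-x-x^{2})\), whose coefficients satisfy \(f_{k}+f_{k+1}-f_{k+2}=0\), i.e., \(a_{0}=a_{1}=1\), \(a_{2}=-1\), \(n=2\).

```python
from sympy import Matrix, diff, simplify, symbols

x = symbols('x')
f = 1 / (1 - x - x**2)   # generating function of the Fibonacci numbers

# Criterion, first form: ((a0*x^2 + a1*x + a2) * f)'' = 0 with (a0,a1,a2)=(1,1,-1).
# Here (x^2 + x - 1)*f = -1 identically, so the second derivative vanishes.
print(simplify(diff((x**2 + x - 1) * f, x, 2)))           # -> 0

# Criterion, second form: the Wronskian W(f'', (x f)'', (x^2 f)'') vanishes,
# since the three functions are linearly dependent over the constants.
g = [diff(x**k * f, x, 2) for k in range(3)]
W = Matrix([[diff(gk, x, j) for gk in g] for j in range(3)]).det()
print(simplify(W))                                        # -> 0
```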
In particular, a function \(f:\mathbb{C}\to\mathbb{C}\) is a solution of some linear homogeneous differential equation with constant coefficients of order at most \(n\) in some neighbourhood of \(p\), i.e., there exist \(a_{0},a_{1},\ldots,a_{n}\in\mathbb{C}\), not all zero, such that \(a_{0}f^{(n)}+a_{1}f^{(n-1)}+\cdots+a_{n-1}f^{\prime}+a_{n}f\) vanishes in some neighbourhood of \(p\), if and only if \(f\) is holomorphic at \(p\) and \(W\big{(}f,f^{\prime},\ldots,f^{(n)}\big{)}\) is locally zero at \(p\).

* Let \(R\) be an integral domain and consider the ring \(R[[x]]\) of formal power series with coefficients in \(R\). It is clear that \(R[[x]]\) is an integral domain, and we have a derivative map on \(R[[x]]\): \((\sum_{n\geq 0}a_{n}x^{n})^{\prime}=\sum_{n\geq 0}(n+1)a_{n+1}x^{n}\). Note that all constants belong to \(R[[x]]\). The varieties defined in this context account for solutions of recurrence equations. In particular, we can define linear recurrence equations in this context; in fact, for \(a_{0},a_{1},\ldots,a_{n}\in R\), not all zero, \(f\in R[[x]]\) satisfies \(a_{0}f_{k}+a_{1}f_{k+1}+\cdots+a_{n}f_{k+n}=0\) for all \(k\geq 0\) if and only if \(\left((a_{0}x^{n}+a_{1}x^{n-1}+\cdots+a_{n})f\right)^{(n)}=0\). Therefore, \(f\in R[[x]]\) is a solution of some linear recurrence equation of order at most \(n\) if and only if \(W\big{(}f^{(n)},(xf)^{(n)},\ldots,(x^{n}f)^{(n)}\big{)}=0\).

Example 3.3.: Consider the category **Affine Varieties** whose objects are the common zero loci of finite sets of polynomials with complex coefficients, with regular maps as morphisms. Let \(\mathbb{A}^{n}\) be the affine complex space of dimension \(n\). A polynomial equation defined on \(\mathbb{A}^{n}\) is an equation \(p\approx q\), where \(p,q\) are polynomial maps defined on \(\mathbb{A}^{n}\). We have that every inclusion map into \(\mathbb{A}^{n}\) is a general solution to some finite system of polynomial equations defined on \(\mathbb{A}^{n}\), and every system of polynomial equations defined on \(\mathbb{A}^{n}\) has a general solution, which is an inclusion map into \(\mathbb{A}^{n}\).

Example 3.4.: Let **Projective Varieties** be the category whose objects are the common zero loci of finite sets of homogeneous polynomials with complex coefficients, with regular maps as morphisms. Let \(\mathbb{P}^{n}\) be the projective complex space of dimension \(n\). A polynomial equation defined on \(\mathbb{P}^{n}\) is an equation \(p\approx q\), where \(p,q\) are polynomial maps defined on \(\mathbb{P}^{n}\). We have that every inclusion map into \(\mathbb{P}^{n}\) is a general solution to some finite system of polynomial equations defined on \(\mathbb{P}^{n}\), and every system of polynomial equations defined on \(\mathbb{P}^{n}\) has a general solution, which is an inclusion map into \(\mathbb{P}^{n}\).

Example 3.5.: Consider the category **CMet** of complete metric spaces with non-expansive maps as morphisms. Let \(X\) be a complete metric space and consider the equation \(T\approx 1_{X}\) defined on \(X\), where \(T:X\to X\) is a contraction (in particular, a non-expansive map, hence a morphism of **CMet**). By Banach's fixed point theorem, this equation has exactly one solution. Therefore, if \(E\) is the trivial metric space with only one point, then there exists a unique morphism \(F:E\to X\) such that \(TF=1_{X}F\), and it is a general solution of the equation \(T\approx 1_{X}\); a numerical illustration is sketched below.
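As a concrete illustration of Example 3.5 (added here, with an arbitrarily chosen contraction on the complete metric space \(\mathbb{R}^{2}\)), Picard iteration converges to the unique solution of the equation \(T\approx 1_{X}\), i.e., to the fixed point guaranteed by Banach's theorem:

```python
import numpy as np

# An arbitrarily chosen affine contraction T(v) = Av + b on R^2
# (the operator norm of A is < 1, so T is a contraction)
A = np.array([[0.3, 0.1],
              [0.0, 0.4]])
b = np.array([1.0, -2.0])
T = lambda v: A @ v + b

v = np.zeros(2)
for _ in range(100):            # Picard iteration: v, T(v), T(T(v)), ...
    v = T(v)

print(v)                        # the unique fixed point of T
print(np.allclose(T(v), v))     # True: v solves the equation T ≈ 1_X
```

The general solution in **CMet** is then the morphism from the one-point space \(E\) selecting this fixed point.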
On the other hand, note that \(E\) is a final object of **CMet**, and any map \(E\to X\) is a general solution of some equation \(T\approx 1_{X}\) for some non-expansive map \(T:X\to X\) (trivially, a constant map).

Example 3.6.: Consider the category \(\mathbf{Grp}\) of groups and homomorphisms. Let \(G\) be a group. If \(N\) is a normal subgroup of \(G\), then the inclusion \(N\hookrightarrow G\) is a general solution to the equation \(\pi\approx 0\), where \(\pi:G\to G/N\) is canonical. Next, for a non-empty subset \(S\) of \(G\), the inclusion \(C_{G}(S)\hookrightarrow G\) of the centralizer \(C_{G}(S)\) of \(S\) in \(G\) is a general solution of \(\{\varphi_{g}\approx 1_{G}\}_{g\in S}\), where \(\varphi_{g}:G\to G\) is the inner automorphism of \(G\) defined by \(g\). Similarly, let \(G^{\mathrm{ab}}\) be the abelianization of \(G\); then the canonical homomorphism \(G\to G^{\mathrm{ab}}\) is a general cosolution of \(\{\varphi_{g}\approx 1_{G}\}_{g\in G}\). Observe that both the center and the abelianization of \(G\) are defined by the same equations \(\{\varphi_{g}\approx 1_{G}\}_{g\in G}\), the former as a general solution and the latter as a general cosolution. In these examples we have shown the canonical equations which define normal subgroups, centralizers, and abelianizations. However, it is well-known that every monomorphism (of groups) is a regular monomorphism, and every monomorphism is, up to isomorphism, an inclusion homomorphism. Analogously, the epimorphisms of groups are regular epimorphisms (actually, surjective homomorphisms).

Example 3.7.: In \(\mathbf{Cat}\), a functor is a monomorphism if and only if it is an embedding, a full embedding functor is a regular monomorphism, and regular monomorphisms are conservative embeddings. It is very natural how varieties are defined in \(\mathbf{Cat}\). Explicitly, let \(\{P_{\gamma},Q_{\gamma}:\mathcal{A}\to\mathcal{B}_{\gamma}\}_{\gamma\in\Gamma}\) be a family of parallel functors with the same domain. Define the subcategory \(\mathcal{V}\) of \(\mathcal{A}\) such that an object \(x\) of \(\mathcal{A}\) belongs to \(\mathcal{V}\) if \(P_{\gamma}x=Q_{\gamma}x\) for all \(\gamma\in\Gamma\), and a morphism \(f:x\to y\) in \(\mathcal{A}\) between objects of \(\mathcal{V}\) belongs to \(\mathcal{V}\) if \(P_{\gamma}f=Q_{\gamma}f\) for all \(\gamma\in\Gamma\). It is clear that the inclusion \(\mathcal{V}\hookrightarrow\mathcal{A}\) is a general solution of \(\{P_{\gamma}\approx Q_{\gamma}\}_{\gamma\in\Gamma}\).

## 4. Inserters in algebraic categories

In this section we consider only locally small categories. Let \(F,G:\mathcal{A}\to\mathcal{B}\) be a pair of parallel functors.

Definition 4.1.: An **inserter** from \(F\) to \(G\) is a category \(\mathcal{V}\), with a functor \(U:\mathcal{V}\to\mathcal{A}\) and a natural transformation \(\lambda:FU\to GU\), with the property that for each functor \(V:\mathcal{D}\to\mathcal{A}\) and natural transformation \(\alpha:FV\to GV\) there exists a unique functor \(W:\mathcal{D}\to\mathcal{V}\) such that \(V=UW\) and \(\alpha=\lambda W\).

Concretely, let \(\mathbf{Ins}(F,G)\) be the category whose objects are pairs \((A,r)\), with \(A\) an object of \(\mathcal{A}\) and \(r:F(A)\to G(A)\). A morphism \(d:(A,r)\to(B,s)\) in \(\mathbf{Ins}(F,G)\) is a morphism \(d:A\to B\) in \(\mathcal{A}\) such that the evident square commutes, i.e., \(s\,F(d)=G(d)\,r\).
Define the forgetful functor \(U:\mathbf{Ins}(F,G)\to\mathcal{A}\) by

\[U\left((A,r)\xrightarrow{d}(B,s)\right)=A\xrightarrow{d}B\]

and the inserted transformation \(\lambda:FU\to GU\) by \(\lambda(A,r)=r\). We have that \(\mathbf{Ins}(F,G)\), with \(U\) and \(\lambda\), is an inserter from \(F\) to \(G\). Clearly, the forgetful functor \(U\) is faithful; this makes \(\mathbf{Ins}(F,G)\) a concrete category over \(\mathcal{A}\). It is also clear that there is only one inserter from \(F\) to \(G\), up to concrete isomorphism over \(\mathcal{A}\). The forgetful functor \(U\) has many other properties; for example, \(U\) is uniquely transportable (in particular, \(U\) is amnestic) and conservative. The above is valid in general. Now, if \(\mathcal{A}\) has all limits of shape \(\mathcal{J}\) and \(G\) preserves them, then \(\textbf{Ins}(F,G)\) also has all limits of shape \(\mathcal{J}\) and \(U\) preserves them. Similarly, if \(\mathcal{A}\) has all colimits of shape \(\mathcal{J}\) and \(F\) preserves them, then \(\textbf{Ins}(F,G)\) also has all colimits of shape \(\mathcal{J}\) and \(U\) preserves them. These facts about inserter categories are well-known, and they are easy to prove.

Theorem 4.2.: _If \(H\) is a left adjoint to \(G\), then \(\textbf{Ins}(F,G)\) and \(\textbf{Ins}(HF,1_{\mathcal{A}})\) are isomorphic as concrete categories over \(\mathcal{A}\). Similarly, if \(H\) is a right adjoint to \(F\), then \(\textbf{Ins}(F,G)\) and \(\textbf{Ins}(1_{\mathcal{A}},HG)\) are isomorphic as concrete categories over \(\mathcal{A}\)._

Proof.: Let \(H\) be a left adjoint to \(G\), with unit \(\eta\) and counit \(\varepsilon\). Consider \(\textbf{Ins}(HF,1_{\mathcal{A}})\) with forgetful functor \(V\). Define functors \(\Psi:\textbf{Ins}(F,G)\rightarrow\textbf{Ins}(HF,1_{\mathcal{A}})\) and \(\Phi:\textbf{Ins}(HF,1_{\mathcal{A}})\rightarrow\textbf{Ins}(F,G)\) by

\[\Psi\left((A,r)\xrightarrow{d}(B,s)\right)=(A,\varepsilon_{A}H(r))\xrightarrow{d}(B,\varepsilon_{B}H(s)),\]

and

\[\Phi\left((A,r)\xrightarrow{d}(B,s)\right)=(A,G(r)\eta_{FA})\xrightarrow{d}(B,G(s)\eta_{FB}).\]

It is clear that \(\Psi\) and \(\Phi\) are concrete functors over \(\mathcal{A}\), i.e., \(U\Phi=V\) and \(V\Psi=U\). Also, it follows easily from the triangular identities of the adjunction \((\eta,\varepsilon):H\vdash G\) that \(\Psi\) and \(\Phi\) are inverses of each other. The proof for the case in which \(H\) is a right adjoint to \(F\) is quite similar.

Theorem 4.3.: _Suppose \(\mathcal{A}\) is cocomplete, \(F\) is finitary, and \(G\) has a left adjoint. Then \(U\) has a left adjoint. Similarly, if \(\mathcal{A}\) is algebraic, \(F\) preserves sifted colimits, and \(G\) has a left adjoint, then \(\textbf{Ins}(F,G)\) and \(U\) are algebraic._

Proof.: Let \(H\) be a left adjoint to \(G\), with unit \(\eta\) and counit \(\varepsilon\). Consider \(\textbf{Ins}(HF,1_{\mathcal{A}})\) with forgetful functor \(V\). By a generalization of [1, Chapter 12], \(V\) has a left adjoint, since \(\mathcal{A}\) is cocomplete and \(HF\) is finitary, because \(F\) is finitary (by assumption) and \(H\) is finitary too (it preserves all colimits). Let \(K\) be a left adjoint to \(V\), with unit \(\theta\) and counit \(\tau\). Consider \(\Phi\) and \(\Psi\) defined in the first paragraph of the proof of Theorem 4.2. Then \(\Phi K\) is a left adjoint to \(U\), with unit \(\theta\) and counit \(\Phi\tau\Psi\). Now assume that \(\mathcal{A}\) is algebraic, \(F\) preserves sifted colimits, and \(G\) has a left adjoint.
Since \(\mathcal{A}\) is cocomplete and \(F\) preserves sifted colimits, \(U\) preserves sifted colimits. Therefore, it remains to prove that \(\textbf{Ins}(F,G)\) is algebraic. Let \(L\) be a left adjoint to \(U\) (like the functor \(\Phi K\) given above). We have that \(L\) preserves strong generators and perfectly presentable objects; this is easy to prove. Therefore, \(\textbf{Ins}(F,G)\) has a strong generator consisting of perfectly presentable objects, since \(\mathcal{A}\) is algebraic. Finally, we will conclude this proof if we prove that \(\textbf{Ins}(F,G)\) is cocomplete. Once again, by a generalization of [1, Chapter 12], \(\textbf{Ins}(HF,1_{\mathcal{A}})\) is cocomplete, since \(\mathcal{A}\) is cocomplete and \(HF\) preserves sifted colimits. Thus, \(\textbf{Ins}(F,G)\) is cocomplete because it is isomorphic to \(\textbf{Ins}(HF,1_{\mathcal{A}})\).

As an application of the results in this section we have the following:

Corollary 4.4.: The category \(\Sigma\)-Alg of \(\Sigma\)-algebras, for an \(S\)-sorted signature \(\Sigma\), is an algebraic inserter category.

Proof.: We refer the reader to [1, Example 1.5 and Example 1.10] for a definition of the category \(\Sigma\)-Alg of \(\Sigma\)-algebras. These examples show that \(Set^{S}\) is an algebraic category equivalent to \(Alg\,S^{*}\). Explicitly, the functor \(Set^{S}\to Alg\,S^{*}\) defined on objects by

\[A=\langle A_{s}\rangle_{s\in S}\quad\longmapsto\quad A^{*}=\langle A_{s_{0}}\times\cdots\times A_{s_{n-1}}\rangle_{s_{0},\ldots,s_{n-1}\in S^{*}},\]

is an equivalence. The arity function gives two functions \(s:\Sigma\to S^{*}\) and \(t:\Sigma\to S\) by composition with the projections from \(S^{*}\times S\). These functions naturally define functors \(s^{\prime}:Set^{S^{*}}\to Set^{\Sigma}\) and \(\hat{t}:Set^{S}\to Set^{\Sigma}\) by composition with \(s\) and \(t\), respectively. Define the functor \(\hat{s}:Set^{S}\to Set^{\Sigma}\) as the composition \(Set^{S}\simeq Alg\,S^{*}\hookrightarrow Set^{S^{*}}\xrightarrow{s^{\prime}}Set^{\Sigma}\) of the equivalence above, the inclusion, and \(s^{\prime}\). It is clearly seen that the category \(\Sigma\)-Alg is exactly the category \(\mathbf{Ins}(\hat{s},\hat{t})\). It remains to prove that \(\mathbf{Ins}(\hat{s},\hat{t})\) is algebraic. Thus, by Theorem 4.3, it is sufficient to show that \(\hat{s}\) preserves sifted colimits and \(\hat{t}\) has a left adjoint. In the first place, observe that \(Set^{\Sigma}\), \(Set^{S}\) and \(Set^{S^{*}}\) have limits and sifted colimits, and these are computed objectwise; this implies that \(s^{\prime}\) and \(\hat{t}\) preserve these limits and colimits. On the other hand, the inclusion \(Alg\,S^{*}\hookrightarrow Set^{S^{*}}\) also preserves these limits and colimits, see [1, Proposition 1.21 and Proposition 2.5]; thus \(\hat{s}\) preserves sifted colimits, since it is a composition of sifted-colimit-preserving functors. As \(S\) is a small discrete category and \(Set\) has a cogenerator family, \(Set^{S}\) also has a cogenerator family. In the same way, \(Set^{S}\) is wellpowered. Therefore, it follows by the Special Adjoint Functor Theorem [1, Theorem 3.3.4] that \(\hat{t}\) has a left adjoint.

## 5. Birkhoff varieties and Lawvere covarieties

Let us take Birkhoff's variety theorem as a definition of subvarieties of algebraic categories.

**Definition 5.1**.: Let \(\mathcal{A}\) be an algebraic category. A **subvariety** of \(\mathcal{A}\) is a full subcategory \(\mathcal{V}\) of \(\mathcal{A}\) closed under products, subalgebras, regular quotients, and direct unions.

_Remark 5.2_.: Subvarieties of algebraic categories, and their inclusion functors, are algebraic.
This follows by [1, Corollary 10.15 and Corollary 10.17].

**Definition 5.3**.: A **Birkhoff equation** is an equation \(P\approx Q\) in \(\mathbf{AlgCat}\) such that there exists an algebraic, faithful, conservative, and amnestic functor \(U\) such that \(UP=UQ\).

Observe that the class of Birkhoff equations is closed under right action by algebraic functors; therefore the results of Proposition 2.6 are valid in this context. By Theorem 5.4 below, the class of Birkhoff equations is a solution to the inverse main problem of varieties for the class of inclusions of varieties of algebraic categories. Note how the conditions on \(U\) for a Birkhoff equation \(P\approx Q\) correspond to several properties of forgetful functors of inserters over algebraic categories. As can be seen in the proof of Theorem 5.4, we could take \(U\) to be one of these forgetful functors. However, the conditions on \(U\) given in Definition 5.3 are sufficient to obtain Theorem 5.4. Another fact to notice in the following proof is how closely the varieties of an algebraic category \(\mathcal{A}\) are related to inserters and equifiers, where this relation is given by a set of perfectly presentable objects of \(\mathcal{A}\) which generates \(\mathcal{A}\) under sifted colimits.

**Theorem 5.4**.: _Every system of Birkhoff equations has a general solution, which is an inclusion functor \(\mathcal{V}\hookrightarrow\mathcal{A}\), where \(\mathcal{V}\) is a subvariety of \(\mathcal{A}\). Reciprocally, if \(\mathcal{V}\) is a subvariety of \(\mathcal{A}\), then the inclusion functor \(\mathcal{V}\hookrightarrow\mathcal{A}\) is a general solution of some system of Birkhoff equations._

Proof.: Let \(E\) be a system of Birkhoff equations defined on an algebraic category \(\mathcal{A}\). Define \(\mathcal{V}\) as the full subcategory of \(\mathcal{A}\) of objects \(x\) such that \(P(x)=Q(x)\) for all equations \(P\approx Q\) in \(E\). We have that \(\mathcal{V}\hookrightarrow\mathcal{A}\) is a general solution of \(E\). Reciprocally, let \(\mathcal{V}\) be a subvariety of \(\mathcal{A}\), let \(R:\mathcal{A}\to\mathcal{V}\) be an epireflector of \(\mathcal{V}\hookrightarrow\mathcal{A}\), with regular epireflections \(r_{x}:x\to Rx\), and let \(\mathcal{G}\) be a set of perfectly presentable objects of \(\mathcal{A}\) which generates \(\mathcal{A}\) under sifted colimits. For each \(x\in\mathcal{G}\), let \(p_{x},q_{x}\) be a kernel pair of \(r_{x}\), and define \(\mathcal{V}^{\prime}\) as the intersection of the equifiers defined by the pairs of natural transformations \(\mathcal{A}(p_{x},-)\) and \(\mathcal{A}(q_{x},-)\), with \(x\in\mathcal{G}\). Let us see that \(\mathcal{V}\subset\mathcal{V}^{\prime}\). If \(y\in\mathcal{V}\), to show that \(y\in\mathcal{V}^{\prime}\) we must prove that \(\mathcal{A}(p_{x},y)=\mathcal{A}(q_{x},y)\) for all \(x\in\mathcal{G}\). Now, if \(x\in\mathcal{G}\), then \(\mathcal{A}(p_{x},y)=\mathcal{A}(q_{x},y)\) if \(fp_{x}=fq_{x}\) for all \(f:x\to y\). By the universal property of \(r_{x}:x\to Rx\) we get a unique \(g:Rx\to y\) such that \(f=gr_{x}\), because \(y\in\mathcal{V}\). Thus, \(fp_{x}=gr_{x}p_{x}=gr_{x}q_{x}=fq_{x}\). Therefore, \(y\in\mathcal{V}^{\prime}\) and \(\mathcal{V}\subset\mathcal{V}^{\prime}\). Now we want to show \(\mathcal{V}^{\prime}\subset\mathcal{V}\). Let \(R^{\prime}:\mathcal{A}\to\mathcal{V}^{\prime}\) be an epireflector of \(\mathcal{V}^{\prime}\hookrightarrow\mathcal{A}\), with regular epireflections \(r^{\prime}_{x}:x\to R^{\prime}x\). For each \(x\in\mathcal{G}\), we have \(R^{\prime}x\in\mathcal{V}^{\prime}\), hence \(R^{\prime}x\) satisfies \(\mathcal{A}(p_{x},R^{\prime}x)=\mathcal{A}(q_{x},R^{\prime}x)\). Thus \(r^{\prime}_{x}p_{x}=r^{\prime}_{x}q_{x}\), and we get a unique \(t_{x}:Rx\to R^{\prime}x\) such that \(r^{\prime}_{x}=t_{x}r_{x}\). Since \(r^{\prime}_{x}\) is a regular epi, \(t_{x}\) is also a regular epi.
Therefore, \(R^{\prime}x\) is a regular quotient of \(Rx\in\mathcal{V}\), and \(R^{\prime}x\in\mathcal{V}\) since \(\mathcal{V}\) is closed under regular quotients. For \(y\in\mathcal{V}^{\prime}\) let \(\{\mu_{i}:x_{i}\to y\}_{i\in D}\) be a sifted colimit cocone such that \(x_{i}\in\mathcal{G}\) for all \(i\in D\). Since \(R^{\prime}\) is a left adjoint, \(R^{\prime}\) preserves colimits; thus \(\{R^{\prime}\mu_{i}:R^{\prime}x_{i}\to R^{\prime}y\}_{i\in D}\) is a sifted colimit cocone. We have \(R^{\prime}x_{i}\in\mathcal{V}\) for all \(i\), since \(x_{i}\in\mathcal{G}\); thus \(\{R^{\prime}\mu_{i}:R^{\prime}x_{i}\to R^{\prime}y\}_{i\in D}\) is a sifted colimit cocone in \(\mathcal{A}\) of objects in \(\mathcal{V}\). Since \(\mathcal{V}\) is closed under sifted colimits, \(R^{\prime}y\) belongs to \(\mathcal{V}\); since \(y\) and \(R^{\prime}y\) are isomorphic in \(\mathcal{A}\) (because \(y\in\mathcal{V}^{\prime}\)), \(y\in\mathcal{V}\). Therefore, we have concluded that \(\mathcal{V}^{\prime}\subset\mathcal{V}\), so \(\mathcal{V}=\mathcal{V}^{\prime}\).

Theorem 5.5.: _Every Birkhoff variety is a coreflexive equalizer of some coreflexive Birkhoff equation._

Proof.: Recall that the product category of a family of algebraic categories, with the respective projections, is algebraic. In other words, \(\mathbf{AlgCat}\) is closed under products as a subcategory of \(\mathbf{Cat}\). Let \(v:\mathcal{B}\to\mathcal{A}\) be a Birkhoff variety. By Theorem 5.4, \(v\) is isomorphic to \(\mathcal{V}\hookrightarrow\mathcal{A}\) for some subvariety \(\mathcal{V}\) of \(\mathcal{A}\). Following the proof of Theorem 5.4, we find a system of coreflexive Birkhoff equations \(\{P_{x},Q_{x}:\mathcal{A}\to\mathcal{V}_{x}\}_{x\in\mathcal{G}}\), with \(\mathcal{V}\hookrightarrow\mathcal{A}\) as a general solution, and a family of algebraic, faithful, conservative, and amnestic functors \(\{U_{x}:\mathcal{V}_{x}\to\mathcal{A}\}_{x\in\mathcal{G}}\) such that \(U_{x}P_{x}=U_{x}Q_{x}=1_{\mathcal{A}}\). We now follow, almost verbatim, the proof of Proposition 2.7 (ii). Define \(\mathcal{B}=\prod_{x\in\mathcal{G}}\mathcal{V}_{x}\) with projections \(\pi_{x}:\mathcal{B}\to\mathcal{V}_{x}\), and let \(P,Q:\mathcal{A}\to\mathcal{B}\) be defined by \(P_{x}=\pi_{x}P\) and \(Q_{x}=\pi_{x}Q\) for all \(x\in\mathcal{G}\). Define \(\mathcal{C}=\prod_{x\in\mathcal{G}}\mathcal{A}\) with projections \(\rho_{x}:\mathcal{C}\to\mathcal{A}\), and let \(U:\mathcal{B}\to\mathcal{C}\) be defined by \(\rho_{x}U=U_{x}\pi_{x}\) for all \(x\in\mathcal{G}\). It is clear that \(UP=UQ\), and \(U\) is an algebraic, faithful, conservative, and amnestic functor; thus \(P\approx Q\) is a Birkhoff equation defined on \(\mathcal{A}\) which has the same solutions as \(\{P_{x}\approx Q_{x}\}_{x\in\mathcal{G}}\). Therefore, \(v\) is a general solution of \(P\approx Q\), and \(P\approx Q\) is a coreflexive pair, since for any \(x\in\mathcal{G}\) we have \(\rho_{x}UP=\rho_{x}UQ=1_{\mathcal{A}}\).

_Remark 5.6_.: Characterization of Birkhoff varieties 1.1 follows by Theorem 5.4 and Theorem 5.5.

In the following, we are going to use the theory of congruences on algebraic theories given in [1, Chapter 10].

Definition 5.7.: A **Lawvere equation** is an equation \(P\approx Q\) in \(\mathbf{AlgTh}\) such that there exists a morphism of theories \(U\) surjective on objects such that \(PU=QU\).

Similarly to Birkhoff equations, Lawvere equations are closed under left composition by morphisms of theories.
Thus, the dual of Proposition 2.7 holds for Lawvere covarieties.

Theorem 5.8.: _Every cosystem of Lawvere equations has a general cosolution, Lawvere covarieties are precisely the full morphisms of theories bijective on objects, and each Lawvere covariety is a reflexive coequalizer of some reflexive Lawvere equation._

Proof.: Let \(\{P_{i},Q_{i}:\mathcal{T}_{i}\to\mathcal{T}\}_{i\in I}\) be a cosystem of Lawvere equations. Note that if \(P\approx Q\) is a Lawvere equation, then \(P\) and \(Q\) coincide on objects. Define \(\sim\) as the congruence on \(\mathcal{T}\) generated by the set of equations \(E=\{(P_{i}(g),Q_{i}(g))\mid i\in I\text{ and }g\text{ an arrow in }\mathcal{T}_{i}\}\). Let \(M:\mathcal{T}\to\mathcal{T}/\sim\) be the canonical morphism of theories from \(\mathcal{T}\) onto \(\mathcal{T}/\sim\). Observe that \(M\) is a full morphism of theories bijective on objects. By construction, it is clear that \(MP_{i}=MQ_{i}\) for all \(i\in I\). Now suppose \(M^{\prime}:\mathcal{T}\to\mathcal{T}^{\prime}\) is a morphism of theories such that \(M^{\prime}P_{i}=M^{\prime}Q_{i}\) for all \(i\in I\). Thus, each equation in \(E\) belongs to the congruence \(\approx_{M^{\prime}}\), so \(\sim\) is contained in \(\approx_{M^{\prime}}\) because \(\sim\) is generated by \(E\); then there exists a unique morphism of theories \(N\) such that \(NM=M^{\prime}\). Therefore, \(M\) is a general cosolution of the cosystem \(\{P_{i},Q_{i}:\mathcal{T}_{i}\to\mathcal{T}\}_{i\in I}\). Next, suppose that \(M:\mathcal{T}\to\mathcal{Q}\) is a full morphism of theories bijective on objects. Let \(\pi_{1},\pi_{2}:\mathcal{T}\times\mathcal{T}\to\mathcal{T}\) be the canonical projections, and let \(\mathcal{R}\) be the subcategory of \(\mathcal{T}\times\mathcal{T}\) whose objects are the pairs \((x,x)\), \(x\in\mathcal{T}\), and \((f,g):(x,x)\to(y,y)\) is a morphism in \(\mathcal{R}\) if \(M(f)=M(g)\). Let \(J\) be the inclusion from \(\mathcal{R}\) into \(\mathcal{T}\times\mathcal{T}\), and let \(J_{i}=\pi_{i}J\) for \(i=1,2\). Define \(U:\mathcal{T}\to\mathcal{R}\) such that \(U\left(x\xrightarrow{f}y\right)=(x,x)\xrightarrow{(f,f)}(y,y)\). It is easy to verify that \(\mathcal{R}\) is an algebraic theory, \(J_{1}\approx J_{2}\) is a Lawvere equation, \(J_{1},J_{2}\) is a kernel pair of \(M\), \(M\) is a coequalizer of \(J_{1},J_{2}\), and \(J_{1}U=J_{2}U=1_{\mathcal{T}}\).

_Remark 5.9_.: Characterization of Lawvere covarieties 1.2 follows by Theorem 5.8.

**Corollary 5.10**.: **AlgTh** is cowellpowered w.r.t. Lawvere covarieties.

Proof.: Let \(M:\mathcal{T}\to\mathcal{Q}\) be a Lawvere covariety, and let \(M^{\prime}:\mathcal{T}\to\mathcal{T}/\approx_{M}\) be the canonical morphism of theories. Then, there exists a unique morphism of theories \(P\) such that \(PM^{\prime}=M\). Since \(M\) is full and bijective on objects, \(P\) is an isomorphism of theories, so \(M\) and \(M^{\prime}\) are isomorphic. Therefore, the claim of this corollary is true, since the class of all congruences on \(\mathcal{T}\) is small.

To conclude this note we are going to study some relations between algebraic theories and algebraic categories, w.r.t. varieties and covarieties.

**Lemma 5.11**.: Let \(\mathcal{T}\) be an algebraic theory and let \(F\) be an algebraic functor with target \(Alg\,\mathcal{T}\).
Then \(F\) is isomorphic to \(\mathcal{V}\hookrightarrow Alg\,\mathcal{T}\) for some subvariety \(\mathcal{V}\) of \(Alg\,\mathcal{T}\) if and only if \(F\) is isomorphic to \(Alg\,M:Alg\,\mathcal{Q}\to Alg\,\mathcal{T}\) for some Lawvere covariety \(M:\mathcal{T}\to\mathcal{Q}\).

Proof.: The necessity follows by [1, Corollary 10.15] and Theorem 5.8. Conversely, let \(M^{\prime}:\mathcal{T}\to\mathcal{T}/\approx_{M}\) be the canonical morphism; then \(M\) and \(M^{\prime}\) are isomorphic, so \(Alg\,M\) and \(Alg\,M^{\prime}\) are also isomorphic, and \(Alg\,M^{\prime}\) is isomorphic to \(\mathcal{V}\hookrightarrow Alg\,\mathcal{T}\) for some subvariety \(\mathcal{V}\) of \(Alg\,\mathcal{T}\) by [1, Corollary 10.15].

**Lemma 5.12**.: Let \(L:\mathcal{A}\to\mathcal{B}\) be an equivalence functor between algebraic categories. Then, for each subvariety \(\mathcal{V}\) of \(\mathcal{A}\) there exist a subvariety \(\mathcal{W}\) of \(\mathcal{B}\) and an equivalence functor \(K:\mathcal{V}\to\mathcal{W}\) such that the square \(WK=LV\), where \(V:\mathcal{V}\hookrightarrow\mathcal{A}\) and \(W:\mathcal{W}\hookrightarrow\mathcal{B}\) are the inclusions, is a pullback square.

Proof.: Denote by \(V:\mathcal{V}\to\mathcal{A}\) the inclusion of \(\mathcal{V}\) into \(\mathcal{A}\). Let \(R:\mathcal{B}\to\mathcal{A}\) be an equivalence functor and let \((\eta,\varepsilon):L\vdash R\) be an adjoint equivalence. Define \(\mathcal{W}\) as the full subcategory of \(\mathcal{B}\) such that \(x\) belongs to \(\mathcal{W}\) if and only if \(Rx\) belongs to \(\mathcal{V}\). Denote by \(W:\mathcal{W}\hookrightarrow\mathcal{B}\) the inclusion of \(\mathcal{W}\) into \(\mathcal{B}\). Thus, we get a functor \(J:\mathcal{W}\to\mathcal{V}\) such that \(VJ=RW\). Observe that if \(x\in\mathcal{V}\) then \(Lx\in\mathcal{W}\). Indeed, since \(\eta_{x}:x\to RLx\) is an isomorphism and \(x\in\mathcal{V}\), then \(RLx\in\mathcal{V}\), thus \(Lx\in\mathcal{W}\). Therefore, we get a functor \(K:\mathcal{V}\to\mathcal{W}\) such that \(WK=LV\). Define natural transformations \(\tau:1_{\mathcal{V}}\to JK\) and \(\theta:KJ\to 1_{\mathcal{W}}\) such that \(V\tau=\eta V\) and \(W\theta=\varepsilon W\). It is easily seen that \((\tau,\theta):K\vdash J\) is an adjoint equivalence. Now we are going to prove that \(\mathcal{W}\) is a subvariety of \(\mathcal{B}\). Let \(x\in\mathcal{W}\) and let \(m:y\to x\) be a monomorphism in \(\mathcal{B}\). Thus, \(Rm:Ry\to Rx\) is a monomorphism and \(Rx\in\mathcal{V}\), so \(Ry\in\mathcal{V}\) and \(y\in\mathcal{W}\). Therefore, \(\mathcal{W}\) is closed under subalgebras. Let \(\{\nu_{i}:y\to x_{i}\}_{i\in D}\) be a limit cone in \(\mathcal{B}\) such that \(x_{i}\in\mathcal{W}\) for all \(i\in D\). Thus, \(\{R\nu_{i}:Ry\to Rx_{i}\}_{i\in D}\) is a limit cone in \(\mathcal{A}\) and \(Rx_{i}\in\mathcal{V}\) for all \(i\in D\), so \(Ry\in\mathcal{V}\) and \(y\in\mathcal{W}\). Therefore, \(\mathcal{W}\) is closed under limits. Analogously we prove that \(\mathcal{W}\) is closed under regular quotients and sifted colimits, so \(\mathcal{W}\) is a subvariety of \(\mathcal{B}\). It follows that \(J\) and \(K\) are algebraic functors. Let us check that \(\mathcal{W}\xleftarrow{K}\mathcal{V}\overset{V}{\hookrightarrow}\mathcal{A}\) is a pullback of \(\mathcal{W}\overset{W}{\hookrightarrow}\mathcal{B}\xleftarrow{L}\mathcal{A}\). Suppose that \(F:\mathcal{C}\rightarrow\mathcal{A}\) and \(G:\mathcal{C}\rightarrow\mathcal{W}\) are algebraic functors such that \(WG=LF\). Let \(x\in\mathcal{C}\) and \(y=Fx\); then \(Ly=WGx\in\mathcal{W}\), \(RLy\in\mathcal{V}\), and \(y\in\mathcal{V}\).
Therefore, we get an algebraic functor \(P:\mathcal{C}\rightarrow\mathcal{V}\) such that \(F=VP\) (we write \(P\) here to avoid a clash with the equivalence \(R\) above). Now, \(WKP=LVP=LF=WG\), so \(KP=G\), since \(W\) is a monomorphism. It is clear that \(P\) is unique with \(F=VP\) and \(G=KP\), because \(V\) is a monomorphism.

**Theorem 5.13**.: _An algebraic functor \(F:\mathcal{B}\rightarrow\mathcal{A}\) is a Birkhoff variety if and only if there exist a Lawvere covariety \(M:\mathcal{T}\rightarrow\mathcal{Q}\) and algebraic functors \(\mathcal{B}\rightarrow\mathit{Alg}\,\mathcal{Q}\), \(\mathcal{A}\rightarrow\mathit{Alg}\,\mathcal{T}\) such that the square they form with \(F\) and \(\mathit{Alg}\,M:\mathit{Alg}\,\mathcal{Q}\to\mathit{Alg}\,\mathcal{T}\) is a pullback square._

Proof.: Let us factorize \(F\) as \(F=VG\), where \(\mathcal{V}\) is a subvariety of \(\mathcal{A}\), \(V:\mathcal{V}\hookrightarrow\mathcal{A}\) is the inclusion of \(\mathcal{V}\) into \(\mathcal{A}\), and \(G:\mathcal{B}\rightarrow\mathcal{V}\) is an isomorphism. Since \(\mathcal{A}\) is an algebraic category, there exist an algebraic theory \(\mathcal{T}\) and an adjoint equivalence \((\eta,\varepsilon):L\vdash R\), where \(R:Alg\,\mathcal{T}\rightarrow\mathcal{A}\) and \(L:\mathcal{A}\rightarrow\mathit{Alg}\,\mathcal{T}\). By Lemma 5.12 we find a subvariety \(\mathcal{W}\) of \(Alg\,\mathcal{T}\) and an algebraic functor \(K:\mathcal{V}\rightarrow\mathcal{W}\) such that \(\mathcal{W}\xleftarrow{K}\mathcal{V}\overset{V}{\hookrightarrow}\mathcal{A}\) is a pullback of \(\mathcal{W}\overset{W}{\hookrightarrow}\mathit{Alg}\,\mathcal{T}\xleftarrow{L}\mathcal{A}\). By Lemma 5.11 we have a Lawvere covariety \(M:\mathcal{T}\rightarrow\mathcal{Q}\) and an isomorphism \(H:\mathcal{W}\rightarrow\mathit{Alg}\,\mathcal{Q}\) such that \(W=(Alg\,M)H\). It is easy to check that the resulting square \(LF=(Alg\,M)(HKG)\) is a pullback square. Conversely, it follows by Lemma 5.11 that \(Alg\,M\) is a Birkhoff variety. Since Birkhoff varieties are closed under pullbacks (see Proposition 2.6), we conclude that \(F\) is also a Birkhoff variety.

**Corollary 5.14**.: **AlgCat** is wellpowered w.r.t. Birkhoff varieties.

Proof.: For a given algebraic theory \(\mathcal{T}\), the class of all subvarieties of \(Alg\,\mathcal{T}\) is small, because the class of all morphisms in \(\mathcal{T}\) is small. Then, the class of all Birkhoff varieties with target \(Alg\,\mathcal{T}\) is small, up to isomorphism, by Theorem 5.4. Therefore, the claim of this corollary follows by Theorem 5.13.

## 6. Some open problems

In the following list, we mention some open problems which are worth investigating:

1. In \(\mathbf{Cat}\), are the conservative embeddings precisely the regular monomorphisms?
2. Is the class of Birkhoff equations the largest solution to the inverse main problem of varieties for the class of inclusions of varieties of algebraic categories? Or equivalently, is the class of Birkhoff equations closed under equivalences? Is the largest solution closed under action by algebraic functors?
3. Can the results and proofs of the previous section be generalized to the category of locally finitely presentable categories with right adjoint functors as morphisms, for an appropriate definition of varieties?
4. With respect to algebraic geometry, it would be very appropriate, as an application of this theory of varieties, to find a solution to the inverse main problem in the category of schemes over an algebraically closed field \(k\), for the class of algebraic varieties over \(k\), i.e., the class of reduced separated schemes of finite type over \(k\). Examples 3.2, 3.3 and 3.4 are a motivation for this problem.
2302.08177
$K^{*0}$ meson production using a transport and a statistical hadronization model at energies covered by the RHIC beam energy scan
In this paper, we discuss the centrality and energy dependence of $K^{*0}$ resonance production using ultrarelativistic quantum molecular dynamics (UrQMD) and thermal models. The $K^{*0}/K$ ratios obtained from the UrQMD and thermal models are compared with measurements done by the STAR experiment in Au+Au collisions at $\sqrt{s_{NN}}$ = 7.7, 11.5, 14.5, 19.6, 27, and 39 GeV. The $K^{*0}/K$ ratio from the thermal model is consistent with data in most-peripheral collisions; however, it overpredicts the ratio in central Au+Au collisions. This could be due to the fact that the thermal model does not have a hadronic rescattering phase, which is expected to be dominant in more central collisions. Furthermore, we have studied the $K^{*0}/K$ ratio from UrQMD by varying the hadron propagation time ($\tau$) within the range 5 to 50 fm/c. It was found that the $K^{*0}/K$ ratio decreases with increasing $\tau$. Comparison between data and UrQMD suggests that one needs to consider a $\tau$ $\approx$ 10-50 fm/c to explain the data at $\sqrt{s_{NN}}$ = 7.7-39 GeV in Au+Au collisions. We also predict the rapidity distribution of $K^{*0}$ from UrQMD, which could be measured in the STAR beam energy scan phase II (BES-II) program.
Aswini Kumar Sahoo, Md. Nasim, Subhash Singha
2023-02-16T09:49:15Z
http://arxiv.org/abs/2302.08177v2
The study of \(K^{*0}\) meson production using a transport and a statistical hadronization model at RHIC BES energies.

###### Abstract

In this paper, we have discussed the centrality and energy dependence of \(K^{*0}\) resonance production using the UrQMD and thermal models. The \(K^{*0}/K\) ratios obtained from the UrQMD and thermal models are compared with measurements done by the STAR experiment in Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7, 11.5, 14.5, 19.6, 27 and 39 GeV. The \(K^{*0}/K\) ratio from the thermal model is consistent with data in most-peripheral collisions; however, it over-predicts the ratio in central Au+Au collisions. This could be due to the fact that the thermal model does not have a hadronic rescattering phase, which is expected to be dominant in more central collisions. Furthermore, we have studied the \(K^{*0}/K\) ratio from UrQMD by varying the lifetime of the hadronic medium within the range 5 to 20 fm/c. It was found that the \(K^{*0}/K\) ratio decreases with an increasing lifetime of the hadronic medium. Comparison between data and UrQMD suggests that one needs to consider a hadronic lifetime of \(\sim\) 10-20 fm/c to explain the data at \(\sqrt{s_{NN}}\) = 7.7 - 39 GeV in Au+Au collisions. We also predict the rapidity distribution of \(K^{*0}\) from UrQMD, which could be measured in the STAR BES-II program.

pacs: 25.75.Ld

## I Introduction

One of the major goals of heavy-ion collisions is to study the properties of the QCD matter produced in these collisions [1]. Just after the collision between two heavy nuclei at relativistic speeds, a deconfined state of quarks and gluons, commonly known as the Quark Gluon Plasma (QGP), is expected to be created [2]. Due to expansion, the temperature of the QGP decreases. When the temperature reaches the quark-hadron transition temperature, quarks and gluons become confined again into hadrons. In the hadronic phase, particles can interact among themselves both elastically and inelastically. Chemical freeze-out happens when inelastic scattering between hadrons stops, and kinetic (or final) freeze-out happens when elastic collisions between the particles also cease [3; 4; 5]. After kinetic freeze-out, particles hit the detector. Hadronic resonances can serve as unique probes to study the properties of hot QCD matter at different time scales, owing to the different lifetimes of the different resonances [6; 7]. For example, \(K^{*0}(892)\) has a lifetime of \(\sim\) 4.16 fm/c [8]. Due to this short lifetime, \(K^{*0}(892)\) mesons decay inside the fireball formed after the collision. The decay daughters of \(K^{*0}(892)\) can undergo in-medium effects like rescattering and regeneration. For example, the decay daughters of \(K^{*0}(892)\) may undergo elastic scattering with other particles present in the medium. During the scattering process, the momenta of the daughter particles may get modified. Therefore, it may not be possible to reconstruct the parent, and this can cause a loss in the measured yield of \(K^{*0}\). On the other hand, \(\pi\) and \(K\) mesons present in the medium can undergo pseudo-elastic scattering [28] and form a \(K^{*0}\) resonance between chemical and kinetic freeze-out. This is called regeneration. Due to regeneration, the \(K^{*0}\) yield is increased [9; 10; 11; 12]. In order to gain insight into these effects, one can use the resonance to non-resonance ratio (e.g. \(K^{*0}/K\)). If the rescattering process dominates, one naively expects the \(K^{*0}/K\) ratio to decrease with increasing multiplicity.
If the regeneration process dominates, the ratio is expected to increase with increasing multiplicity [13; 14; 15; 16]. The loss or gain of resonance yield could depend on various factors, e.g., the lifetime of the hadronic phase, the hadronic interaction cross sections of the decay daughters, and the particle density in the medium. Therefore, a systematic study of the properties of resonances like \(K^{*0}\) may help understand the effect of late-stage hadronic interactions. Previous measurements from the STAR [13; 14; 15; 16], PHENIX [17], NA49 [18], NA61 [19; 20], and ALICE [21; 22; 23; 24; 25; 26; 27] collaborations show that the rescattering effect can be the dominant mechanism in the late stage of the hadronic medium produced in relativistic heavy-ion collisions. Various phenomenological studies also support this observation [28; 29; 30]. Recently, the STAR collaboration has reported the measurement of \(K^{*0}\) production in Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7, 11.5, 14.5, 19.6, 27 and 39 GeV [31]. These data can be compared with UrQMD and thermal model [32; 33] calculations to get insight into the late-stage hadronic medium produced at these energies. The thermal model has no hadronic phase, while UrQMD includes hadronic interactions among the particles. In UrQMD, one can vary the hadronic cascade lifetime. In this paper, we probe the hadronic phase by varying the hadronic cascade time in UrQMD, and compare the results with the thermal model as a baseline. The study is done by combining \(K^{*0}\) and \(\overline{K^{*0}}\), denoted by \(K^{*0}\) in the text unless specified otherwise. The charged kaons (\(K^{\pm}\)) are likewise combined and denoted by \(K\). This paper is organised as follows. In Sec. II, we briefly discuss the thermal and UrQMD models. In Sec. III, we describe the study of \(K^{*0}\) at RHIC Beam Energy Scan (BES) phase-I energies using the thermal and UrQMD model (version 2.3). A comparison with STAR data is shown. The results are summarized in Sec. IV.

## II Model Description

### The Thermal Model

The \(K^{*0}/K\) ratios are obtained from statistical thermal model analyses of the produced particles using the THERMUS package [33], assuming the Grand-Canonical Ensemble (GCE). In the GCE, for a hadron gas of volume \(V\) and temperature \(T\), the particle multiplicities are given by

\[N_{i}^{GC}=\frac{g_{i}V}{2\pi^{2}}\sum_{k=1}^{\infty}(\mp 1)^{k+1}\frac{m_{i}^{2}T}{k}K_{2}\left(\frac{km_{i}}{T}\right)e^{\beta k\mu_{i}}, \tag{1}\]

where \(K_{2}\) is the modified Bessel function of the second kind of order two and \(\beta=1/T\). The chemical potential for particle species \(i\) in this case is given by

\[\mu_{i}=B_{i}\mu_{B}+Q_{i}\mu_{Q}+S_{i}\mu_{S}, \tag{2}\]

where \(B_{i}\), \(S_{i}\), and \(Q_{i}\) are the baryon number, strangeness, and charge number, respectively, of hadron species \(i\), and \(\mu_{B}\), \(\mu_{Q}\), and \(\mu_{S}\) are the respective chemical potentials. The freeze-out parameters (\(T\), \(\mu_{B}\), \(\mu_{Q}\), and \(\mu_{S}\)) at different centre-of-mass energies are taken from Ref. [34]. They are obtained by fitting the yields of \(\pi^{\pm}\), \(K^{\pm}\), \(p\), \(\bar{p}\), \(\Lambda\), \(\bar{\Lambda}\), \(\Xi^{-}\) and \(\bar{\Xi}^{-}\), assuming the GCE. The freeze-out parameters are summarized in Table 1; a numerical sketch of Eq. (1) is given below.
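As an illustration of how Eqs. (1) and (2) can be evaluated, the following minimal Python sketch (added here; it is not the THERMUS implementation, and the freeze-out parameters and volume below are illustrative placeholders rather than the fitted values of Ref. [34]) computes a primordial \(K^{*0}/K^{+}\) ratio for single charge states; note that the volume \(V\) cancels in such ratios:

```python
import numpy as np
from scipy.special import kn

HBARC = 0.19733  # GeV*fm, converts GeV^3 * fm^3 to a pure number

def gce_yield(m, g, B, Q, S, T, muB, muQ, muS, V, boson=True, kmax=10):
    """Primordial multiplicity from Eq. (1), with mu_i from Eq. (2)."""
    mu = B * muB + Q * muQ + S * muS                 # Eq. (2)
    eta = 1.0 if boson else -1.0                     # +1 bosons, -1 fermions
    k = np.arange(1, kmax + 1)
    terms = eta ** (k + 1) * (m**2 * T / k) * kn(2, k * m / T) * np.exp(k * mu / T)
    return g * V / (2.0 * np.pi**2) * terms.sum() / HBARC**3

# Placeholder freeze-out parameters and volume (illustrative only):
pars = dict(T=0.160, muB=0.20, muQ=-0.01, muS=0.05, V=2000.0)
n_kstar = gce_yield(m=0.896, g=3, B=0, Q=0, S=1, **pars)   # K*0 (d sbar)
n_kplus = gce_yield(m=0.494, g=1, B=0, Q=1, S=1, **pars)   # K+  (u sbar)
print(f"K*0/K+ = {n_kstar / n_kplus:.3f}")   # V cancels in the ratio
```

The thermal-model curves discussed below rest on the same series, evaluated with the fitted \((T,\mu_{B},\mu_{Q},\mu_{S})\) of Table 1 for each centrality class.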
### The UrQMD Model

The UrQMD (Ultra-relativistic Quantum Molecular Dynamics) model [32] is based on a microscopic transport theory in which the phase-space description of the reactions is important. It allows for the propagation of all hadrons on classical trajectories in combination with stochastic binary scattering, color string formation, and resonance decay. It incorporates baryon-baryon, meson-baryon, and meson-meson interactions; the collision term includes more than 50 baryon species and 45 meson species [32]. In the UrQMD model, one can vary the lifetime of the hadronic cascade. Hence, it provides an opportunity to study the effect of hadronic rescattering/regeneration on the yield of short-lived resonance particles, like \(K^{*0}\).

## III Results

### Yield of \(K^{*0}\) and charged kaons (\(K^{\pm}\)) from the UrQMD model

Fig. 1 shows the yield (dN/dy) of \(K^{*0}\) and charged kaons (\(K^{\pm}\)) as a function of the number of participating nucleons (\(N_{part}\)). The measurements are done at mid-rapidity (\(|y|<1.0\) for \(K^{*0}\) and \(|y|<0.1\) for kaons) in Au+Au collisions at \(\sqrt{s_{NN}}=11.5\) and 39 GeV, in order to be consistent with published results from STAR [31; 34]. The results are obtained by varying the lifetime of the hadronic cascade (\(\tau\)) from 5 to 20 fm/c for all STAR BES energies from 7.7-39 GeV. Fig. 1 shows that the centrality dependence of the charged kaon yield remains independent of \(\tau\), whereas the \(K^{*0}\) yield decreases with increasing \(\tau\). The decrease in the \(K^{*0}\) yield is due to the rescattering of daughter particles in the hadronic phase, which is included in UrQMD.

### Resonance to non-resonance ratio vs \(N_{part}\) from the UrQMD model, the thermal model, and STAR data

The resonance to non-resonance ratios (\(K^{*0}/K\) and \(\phi/K\)) as a function of \(N_{part}\) from the UrQMD model are shown in Fig. 2, and compared with STAR data measured in Au+Au collisions at \(\sqrt{s_{NN}}=11.5\) and 39 GeV. The \(K^{*0}/K\) ratios are found to decrease with increasing \(N_{part}\). The \(N_{part}\) dependence of the \(K^{*0}/K\) ratios is found to be similar to that measured by the STAR experiment for all STAR BES energies. The thermal model calculations are also shown in Fig. 2. The thermal model calculations for different \(N_{part}\) are done by using different freeze-out parameters for the corresponding centrality classes, as mentioned in Table 1.

Figure 1: (Color online) The \(p_{T}\)-integrated yields of \(K^{*0}\) and charged kaons measured from the UrQMD model for Au+Au collisions at 11.5 and 39 GeV.

Figure 2: (Color online) The \(K^{*0}/K\) ratio vs. \(N_{part}\) measured at mid-rapidity by the STAR experiment [31], compared with the corresponding UrQMD model results at 11.5 and 39 GeV. The systematic and statistical uncertainties are shown by the caps and boxes on the experimental data.

Note that there is no hadronic phase in the thermal model. Unlike UrQMD, the \(K^{*0}/K\) ratio from the thermal model remains independent of \(N_{part}\). The UrQMD measurements are done by varying the hadronic cascade lifetime from 5 to 20 fm/c. The \(K^{*0}/K\) ratio at \(\tau\)= 5 fm/c remains almost independent of centrality, while a suppression can be observed for \(\tau\)= 10 and 20 fm/c. For \(\sqrt{s_{NN}}\)= 39 GeV, the results from UrQMD with \(\tau\)= 20 fm/c seem to be consistent with the data in the most central collisions within uncertainties, but over-predict the data in peripheral collisions. The data at \(\sqrt{s_{NN}}\)= 11.5 GeV can be explained by the UrQMD calculations with \(\tau\)= 20 fm/c. However, if we consider the large statistical uncertainties, the data are also consistent with the calculation for \(\tau\)= 10 fm/c.
Hence, measurements with higher statistics are needed for a precise conclusion. The high-statistics data collected in the STAR beam energy scan phase-II program will help reduce the uncertainty in the measurement. As \(\phi\) has a nearly ten times longer lifetime than \(K^{*0}\), one could naively expect it to decay outside the medium and hence remain immune to the hadronic medium produced during the heavy-ion collisions. Accordingly, \(\phi/K\) remains almost independent of centrality and \(\tau\). The trend is well explained by the thermal model, while UrQMD under-predicts the data. The comparison of data with UrQMD and the thermal model indicates that the decay daughters of \(K^{*0}\) can suffer from late hadronic interactions, with rescattering playing the dominant role over regeneration.

### \(K^{*0}/K\) vs transverse momentum (\(p_{T}\)) from the UrQMD model

The \(K^{*0}/K\) ratio vs \(p_{T}\) measured from the UrQMD model is shown in Fig. 3 for central (0-10%) and peripheral (60-80%) Au+Au collisions at \(\sqrt{s_{NN}}\) = 39 GeV, which could help detect the \(p_{T}\) dependence of the rescattering effect. The \(K^{*0}/K\) ratio is found to increase with \(p_{T}\), which indicates that low-\(p_{T}\)\(K^{*0}\) mesons are more prone to the rescattering effect than those at higher \(p_{T}\). In the low-\(p_{T}\) region, the \(K^{*0}/K\) vs. \(p_{T}\) depends more weakly on the choice of \(\tau\) in peripheral collisions than in central collisions. A similar \(p_{T}\) dependence was observed for \(\sqrt{s_{NN}}\)= 7.7-39 GeV.

### \(K^{*0}/K\) vs \(\sqrt{s_{NN}}\) (0-10% and 60-80%) from the UrQMD model and the thermal model

Fig. 4 shows the energy dependence of \(K^{*0}\)/K for 0-10% central and 60-80% peripheral Au+Au collisions. The STAR data do not show any significant energy dependence of \(K^{*0}\)/K for either 0-10% central or 60-80% peripheral Au+Au collisions within the present uncertainties. The UrQMD model calculations are shown for different \(\tau\) values from 5-20 fm/c, along with the thermal model prediction.

Figure 3: (Color online) The \(p_{T}\) dependence of \(K^{*0}/K\) measured from the UrQMD model for 39 GeV at 0-10% centrality and 60-80% centrality.

The thermal model shows no centrality dependence. The over-prediction of the data by the thermal model in central collisions is consistent with the expectation of the dominance of hadronic rescattering. The \(K^{*0}/K\) ratio measured from the UrQMD model seems to increase with collision energy. However, a stronger dependence on the selection of the hadronic cascade lifetime can be seen in central collisions as compared to peripheral collisions. The UrQMD result for \(\tau\)= 20 fm/c is consistent with the energy dependence of the \(K^{*0}/K\) ratio in central collisions. The results below \(\sqrt{s_{NN}}\)= 14.5 GeV are also consistent with the model prediction at \(\tau\)= 10 fm/c, within uncertainty. However, the model result in peripheral collisions seems to be independent of \(\tau\). This could indicate smaller hadronic rescattering at 60-80% centrality as compared to 0-10%.

### Rapidity dependence of the \(K^{*0}\) yield from the UrQMD model

The STAR experiment at RHIC has just finished data taking for phase II of its beam energy scan program. The data have been taken with upgraded detectors, providing an opportunity for measurements over a wider rapidity range (\(|y|<1.5\)) [35]. With the high-statistics data, the measurement of \(K^{*0}\) can be done as a function of rapidity to understand the possible effect of hadronic rescattering when moving away from mid-rapidity.
In Fig. 5, the dN/dy of the \(K^{*0}\) meson is plotted as a function of rapidity for both central and peripheral collisions at 11.5 and 19.6 GeV, respectively. A clear rapidity dependence is observed for the \(K^{*0}\) yield at all BES energies. However, a weaker dependence on \(\tau\) is observed in peripheral collisions than in central collisions. In order to elucidate the effect of rescattering with rapidity (\(y\)), in Fig. 6 the ratio of the \(K^{*0}\) yield (dN/dy) for \(\tau\)= 10 and 20 fm/c to that for \(\tau\)= 5 fm/c is plotted as a function of rapidity. For central collisions the ratio increases as one moves towards larger rapidity. This indicates that rescattering could be more dominant at mid-rapidity, which is expected, as the medium has a higher particle density in the mid-rapidity region. For \(\sqrt{s_{NN}}\)= 19.6 GeV, the ratio remains almost independent of \(y\), up to \(|y|<1.5\). With increasing \(\tau\) the ratio is more suppressed. However, for peripheral collisions the ratio remains almost independent of rapidity, which indicates that rescattering is not dominant in peripheral collisions.

## IV Summary

We presented a comparison of the \(K^{*0}\)/K ratio measured at mid-rapidity in various centralities at RHIC BES energies with the UrQMD and thermal models. The UrQMD model calculations are done by taking different lifetimes of the hadronic phase, ranging from 5-20 fm/c. We found that the \(K^{*0}/K\) ratio decreases with the increasing lifetime of the hadronic medium. One needs to consider a hadronic lifetime of \(\sim\) 10-20 fm/c to explain the data at \(\sqrt{s_{NN}}\) = 7.7-39 GeV in Au+Au collisions. Furthermore, the \(K^{*0}/K\) ratio from the thermal model, which does not include any hadronic rescattering, is consistent with data in the most peripheral collisions but over-predicts the ratio in central Au+Au collisions. This may suggest that the observed suppression of the \(K^{*0}/K\) ratio in central Au+Au collisions compared to peripheral collisions is due to the effect of hadronic rescattering suffered by the daughter particles of the \(K^{*0}\) resonance. The study of the \(\phi/K\) ratio from the UrQMD model further supports the idea that the daughters of the \(K^{*0}\) resonance may suffer hadronic rescattering in central Au+Au collisions. In the end, we have made a prediction for the rapidity distribution of the \(K^{*0}\) yield using the UrQMD model. The study from the UrQMD model suggests that the rescattering is more dominant in the mid-rapidity region. These predicted values can be compared with STAR BES-II results to gain more insight from the rapidity dependence study.

Figure 4: (Color online) The \(K^{*0}/K\) ratio vs. center-of-mass energy for central (0-10%) and peripheral (60-80%) Au+Au collisions at mid-rapidity [31], compared with the corresponding measurements from the thermal and UrQMD models.

Figure 5: (Color online) The \(p_{T}\)-integrated yield (dN/dy) of the \(K^{*0}\) meson vs rapidity for 0-10% and 60-80% centrality at \(\sqrt{s_{NN}}\)= 11.5 GeV (upper panel) and 19.6 GeV (lower panel), respectively.

Figure 6: (Color online) The \(p_{T}\)-integrated yield (dN/dy) of the \(K^{*0}\) meson for \(\tau\)= 10 and 20 fm/c, divided by the dN/dy for \(\tau\)= 5 fm/c, as a function of rapidity for 0-10% and 60-80% centrality.

**Acknowledgments**

AKS acknowledges discussions with Tribhuban Parida regarding thermal model calculations. SS acknowledges support from the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB34000000)
2302.10738
An inverse modeling method to estimate uncertain spatial configurations from 2D information and time-based visual discriminations
This paper focuses on a specific aspect of human visual discrimination from computationally generated solutions for CAAD ends. The bottleneck at work here concerns informational ratios of discriminative rates over generative ones. The amount of information that can be brought to a particular sensory modality for human perception is subject to bandwidth and dimensional limitations. This problem is well known in Brain-Computer Interfaces, where the flow of relevant information must be maintained through such interaction for applicative ends and adoption of use in many fields of human activity. While architectural modeling conveys a high level of complexity in its processes, let alone in the presentation of its generated design solutions, promises in applicative potentials of such interfaces must be made aware of these fundamental issues and need developments of appropriate sophistication. This paper addresses this informational bottleneck by introducing a method to retrieve spatial information from the rapid serial visual presentation of generated pictures. This method will be explained and defined as inverse modeling, based on inverse graphics, and its relation to human visual processing.
Pierre Cutellic
2023-02-21T15:40:00Z
http://arxiv.org/abs/2302.10738v1
An inverse modeling method to estimate uncertain spatial configurations from 2D information and time-based visual discriminations

###### Abstract

This paper focuses on a specific aspect of human visual discrimination from computationally generated solutions for CAAD ends. The bottleneck at work here concerns informational ratios of discriminative rates over generative ones. The amount of information that can be brought to a particular sensory modality for human perception is subject to bandwidth and dimensional limitations. This problem is well known in Brain-Computer Interfaces, where the flow of relevant information must be maintained through such interaction for applicative ends and adoption of use in many fields of human activity. While architectural modeling conveys a high level of complexity in its processes, let alone in the presentation of its generated design solutions, promises in applicative potentials of such interfaces must be made aware of these fundamental issues and need developments of appropriate sophistication. This paper addresses this informational bottleneck by introducing a method to retrieve spatial information from the rapid serial visual presentation of generated pictures. This method will be explained and defined as inverse modeling, based on inverse graphics, and its relation to human visual processing.

Keywords: Neurodesign, Design Computing and Cognition, Brain-Computer Interfaces, Generative Design, Computer Vision.

## 1 Introduction

Human capacities in natural communication that find applicative echoes in remote intervention, rationalization, heuristics, and adaptivity in the face of complexity and uncertainty have unparalleled performances compared to any state-of-the-art artificial model (Korteling et al., 2021). Stochastic problems, on the other hand, are legion in the industry, and the practical modeling of artificial intelligence has been proven to be of significant research interest across the spectrum of AEC fields (Mohammadpour and Asadi, 2019; Darko et al., 2020; Pan and Zhang, 2021). A virtuous coupling of both human and machine intelligence in guiding the diversity of sought performances in AEC technologies supports the right approach for more desirable outcomes, fluency, and literacy in both control and communication. However, numerous questions and bottlenecks exist at a general level of this hybrid model. They need to be answered in due time of research, both theoretically and practically. This paper focuses on a specific aspect of human visual discrimination from computationally generated solutions. The bottleneck at work here concerns informational ratios of discriminative rates over generative ones. The amount of information that can be brought to a particular sensory modality for human perception is subject to bandwidth and dimensional limitations (Wickens, 1974; Venturino and Eggemeier, 1987; Klingberg, 2000; Marois and Ivanoff, 2005; Riener, 2017). Exceeding these rates causes well-known fatigue and disengagement, which can be measured, in terms of perceived mental effort, at various levels from physiological and behavioral signals as a cognitive load (Sweller et al., 1998; Xie and Salvendy, 2000; Paas et al., 2003). This might lead to counterproductive effects when the speed and quantity of solutions produced by a computer cannot be processed at a similar pace to their generation through human-computer interaction.
This particular problem is well known in the field of Brain-Computer Interfaces (BCI), where the flow of relevant information must be maintained through such interaction for applicative ends and adoption of use in many areas of human activity (Lotte and Jeunet, 2018; Perdikis and Millan, 2020; Gramann et al., 2021). While architectural modeling conveys a high level of complexity in its processes, let alone in the presentation of its generated design solutions, promises in applicative potentials of such interfaces cannot escape these fundamental issues and need developments of appropriate sophistication. Based on previous research (Cutellic, 2019, 2021) and the case study of an ongoing project tailoring BCI methods for architectural modeling at early design phases, this paper will address this informational bottleneck by introducing a technique to retrieve spatial information from the Rapid Serial Visual Presentation (RSVP) of generated pictures (Spence and Witkowski, 2013; Lees et al., 2018). The visual task at hand consists of focusing on sequences of generated pictures and visually discriminating those which seem relevant to the concept of a "ROOM" (or which loosely describe an enclosing space). The visual discrimination is then further correlated with peculiar neural phenomena of study within the computational loop of a BCI (Figure 1). While the specifics of this experiment and the overall research project lie mainly outside of the scope of this paper, the specific interest here is to identify presented pictures that may express stronger correlations with this concept and to use them as sources of spatial information for subsequent designs. These pictures are necessarily framed by three constraints defined by minimal spatial cues: the monocular viewpoint from which they are presented, the aforementioned informational transfer rates, and the maintenance of a high degree of uncertainty to increase variance in the results of such interaction. The apparent trade-off that will be discussed throughout the presented method arises when one intends to model 3D information from insufficient and partial 2D information. The increased degree of uncertainty comes with increased intractability in modeling spatially complex 3D aggregates from 2D pictures containing low-level features. This method will be explained and defined as an inverse modeling method, based on inverse graphics and its relation to human visual processing. Such a model of reference is usually called Vision-as-Inverse-Graphics (Kulkarni and Whitney, 2015) or scene Analysis-By-Synthesis (Grenander, 1976, 1978), where prior geometric information generally serves as ground truth. We will focus here on the primary input information only available through what is shown to the eye; 3D correlates do not exist as priors.

## 2 Background

The study of early visual processes seeks to model the fast and efficient information recovery involved in cognitive functions of higher levels, such as object detection (Tomasi, 2006). One of the most basic and well-known feature extraction processes performed by the early visual system, and with computational equivalence, deals with the segmentation of textures to extract shape information (Gibson, 1950; Marr et al., 1982).
The so-called Shape From Texture problem in vision research focuses on modeling the recovery of 3D information of object surfaces from distorted texture segments which have been projected at specific angles onto a 2D picture plane, such as the retinal one (Gibson, 1950). This distortion, or gradient, is generally measured from assumptions about the pattern distribution across the texture segment, provided a non-distorted reference is available. Without such a reference, stochastic methods must be applied (Kanatani and Chou, 1989; Clerc and Mallat, 2002).

Figure 1: Photo of the conducted BCI experiment and its designed RSVP task for visual discrimination of spatial configurations.

In the case of low-level and abstract geometry processing, such as for planar surfaces, distortions can be assumed to be linearly distributed and homogeneous. This assumption becomes even more practical when projections are considered within a Euclidean visual space because of the linear kind of its transformations (Erkelens, 2017), and deterministic methods can then be applied. From a perceptual point of view, perspective projection models are considered the most ideal, convenient, and realistic compared to the transformations of physical and visual spaces, and may depict both distance and size estimation in monocular vision with invariance (Erkelens, 2017). When a texture contains a dominant oriented structure, perspective convergence can be defined, and a mathematical relationship can be established between that 2D convergence in the texture and the parametric rotations of a planar surface in 3D by exploiting local symmetries (Saunders and Knill, 2001). Such convergence in texture gradients has been shown to represent a further effective perceptual spatial cue for the extended generalization of ruled surfaces (Andersen et al., 1998; Gillam, 1968; Todd et al., 2005). Given the abstract character of the visual objects to be detected in the designed experiment (a room), sets of abstract and basic geometry, such as planar surfaces, represent the most practical approach to describing 3D spatial segments. Using the perspective projection model, their projected distortion can be assumed uniform and reflected as homogeneous pattern distributions. Since only simple linear deformations are supposed to be present in the generated visual stimuli, deterministic numerical estimations of inverse monocular perspective projections of the deformed planar surfaces will be used.

## 3 Methods

Procedural methods that convey visuospatial information with adequate geometry must be developed to provide a serial and tractable way to model randomly generated pictures composed of such texture gradients. It is assumed that, given a picture domain providing a projection plane and a singular viewpoint given by the observer position, all kinds of randomly positioned and rotated planar surfaces defining a spatial enclosure within a field of view would produce straight intersections delimiting the viewed portions of these planes, and that such a projection onto the picture domain would map to the geometric properties of Voronoi space partition diagrams (Voronoi, 1909; Aurenhammer and Klein, 2000). Accordingly, a generative 2D domain is set to provide for the subsequent multiple texture segments that compose the picture (Figure 2).
To deduce the segments from the diagram, all diagram cells must be closed, and their centroids must be randomly generated within an inset of the picture domain and at an offset from the previously generated points, so that boundaries remain perceptually visible within the picture domain. A minimal texture gradient, in the form of alternating black and white bands of equal dimensions (i.e., a square-wave grating), then takes advantage of the convex nature of the produced Voronoi cells and their bounding trapezoids (Figure 3). To generate the frequency bands, only one random integer parameter, arbitrarily ranging from 1 to 10, is used with a distribution strategy similar to that for generating the coordinates of points on a plane. Another binary random integer is used to define whether the first band is black or white, and their orientation is predefined by the boundary of each region. Both the random cell distribution and the texture frequencies account for variance in the produced image. The longest edge of each region and its largest opposing edge (i.e., with at least one other edge separating them and the most parallel) are used to orient the texture bands. Most of the time, the two selected edges are not parallel, and each band may linearly vary in width. If a region contains only three edges, the opposing edge is generated by offsetting the largest edge to the opposite vertex and recreating a region with at least four vertices. The generated frequency bands remain bounded by the Voronoi cell and are mapped onto the 2D picture with binary luminance values.

Figure 2: Relative generation of texture domains from a given 2D picture domain (Pp in red). A generative domain is set (Pg in blue) with a ratio to Pp width and height (PpW, PpH). Points are randomly placed within Pp (VIn) together with 8 points outside in their respective subdomains of Pg (VO0-7) to produce a closed Voronoi diagram containing all VIn. The resulting diagram provides for texture domains at each of its cells (S0 in green).

The procedural logic that defines the generation of texture segments on the picture plane is made to be reproducible and non-destructive. Given the same sets of geometrical elements, the same texture segments must be generated. Therefore, it must also account for rare cases of singularities where more than one element of the set may fit the selection criteria, and then apply further discrimination based on their distinctive signed coordinates relative to the picture center. Since this procedure is aimed at generative modeling studies of increasing complexity, this principle must also be maintained across iterative sequences. A generator encodes the procedural generation of these pictures, and each piece of generative information is stored in a dataset for retrieval and further modeling. Eventually, a token may serve as the seed for the generation of new texture segments, with additional random points updating the initial Voronoi diagram along the iterative sequence. A simple domain union is made between the new and existing texture segments. That way, previously generated texture segments are preserved, and only their clipping boundaries may be updated, regardless of whether the edges initially used for defining the texture have been altered (Figure 4).

Figure 4: Two samples of generated images with texture segments through an iterative sequence. Left: the image generated after 4 iterations contains 4 texture segments. Right: the same image iterated over after 9 iterations. Highlighted in green is the same texture segment being updated over iterations.
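The generation principle described above can be summarized in a short Python sketch. It is a minimal illustration under stated assumptions, not the project's actual generator: it assumes scipy is available, simplifies away the inset/offset constraints between points and the edge-pairing logic of Figure 3, and all function and parameter names are ours.

```python
import numpy as np
from scipy.spatial import Voronoi

def generate_domains(n_cells, width=1.0, height=1.0, inset=0.1, seed=42):
    """Closed Voronoi partition of the picture domain Pp (cf. Figure 2).

    Interior points (VIn) are drawn inside an inset of Pp; 8 outer points
    (VO0-7), placed in the surrounding subdomains of the generative domain
    Pg, close every interior cell.
    """
    rng = np.random.default_rng(seed)  # the seed plays the role of the token
    inner = rng.uniform([inset, inset],
                        [width - inset, height - inset], (n_cells, 2))
    outer = np.array([[-width, -height], [width / 2, -height],
                      [2 * width, -height], [-width, height / 2],
                      [2 * width, height / 2], [-width, 2 * height],
                      [width / 2, 2 * height], [2 * width, 2 * height]])
    vor = Voronoi(np.vstack([inner, outer]))
    cells = []
    for i in range(n_cells):  # only interior cells become texture domains;
        # with the 8 far outer points, interior regions are bounded (no -1 index)
        region = vor.regions[vor.point_region[i]]
        cells.append({
            "polygon": vor.vertices[region],   # closed cell boundary
            "f": int(rng.integers(1, 11)),     # band frequency parameter f in 1..10
            "order": int(rng.integers(0, 2)),  # first band black (0) or white (1)
        })
    return cells

print(generate_domains(4)[0]["f"])
```

Reusing the same seed reproduces the same diagram and band parameters, which reflects the reproducible, non-destructive logic described above.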
Figure 3: Left: selection of the longest edge (L0) and its opposing most-parallel edge (L1) to define the orientation and limit of the texture. Middle: selection of the longest connected edge (L2) and its opposite (L3) for each of the two previously selected edges, to divide them by a generated frequency parameter (f) and define the alignment of the bands. Right: random order of the bands (O), starting as 0-1 or 1-0. The remaining edges (L4, L5) form connected sets to clip the generated bands within the texture domain.

To recall the initial problem of inverse modeling from a monocular viewpoint (Palmer, 1999; Pizlo, 2001): for a given surface projected on a 2D plane, there is an infinite number of possible configurations in the 3D field of view (Figure 5). The missing information needed to position and orient a plane in a 3D domain is the distance from the viewpoint and the tilt and slant rotation angles. Slant is the angle between the line of sight and the surface normal vector measured perpendicular to the picture plane, and tilt is the rotation angle of the surface normal vector around the line of sight (Stevens, 1983). Texture gradients provide that missing information to perceive an uncertain but most probable surface orientation and position from a distance (Figure 6).

Figure 5: As seen from the eye (point in red), any projected surface (in green) on the 2D picture plane (as seen from the front view, top left) may actually be projected from an infinite number of possible configurations within the 3D field of view (as seen from the side view, top right). The three degrees of freedom which might be assessed from texture gradients on the picture plane to fix a plane in 3D are the slant (in blue) and tilt (in red) angles of the plane and the distance (D1 in black) from the picture plane, to be added to the eye distance from the picture plane (D0 in black), as seen from the iso view, bottom.

As previously introduced, the geometric information provided by a texture segment on the picture plane, which also serves as perceptual information, is a function of the texture distribution. Segment-wise, local symmetry can be exploited to extract the slant and tilt angles. Two vanishing points may be retrieved from the longest edges of the trapezoid orienting the pattern generation, and from their opposing ones generating its distribution (i.e., respectively L0, L1 and L2, L3 in Figure 3). Their bisectors intersect at the center of the transformations to be applied to the planar surface: the origin of the surface normal vector projected onto the picture plane. Following Ribeiro and Hancock (1999), a linear approximation of the local distortion due to perspective can be derived. The projected normal vector components (p, q) can be retrieved by measuring the angles between the bisectors and the XY axes of the picture plane. From there, the slant and tilt angles can be found. To serve as ground truth, a reference measure is used, taken as the biggest possibly perceived pattern from a picture (half of the diagonal of the presentation screen). That reference is put into the Thales reciprocity formula together with the measure of the distorted pattern along the bisector produced by the trapezoid edges generating the pattern distribution.
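As a worked illustration of this inverse step, the sketch below assumes the projected normal components (p, q) have already been measured from the bisector angles, as in Figure 6. The slant and tilt relations used are the standard gradient-space ones, and the distance formula is one plausible reading of the intercept-theorem (Thales) reciprocity described above; the exact linear approximation of Ribeiro and Hancock (1999) is not reproduced here, and all names and sample values are illustrative.

```python
import math

def surface_pose(p, q, pattern_len, ref_len, eye_dist):
    """Slant/tilt from projected normal components (p, q), plus a depth
    estimate from the ratio of a measured pattern to a reference pattern.

    pattern_len: measured size Ls of the distorted pattern along the bisector
    ref_len:     reference measure Lr (half the screen diagonal)
    eye_dist:    viewer-to-screen distance D0
    """
    slant = math.atan(math.hypot(p, q))  # angle between surface normal and line of sight
    tilt = math.atan2(q, p)              # rotation of the normal about the line of sight
    # Similar triangles: a pattern of reference size at depth D0 shrinks to
    # pattern_len at depth D0 + D1, hence the reciprocal ratio below.
    d1 = eye_dist * (ref_len / pattern_len) - eye_dist
    return slant, tilt, d1

slant, tilt, d1 = surface_pose(p=0.4, q=0.2, pattern_len=120, ref_len=600, eye_dist=570)
print(round(math.degrees(slant), 1), round(math.degrees(tilt), 1), round(d1, 1))
# 24.1 26.6 2280.0
```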
The resulting distance is used for translating the rotated surface along the line of sight, relative to the other texture segments. The resulting planes must then be extended to fit their visible boundaries, punctuated by the extended lines of sight passing through every vertex of the texture segment on the picture plane. The monocular perspective may only assume that a plane behind another may be occluded; it is then extended until it meets the orthogonal projection of the boundary of the picture plane or simply intersects another plane. Similarly, a possibly occluding plane must stop at its sight boundaries for an occluded segment to be seen.

## 4 Results

For reproducibility of the model under variance, random samples have been taken from the dataset of generated pictures (Figure 7). Since similar transformations must be found for identical texture segments regardless of their boundary conditions and appearance in the iterative process, the samples have been taken at different steps of the same sequence. Since the model must also be able to recover these transformations across the entire range of textures possibly generated under these design principles, samples have been taken from miscellaneous sequences. For every transformed sample, identical transformations were found for identical texture segments with different boundaries, which were found to share the same plane in the inverse 3D space.

Figure 6: Left, front view, in the picture plane. The point of transformation V1 is retrieved by intersecting the bisectors B0 and B1. B0 and B1 pass respectively through the vanishing points V2 and V3 retrieved from the trapezoid edges generating the texture (L0, L1 and L2, L3). The linear equation that approximates the slant and tilt angles can be retrieved by measuring the angles alpha and beta between the bisectors and the axes of the picture plane. A distance measure Ls of the texture is taken along B1. Right, side view, in the plane of the line of sight passing through the viewpoint origin V0 and V1. The distance D1 along this line, used to position the plane at V4, is deduced from the reference measure Lr by reciprocity.

The transformations were equally successfully applied to every other configuration presented in other generated sequences and matched the perceptual evaluation of their orientation and relative distance as per the developed method. As expected, plane intersections vary greatly across generated samples. Since information regarding boundary conditions cannot be retrieved from the pictures, or, if so, could be found in conflict with the present information regarding planar transformations, the former is considered to result from the latter. The retrieved boundary conditions become either the edge limiting a planar surface occluding geometries in the background and leaving a gap in between, or planar intersections along a line or a point. From these resulting conditions, procedural geometric operations have been deduced and systematically applied to generate a CAD model and fabricate a physically tangible mock-up. Depending on the degree of control involved at this modeling stage, these features could be discussed further for designing pre-specific constraints or their parametrization for subsequent design explorations.
## 5 Conclusion and Outlook

This paper presents a method to design an inverse geometric model of 3D planar geometries from 2D perceptual information, under a certain degree of uncertainty, and for the rapid presentation of early design scenarios. Its low level of 2D information and its presentation pace aim at developing applicative methods in BCI for design and architectural modeling that support increased fluency between humans and computers. Accordingly, minimal and basic 2D texture information, which does not require large amounts of time to generate procedurally, has been used to model the retrieval of 3D information. The core of such an approach lies in two basic assumptions of productivity, driven by cumulative empirical findings from the cognitive science literature: the generative productivity of computers to output solutions from large dimensional spaces, and the discriminative productivity of humans to output responses of higher dimensions from reduced ones. Intractable problems due to a lack of sufficient and timely information may be approached heuristically and iteratively.

Figure 7: Two random samples of 2D generated pictures and their 3D planar geometries transformed through the inverse model (top and bottom). The left and middle columns show a comparison between 2D/3D for the corresponding generative steps 6 and 12. The third column (right) shows a comparison between 3D surfaces from both steps of the same generative sequence. Highlighted in green is a corresponding texture patch for a given sequence and across steps. For reasonable productions, surfaces have been bounded to a standard enclosing box onto which trimmed surface regions have been projected for visual continuity along the monocular perspective.

Several assumptions have been made during the development of this method and are kept here for discussion, because their relevance mainly concerns the degree of generalization for further research. The basic assumption which allows for correlating a geometric inverse model with perceptual cues examines the hypothesis that, regardless of the complexity of visual inputs (i.e., natural images), the principle of inverse geometric modeling operates naturally at the level of human perceptual segmentation. In addition, monocular models are known to leave a significant amount of uncertainty. For example, a perceived slant angle may vary between two different instances of the same texture segment (Blake et al., 1993; Cutting and Millard, 1984; Knill, 1998). The estimation error in perceived angles may also vary by angle thresholds and pattern differences. Additionally, the statistical nature of such information is related to the presentation context (e.g., the overall composition of all texture segments, the modality from person to person, etc.). Because RSVP integrates by design the repetitive presentation of similar instances on an individual basis, and because parameters of uncertainty in the perception of texture gradients continue to accumulate in the literature, a certain degree of confidence could be further developed for these 3D transformations. Regarding the remaining uncertainty in relative distance estimations and surface occlusions, further investigation into providing additional viewpoints could give the method more accuracy.
Optimal cue combination methods can support this development, and the deployment of such a process could be thought of as either stationary (presentation of multiple viewpoints of the same scene on a static screen) or mobile (in the context of augmented reality devices). Finally, generated texture gradients should account for visual discomfort under prolonged presentation modalities such as RSVP. Artificially generated distributions could mitigate this effect by eventually accommodating noise found in natural images. The generation of such noise-based gradients would involve applying generalized methods, including stochastically distributed patterns and non-planar surface geometries.

## Acknowledgments

This research is part of the Neuromod project (105213_192500), fully funded by the Swiss National Science Foundation SNSF Project Funding Div. 1-2 for Humanities, Social sciences, Mathematics, Natural, and Engineering sciences.

## References

Andersen, G. J., Braunstein, M. L., & Saidpour, A. (1998). _The perception of depth and slant from texture in three-dimensional scenes_. Perception, 27(9), 1087-1106. [https://doi.org/10.1068/p271087](https://doi.org/10.1068/p271087)

Aurenhammer, F., & Klein, R. (2000). _Voronoi Diagrams_. Handbook of Computational Geometry, 5(10), 201-290.

Blake, A., Bulthoff, H. H., & Sheinberg, D. (1993). _Shape from texture: Ideal observers and human psychophysics_. Vision Research, 33(12), 1723-1737. [https://doi.org/10.1016/0042-6989(93)90037-w](https://doi.org/10.1016/0042-6989(93)90037-w)

BuHamdan, S., Alwisy, A., & Bouferguene, A. (2021). _Generative systems in the architecture, engineering, and construction industry: A systematic review and analysis_. International Journal of Architectural Computing, 19(3), 226-249. [https://doi.org/10.1177/1478077120934126](https://doi.org/10.1177/1478077120934126)

Clerc, M., & Mallat, S. (2002). _The texture gradient equation for recovering shape from texture_. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4), 536-549. [https://doi.org/10.1109/34.993560](https://doi.org/10.1109/34.993560)

Cutellic, P. (2019). _Towards encoding shape features with visual event-related potential based brain-computer interface for generative design_. International Journal of Architectural Computing, 17(1), 88-102. [https://doi.org/10.1177/1478077119832465](https://doi.org/10.1177/1478077119832465)

Cutellic, P. (2021). _Growing Shapes with a Generalised Model from Neural Correlates of Visual Discrimination_. In P. F. Yuan, J. Yao, C. Yan, X. Wang, & N. Leach (Eds.), Proceedings of the 2020 DigitalFUTURES (pp. 68-78). Springer. [https://doi.org/10.1007/978-981-33-4400-6_7](https://doi.org/10.1007/978-981-33-4400-6_7)

Cutting, J. E., & Millard, R. T. (1984). _Three gradients and the perception of flat and curved surfaces_. Journal of Experimental Psychology: General, 113(2), 198-216. [https://doi.org/10.1037/0096-3445.113.2.198](https://doi.org/10.1037/0096-3445.113.2.198)

Darko, A., Chan, A. P. C., Adabre, M. A., Edwards, D. J., Hosseini, M. R., & Ameyaw, E. E. (2020). _Artificial intelligence in the AEC industry: Scientometric analysis and visualization of research activities_. Automation in Construction, 112, 103081. [https://doi.org/10.1016/j.autcon.2020.103081](https://doi.org/10.1016/j.autcon.2020.103081)

Emaminejad, N., North, A. M., & Akhavian, R. (2022).
_Trust in AI and Implications for the AEC Research: A Literature Analysis_. ArXiv:2203.03847 [Cs]. [http://arxiv.org/abs/2203.03847](http://arxiv.org/abs/2203.03847)

Erkelens, C. J. (2017). _Perspective Space as a Model for Distance and Size Perception_. I-Perception, 8(6), 2041669517735541. [https://doi.org/10.1177/2041669517735541](https://doi.org/10.1177/2041669517735541)

Gibson, J. J. (1950). _The perception of the visual world_ (pp. xii, 242). Houghton Mifflin.

Gillam, B. J. (1968). _Perception of slant when perspective and stereopsis conflict: Experiments with aniseikonic lenses_. Journal of Experimental Psychology, 78(2), 299-305. [https://doi.org/10.1037/h0026271](https://doi.org/10.1037/h0026271)

Gramann, K., McKendrick, R., Baldwin, C., Roy, R. N., Jeunet, C., Mehta, R. K., & Vecchiato, G. (2021). _Grand Field Challenges for Cognitive Neuroergonomics in the Coming Decade_. Frontiers in Neuroergonomics, 2. [https://www.frontiersin.org/article/10.3389/fnrgo.2021.643969](https://www.frontiersin.org/article/10.3389/fnrgo.2021.643969)

Grenander, U. (1976). _Pattern Synthesis: Lectures in Pattern Theory_. Springer-Verlag. [https://doi.org/10.1007/978-1-4612-6369-2](https://doi.org/10.1007/978-1-4612-6369-2)

Grenander, U. (1978). _Pattern Analysis: Lectures in Pattern Theory II_. Springer.

Kanatani, K., & Chou, T.-C. (1989). _Shape from texture: General principle_. Artificial Intelligence, 38(1), 1-48. [https://doi.org/10.1016/0004-3702(89)90066-0](https://doi.org/10.1016/0004-3702(89)90066-0)

Klingberg, T. (2000). _Limitations in information processing in the human brain: Neuroimaging of dual task performance and working memory tasks_. In Progress in Brain Research (Vol. 126, pp. 95-102). Elsevier. [https://doi.org/10.1016/S0079-6123(00)26009-3](https://doi.org/10.1016/S0079-6123(00)26009-3)

Knill, D. C. (1998). _Discrimination of planar surface slant from texture: Human and ideal observers compared_. Vision Research, 38(11), 1683-1711. [https://doi.org/10.1016/S0042-6989(97)00325-8](https://doi.org/10.1016/S0042-6989(97)00325-8)

Koch, E., Baig, F., & Zaidi, Q. (2018). _Picture perception reveals mental geometry of 3D scene inferences_. Proceedings of the National Academy of Sciences, 115(30), 7807-7812. [https://doi.org/10.1073/pnas.1804873115](https://doi.org/10.1073/pnas.1804873115)

Korteling, J. E., van de Boer-Visschedijk, G. C., Blankendaal, R. A. M., Boonekamp, R. C., & Eikelboom, A. R. (2021). _Human- versus Artificial Intelligence_. Frontiers in Artificial Intelligence, 4. [https://www.frontiersin.org/article/10.3389/frai.2021.622364](https://www.frontiersin.org/article/10.3389/frai.2021.622364)

Kulkarni, T. D., Whitney, W., Kohli, P., & Tenenbaum, J. B. (2015). _Deep Convolutional Inverse Graphics Network_. ArXiv:1503.03167 [Cs]. [http://arxiv.org/abs/1503.03167](http://arxiv.org/abs/1503.03167)

Lees, S., Dayan, N., Cecotti, H., McCullagh, P., Maguire, L., Lotte, F., & Coyle, D. (2018). _A review of rapid serial visual presentation-based brain-computer interfaces_. Journal of Neural Engineering, 15(2), 021001. [https://doi.org/10.1088/1741-2552/aa9817](https://doi.org/10.1088/1741-2552/aa9817)

Lotte, F., & Jeunet, C. (2018). _Defining and quantifying users' mental imagery-based BCI skills: A first step_. Journal of Neural Engineering, 15(4), 046030.
[https://doi.org/10.1088/1741-2552/aac577](https://doi.org/10.1088/1741-2552/aac577)

Manzoor, B., Othman, I., & Pomares, J. C. (2021). _Digital Technologies in the Architecture, Engineering and Construction (AEC) Industry--A Bibliometric-Qualitative Literature Review of Research Activities_. International Journal of Environmental Research and Public Health, 18(11), Article 11. [https://doi.org/10.3390/ijerph18116135](https://doi.org/10.3390/ijerph18116135)

Marois, R., & Ivanoff, J. (2005). _Capacity limits of information processing in the brain_. Trends in Cognitive Sciences, 9(6), 296-305. [https://doi.org/10.1016/j.tics.2005.04.010](https://doi.org/10.1016/j.tics.2005.04.010)

Marr, D., Poggio, T. A., & Ullman, S. (1982). _Vision: A Computational Investigation into the Human Representation and Processing of Visual Information_. The MIT Press.

Maruya, A., & Zaidi, Q. (2020). _Mental geometry of perceiving 3D size in pictures_. Journal of Vision, 20(10), 4. [https://doi.org/10.1167/jov.20.10.4](https://doi.org/10.1167/jov.20.10.4)

Mohammadpour, A., Karan, E., & Asadi, S. (2019). _Artificial Intelligence Techniques to Support Design and Construction_. ISARC Proceedings, 1282-1289.

O'Hare, L., & Hibbard, P. B. (2011). _Spatial frequency and visual discomfort_. Vision Research, 51(15), 1767-1777. [https://doi.org/10.1016/j.visres.2011.06.002](https://doi.org/10.1016/j.visres.2011.06.002)

Oliver, S. (2019). _Communication and trust: Rethinking the way construction industry professionals and software vendors utilise computer communication mediums_. Visualization in Engineering, 7(1), 1. [https://doi.org/10.1186/s40327-019-0068-y](https://doi.org/10.1186/s40327-019-0068-y)

Paas, F., Tuovinen, J. E., Tabbers, H., & Van Gerven, P. W. M. (2003). _Cognitive Load Measurement as a Means to Advance Cognitive Load Theory_. Educational Psychologist, 38(1), 63-71. [https://doi.org/10.1207/S15326985EP3801_8](https://doi.org/10.1207/S15326985EP3801_8)

Palmer, S. E. (1999). _Vision science: Photons to phenomenology_ (3rd printing). MIT Press.

Pan, Y., & Zhang, L. (2021). _Roles of artificial intelligence in construction engineering and management: A critical review and future trends_. Automation in Construction, 122, 103517. [https://doi.org/10.1016/j.autcon.2020.103517](https://doi.org/10.1016/j.autcon.2020.103517)

Perdikis, S., & Millan, J. del R. (2020). _Brain-Machine Interfaces: A Tale of Two Learners_. IEEE Systems, Man, and Cybernetics Magazine, 6(3), 12-19. [https://doi.org/10.1109/MSMC.2019.2958200](https://doi.org/10.1109/MSMC.2019.2958200)

Pizlo, Z. (2001). _Perception viewed as an inverse problem_. Vision Research, 41(24), 3145-3161. [https://doi.org/10.1016/S0042-6989(01)00173-0](https://doi.org/10.1016/S0042-6989(01)00173-0)

Ribeiro, E., & Hancock, E. R. (1999). _Improved orientation estimation for texture planes using multiple vanishing points_.

Riener, A. (2017). _Chapter 19--Subliminal Perception or "Can We Perceive and Be Influenced by Stimuli That Do Not Reach Us on a Conscious Level?"_ In M. Jeon (Ed.), Emotions and Affect in Human Factors and Human-Computer Interaction (pp. 503-538). Academic Press.
[https://doi.org/10.1016/B978-0-12-801851-4.00019-7](https://doi.org/10.1016/B978-0-12-801851-4.00019-7)

Saunders, J. A., & Knill, D. C. (2001). _Perception of 3D surface orientation from skew symmetry_. Vision Research, 41(24), 3163-3183. [https://doi.org/10.1016/S0042-6989(01)00187-0](https://doi.org/10.1016/S0042-6989(01)00187-0)
2302.11426
Mining compact high utility sequential patterns
High utility sequential pattern mining (HUSPM) aims to mine all patterns that yield a high utility (profit) in a sequence dataset. HUSPM is useful for several applications such as market basket analysis, marketing, and website clickstream analysis. In these applications, users may also consider high utility patterns that frequently appear in the dataset to obtain more fruitful information. However, this task is computationally expensive since algorithms may generate a combinatorially explosive number of candidates that may be redundant or of low importance. To reduce complexity and obtain a compact set of frequent high utility sequential patterns (FHUSPs), this paper proposes an algorithm named CHUSP for mining closed frequent high utility sequential patterns (CHUSPs). Such patterns keep a concise representation while preserving the same expressive power as the complete set of FHUSPs. The proposed algorithm relies on a CHUS data structure to maintain information during mining. It uses three pruning strategies to eliminate low-utility and non-frequent patterns early, thereby reducing the search space. An extensive experimental evaluation was performed on six real-life datasets to evaluate the performance of CHUSP in terms of execution time, memory usage, and the number of generated patterns. Experimental results show that CHUSP can efficiently discover the compact set of CHUSPs under different user-defined thresholds.
Tai Dinh, Philippe Fournier-Viger, Huynh Van Hong
2023-02-22T15:05:18Z
http://arxiv.org/abs/2302.11426v1
# Mining compact high utility sequential patterns

###### Abstract

High utility sequential pattern mining (HUSPM) aims to mine all patterns that yield a high utility (profit) in a sequence dataset. HUSPM is useful for several applications such as market basket analysis, marketing, and website clickstream analysis. In these applications, users may also consider high utility patterns that frequently appear in the dataset to obtain more fruitful information. However, this task is computationally expensive since algorithms may generate a combinatorially explosive number of candidates that may be redundant or of low importance. To reduce complexity and obtain a compact set of frequent high utility sequential patterns (FHUSPs), this paper proposes an algorithm named CHUSP for mining closed frequent high utility sequential patterns (CHUSPs). Such patterns keep a concise representation while preserving the same expressive power as the complete set of FHUSPs. The proposed algorithm relies on a CHUS data structure to maintain information during mining. It uses three pruning strategies to eliminate low-utility and non-frequent patterns early, thereby reducing the search space. An extensive experimental evaluation was performed on six real-life datasets to evaluate the performance of CHUSP in terms of execution time, memory usage, and the number of generated patterns. Experimental results show that CHUSP can efficiently discover the compact set of CHUSPs under different user-defined thresholds.

keywords: data mining, high utility sequential patterns, closed high utility sequential patterns

## 1 Introduction

Frequent high utility sequential pattern mining (FHUSPM) finds sequential patterns that yield a high utility and frequently appear in sequence datasets. Such patterns appear commonly in various real-life applications such as market basket analysis, website clickstream analysis, customer behavior analysis, and stock market analysis. In market basket analysis, when analyzing customer transactions, a retail store manager may be interested in finding the high utility patterns that appear regularly and have a high sales volume. Detecting these purchase patterns is useful for understanding customers' behavior and thus adopting effective sales and marketing strategies. For example, high-end electronic devices and jewelry may generate more profit than many daily-life products. However, they may be sold infrequently, and their sales volumes may fluctuate greatly. If retailers know that some products yield a high profit and are frequently purchased, they can adjust their business strategies for these items to increase sales and improve inventory management. In marketing, marketers want to know which sets of products are frequently sold with high revenue. They can better understand customers' preferences and then design efficient marketing strategies. In website clickstream analysis, the number of clicks or the time spent on each web page or user interface (UI) element can be viewed as the quantities of items in sequences. Thus, administrators can discover the elements where users spend most of their time. Based on that, administrators can improve functions and the UI to better suit these important behaviors. Although the problem of HUSPM and its extensions has been studied in several previous works [1; 2; 3; 4; 5; 6; 7], these algorithms discover the full set of HUSPs, which requires exponential complexity. Therefore, in this paper, we extend the concept of closed patterns from frequent sequential pattern mining [8] to HUSPM.
A closed (frequent) high utility sequential pattern (CHUSP) is a HUSP having no proper super-sequence that is a HUSP and appears in the same number of sequences. Such patterns are also meaningful for real-life applications since they are the largest FHUSPs common to groups of customers. Detecting the largest sets of items that yield high profit and are frequently sold helps sellers better understand what customers need, adapt their business and marketing strategies, and improve their services. One work [9] has focused on this topic in the literature. However, the computational complexity of this algorithm is still high. In addition, its experimental evaluation was conducted on small-scale datasets with few differences in characteristics. Lastly, this work did not provide an application accompanying its proposed algorithms. The above observations motivated the design of an efficient algorithm that can mine CHUSPs. The major contributions and innovations of this paper are highlighted as follows:

* We propose an efficient pattern-growth-based algorithm named CHUSP to discover the set of CHUSPs, which is interesting for some tasks. CHUSP mines the patterns from the dataset in a divide-and-conquer approach. It first derives the set of size-1 quantitative sequences, and for each sequence \(p\), it derives \(p\)'s conditional (or projected) dataset by partitioning the dataset and recursively mining the projected dataset. An innovation of CHUSP is that the algorithm checks the "closed" property of each generated pattern at every round of the mining process. Thanks to this property, at the end of the mining process, we obtain a small set of CHUSPs. The algorithm uses two pruning strategies to eliminate low-utility and non-frequent patterns early. Thus, the algorithm achieves good performance on large-scale datasets.
* An extensive experiment was conducted on real datasets to evaluate the performance of CHUSP in terms of runtime, memory usage, and the number of generated patterns. Experimental results show that CHUSP can efficiently discover all CHUSPs. In addition, its performance is independent of the datasets' characteristics as long as they contain utility information, i.e., it can work on both quantitative transaction and quantitative sequence datasets.
* We provide an application of CHUSP. The application can be used on any dataset whose format matches the input requirement.

The rest of this paper is organized as follows. Section 2 reviews related work; Section 3 introduces the preliminaries; Section 4 describes the proposed CHUSP algorithm; Section 5 shows a comparative experiment; Section 6 concludes and outlines directions for future work.

## 2 Related work

High utility sequential pattern mining aims to find all sequential patterns with a utility greater than or equal to a minimum utility threshold _minUtil_ in a sequence dataset. HUSPM is quite challenging as the utility measure is neither monotone nor anti-monotone, unlike the support measure traditionally used in SPM. Numerous algorithms have been proposed for HUSPM and its extensions [1; 2; 3; 4; 5; 6; 7; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. A thorough survey of HUSPM can be found in [21]. Yin et al. [1] proposed an algorithm named USpan for HUSPM. This algorithm builds a lexicographic q-sequence tree (LQS-Tree) to maintain all generated sequences during the mining process.
In addition, it uses two concatenation mechanisms, I-Concatenation and S-Concatenation, in combination with two pruning strategies: width and depth pruning. Wang et al. [2] proposed an algorithm named HUS-Span. The algorithm uses a utility-chain structure to represent the search space of HUSPM. It also introduces two tight utility upper bounds, the prefix extension utility (PEU) and the reduced sequence utility (RSU), as well as two companion pruning strategies to identify HUSPs. The experimental evaluation showed that HUS-Span outperforms USpan in terms of execution time. The reason is that, by using PEU and RSU, HUS-Span generates fewer candidates than USpan. Le et al. [3] proposed two algorithms named AHUS and AHUS-P. The algorithms use a pure array structure (PAS) to represent sequences. This data structure is very compact and contains sufficient information on sequences. Thus, it can reduce memory usage and effectively support the mining process. Moreover, the two algorithms use two upper bounds to prune the search space. AHUS-P uses a parallel mining strategy to discover patterns concurrently by sharing the search space among multiple processors. Each processor independently performs its mining task and does not wait for other tasks. AHUS-P is more efficient than the serial AHUS algorithm on large-scale datasets. Lin et al. [22] proposed a sequence-utility (SU)-Chain algorithm for HUSPM. A lexicographic enumeration (LE)-tree is used in the algorithm to represent the search space of promising candidates. A projecting approach is used to accelerate the generation of promising candidates. In addition, multiple pruning strategies are used to discard information not relevant to the mining process. For frequent high utility sequential pattern mining, Gupta et al. [23] proposed a hybrid pattern-growth-based algorithm named HUFI-SPM to mine sequential patterns satisfying both frequency and utility thresholds. It uses a support-utility table to maintain information on support and utility at various time intervals. It uses sequence support as the downward closure property to reduce the search space. Ni et al. [24] proposed an algorithm named FHUSOM to mine architecture design requirements from operational scenario data. The algorithm uses a data structure called FHUDS to keep all patterns and combines four pruning strategies, called SWU, PEU, RSU, and MFP, to reduce the search space. The algorithm supports the design of an integrated multi-platform mission system (MPMS) architecture and is efficient in the process of integrated architecture design. For closed high utility sequential pattern mining, Truong et al. [9] proposed an algorithm named FMaxCloHUSM to mine the set of frequent maximal and closed high utility sequences. The algorithm uses the width and depth pruning strategies to remove low utility sequences, and a novel local pruning strategy named LPCHUS to remove non-closed and non-maximal high utility sequences. FMaxCloHUSM uses a data structure called SIDUL to represent the dataset in a vertical format and to calculate utility information of sequences and their extensions.

## 3 Preliminaries

Let \(I=\{i_{1},i_{2},\ldots,i_{m}\}\) be a set of \(m\) distinct items. A quantitative item (q-item) is a pair of the form \((i,q)\), where \(i\in I\) and \(q\) is a positive number representing how many units of this item were purchased (internal utility). The quantity of a q-item \(i\) in a q-sequence \(s\) is denoted as \(q(i,s)\).
Each item \(i_{k}\in I\) (\(1\leq k\leq m\)) is associated with a weight denoted as \(p(i_{k})\), representing the unit profit or importance (external utility) of \(i_{k}\). A quantitative itemset (q-itemset) \(X=[(i_{1},q_{1})(i_{2},q_{2})\ldots(i_{k},q_{k})]\) is a set of one or more q-items, where \((i_{j},q_{j})\) is a q-item (\(1\leq j\leq k\)). In the following, brackets are omitted for brevity if a q-itemset contains only one q-item. In addition, without loss of generality, assume that q-items in a q-itemset are sorted according to the lexicographical order (e.g., \(a\prec b\prec c\prec d\prec e\prec f\prec g\)). A quantitative sequence (q-sequence) \(s\) is an ordered list of q-itemsets \(s=\langle I_{1}I_{2}\ldots I_{l}\rangle\), where \(I_{j}\) (\(1\leq j\leq l\)) is a q-itemset. A quantitative sequence dataset is a set of \(n\) q-sequences _SDB_\(=\{s_{1},s_{2},\ldots,s_{n}\}\), where each sequence \(s_{sid}\in SDB\) (\(1\leq sid\leq n\)) is a subset of \(I\), and \(sid\) is its unique identifier.

**Example 1**.: Table 1 shows the items and their respective unit profits appearing in an online retail store. In this example, the external utilities of the items \(a\), \(b\), \(c\), \(d\), \(e\), \(f\), and \(g\) are 2, 5, 3, 4, 6, 1, and 7, respectively. Table 2 shows five shopping q-sequences with quantities, having the sequence identifiers (\(sid\)) 1 to 5 (denoted \(s_{1}\) to \(s_{5}\)). Each q-sequence comprises one or more transactions (q-itemsets). Each transaction in a q-sequence has a unique transaction identifier \(tid\) and consists of one or many q-items. The q-sequence \(s_{4}\) contains three q-itemsets \([(b,1)(c,1)(e,2)(g,5)]\), \([(b,2)(c,1)(e,4)]\) and \([(a,3)(b,2)(e,2)(f,2)]\), in which the internal utility of the q-item \(e\) in the first, second and third q-itemsets is 2, 4 and 2, respectively. We use the notation \(i_{tid}\) to refer to the occurrence of the item \(i\) in the \(tid\)-th transaction of a q-sequence. In \(s_{2}\), the notation \(c_{1}\) means that the q-item \(c\) appears in the first q-itemset of \(s_{2}\), that is \((c,2)\), while \(c_{3}\) represents \((c,1)\) in the third q-itemset of \(s_{2}\), and \(c_{1}\prec c_{3}\) in \(s_{2}\).

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline item & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline unit profit & 2 & 5 & 3 & 4 & 6 & 1 & 7 \\ \hline \end{tabular} \end{table} Table 1: External utility values

**Definition 1** (The size and length of a q-sequence).: The size of \(s\) is the number of q-itemsets it contains. The length of \(s\) is the number of q-items in \(s\). In other words, \(s\) is called a k-q-sequence if and only if there are \(k\) q-items in \(s\), i.e., \(|s|=k\), where \(|s|=\sum_{I_{j}\in s}|I_{j}|\) and \(|I_{j}|\) is the total number of q-items in the q-itemset \(I_{j}\). For example, the size and length of \(s_{4}\) in Table 2 are 3 and 11, respectively.

**Definition 2** (q-itemset containment).: Let \(X_{a}=[(i_{a_{1}},q_{a_{1}})(i_{a_{2}},q_{a_{2}})\ldots(i_{a_{m}},q_{a_{m}})]\) and \(X_{b}=[(i_{b_{1}},q_{b_{1}})(i_{b_{2}},q_{b_{2}})\ldots(i_{b_{m^{\prime}}},q_{b_{m^{\prime}}})]\) be two q-itemsets, where \(i_{a_{s}}\in I\) (\(1\leq s\leq m\)) and \(i_{b_{s^{\prime}}}\in I\) (\(1\leq s^{\prime}\leq m^{\prime}\)). If there exist positive integers \(1\leq j_{1}<j_{2}<\ldots<j_{m}\leq m^{\prime}\) such that \(i_{a_{k}}=i_{b_{j_{k}}}\wedge q_{a_{k}}=q_{b_{j_{k}}}\) for all \(1\leq k\leq m\), then \(X_{b}\) is said to contain \(X_{a}\), denoted as \(X_{a}\subseteq X_{b}\).
For example, the q-itemset \([(a,1)(b,1)(e,3)]\) in \(s_{3}\) contains \((a,1)\), \((b,1)\), \((e,3)\), \([(a,1)(b,1)]\), \([(a,1)(e,3)]\), \([(b,1)(e,3)]\), and \([(a,1)(b,1)(e,3)]\).

**Definition 3** (q-subsequence).: Given q-sequences \(A=\langle A_{1}A_{2}\ldots A_{n}\rangle\) and \(B=\langle B_{1}B_{2}\ldots B_{n^{\prime}}\rangle\) (\(n\leq n^{\prime}\)), where \(A_{\alpha}\) and \(B_{\beta}\) are q-itemsets (\(1\leq\alpha\leq n\), \(1\leq\beta\leq n^{\prime}\)), if there exist positive integers \(1\leq j_{1}<j_{2}<\ldots<j_{n}\leq n^{\prime}\) such that \(A_{1}\subseteq B_{j_{1}}\), \(A_{2}\subseteq B_{j_{2}}\), ..., \(A_{n}\subseteq B_{j_{n}}\), then \(A\) is a q-subsequence of \(B\) and \(B\) is a q-supersequence of \(A\), denoted as \(A\subseteq B\). For example, \(\langle[(a,5)(c,2)(g,5)]\rangle\) and \(\langle[(a,3)(b,1)(c,3)(f,2)]\rangle\) are two q-subsequences of \(s_{1}\).

**Definition 4** (Utility of a q-sequence).: The utility of a q-item \((i,q)\) in \(s\) is denoted and defined as \(u(i,q)=p(i)\times q\). The utility of a q-itemset \(X\) in \(s\) is denoted and defined as \(u(X)=\sum_{k=1}^{m}u(i_{k},q_{k})\). The utility of \(s\) is denoted and defined as \(u(s)=\sum_{j=1}^{n}u(X_{j})\).

**Example 2**.: The utility of \(g\) in \(s_{1}\) (i.e., \(g_{1}\)) is \(u(g,5)=7\times 5=35\). The utility of \([(a,5)(c,2)(g,5)]\) in \(s_{1}\) is \(u([(a,5)(c,2)(g,5)])=u(a,5)+u(c,2)+u(g,5)=2\times 5+3\times 2+7\times 5=51\). The utility of \(s_{1}\) is \(u(s_{1})=u([(a,5)(c,2)(g,5)])+u([(a,3)(b,1)(c,3)(f,2)])+u([(b,3)(d,2)(e,2)])=51+22+35=108\).

**Definition 5** (Utility matrix).: A utility matrix of \(s\) is an \(m\times n\) matrix, where \(m\) and \(n\) are the number of q-items and q-itemsets (transactions) in \(s\), respectively. The element at position \((k,j)\) (\(0\leq k<m\), \(0\leq j<n\)) of the utility matrix stores the utility \(u(i_{k},q)\) of the q-item \((i_{k},q)\) in the q-itemset \(j\). Table 3 shows the utility matrix of \(s_{3}\) for the sequence dataset \(SDB\) depicted in Table 2.

**Definition 6** (Remaining utility).: Given \(s=\langle X_{1}X_{2}\ldots X_{n}\rangle\), where each \(X_{j}\) is a q-itemset of \(s\), the remaining utility of a q-item \(i\) in \(s\) is the sum of the utilities of all q-items appearing after \(i\) in \(s\), denoted and defined as \(ru(i,s)=\sum_{i^{\prime}\in s\,\wedge\,i\prec i^{\prime}}u(i^{\prime})\). For example, the values \(ru(a_{1},s_{3})\), \(ru(b_{1},s_{3})\) and \(ru(b_{3},s_{3})\) are respectively equal to 89, 84 and 18.

**Definition 7** (Remaining utility matrix).: A remaining utility matrix of \(s\) is an \(m\times n\) matrix, where \(m\) and \(n\) are the number of q-items and q-itemsets (transactions) in \(s\), respectively. The element at position \((k,j)\) (\(0\leq k<m\), \(0\leq j<n\)) of the remaining utility matrix stores the remaining utility \(ru(i_{k},q)\) of the q-item \((i_{k},q)\) in the q-itemset \(j\). For example, Table 4 shows the remaining utility matrix of \(s_{3}\) of Table 2.
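To make Definitions 4-7 concrete, here is a small Python sketch over the q-sequence \(s_{1}\) of the running example, with the unit profits of Table 1; the data layout is ours, chosen only for illustration.

```python
PROFIT = {"a": 2, "b": 5, "c": 3, "d": 4, "e": 6, "f": 1, "g": 7}

# s1 from the running example: a list of q-itemsets, each a list of (item, qty)
s1 = [[("a", 5), ("c", 2), ("g", 5)],
      [("a", 3), ("b", 1), ("c", 3), ("f", 2)],
      [("b", 3), ("d", 2), ("e", 2)]]

def u(item, qty):                     # Definition 4: u(i, q) = p(i) * q
    return PROFIT[item] * qty

def sequence_utility(s):              # u(s): sum over all q-items of s
    return sum(u(i, q) for itemset in s for i, q in itemset)

def remaining_utilities(s):           # Definition 6, per q-item in reading order
    flat = [(i, q) for itemset in s for i, q in itemset]
    total = sequence_utility(s)
    out, running = [], 0
    for i, q in flat:
        running += u(i, q)
        out.append((i, total - running))  # utility of all q-items after (i, q)
    return out

print(sequence_utility(s1))           # 108, as in Example 2
print(remaining_utilities(s1)[0])     # ('a', 98): everything after (a, 5)
```

The printed sequence utility is 108, matching Example 2, and the remaining utility of \((a,5)\) is \(108-10=98\).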
**Definition 8** (Matching).: Given a q-sequence \(s=\langle(i_{1},q_{1})(i_{2},q_{2})\ldots(i_{n},q_{n})\rangle\) and a sequence \(t=\langle t_{1}t_{2}\ldots t_{m}\rangle\), \(s\) is said to match \(t\) if and only if \(n=m\) and \(i_{k}=t_{k}\) for \(1\leq k\leq n\), denoted as \(t\sim s\).

**Example 3**.: The sequence \(\langle(acg)(abcf)(bde)\rangle\) matches \(s_{1}\). Note that, because of quantities, two q-items may be considered different although they contain the same item. Hence, there can be multiple q-subsequences of a q-sequence matching a given sequence. For instance, the sequence \(\langle(e)\rangle\) matches the q-subsequences \(\langle(e,3)\rangle\) and \(\langle(e,1)\rangle\) in the first and third q-itemsets of \(s_{3}\), respectively. The sequence \(\langle[ac]\rangle\) matches both the q-subsequences \(\langle[(a,5)(c,2)]\rangle\) and \(\langle[(a,3)(c,3)]\rangle\) of \(s_{1}\).

**Definition 9** (Ending q-item maximum utility).: Given a q-sequence \(s=\langle x_{1}x_{2}\ldots x_{n}\rangle\), where \(x_{j}\) (\(1\leq j\leq n\)) is a q-itemset, and a sequence \(t=\langle t_{1}t_{2}\ldots t_{m}\rangle\), for any q-subsequence \(s_{a}=\langle x_{a_{1}}x_{a_{2}}\ldots x_{a_{m}}\rangle\) (\(s_{a}\subseteq s\) and \(s_{a}\sim t\)) with \(x_{a_{m}}=[(i_{a_{1}},q_{a_{1}})(i_{a_{2}},q_{a_{2}})\ldots(i_{a_{k}},q_{a_{k}})]\), the q-item \((i_{a_{k}},q_{a_{k}})\) is called an ending q-item of the sequence \(t\) in \(s\). The ending q-item maximum utility of a sequence \(t\) at an ending q-item \(i\) in \(s\) is denoted and defined as \(u(t,i,s)=\max\{u(s^{\prime})\mid s^{\prime}\sim t\wedge s^{\prime}\subseteq s\wedge s^{\prime}\text{ ends at }i\}\).
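The matching and utility notions above can be sketched by enumerating every matching embedding of a pattern in a q-sequence; the values printed below reproduce Example 5 of the next paragraph. The representation (patterns as lists of itemsets of item names) is our own simplification.

```python
PROFIT = {"a": 2, "b": 5, "c": 3, "d": 4, "e": 6, "f": 1, "g": 7}

def embeddings(t, s, start=0):
    """Yield the utility of every q-subsequence of s matching pattern t.

    t is a list of itemsets (lists of item names); the k-th pattern itemset
    must be found, in order, inside some q-itemset of s (Definitions 2-3, 8).
    """
    if not t:
        yield 0
        return
    wanted = t[0]
    for j in range(start, len(s)):
        qty = dict(s[j])                 # items are unique within a q-itemset
        if all(i in qty for i in wanted):
            here = sum(PROFIT[i] * qty[i] for i in wanted)
            for rest in embeddings(t[1:], s, j + 1):
                yield here + rest

s1 = [[("a", 5), ("c", 2), ("g", 5)],
      [("a", 3), ("b", 1), ("c", 3), ("f", 2)],
      [("b", 3), ("d", 2), ("e", 2)]]

v = sorted(embeddings([["c"], ["b"]], s1))
print(v)        # [11, 21, 24]: the values of v(t, s1) in Example 5 below
print(max(v))   # 24 = u_max(t, s1), as in Definition 11 below
```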
For example, \(supp((c\rangle))=|\{s_{1}s_{2}s_{3}s_{4}\}|\) = 4, \(supp((\langle c\rangle(be\rangle))\) = \(|\{s_{1},s_{3},s_{4}\}|\) = 3. **Definition 14** (Frequent high utility sequential patterns).: Given a sequence \(t\) and the dataset \(SDB\) = \(\{s_{1},s_{2},...,s_{n}\}\), \(t\) is said to be a frequent high utility sequential pattern (FHUSP) if and only if \(t\) is a HUSP and \(sup(t)\geq minSup\), for a threshold \(minSup\) set by the user. **Definition 15** (Closed frequent high utility sequential patterns).: Given a sequence \(t\) and the dataset \(SDB\) = \(\{s_{1},s_{2},...,s_{n}\}\), \(t\) is said to be a closed frequent high utility sequential pattern (CHUSP) if and only if \(t\) is a FHUSP and there exists no FHUSP that is a proper super-sequence of \(t\) and has the same support. Mathematically, the set of all CHUSPs is defined as: \[CHUSP=\{s\in FHUSP|s^{\prime}\notin FHUSP:s\subseteq s^{\prime}\wedge supp(s) =supp(s^{\prime})\}\] The goal of CHUSPM is to discover the set of CHUSPs that satisfies the definition 15. For example, given \(minUtil=130\), \(minSup\)=50%, the set of CHUSPs is shown in Table 6. **Definition 16** (ULS: utility list structure).: Assume that a sequence \(t\) has \(k\) (\(k>0\)) ending q-items \(i\) in a q-sequence \(s\) where \(i_{1}<i_{2}<...<i_{k}\). The ULS of \(t\) in \(s\) is a list of \(k\) elements, where the \(\alpha^{th}(1\leq\alpha\leq k)\) element in the ULS contains \[\begin{cases}tid:\text{ is the itemset ID of }i_{\alpha}\text{ of }t\text{ in }s\\ acu:\text{ is the maximum utility of }i_{\alpha}\text{ in }t\\ link:\text{ is a pointer pointing to either the }(\alpha+1)^{th}\text{ element or }null\\ \end{cases}\] **Definition 17** (UCS: utility chain structure).: Given a sequence \(t\) and \(s\). The \(UCS\) of \(t\) in \(s\) is denoted and defined as \[UCS(t,s)\)=\begin{cases}p\text{ }\ \[\begin{cases}\text{\ \ procedure also checks if \(t\) is in \(-CHUSP\_Set\); if No, and \(t\) into this set (line 4). The purpose of this action is to track all non-candidate sequences. During mining, \(t\) may be extended to other \(t^{\prime}\) by doing other concatenations. In this case, \(t\) involves in other checking procedures. The procedure then inserts the current sequence \(t^{\prime}\) into the \(CHUSP\_Set\) (line 5). It is worth noting that CHUSP is a recursive algorithm. Thus the current sequence \(t^{\prime}\) will be later called in other rounds of the algorithm to extend itself. In other words, the sequence \(t^{\prime}\) is the super-sequence of a sequence \(t\) at this stage, but it will be the sub-sequence of another sequence in another stage. Thus, any sequences in the \(CHUSP\_Set\) are candidates and may be removed from the set when the algorithm detects super-sequences having the same support. If the supports of \(t\) and \(t^{\prime}\) are different, the two patterns become candidates. The procedure adds the current pattern \(t^{\prime}\) to the \(CHUSP\_Set\) as a candidate (line 8). Next, the procedure checks if the previous sequence \(t\) is in the \(-CHUSP\_Set\). If yes, then it will not be a CHUSP candidate. Otherwise, \(t\) is inserted into \(CHUSP\_Set\). The CHUSP recursively calls itself to expand \(t^{\prime}\) (line 15). A similar process is performed for all items in sExts. It passes a sequence and its projected dataset to each recursive call as input parameters. The sequence dataset \(SDB\) and lines 1-4 are used only for initializing the algorithm and are not performed during recursive calls. 
For each item in sExts, a new pattern is generated by performing an S-Extension (lines 16 to 22). When the algorithm completes recursive calls, the algorithm traverses all patterns in the \(CHUSP\_Set\) to remove non-CHUSPs from this list (line 23). Finally, it returns all CHUSPs as the output. ## 5 Comparative experiment Experiments were performed to evaluate the performance of CHUSP on a computer with a 64-bit Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz, 12 GB of RAM, running Windows 10 Enterprise LTSC. The source code is publicly available on Github. All the algorithms were implemented in C#. The proposed algorithm was compared with two algorithms. The first algorithm is the HUS-Span algorithm [2] for mining HUSPs. The second algorithm is FHUSP, an extension of HUS-Span for mining FHUSPs. The performance of the three algorithms has been compared on real datasets previously used in [3; 14]. The characteristics of these datasets are shown in Table 7. They are eight real-life datasets. They have varied characteristics, such as sparse and dense datasets; short and long sequences. For each dataset, the \(minUtil\) was decreased until a clear winner was observed or algorithms became too long to execute. In some cases, a constraint on the maximum length of CHUSP (_maxLength_) was used to speed up the experiments. For \(minSup\), a suitable empirical value was chosen for each dataset to ensure that the algorithms discovered a certain number of CHUSPs. The \(minSup\) values for Sign, Kosarak10k, BMSWebView1, BMSWebView2, Fifa and Bible were set to 50%, 5%, 20%, 20%, 0.5%, and 0.5%, respectively. First, the execution time of CHUSP is compared with HUS-Span and FHUSP. Figure 2 show that CHUSP outperforms the compared algorithms on all datasets. Each subfigure's vertical and horizontal axes represent the execution time (milliseconds) and minimum utility threshold values, respectively. In general, for all datasets, when the minimum utility threshold is decreased or when datasets contain more sequences or longer sequences, the running time of the algorithms increase. In that case, CHUSP can be much more efficient than the two algorithms, especially on Sign, Bible, BMSWebview1, and FIFA datasets. On Sign (_minSup=_50%) CHUSP is respectively up to 295.7, 250.3, 222.7, 188.9, 156.6, 125.9, 116.2, 75.8, 50.9, and 37.5 times faster than HUS-Span for \(minUtil\) from 12,000 to 35, 000. It is respectively up to 292.72, 245.01, 215.88, 188.51, 154.50, 118.80, 111.47, 73.24, 50.24, and 35.94 times faster than FHUSP. On BMSWebView2 (_minSup=_50%) CHUSP is re-spectively up to 122, 8.9, 7.6, 6.3, 4.7, 3.4, 3.2, 2.1, 1.9, and 1.7 times faster than HUS-Span for \(minUtil\) from 10,000 to 100, 000, 000, 100, 000, 11.5, 8, 7.1, 5.8, 4.1, 3, 2.3, 1.8, 1.5, and 1.3 times faster than FHUSP. Similar results can be observed for other datasets. The results indicate that the MSP pruning strategy of CHUSP is effective and can prune many non-frequent patterns. In addition, the CHUS structure and pruning strategies are suitable for mining CHUSPs. Thus, the algorithm can facilitate the mining process and prune more non-candidates than HUS-Span and FHUSP algorithms. Second, the algorithms have also been compared in terms of memory performance for the six datasets for the same \(minUtil\), \(minSup\), and \(maxLength\) values as in the runtime experiment. Results are shown in Figure 3 in terms of memory usage (vertical axes) for various minimum utility values (horizontal axes). CHUSP consumes less memory than HUS-Span in all cases. 
This means that the CHUS structure is more effective than the structure used by the HUS-Span algorithm. In addition, the MSP strategy can filter out many non-frequent candidates. CHUSP is also better than FHUSP in most cases, although they are very close in some of them. On Fifa and Bible, we can observe that CHUSP performs much better than FHUSP. Except for the BMSWebView1 dataset, FHUSP consumes less memory than CHUSP for large \(minUtil\) values. However, for low \(minUtil\) values, when the algorithms need more time to mine patterns, CHUSP outperforms FHUSP. Generally, for each dataset, the memory usage increases when the minimum utility threshold is decreased, and it is also greater for larger datasets.

Finally, the number of patterns was measured for various \(minUtil\) threshold values on each dataset. In Figure 4, the vertical axes denote the number of patterns, and the horizontal axes indicate the corresponding minimum utility threshold values. The number of patterns generated by CHUSP is much smaller than that of HUS-Span and FHUSP for each dataset. On Sign (\(minSup\)=50%), for \(minUtil\) from 12,000 to 35,000, CHUSP found 62, 58, 51, 48, 41, 35, 31, 13, 5, and 3 CHUSPs, respectively. The number of patterns found by CHUSP was respectively up to 169.1, 121.2, 93.7, 68.2, 37.7, 21.5, 16.5, 7.3, 3.4, and 1.3 times smaller than the number found by HUS-Span, and up to 1.13, 1.09, 1.1, 1.04, 1.02, 1.03, 1.03, 1.08, 1.00, and 1.00 times smaller than the number found by FHUSP. On Kosarak10k (_minSup_=5%), the \(maxLength\) was set to 3 for the \(minUtil\) values of 10,000 and 20,000 for HUS-Span and FHUSP; for CHUSP, this parameter was set to \(full\). For \(minUtil\) from 10,000 to 100,000, CHUSP found 21, 14, 10, 5, 3, 2, 2, 2, 1, and 1 CHUSPs, respectively. The number of patterns found by CHUSP was respectively up to 3.9, 1.9, 1.4, 1.2, 1.3, 1.5, 1.5, 1.5, 2.0, and 2.0 times smaller than the number found by HUS-Span, and up to 1.4, 1.4, 1.2, 1.2, 1.3, 1.5, 1.5, 1.5, 2.0, and 2.0 times smaller than the number found by FHUSP. On BMSWebView1 (_minSup_=0.5%), the _maxLength_ was set to 3 for HUS-Span and FHUSP; for CHUSP, this parameter was set to \(full\). For \(minUtil\) from 5,000 to 35,000, CHUSP found 45, 42, 39, 38, 31, 18, 10, 3, 2, and 2 CHUSPs, respectively. The number of patterns found by CHUSP was respectively up to 3.6, 3.5, 3.5, 3.2, 3.0, 3.3, 3.4, 6, 7, and 5.5 times smaller than the number found by HUS-Span, and up to 3.6, 3.5, 3.2, 3.0, 3.3, 3.4, 6, 7, and 5.5 times smaller than the number found by FHUSP. Similar results can be observed for the BMSWebView2, Fifa, and Bible datasets.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline Dataset & \#Sequence & \#Item & Avg. seq length \\ \hline \hline Sign & 800 & 310 & 51.99 \\ \hline Kosarak10k & \(10,000\) & \(10,094\) & 8.14 \\ \hline BMSWebView1 & \(59,601\) & 497 & 2.51 \\ \hline BMSWebView2 & \(77,512\) & \(3,340\) & 4.62 \\ \hline Fifa & \(20,450\) & \(2,990\) & 34.74 \\ \hline Bible & \(36,369\) & \(13,905\) & 21.64 \\ \hline \end{tabular} \end{table} Table 7: Characteristics of the datasets

Figure 1: The user interface of the CHUSP application

Figure 2: Runtimes for various minimum utility threshold values
These results indicate that the CHUSP algorithm can eliminate many non-candidate patterns from the search space and reduce the number of patterns produced by the mining process.

## 6 Conclusion

This paper proposed an algorithm named CHUSP for mining closed high utility sequential patterns. The proposed algorithm uses the CHUS structure for efficiently mining CHUSPs. Experimental results indicate that CHUSP outperforms the HUS-Span and FHUSP algorithms in terms of execution time and memory usage. The number of patterns generated by the three algorithms was also measured for various minimum utility threshold values. The results show that the pruning strategies used in CHUSP can eliminate many non-CHUSPs and thus speed up the mining process. In future work, we will design a parallel framework that can reduce the computational cost of CHUSP and extend the pattern mining framework to other tasks [14; 16; 25; 26; 27].
2303.06273
Consistency Analysis of ChatGPT
ChatGPT has gained huge popularity since its introduction. Its positive aspects have been reported through many media platforms, and some analyses even showed that ChatGPT achieved a decent grade in professional exams, adding extra support to the claim that AI can now assist and even replace humans in industrial fields. Others, however, doubt its reliability and trustworthiness. This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour, focusing specifically on semantic consistency and the properties of negation, symmetric, and transitive consistency. Our findings suggest that while both models appear to show an enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions. We also ascertain via experiments that prompt design, few-shot learning, and employing larger large language models (LLMs) are unlikely to be the ultimate solution to resolve the inconsistency issue of LLMs.
Myeongjun Erik Jang, Thomas Lukasiewicz
2023-03-11T01:19:01Z
http://arxiv.org/abs/2303.06273v3
# Consistency Analysis of ChatGPT

###### Abstract

ChatGPT, a question-and-answer dialogue system based on a large language model, has gained huge popularity since its introduction. Its positive aspects have been reported through many media platforms, and some analyses even showed that ChatGPT achieved a decent grade in professional exams, including the law, medical, and finance domains, adding extra support to the claim that AI can now assist and even replace humans in industrial fields. Others, however, doubt its reliability and trustworthiness. In this paper, we investigate ChatGPT's trustworthiness regarding logically consistent behaviours. Our findings suggest that, although ChatGPT seems to achieve an improved language understanding ability, it still frequently fails to generate logically consistent predictions. Hence, while it is true that ChatGPT is an impressive and promising new technique, we conclude that its usage in real-world applications without thorough human inspection requires further consideration, especially for risk-sensitive areas.

## 1 Introduction

AI systems can be more reliable and trustworthy provided they behave in a similar manner to humans (De Visser et al., 2016; Jung et al., 2019). In this regard, ChatGPT, a large language model (LLM) that simulates human-like conversations (Fares, 2023), is gaining widespread popularity, reaching 100 million users only two months after its launch (Milmo, 2023). It offers many convenient features to users, such as summarising documents, writing essays, answering questions, and programming in various computer languages. Also, ChatGPT has performed astoundingly well on various examinations, including passing the United States Medical Licensing Examination (Kung et al., 2023), achieving passing grades in four real exams at the University of Minnesota Law School (Choi et al., 2023), and providing decent answers to Operations Management exam questions, a core MBA course (Terwiesch, 2023). These surprising results make people believe that LLMs can assist humans even in professional areas and greatly influence various academic and industrial fields.

Others, however, question ChatGPT's reliability, pointing out its overconfidence in generating factually incorrect information (Skopeliti and Milmo, 2023), its inability to comprehend the complexity of human language (Bogost, 2022), and its imperfect mathematical abilities (Frieder et al., 2023). Even though these mistakes may appear insignificant in normal daily tasks, e.g., drafting an email, they provoke crucial concerns in conservative and risk-sensitive domains, such as law, medicine, and finance.

In this article, we investigate the reliability and trustworthiness of ChatGPT in terms of the language model's consistency. By using the BECEL dataset (Jang et al., 2022), which is designed to ascertain whether language models satisfy various types of consistency, we analyse ChatGPT's ability to generate logically consistent predictions based on three properties: semantic equivalence, logical negation, and symmetry. Our experimental results show that although ChatGPT understands negation expressions and antonyms much better than previous pre-trained language models (PLMs) like BERT (Devlin et al., 2019), it still violates semantic equivalence and symmetry quite frequently.

Our contributions can be briefly summarised as follows:

1. We analyse the consistency behaviour of ChatGPT by measuring semantic, negation, and symmetric consistency.
2. We observe that ChatGPT achieves a much lower negation inconsistency compared to other PLMs, demonstrating its improved understanding of negation expressions and antonyms.
3. We ascertain that ChatGPT is likely to generate different predictions on text inputs that deliver the same meaning, i.e., paraphrased inputs.
4. Even worse, we confirm that ChatGPT is self-contradictory, meaning that it violates semantic consistency for paraphrased inputs generated by ChatGPT itself.
5. We find that ChatGPT is extremely sensitive to the input sentence order for order-invariant tasks, e.g., semantic textual similarity (STS).

Hence, we conclude that despite its favourable reputation and positive media coverage, ChatGPT is not completely reliable, suggesting that using ChatGPT without human confirmation would be hazardous, particularly in high-risk industries.

## 2 Related Works

The consistency of language models has been an important topic in natural language processing (NLP), but it has been studied under various definitions. The idea of _semantic consistency_ is the most widely used concept in consistency analysis, meaning that a model should make consistent decisions in semantically equivalent contexts (Elazar et al., 2021). Semantic consistency is an indispensable property that should be satisfied for all textual data and NLP tasks. Ravichander et al. (2020) observed that PLMs are likely to generate different masked language modelling predictions when an object in queries is replaced with its plural form. Elazar et al. (2021), on the other hand, found that PLMs generate different masked language modelling predictions when given paraphrased queries. Another line of work employed the idea by introducing a consistency regularisation term for training, which penalises the violation of semantic consistency, to train more robust NLP models (Wang and Henao, 2021; Zheng et al., 2021; Kim et al., 2021).

_Symmetric consistency_ is a consistency type based on symmetric inference, defined as \(f(x,y)=f(y,x)\). This implies that a model should be input-order invariant for tasks where the symmetric property holds. Regarding the natural language inference (NLI) task, Wang et al. (2019) believed that symmetric consistency applies to data points with "not entailment", i.e., "contradiction" and "neutral", as a label. They showed that many deep-learning-based NLI models change their predictions when the premise and hypothesis are switched. On the other hand, Li et al. (2019) only considered "contradiction" labels for analysis and ascertained that NLI models based on BERT (Devlin et al., 2019) are likely to violate symmetric consistency. Kumar and Joshi (2022) performed a symmetric consistency analysis on NLI and STS tasks in a more conservative manner, arguing that a model should generate not only the same predictions but also the same confidence scores if it is truly input-order invariant. They also observed that PLM-based models violated symmetric consistency and introduced a consistency regularisation term to compensate for the issue.

The fundamental idea behind _negation consistency_ is the logical negation property (\(p\) is true \(\Leftrightarrow\neg p\) is false; Aina et al., 2018). Intuitively, the main idea behind it is that a model's predictions should differ for text inputs delivering opposite meanings.
Several studies investigated the negation consistency of BERT and found that the model often generates the same outputs when asked negated and non-negated masked queries, e.g., "Birds can lay [MASK]" and "Birds cannot lay [MASK]" (Kassner and Schütze, 2020; Ettinger, 2020). Hossain et al. (2020) created negated versions of NLI datasets and also observed the violation of negation consistency, suggesting that PLMs lack an understanding of negation expressions. To alleviate the issue, several works adopted data augmentation to train a model with abundant data containing negation expressions (Asai and Hajishirzi, 2020; Hosseini et al., 2021). Jang et al. (2022) expanded the evaluation scope from negation expressions to antonyms and ascertained the same tendency in recent PLMs. They proposed a new training task named _meaning-matching_ to enhance PLMs' textual understanding ability and observed performance improvements.

_Transitive consistency_ is a consistency type that can measure deductive reasoning ability. It is derived from transitive inference, represented as \(X\to Y\wedge Y\to Z\Rightarrow X\to Z\) for three predicates \(X\), \(Y\), and \(Z\) (Gazes et al., 2012; Asai and Hajishirzi, 2020). In the NLI task, Li et al. (2019) employed the concept to generate four transitive inference rules. For three sentences \(P\), \(H\), and \(Z\), the rules are defined as:

\[E(P,H)\wedge E(H,Z)\to E(P,Z), \tag{1}\]
\[E(P,H)\wedge C(H,Z)\to C(P,Z), \tag{2}\]
\[N(P,H)\wedge E(H,Z)\to\neg C(P,Z), \tag{3}\]
\[N(P,H)\wedge C(H,Z)\to\neg E(P,Z), \tag{4}\]

where \(E\), \(N\), and \(C\) refer to entailment, neutral, and contradiction, respectively. Based on these rules, they collected a new evaluation set to assess the transitive consistency of BERT-based NLI models and showed the inconsistency of the models. Other studies investigated transitive consistency in question answering (QA) [14, 15] and WordNet word senses [13] and ascertained that PLMs lack the ability to perform transitive inference.

Jang et al. (2022) proposed a universal definition of a language model's consistency and a taxonomy of various consistency types. They also created a new benchmark dataset that enables the evaluation of multiple types of consistency on various downstream tasks. They assessed diverse PLMs on the new benchmark and confirmed that, like the studies stated above, none of the PLMs shows consistent behaviour on all test cases. All the aforementioned works investigated the consistency of PLMs that emerged before the advent of LLMs like ChatGPT. To our knowledge, this paper is the first evaluation of LLMs from a consistency viewpoint.

## 3 Experimental Design

### Evaluation Scope

The BECEL dataset provides 19 test sets for assessing five types of consistency on seven downstream tasks. However, we reduced the scope of our experiments, mainly because of the heavily restricted access to ChatGPT. Specifically, our experiments do not consider additive and transitive consistency, because most PLMs were highly consistent on the former [13], and the latter requires a much more demanding reasoning ability compared to the other consistency types. Regarding downstream tasks, we used the SNLI [1], RTE [12], and MRPC [14] datasets, which contain test cases for measuring semantic, negation, and symmetric consistency. Table 1 shows the size of the test sets for each downstream task and consistency type.
\begin{table} \begin{tabular}{l c c c} \hline \hline & SNLI & RTE & MRPC \\ \hline semantic & 4,406 & 248 & 202 \\ negation & 2,204 & 153 & 290 \\ symmetric & 3,237 & 1,241 & 3,668 \\ \hline \hline \end{tabular} \end{table} Table 1: Size of the test sets of the consistency evaluation data points for the SNLI, RTE, and MRPC tasks.

### Consistency Evaluation Method

This section briefly describes the process of the consistency evaluation using the BECEL dataset. The evaluation consists of two steps. First, predictions are generated for the original test set and its corresponding perturbed test set. Next, the predictions on the two test sets are compared to measure the consistency. For the three downstream tasks in our evaluation scope, Jang et al. (2022) collected the perturbed test sets for the semantic and negation consistency evaluation by modifying "sentence 2" for the RTE and MRPC tasks and the "hypothesis" for the SNLI task, i.e., generating paraphrases and opposite-meaning sentences for semantic and negation consistency, respectively. For the symmetric consistency evaluation, they switched the order of the two input texts. Figure 1 illustrates the overall process for measuring the three consistency types on the MRPC task.

### Generating ChatGPT Predictions

For test cases where the size of the data exceeds 1K, e.g., the SNLI task and the symmetric consistency of RTE and MRPC, we sampled 200 data points due to the heavy usage of ChatGPT. We conducted zero-shot experiments using the same prompts designed by Eleuther AI 1. The prompts of each downstream task and examples are presented in Table 2. Our experiments were conducted on the _30 Jan_ version of ChatGPT by using the _pyChatGPT_ package, an unofficial Python wrapper for OpenAI's ChatGPT API 2.

Footnote 1: [https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)

Footnote 2: [https://github.com/terry3041/pyChatGPT](https://github.com/terry3041/pyChatGPT)

Normally, ChatGPT gave us an answer of "True/False/Neither (Entailment/Contradiction/Neutral)" for SNLI, "Yes/No (Equivalent/Not Equivalent)" for MRPC, and "True/False (Entailment/Not Entailment)" for RTE, along with (or without) an explanation for the decision. However, we observed a few cases where the output did not follow the aforementioned format but only gave an explanation. We reviewed such cases and assigned the corresponding answers manually.

### Evaluation Metrics

Basically, we used the same inconsistency metric as in [13]. Specifically, the metric measures the ratio of predictions that violate the target consistency type. Thus, semantic and symmetric inconsistency count the number of predictions where ChatGPT generates different answers for the original and its corresponding perturbed input. In contrast, negation inconsistency counts the results where the two predictions are the same. Unlike semantic consistency, which holds unconditionally, negation and symmetric consistency are conditional properties. For example, negation consistency applies when the gold label is "Entailment" for the NLI task and "Equivalent" for the STS task. Regarding symmetric consistency, it applies unconditionally for the STS task and only to "Not Entailment" for the NLI task. As the BECEL dataset already reflects these conditions, Jang et al. (2022) calculated the inconsistency metrics based on all test data points. However, this can exaggerate the inconsistency of language models if their performance is insufficient. A minimal sketch of these metrics is given below.
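As an illustration of how these ratios can be computed, consider the following sketch (the function and variable names are ours and are not taken from the BECEL codebase):

```python
# Illustrative sketch of the inconsistency metrics described above.
# All names here are ours; this is not code from the BECEL benchmark.

def inconsistency(orig_preds, pert_preds, consistency_type):
    """Ratio of (original, perturbed) prediction pairs that violate the
    target consistency type; predictions are labels such as "Entailment"."""
    pairs = list(zip(orig_preds, pert_preds))
    if consistency_type == "negation":
        # Violation: the same answer for inputs with opposite meanings.
        violations = sum(o == p for o, p in pairs)
    else:  # "semantic" or "symmetric"
        # Violation: different answers for inputs that should be treated
        # identically (a paraphrase, or the same pair in swapped order).
        violations = sum(o != p for o, p in pairs)
    return violations / len(pairs)

def conditioned_inconsistency(orig_preds, pert_preds, gold, consistency_type):
    """The conditioned variant introduced next: the same ratio, restricted
    to data points where the model's original prediction is correct."""
    kept = [(o, p) for o, p, g in zip(orig_preds, pert_preds, gold) if o == g]
    orig_kept, pert_kept = zip(*kept)
    return inconsistency(orig_kept, pert_kept, consistency_type)
```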
For example, consider the following example for the MRPC task:

**S1**: In the evening, he asked for six pepperoni pizzas and two six-packs of soft drinks, which officers delivered.
**S2**: In the evening, he asked for six pizzas and soda, which police delivered.
**S2-neg**: In the evening, he asked for six pizzas and soda, which police did not deliver.

The gold label of the **S1-S2** pair is "Equivalent". However, if the model believes that the answer is "Not Equivalent", then generating "Not Equivalent" as the answer for the **S1-S2-neg** pair can hardly be considered a violation of negation consistency. We observed that the zero-shot accuracy of ChatGPT is much lower than that of the fine-tuned PLMs reported by Jang et al. (2022). Therefore, we introduce a conditioned inconsistency metric, which only uses data points where ChatGPT makes correct predictions.

## 4 Experimental Results

We now present our experimental results on the performance of ChatGPT and compare it with fine-tuned PLMs. The BECEL dataset performances of four PLMs (BERT-large (Devlin et al., 2019), RoBERTa-large (Liu et al., 2019), Electra-large (Clark et al., 2020), and T5 (Raffel et al., 2020)) are taken from Jang et al. (2022).

\begin{table} \begin{tabular}{l l} \hline \hline & **SNLI** \\ \hline Format & \{_s1_\} Question: \{_s2_\} True, False or Neither? Answer: \\ Example & A land rover is being driven across a river. Question: A Land Rover is splashing water as it crosses a river. True, False, or Neither? Answer: \\ \hline \hline & **RTE** \\ \hline Format & \{_s1_\} Question: \{_s2_\} True or False? Answer: \\ Example & The harvest of sea-weeds is not allowed in the Puget Sound because of marine vegetation’s vital role in providing habitat to important species. Question: Marine vegetation is harvested. True or False? Answer: \\ \hline \hline & **MRPC** \\ \hline Format & Sentence 1: \{_s1_\} Sentence 2: \{_s2_\} Question: Do both sentences mean the same thing? Answer: \\ Example & Sentence 1: The increase reflects lower credit losses and favorable interest rates. Sentence 2: The gain came as a result of fewer credit losses and lower interest rates. Question: Do both sentences mean the same thing? Answer: \\ \hline \hline \end{tabular} \end{table} Table 2: Format and example of the prompts used in our experiments for each downstream task.

Figure 1: Consistency evaluation process of (a) semantic, (b) negation, and (c) symmetric consistency on MRPC.

### Semantic Consistency

It is widely known that ChatGPT can perform various NLP tasks, including summarisation, question answering, and paraphrasing. Therefore, in addition to the original BECEL dataset, we generated paraphrased sentences using ChatGPT and used them for the evaluation. The overall procedure of this evaluation is illustrated in Figure 2. The results are summarised in Table 3. In the SNLI task, ChatGPT often fails to distinguish "Neutral" from "Contradiction", which can underestimate its consistency. Therefore, we merge the two classes into a "Not Entailment" class, denoted as **SNLI-2C** in the table.

Our experimental results show that ChatGPT produces much higher levels of inconsistency on the BECEL dataset than fine-tuned PLMs, yielding a 3.8 times higher inconsistency on average than the best-performing fine-tuned PLM. This suggests that ChatGPT is not completely trustworthy regarding semantic consistency. Moreover, we ascertain that ChatGPT is self-contradictory, i.e., it produces inconsistent outputs for paraphrased inputs generated by itself with a probability of more than 10%.
This implies that ChatGPT fails either to generate a proper paraphrase or to capture the meaning of texts delivering the same meaning; either case undermines its reliability. Several examples where ChatGPT violates semantic consistency are presented in Table 5.

### Negation Consistency

Table 4 presents the experimental results of the negation consistency evaluation. Compared to the fine-tuned PLMs, ChatGPT attains a lower negation inconsistency in all three downstream tasks, an improvement of 19% on average over the best-performing fine-tuned PLM, and it considerably outperforms the BERT-large model. In addition, the conditioned inconsistency is 3.8% on average, and ChatGPT is perfectly consistent on the SNLI task. The results suggest that ChatGPT can better understand negation expressions and antonyms, which has been a critical issue for PLMs trained in a self-supervised fashion (Kassner and Schütze, 2020; Ettinger, 2020; Hossain et al., 2020; Hosseini et al., 2021; Jang et al., 2022). We believe that incorporating human feedback into ChatGPT's training (Ouyang et al., 2022) plays a crucial role in learning the meaning of negation expressions and antonyms, compared to previous PLMs that infer their meaning by relying solely on context information based on the distributional hypothesis. Investigating the impact of providing human feedback on learning textual meaning is an interesting future research direction. Several examples of negation consistency violations are presented in Table 7.

\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{MRPC} & \multicolumn{2}{c|}{RTE} & \multicolumn{2}{c|}{SNLI} & \multicolumn{2}{c}{SNLI-2C} \\ & \(\tau\) & \(\tau_{C}\) & \(\tau\) & \(\tau_{C}\) & \(\tau\) & \(\tau_{C}\) & \(\tau\) & \(\tau_{C}\) \\ \hline BERT-large & 90.8 & - & 75.8 & - & 11.7 & - & - & - \\ RoBERTa-large & 84.2 & - & 24.6 & - & 5.9 & - & - & - \\ Electra-large & 77.0 & - & 17.3 & - & 5.4 & - & - & - \\ T5-large & 25.2 & - & 15.9 & - & 5.8 & - & - & - \\ \hline ChatGPT & **21.3** & 4.6 & **10.5** & 6.9 & **5.0** & 0.0 & 9.0 & 0.0 \\ \hline \hline \end{tabular} \end{table} Table 4: Experimental results of the negation consistency evaluation. \(\tau\) and \(\tau_{C}\) denote the original and conditioned negation inconsistency, respectively. The best performance is in bold.

\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{MRPC} & \multicolumn{2}{c|}{RTE} & \multicolumn{2}{c|}{SNLI} & \multicolumn{2}{c}{SNLI-2C} \\ & \(\tau_{B}\) & \(\tau_{S}\) & \(\tau_{B}\) & \(\tau_{S}\) & \(\tau_{B}\) & \(\tau_{S}\) & \(\tau_{B}\) & \(\tau_{S}\) \\ \hline BERT-large & 12.5 & - & 12.3 & - & 9.9 & - & - & - \\ RoBERTa-large & 8.4 & - & 9.8 & - & **7.9** & - & - & - \\ Electra-large & 5.5 & - & 8.9 & - & **7.9** & - & - & - \\ T5-large & **4.5** & - & **8.6** & - & 9.3 & - & - & - \\ \hline ChatGPT & 29.7 & 9.9 & 11.3 & 10.5 & 28.0 & 21.0 & 15.0 & 11.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental results of the semantic consistency evaluation. \(\tau_{B}\) and \(\tau_{S}\) denote the inconsistency on the BECEL dataset and on the paraphrases generated by ChatGPT, respectively. The best performance is in bold.

Figure 2: Overall process of measuring semantic consistency by using paraphrases generated by ChatGPT.

### Symmetric Consistency

The results of the symmetric consistency evaluation are described in Table 6.
There is a surprising degree of inconsistency in ChatGPT compared to the fine-tuned PLMs. Compared to the best-performing PLM, ChatGPT produces a three times higher symmetric inconsistency in the MRPC task and a five times higher one in the RTE task, even for the conditioned inconsistency, which reduces the overestimation of inconsistent behaviour. We observe that the extremely high inconsistency for the SNLI task is mainly because ChatGPT fails to distinguish between the labels "Neutral" and "Contradiction". However, the model is still not completely consistent even on SNLI-2C, which merges "Neutral" and "Contradiction" into the same class. Although the inconsistency rate might be considered trivial, especially in the SNLI-2C case, the issue should not be overlooked, considering the simple nature of the symmetric property. Consider a model that takes a list of symptoms and generates prescriptions. For such a model, which should operate conservatively, it would greatly undermine its trustworthiness if it generated entirely different prescriptions whenever the order of the symptoms changes, even if such an error occurred with a probability of 2%. Hence, an effort should be made to make LLMs satisfy logical consistencies to enhance their reliability and safe usage in real-world applications. Table 7 presents examples of symmetric consistency violations.

### ChatGPT's Explainability

Providing explanations is a core property of trustworthy systems (Huang et al., 2020). It is widely known that generative language models like ChatGPT can provide answers with explanations. However, we observed that while ChatGPT generates plausible explanations, those explanations are not perfectly reliable. Table 8 presents some examples. For the first example, the explanations of the original and perturbed inputs contradict each other. Regarding the second example, the explanation of the perturbed input is not correct, i.e., the input did mention the age and gender of the person pushing the shopping cart ("boy" and "A young man"). These wrong explanations also contribute to undermining ChatGPT's trustworthiness.

\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{MRPC} & \multicolumn{2}{c|}{RTE} & \multicolumn{2}{c|}{SNLI} & \multicolumn{2}{c}{SNLI-2C} \\ & \(\tau\) & \(\tau_{C}\) & \(\tau\) & \(\tau_{C}\) & \(\tau\) & \(\tau_{C}\) & \(\tau\) & \(\tau_{C}\) \\ \hline BERT-large & 6.8 & - & 15.8 & - & 10.2 & - & - & - \\ RoBERTa-large & 4.3 & - & 11.6 & - & 9.7 & - & - & - \\ Electra-large & 5.3 & - & **6.7** & - & **6.4** & - & - & - \\ T5-large & **4.2** & - & 8.0 & - & 8.3 & - & - & - \\ \hline ChatGPT & 12.5 & - & 35.5 & 32.6 & 40.5 & 49.23 & 3.0 & 2.52 \\ \hline \hline \end{tabular} \end{table} Table 6: Experimental results of the symmetric consistency evaluation. \(\tau\) and \(\tau_{C}\) denote the original and conditioned symmetric inconsistency, respectively. The best performance is in bold.
\begin{table} \begin{tabular}{l|l} \hline \hline \multicolumn{2}{c}{Task: RTE, Paraphrase Type: BECEL} \\ Original Inputs & Perturbed Inputs \\ \hline S1: Note that SBB, CFF and FFS stand for the main railway company, in German, French and Italian. & S1: Note that SBB, CFF and FFS stand for the main railway company, in German, French and Italian. \\ S2: The French railway company is called SNCF. & S2: SNCF is the French railway company. \\ Prediction: Not Entailment & Prediction: Entailment \\ \hline \hline \multicolumn{2}{c}{Task: SNLI, Paraphrase Type: BECEL} \\ Original Inputs & Perturbed Inputs \\ \hline Premise: Kids play in the water in the middle of the street. & Premise: Kids play in the water in the middle of the street. \\ Hypothesis: Kids are running from zombies. & Hypothesis: Children are fleeing from zombies. \\ Prediction: Not Entailment (Contradiction) & Prediction: Entailment \\ \hline \hline \multicolumn{2}{c}{Task: MRPC, Paraphrase Type: ChatGPT} \\ Original Inputs & Perturbed Inputs \\ \hline S1: Looking to buy the latest Harry Potter? & S1: Looking to buy the latest Harry Potter? \\ S2: Harry Potter’s latest wizard trick? & S2: The newest medical feat of Harry Potter? \\ Prediction: Not Equivalent & Prediction: Equivalent \\ \hline \hline \multicolumn{2}{c}{Task: SNLI, Paraphrase Type: ChatGPT} \\ Original Inputs & Perturbed Inputs \\ \hline Premise: A person swimming in a swimming pool. & Premise: A person swimming in a swimming pool. \\ Hypothesis: A person enjoying the waters. & Hypothesis: An individual is relishing the water. \\ Prediction: Not Entailment (Neutral) & Prediction: Entailment \\ \hline \hline \end{tabular} \end{table} Table 5: Examples of semantic consistency violations.

## 5 Discussion

**Can Prompt Design be a Solution?** Prompts are input texts consisting of a task demonstration and, for a few-shot task, some examples (Lester et al., 2021). Prompt design has been shown to be an effective method for regulating the behaviour of GPT-3 (Brown et al., 2020). Hence, one might argue that searching for an optimal prompt for each task can improve consistency. However, we are sceptical of this claim. The consistency metrics could be improved with different prompts, but we believe that this cannot fundamentally resolve the inconsistency problem, because prompt design cannot go beyond inductive reasoning. The underlying idea behind prompt design is that the prompts created by experimenters might not be optimal, because language models might have acquired the target information from completely different contexts (Jiang et al., 2020). That is, prompt design can be regarded as maximising the generalisation effect by searching for the prompts most closely related to the target task seen during training. As a result, no matter how well prompt design finds the prompt that maximises the generalisation effect, it cannot resolve the issue, as our experimental results suggest that the various consistency properties are not reflected in ChatGPT's inductive bias. Moreover, consistency improvements achieved via prompt design can be considered another violation of semantic consistency, because the different prompts deliver an identical semantic meaning, i.e., the task description.

**Data Augmentation is Not Sustainable.** Creating new data points based on certain consistency types and using them for training (Asai and Hajishirzi, 2020; Hosseini et al., 2021) or consistency regularisation (Wang and Henao, 2021; Zheng et al., 2021; Kim et al., 2021) is the most widely used approach to reflect logical consistency in a model's inductive bias. This remedy, however, is unsustainable.
\begin{table} \begin{tabular}{l|l} \hline \hline \multicolumn{2}{c}{Task: MRPC, Consistency Type: Negation} \\ Original Inputs & Perturbed Inputs \\ \hline S1: The dead cavalrymen have been honored for more than a century with a hilltop granite obelisk and white headstones. & S1: The dead cavalrymen have been honored for more than a century with a hilltop granite obelisk and white headstones. \\ S2: The dead cavalrymen are honored with a hilltop granite obelisk and white headstones. & S2: The dead cavalrymen are honored with a hilltop granite obelisk and black headstones. \\ Prediction: Equivalent & Prediction: Equivalent \\ \hline \hline \multicolumn{2}{c}{Task: MRPC, Consistency Type: Negation} \\ Original Inputs & Perturbed Inputs \\ \hline S1: He arrives later this week on the first state visit by a US President. & S1: He arrives later this week on the first state visit by a US President. \\ S2: Mr Bush arrives on Tuesday on the first state visit by an American President. & S2: Mr Bush doesn’t arrive on Tuesday on the first state visit by an American President. \\ Prediction: Equivalent & Prediction: Equivalent \\ \hline \hline \multicolumn{2}{c}{Task: SNLI, Consistency Type: Symmetric} \\ Original Inputs & Perturbed Inputs \\ \hline Premise: There is a man climbing as the boy holds the rope. & Premise: A man holds a rope for a boy who’s about to climb a wall. \\ Hypothesis: A man holds a rope for a boy who’s about to climb a wall. & Hypothesis: There is a man climbing as the boy holds the rope. \\ Prediction: Not Entailment (Contradiction) & Prediction: Entailment \\ \hline \hline \multicolumn{2}{c}{Task: MRPC, Consistency Type: Symmetric} \\ Original Inputs & Perturbed Inputs \\ \hline S1: In 2001, the dioceses reached a $15 million settlement involving five priests and 26 plaintiffs. & S1: The dioceses reached a settlement in 2001 involving five priests and 26 plaintiffs for an undisclosed sum. \\ S2: The dioceses reached a settlement in 2001 involving five priests and 26 plaintiffs for an undisclosed sum. & S2: In 2001, the dioceses reached a $15 million settlement involving five priests and 26 plaintiffs. \\ Prediction: Not Equivalent & Prediction: Equivalent \\ \hline \hline \end{tabular} \end{table} Table 7: Examples of negation and symmetric consistency violations.

First, the data augmentation process requires tremendous effort. For simple consistency types, e.g., symmetric and negation consistency, generating or collecting data points is relatively simple, but for complex consistency types, such as transitive and semantic consistency, it can be extremely challenging to cover all possible variations. Second, even if we successfully expand the data, it is doubtful whether we can afford to update an LLM on the new dataset. Considering the ever-changing character of language, the data expansion and the update of an LLM would have to be performed continuously. However, training an LLM entails tremendous financial and environmental costs (Bender
Therefore, it is desirable to enlarge our viewpoint beyond LLMs to implement sustainable remedies that can fundamentally solve the inconsistency problem, particularly in a modern society facing the global climate crisis. ## 6 Summary and Outlook The recent advent of ChatGPT is accelerating the developments in the NLP field driven by LLMs. Its outstanding performance captured considerable attention, resulting in many articles, posts, and analyses highlighting ChatGPT's positive aspects across numerous media. There are others, however, who question its reliability based on the model's faulty behaviours. To this end, this study aims to examine the trustworthiness of ChatGPT in terms of the language model's consistency. We have investigated the consistency behaviour of ChatGPT across three consistency types and downstream tasks. Our experimental results demonstrated that ChatGPT achieves a certain level of enhanced language understanding ability, especially in negation expressions and antonyms, showing considerable improvements in negation consistency compared to the earlier version of PLMs. However, contrary to the widespread belief regarding the outstanding performance of ChatGPT, its overall consistency falls short of expectations. It frequently changes its decision when an input text is replaced with a paraphrased sentence, even though it is generated from ChatGPT itself, i.e., the model is self-contradictory. Moreover, in input-order invariant tasks, ChatGPT is likely to make a different decision when the order of the input sentences is switched. Given how simple and natural the symmetric consistency is in human reasoning, violating symmetric consistency is a huge blow to ChatGPT's reliability. These fallacious behaviours are lethal to domains operating conservatively and at high risk. Although LLMs are a revolutionary technique that brought an unprecedented era to NLP, such issues should be resolved before ChatGPT is used in real applications, particularly considering the huge economic and environmental costs for training and inference of LLMs.
2307.14380
Robust Assignment of Labels for Active Learning with Sparse and Noisy Annotations
Supervised classification algorithms are used to solve a growing number of real-life problems around the globe. Their performance is strictly connected with the quality of labels used in training. Unfortunately, acquiring good-quality annotations for many tasks is infeasible or too expensive to be done in practice. To tackle this challenge, active learning algorithms are commonly employed to select only the most relevant data for labeling. However, this is possible only when the quality and quantity of labels acquired from experts are sufficient. Unfortunately, in many applications, a trade-off between annotating individual samples by multiple annotators to increase label quality vs. annotating new samples to increase the total number of labeled instances is necessary. In this paper, we address the issue of faulty data annotations in the context of active learning. In particular, we propose two novel annotation unification algorithms that utilize unlabeled parts of the sample space. The proposed methods require little to no intersection between samples annotated by different experts. Our experiments on four public datasets indicate the robustness and superiority of the proposed methods in both the estimation of the annotators' reliability and the assignment of actual labels, compared against state-of-the-art algorithms and simple majority voting.
Daniel Kałuża, Andrzej Janusz, Dominik Ślęzak
2023-07-25T19:40:41Z
http://arxiv.org/abs/2307.14380v1
# Robust Assignment of Labels for Active Learning with Sparse and Noisy Annotations

###### Abstract

Supervised classification algorithms are used to solve a growing number of real-life problems around the globe. Their performance is strictly connected with the quality of labels used in training. Unfortunately, acquiring good-quality annotations for many tasks is infeasible or too expensive to be done in practice. To tackle this challenge, active learning algorithms are commonly employed to select only the most relevant data for labeling. However, this is possible only when the quality and quantity of labels acquired from experts are sufficient. Unfortunately, in many applications, a trade-off between annotating individual samples by multiple annotators to increase label quality vs. annotating new samples to increase the total number of labeled instances is necessary. In this paper, we address the issue of faulty data annotations in the context of active learning. In particular, we propose two novel annotation unification algorithms that utilize unlabeled parts of the sample space. The proposed methods require little to no intersection between samples annotated by different experts. Our experiments on four public datasets indicate the robustness and superiority of the proposed methods in both the estimation of the annotators' reliability and the assignment of actual labels, compared against state-of-the-art algorithms and simple majority voting.

## 1 Introduction

Supervised learning algorithms are commonly used to create prediction models for a considered classification task. The quality of such models, for the majority of algorithms, strongly depends on the labeled dataset used during the model construction. In real-life scenarios, we often start with no or only a few labeled samples, as the data annotation process is expensive and requires laborious human involvement. To optimize this process and cut costs, active learning algorithms are commonly employed [17]. Active learning algorithms can be defined as follows - _a set of algorithms that, given a limited labeling budget, try to obtain the best model possible, assuming that they can iteratively query an oracle (usually human experts) to annotate chosen samples._ In some cases, labels might also be obtained in an automated manner, e.g., by computer simulations. However, for many classification problems, e.g., security alert notifications [6], humans have to manually annotate the chosen samples. For this reason, considerable domain knowledge and experience from human annotators are required; thus, we usually refer to the annotators as experts.

As humans are imperfect by nature, the acquired annotations might contain mistakes, influencing the quality of the obtained models. The frequency of those mistakes usually depends on the difficulty of the task itself and the expertise of the annotators. If those mistakes occur too often and the quality of the acquired labels is insufficient, corrective measures have to be employed. In this field, there are two dominant approaches: annotation unification algorithms [16] (also known as consensus algorithms), and faulty label identification and removal methods [8]. The first approach uses annotations from multiple human experts to assign a refined label to the sample, thus benefiting from the fact that some of the experts will assign it to the appropriate class.
These methods require samples to be labeled by multiple experts and, as the labeling budget is usually limited, enforce a trade-off between label quality and the quantity of labeled samples. The second approach tries to identify and filter out mislabeled samples, which might also result in the removal of some correctly labeled instances. This is an undesirable side-effect, especially in low-budget annotation scenarios or in the case of imbalanced datasets. It leads to an oversimplification of the constructed model and to the loss of important details about more complex data instances. Because of this risk, our work focuses on the first of the aforementioned groups, i.e., the label unification algorithms.

In this paper, we propose two algorithms based on the Expectation-Maximization (EM) technique and the intuitive idea of augmenting every expert with a machine learning model. A detailed description of our approach is available in Section 3. The proposed algorithms infer labels for the whole dataset and therefore do not have the major drawback of requiring many annotations per sample to achieve high-quality labels. We compare our methods with baseline references, i.e., majority voting and a commonly used EM-based algorithm, in our experimental setup on four publicly available datasets. Our experiments are further described in Section 5.

Since two of the datasets used in the experiments are highly imbalanced, applying the most commonly used probability cut-off of 0.5 to assign labels leads to a poor performance of the models according to metrics adjusted for imbalanced classification, such as the balanced accuracy (BAC). To tackle this challenge, a novel cut-off computation method is proposed in Section 4. The proposed method can be used even without prior knowledge about the class distribution, which is suitable for typical active learning scenarios.

## 2 Related Work

Reaching a consensus among labelers is one of the fundamental issues in active learning research [17]. In this setting, the main objective is to iteratively select the most informative unlabeled samples and request their labels from an oracle, e.g., human annotators or other labeling sources. This approach has been successfully applied to various classification tasks such as text analysis and classification [19], image classification [3, 18], and medical diagnosis [1, 22].

The most popular approach to the active selection of training instances is the so-called pool-based uncertainty sampling [13]. It assumes that there is an unlabeled pool of data available, from which an active learning algorithm can select the next batch of samples to be annotated by the oracle in the next labeling iteration. Data instances are chosen for labeling based on some estimation of the prediction uncertainty, which can be computed using various approaches [4, 21].

Active learning has also been applied to many other types of prediction tasks, such as multi-label classification, where each sample may belong to multiple classes simultaneously [11]. In this case, the annotation process becomes even more complex, as multiple labels need to be assigned to each sample. Approaches that have been proposed to address this issue include models investigating correlations between label occurrences and methods that select samples based on the uncertainty of the whole label set. These methods have been shown to be effective in reducing the labeling cost and improving the performance of multi-label classification models [15].
Active learning has also been successfully applied to regression problems [7] and many other ML tasks. A comprehensive survey of active learning applications and sample selection techniques can be found in [17].

In practice, annotations provided by human labelers quite often contain errors or inconsistencies, which can negatively impact the performance of active learning algorithms. A number of research papers have addressed this issue by proposing annotation aggregation methods that can improve the quality of labels. For example, there are methods that combine multiple annotations using majority voting [20] or EM-based algorithms [16]. Some of the recent approaches include learning-based methods that incorporate information about annotator expertise to improve the annotation quality [9]. An example of such an approach is the multi-label consensus maximization for ranking (MLCM-r) algorithm proposed by [23]. Another example is the Dawid-Skene model [2]. It assumes that annotators have different error rates for different decision classes and models the probability of a correct label for each sample, given the annotations provided by multiple annotators. Additionally, it uses the EM algorithm to estimate the true labels of the samples and the reliability of each annotator. Several studies have demonstrated the effectiveness of these approaches in reducing the impact of noisy annotations on the performance of active learning algorithms [5], as well as in scenarios where federated learning techniques were applied [24].

## 3 Annotation Unification Algorithms

In this section, we delve into the details of the proposed algorithms, denoted as the inferred consensus and simulated consensus algorithms. We consider simulated consensus to be a more stable and refined version of the inferred consensus algorithm; however, we present both to comprehensively describe the intuition behind them. Both proposed algorithms have been developed to overcome a major drawback of consensus algorithms, i.e., the degradation of performance when many samples are not labeled by multiple experts. We have developed them as extensions of the EM algorithm, as it is the most well-known consensus algorithm, tested in many production implementations. In fact, the proposed extensions are independent of EM itself and can be viewed as meta-techniques. We are convinced that they might also be used with other annotation unification algorithms and lead to improved performance in many circumstances. However, as our experiments cover only the case when they are used together with EM, we will describe them in that context in the rest of this section.

### Expectation-maximization

The application of the EM algorithm to the task of estimating labels based on multiple noisy annotations was originally proposed by Raykar et al. [16]. It was shown to be a robust solution when labels are abundant. Here, we briefly paraphrase the theory for binary classification, but it can also easily be used in multi-label scenarios, which can be modeled as multiple binary classifications, or extended to multi-class problems, as shown in the original paper. Let us denote the true label of sample \(i\) as \(y_{i}\), the label assigned to this sample by expert \(j\) as \(y_{i}^{j}\), the representation of this sample as \(x_{i}\), the number of all samples as \(N\), and the number of all experts as \(R\).
As this work focuses on sparse annotations, we will denote the indices of the samples annotated by expert \(j\) as \(S^{j}\subseteq\{1,\ldots,N\}\) and, symmetrically, the set of experts that have labeled sample \(i\) as \(E_{i}\subseteq\{1,\ldots,R\}\). This probabilistic algorithm makes the following simplifying assumptions:

* Each expert \(j\) is modeled by two latent variables measuring the expertise for the given class, namely the specificity (true negative rate) \(\beta^{j}\) and the sensitivity (true positive rate) \(\alpha^{j}\).
* The probability that an expert assigns a specific class to a sample depends only on the true hidden label of this sample and the latent variables of this expert. In other words, it does not depend on the representation of this sample given the true label, i.e.: \[P(y_{i}^{j}=1|x_{i},y_{i})=P(y_{i}^{j}=1|y_{i}).\]
* Each expert annotates samples independently from the other annotators; therefore, the assigned classes are independent given the true labels: \[P(y_{i}^{j}=1|y_{i},y_{i}^{k})=P(y_{i}^{j}=1|y_{i})\quad\text{if}\ j\neq k.\]

The EM algorithm starts by initializing the first estimate of the probabilities of the true labels with majority voting, and then iteratively repeats the E and M steps until convergence to stable parameters and a probability estimation of the true labels.

#### 3.1.1 E-step

We will denote the set of all learned parameters of the algorithm as \(\Theta\), containing \(\alpha\), \(\beta\), and the parameters of the machine learning model if one is used for the posterior probability estimation. Then, based on the independence of the annotators given a true label and Bayes' theorem, the probability of the positive class \(\mu_{i}=P(y_{i}=1|y_{i}^{1},\ldots,y_{i}^{R},\Theta,x_{i})\) can be written as:

\[\mu_{i}=\frac{P(y_{i}^{1},\ldots,y_{i}^{R}|y_{i}=1,\Theta)\cdot P(y_{i}=1|\Theta,x_{i})}{P(y_{i}^{1},\ldots,y_{i}^{R}|\Theta,x_{i})} \tag{1}\]
\[\propto P(y_{i}^{1},\ldots,y_{i}^{R}|y_{i}=1,\Theta)\cdot P(y_{i}=1|\Theta,x_{i}), \tag{2}\]

where \(P(y_{i}=1|\Theta,x_{i})\) is the posterior probability and can be modeled with a machine learning model; we will denote it as \(p_{i}\). \(P(y_{i}^{1},\ldots,y_{i}^{R}|\Theta,x_{i})\) does not depend on the label, therefore it is of no interest to us and can be handled by normalizing the scores to a proper probability distribution. If we define \(a_{i}=P(y_{i}^{1},\ldots,y_{i}^{R}|y_{i}=1,\alpha)\) and \(b_{i}=P(y_{i}^{1},\ldots,y_{i}^{R}|y_{i}=0,\beta)\), we can rewrite the equation for \(\mu_{i}\) as:

\[\mu_{i}=\frac{a_{i}p_{i}}{a_{i}p_{i}+b_{i}(1-p_{i})}, \tag{3}\]
\[a_{i}=\prod_{j\in E_{i}}[\alpha^{j}]^{y_{i}^{j}}[1-\alpha^{j}]^{(1-y_{i}^{j})}, \tag{4}\]
\[b_{i}=\prod_{j\in E_{i}}[\beta^{j}]^{(1-y_{i}^{j})}[1-\beta^{j}]^{y_{i}^{j}}. \tag{5}\]

The last set of equations can be used to efficiently compute the expected probability of the positive class.

#### 3.1.2 M-step

The maximization step is used to update the parameters \(\Theta\) of the algorithm. The equations resulting from computing the gradient of the log-likelihood of the estimated labels over the parameters \(\alpha\) and \(\beta\) are as follows:

\[\alpha^{j}=\frac{\sum_{i\in S^{j}}\mu_{i}y_{i}^{j}}{\sum_{i\in S^{j}}\mu_{i}}, \tag{6}\]
\[\beta^{j}=\frac{\sum_{i\in S^{j}}(1-\mu_{i})(1-y_{i}^{j})}{\sum_{i\in S^{j}}(1-\mu_{i})}. \tag{7}\]

An update of the parameters of a machine learning model used for the posterior probability prediction can be done using the regular gradient descent method.
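To make the procedure concrete, the E- and M-steps above can be condensed into the following NumPy sketch. This is our own illustration, not the authors' implementation; in particular, a single learned class prior \(p\) stands in for the per-sample posterior model \(p_{i}\):

```python
import numpy as np

def em_consensus(Y, n_iter=50, eps=1e-12):
    """EM label aggregation in the spirit of Raykar et al. [16].

    Y: (N, R) array with entries 1.0, 0.0, or np.nan (np.nan means that
    expert j did not annotate sample i); soft labels in [0, 1] also work.
    Returns mu = P(y_i = 1), alpha (sensitivities), beta (specificities).
    """
    N, R = Y.shape
    labeled = ~np.isnan(Y)
    # Initialization: majority voting over the available annotations.
    mu = np.nanmean(Y, axis=1)
    mu = np.where(np.isnan(mu), 0.5, mu)  # samples nobody has labeled
    alpha, beta = np.full(R, 0.8), np.full(R, 0.8)
    for _ in range(n_iter):
        # M-step, eqs. (6)-(7): update each expert's latent variables.
        for j in range(R):
            idx = labeled[:, j]
            alpha[j] = (mu[idx] * Y[idx, j]).sum() / (mu[idx].sum() + eps)
            beta[j] = ((1 - mu[idx]) * (1 - Y[idx, j])).sum() / ((1 - mu[idx]).sum() + eps)
        p = mu.mean()  # crude global stand-in for the posterior model p_i
        # E-step, eqs. (3)-(5): recompute the positive-class probabilities.
        a, b = np.ones(N), np.ones(N)
        for j in range(R):
            idx = labeled[:, j]
            yj = Y[idx, j]
            a[idx] *= alpha[j] ** yj * (1 - alpha[j]) ** (1 - yj)
            b[idx] *= beta[j] ** (1 - yj) * (1 - beta[j]) ** yj
        mu = a * p / (a * p + b * (1 - p) + eps)
    return mu, alpha, beta
```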
### Inferred consensus

As the performance of the EM algorithm degrades with a smaller number of annotations per sample, the main idea of the inferred consensus algorithm is to propagate the annotations to unlabeled samples, using the knowledge from the samples that an expert has labeled. The intuition behind this idea is expressed by the following question: "What label do we expect annotator \(j\) would have given to sample \(i\), which has not been annotated by him?" To be able to answer this question and infer the predictions, a machine learning model is trained for every expert on the annotations given by this expert. More formally, for expert \(j\) we create a model \(f^{j}\) trained on the samples \(\langle x_{i},y_{i}^{j}\rangle_{i\in S^{j}}\). Then, this model is used to infer predictions for the whole dataset, obtaining new annotations \(y_{i}^{\prime j}=f^{j}(x_{i})\) for \(i\in\{1,\ldots,N\}\) and every expert \(j\in\{1,\ldots,R\}\). As the majority of machine learning models return not only a label but also a probability distribution over the classes, we utilize the returned distribution as soft annotations, e.g., an artificial expert says that, from its perspective, there is a 10% chance that the object belongs to the positive class and a 90% chance that it belongs to the negative class. Finally, the EM algorithm can be run on the inferred annotations \(y^{\prime}\) for all of the samples, potentially leading to a better estimation of the hidden true labels, as we have a full inferred annotation set of size \(R\) for every sample. The algorithm can be presented as the following set of steps:

1. Train a machine learning model \(f^{j}\) for each expert using \(\langle x_{i},y_{i}^{j}\rangle_{i\in S^{j}}\).
2. Infer predictions \(y_{i}^{\prime j}=f^{j}(x_{i})\) for \(i\in\{1,\ldots,N\}\).
3. Run the EM algorithm using \(y^{\prime}\) instead of the original annotations.

This algorithm can be viewed as the creation of a new labeling task that was annotated by artificial experts derived from the original annotators. The advantage of this task is that it is fully labeled by each annotator, and therefore it is more suitable for the EM algorithm; the downside is that the artificially created annotators usually have worse quality than the original experts, as they are trained only on a small subset of the samples and are dependent on the machine learning model used. Moreover, since we associate real experts with models trained on the samples annotated by them, we obtain unreliable estimations of the experts' reliability, which changes during the annotation process, as the model usually gets better with the increasing number of samples annotated by the expert.
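A minimal sketch of this meta-step is given below. It is again our own illustration: a scikit-learn logistic regression is assumed as the per-expert model \(f^{j}\), each expert is assumed to have labeled samples of both classes, and `em_consensus` refers to the EM sketch above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inferred_annotations(X, Y):
    """Steps 1-2 of the inferred consensus algorithm.

    X: (N, d) sample representations; Y: (N, R) annotations in {1, 0, np.nan}.
    Returns Y_soft: (N, R) soft annotations y'_i^j = f^j(x_i) for all i and j.
    """
    N, R = Y.shape
    Y_soft = np.empty((N, R))
    for j in range(R):
        idx = ~np.isnan(Y[:, j])                       # i in S^j
        f_j = LogisticRegression().fit(X[idx], Y[idx, j])
        Y_soft[:, j] = f_j.predict_proba(X)[:, 1]      # soft label for every sample
    return Y_soft

# Step 3: run EM on the dense soft annotations, e.g.
# mu, alpha, beta = em_consensus(inferred_annotations(X, Y))
```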
Infer predictions \(y^{\prime j}_{i}=f^{j}(x_{i})\) for \(i\notin S^{j}\). 3. Create new annotations \(\hat{y}\) as the concatenation of \(y^{j}|_{j\in\{1,...,R\}}\) and \(y^{\prime}\). 4. Call the EM algorithm using \(\hat{y}\) instead of the original annotations. This algorithm also leads to performing consensus on a set of \(R\) annotations for each sample, therefore tackling the major drawback of the original EM algorithm in the case of sparse annotations. Moreover, it has several advantages over the inferred consensus algorithm from a theoretical point of view. First of all, we are using the original annotations of the experts and, as they are fully separated from the artificially created annotators, we can reliably evaluate their quality. We also believe that the quality of the experts might be better evaluated, as there is always a quorum of \(R\) annotators participating in the voting for each sample. Besides that, the algorithm is less prone to errors caused by poor quality of the created machine learning models: their quality is also separately evaluated in EM (we expect this to be a decent evaluation, as none of the artificial experts makes predictions on its own training samples), and if they achieve poor performance, their influence in the voting diminishes. Intuitively, this algorithm can also be viewed as a new labeling task in the following thought experiment. Let us imagine that we have a joint set of the original experts' annotations and annotations from another group of slightly worse artificial annotators. In the real world, there might exist a person who would return the same annotations as our artificial annotator. Therefore, those should be perfectly fine annotations from the perspective of an annotation unification algorithm, and if they are of poor quality, the algorithm should evaluate them as such and be only slightly guided by them.
## 4 Prediction for imbalanced data
The Expectation-Maximization algorithm results in an estimation of the probability distribution of class labels for each sample, but many models cannot be trained using such soft labels. Usually, we humans also expect a definitive answer whether a sample should be considered as belonging to a particular class or not. If the considered machine learning task has a balanced class distribution, a standard \(0.5\) cut-off for binary classification, or \(\frac{1}{K}\), where \(K\) is the number of classes, is a sound solution. However, if we are dealing with imbalanced classification and try to optimize a metric that attaches the same importance to the recognition of each class, like balanced accuracy, it is not a good threshold. In some active learning scenarios with noisy annotations, even an approximate class distribution might not be known a priori, e.g., in cybersecurity attack detection. In such cases, threshold tuning is infeasible, as we do not have a reliable validation set to evaluate the threshold efficiency. Therefore, a reliable method is needed that requires neither prior knowledge nor true labels. This is why we propose the following method of class distribution approximation using the available model. 1. Compute a probability distribution from the perspective of the model for all samples available during training; let us call the distribution for sample \(i\)\(\tilde{y}_{i}\) and the probability of class \(c\) for this sample \(\tilde{y}_{i,c}\). 2. Compute the mean probability for each class across all of the samples. This mean probability will serve as the threshold adjustment; let us call it \(t_{c}\) for class \(c\). Formally: \[t_{c}=\frac{1}{N}\sum_{i=1}^{N}\tilde{y}_{i,c}.\] 3.
For multi-label classification, we assign class \(c\) to sample \(i\) if \(\tilde{y}_{i,c}\geq t_{c}\). 4. For single-label classification, we choose class \(c^{\prime}\) from the set of all classes \(C\) in the following way: \[c^{\prime}=\operatorname*{argmax}_{c\in C}\left(\tilde{y}_{i,c}-t_{c}\right).\] This method allows us to determine the cut-off without any prior knowledge about the problem (a short code sketch is given in Section 5.1). It can be used for the EM-based algorithms; in that case, the considered model is the EM itself, and the predictions are the resulting probability distributions. Moreover, it can also be used to choose a threshold for machine learning models trained on top of the unified labels. In such a case, \(t_{c}\) is computed on all pool data available during training, over the predictions of the model we wish to threshold. Thanks to this fact, no additional computation is needed in production, and the adjustment is independent of the data set we are making the predictions on. When we use this procedure on an unbiased model, it leads to an unbiased estimator of the true class distribution. Moreover, for balanced datasets, it converges with the increasing number of training samples to the regular \(\frac{1}{K}\) threshold.
## 5 Experiments
### Experimental setup
To properly evaluate the proposed algorithms, we have created an experimental setup similar to a real-life active learning scenario. As data labeling by human experts solely for the purpose of experiments is too expensive, and in real-life scenarios with annotators one usually does not have access to the hidden true labels, we have prepared a randomized procedure for creating annotations based on the true labels of known public datasets. The procedure generates a set of binary annotations for a specified number of experts, which is a parameter of the method, in the following way: 1. The number of labeled samples differs for each expert. We model the probability that expert \(j\) annotates a sample, denoted as \(r^{j}\), with a Beta distribution with parameters \(\tilde{\alpha}=1\), \(\tilde{\beta}=20\); therefore, the average probability is equal to \(\frac{1}{21}\approx 0.048\). Intuitively, we can think of it as each expert labeling, on average, one per every 21 samples. 2. Whether expert \(j\) has annotated sample \(i\) is decided by drawing from a Bernoulli distribution with a success probability equal to \(r^{j}\). 3. The hidden true positive and true negative rates of each expert \(j\), denoted as \(\tilde{\alpha}^{j}\) and \(\tilde{\beta}^{j}\), are drawn from the Beta distribution with parameters \(\tilde{\alpha}=4\), \(\tilde{\beta}=1\), which has an expected value of \(0.8\). 4. The class assigned by expert \(j\) to an annotated sample \(i\) is drawn from a Bernoulli distribution with success probability \(\tilde{\alpha}^{j}\) if the true label of this sample is positive, or is equal to \(1-v\), where \(v\) is drawn from a Bernoulli distribution with success probability \(\tilde{\beta}^{j}\), when the true label of sample \(i\) is negative. We set the number of experts to 15 for our experiments because the randomization of samples annotated by each expert might lead to experts labeling only a few samples (which is consistent with a real-life scenario when somebody leaves a company after a few days of annotation). Using the above procedure, we obtained annotations assigned by diverse artificial experts.
Figure 1: Visualization of the Simulated consensus algorithm steps. Algorithm outputs are denoted in green, annotation data in yellow, and algorithm steps are shown in blue.
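As a concrete illustration, below is a minimal NumPy sketch of the above annotation-generation procedure; the function name, the seed handling, and the array layout are our own choices.

```python
import numpy as np

def generate_annotations(y_true, n_experts=15, seed=0):
    """Simulate sparse binary expert annotations (steps 1-4 above)."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    N, R = len(y_true), n_experts

    r = rng.beta(1, 20, size=R)     # step 1: per-expert labeling probability
    sens = rng.beta(4, 1, size=R)   # step 3: hidden true positive rates
    spec = rng.beta(4, 1, size=R)   # step 3: hidden true negative rates

    ann = np.full((N, R), np.nan)
    for j in range(R):
        labeled = rng.random(N) < r[j]          # step 2: which samples j sees
        correct_pos = rng.random(N) < sens[j]   # step 4, positive samples
        v = rng.random(N) < spec[j]             # step 4, negative samples
        labels = np.where(y_true == 1, correct_pos, ~v).astype(float)
        ann[labeled, j] = labels[labeled]
    return ann, sens, spec
```

Likewise, the model-posterior cut-off of Section 4 (referenced above) reduces to a few lines applied to the \((N,K)\) matrix of per-class probabilities predicted on the whole training pool; this is a sketch under the same naming assumptions.

```python
import numpy as np

def posterior_thresholds(probs):
    """Per-class cut-offs t_c: mean predicted probability over the pool."""
    return probs.mean(axis=0)

def assign_multi_label(probs, t):
    """Multi-label rule: assign class c to sample i when probs[i, c] >= t_c."""
    return probs >= t

def assign_single_label(probs, t):
    """Single-label rule: argmax_c (probs[i, c] - t_c)."""
    return np.argmax(probs - t, axis=1)
```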
Thanks to the fact that the procedure is based on public datasets, we had true labels for all of the samples and the hidden quality of each expert, allowing us to properly evaluate the tested algorithms. As the proposed annotation generation procedure assumes a binary classification task and works independently for each class, we used one-hot encoded labels of every problem, as in a multi-label setting. The evaluation for each dataset was performed five times to assess the statistical significance of the experiments, each time with a fixed seed creating a different set of expert annotations. The evaluation procedure was as follows: 1. Create expert annotations for a given random seed. 2. Use each consensus method to generate label probabilities and experts' quality estimations. 3. Generate labels using all tested cut-off techniques. 4. Train a machine learning model on the obtained labels. 5. Make predictions with the obtained model on the separate test set. Use the cut-off techniques again to assign labels to the test cases. 6. Compute evaluation metrics, both on the consensus results and the model predictions. We have included a quality assessment of the resulting machine learning models, trained on the obtained labels, as this is usually the ultimate result of an active learning system. If a sophisticated consensus method led to a better estimation of labels but did not lead to a better machine learning model, there would be no advantage in using this method in a production environment. To reduce the computational complexity, the machine learning model used for posterior probability distribution prediction inside the EM-based algorithms (which has to be retrained in every iteration) was a dummy model always predicting the class prior probability estimated on the training set, regardless of the passed sample. Nevertheless, a regular machine learning model chosen for each task was trained on top of the computed labels for the evaluation.
### Evaluation metrics
The evaluation metrics used in our experiments can be divided into three groups. The first group contains metrics computed on the probabilities returned by the annotation unification algorithms. All of these metrics are computed on the set of samples that were annotated by at least one expert in the experiment. We considered the following measures: the area under the receiver operating characteristic curve (AUC) with the macro average on the probabilities returned by the algorithms, and the balanced accuracy (BAC) on the labels generated by each of the cut-off methods. The second group contains evaluations of the quality of each expert as estimated by the compared algorithms. The metrics used in the comparison are: the mean absolute error of the true positive rate estimation (MAE), the Pearson correlation, and the Spearman rank correlation between the estimated true positive rates and the hidden true positive rates. The third group contains evaluations of the machine learning models trained on the estimated labels. A separate model is trained for each consensus method and each cut-off technique. For each model, a BAC score on the test set is reported. As BAC requires labels and the models return probability scores, the same cut-off method as was used to generate the training labels is applied.
### Datasets
We have used four datasets for the purpose of evaluation. * A dataset of handwritten digits (MNIST), one of the most widely used benchmark datasets in machine learning research.
* A dataset with measurements from wearable inertial sensors placed on firefighters during various fire and rescue-related activities, from the _AAIA'15 Data Mining Competition: Tagging Firefighter Activities at a Fire Scene_ organized on the KnowledgePit.ai platform. * A dataset describing cybersecurity network logs, with the prediction task of identifying events that should be reported as suspicious. This dataset was originally published in the competition _IEEE BigData 2019 Cup: Suspicious Network Event Recognition_ on the KnowledgePit.ai platform. * A public dataset of transactions made by European credit card holders, fully anonymized via a PCA transformation. The dataset is publicly available both in the OpenML repository and on the Kaggle competition platform. The prediction task is to detect fraudulent transactions. These datasets were chosen to diversify both the domains and the class distributions used to evaluate our methods. MNIST is a balanced dataset with ten classes, the firefighters data have five classes with a slightly imbalanced distribution, cybersec is a binary classification task with an imbalanced distribution of less than \(6\%\) positive samples, and credit-fraud is a binary and highly imbalanced dataset with less than \(1\%\) fraud examples. Moreover, all of these datasets required human annotations at some point to create the labels for the corresponding tasks. We cannot be sure whether there are errors in those labels, but such an investigation remains outside the scope of this study. For both MNIST and credit-fraud, the test sets for evaluation were created by a stratified split with \(40\%\) of all available samples, whereas for cybersec and firefighters, the splits from the corresponding data science competitions were used. Moreover, for the cybersec and firefighters datasets, the same preprocessing as described in the referenced competition papers was performed. Additionally, each dataset was min-max scaled. The model architectures with the hyper-parameters used for evaluation are shown in Table 1. For MNIST and firefighters, a logistic regression model with default parameters was used. For cybersec and credit-fraud, the XGBoost classifier was used. Since those are highly imbalanced datasets, an appropriate scaling parameter, with a value equal to the ratio of negative and positive samples, was used for training the models.
### Consensus methods and cut-off threshold
In our experiments, we have evaluated the following consensus methods: * Simulated consensus: the refined version of the proposed algorithm, generating additional annotations for each sample with machine learning models, described in Section 3.3. * Inferred consensus: the first revision of the proposed algorithm, substituting expert annotations with machine learning models, described in detail in Section 3.2. * EM: the original expectation-maximization algorithm. * Majority Voting: the regular majority voting algorithm with a slight modification to make it more comparable with the other methods. The modification is as follows: it returns a distribution of votes for individual classes instead of just indicating the class with the highest number of votes. In both inferred consensus and simulated consensus, the models representing experts had exactly the same architecture and hyperparameters as the final model used in the evaluation. The parameter values are given in Table 1. The following cut-off thresholding techniques were used: * Default: the \(0.5\) threshold used in the majority of machine learning frameworks. * GT-prior: a threshold computed using true labels from the training pool.
This threshold represents the ratio of samples having a particular class to all of the samples in the pool. * Model-posterior: the proposed thresholding technique that uses the probability distribution predicted for the whole available training data pool, as described in Section 4. Keep in mind that for each model, the prediction was made over all available samples from the pool, not only those annotated by experts. These cut-off thresholds were used to generate labels in the same way as described in Section 4. For the purpose of multi-label model training, the probability distribution was compared with the corresponding threshold to determine whether a class should be assigned to the sample. For the BAC estimation, the difference between the maximal predicted probability and the threshold value was used.
## 6 Results
### Annotations quality
A summary of the annotation quality results can be found in Table 2. The simulated consensus algorithm obtained significantly better results than all other methods on all datasets but firefighters, in both the ROC AUC and BAC metrics. On the firefighters dataset, the inferred consensus obtained a slightly better ROC AUC than the simulated consensus, which turned out to be second for this metric. Moreover, we computed the one-sided Wilcoxon signed rank test to check the statistical significance of these results. The scores obtained by the simulated consensus turned out to be significantly greater than the scores of the EM algorithm for all of the datasets, in terms of both AUC and BAC-model-posterior, with a p-value of \(0.03125\), which we consider a good result taking into account the limited expressiveness of the Wilcoxon test. These results show the robustness and superiority of the proposed annotation unification algorithm. Also noteworthy are the results of the BAC-model-posterior cut-off, which obtained comparable performance for the balanced datasets and better results than the default threshold for most of the consensus methods on the imbalanced dataset combinations. For some imbalanced cases (cybersec and credit-fraud for the inferred consensus and the EM method), it led to good quality labels even when all other cut-off strategies failed. Tested using the one-sided Wilcoxon signed rank test against the default cut-off method, it obtained p-values of \(0.026\) and \(0.001\) for the cybersec and credit-fraud datasets, respectively. This suggests that this technique, which does not require a priori knowledge about the label distribution, is the safest choice for new active learning scenarios.
### Expert's reliability estimation
The results of the experts' true positive rate estimations are shown in Table 3. As suspected, the proposed inferred consensus method leads to a distortion of the expert reliability estimation. Therefore, it obtains larger mean absolute errors than the regular EM algorithm. Interestingly, the inferred consensus still results in greater correlations for the MNIST and firefighters datasets, which might be caused by a better estimation of the actual labels. Nevertheless, the refined version of our algorithm, i.e., simulated consensus, achieves highly superior scores in all three metrics and for all of the datasets. The p-values of the one-sided Wilcoxon rank test were \(0.03125\), \(0.06250\), \(0.31250\), and \(0.03125\) for MNIST, firefighters, cybersec, and credit-fraud, respectively. The same p-values were obtained for both correlation metrics. Similar results were obtained for MAE: \(0.03125\), \(0.03125\), \(0.09375\), and \(0.03125\) for the corresponding datasets.
This leads to statistically significant differences on two datasets for the correlations and on three datasets for MAE. Noteworthy is the fact that, due to the relatively small number of experiment repetitions, the expressiveness of the statistical test was severely limited. However, it still shows the potential of our method, considering that the MAE metric on the MNIST and credit-fraud datasets was, on average, two times smaller than for the other methods.
### Quality of trained models
The results of the model-related metrics can be found in Appendix A of the supplementary materials. Our methods led to better machine learning models on the MNIST and firefighters datasets. The simulated consensus model achieved a BAC of \(0.878(\pm 0.003)\) with the model-posterior cut-off technique on the MNIST dataset, and the inferred consensus model achieved a BAC of \(0.791(\pm 0.012)\), also with the model-posterior cut-off, on the firefighters dataset. This finding is consistent with the label quality results. Surprisingly, on both imbalanced datasets, classical majority voting with the default 0.5 threshold achieved better performance than any other model, i.e., \(0.773(\pm 0.015)\) and \(0.758(\pm 0.019)\) for the cybersec and credit-fraud datasets, respectively. This result is interesting, as the other methods obtained distinctly better label quality estimations on those datasets. The 0.5 threshold on model predictions looks sound from our perspective, as those models were trained with scaled weights for each class to balance the training data; however, we do not have a good explanation for why this threshold is also good for assigning labels for the majority voting algorithm. Therefore, as there is no clear correlation between label quality and the resulting model quality for the imbalanced datasets, this remains a topic for future research.
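For reference, the one-sided Wilcoxon signed rank tests quoted in this section can be reproduced as in the sketch below; the score arrays are hypothetical placeholders standing in for the five per-seed results of two compared methods, not our actual measurements.

```python
from scipy.stats import wilcoxon

# Hypothetical per-seed scores of two compared methods over the five
# experiment repetitions (placeholder numbers, not our results).
scores_simulated = [0.91, 0.90, 0.92, 0.91, 0.90]
scores_em = [0.58, 0.60, 0.57, 0.62, 0.59]

# One-sided test: are the first method's scores significantly greater?
stat, p_value = wilcoxon(scores_simulated, scores_em, alternative="greater")
print(f"p = {p_value:.5f}")
# With n = 5 paired samples, the smallest attainable p-value is
# 1 / 2**5 = 0.03125, which is why that value recurs in our results and
# why the expressiveness of the test is limited.
```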
\begin{table} \begin{tabular}{|c|c|c|} \hline Dataset & Model & Hyperparameters \\ \hline MNIST & Logistic Regression & max\_iter=500, n\_jobs=10 \\ \hline firefighters & Logistic Regression & max\_iter=500, n\_jobs=10 \\ \hline cybersec & XGBClassifier & neg\_pos\_ratio \\ \hline credit-fraud & XGBClassifier & neg\_pos\_ratio \\ \hline \end{tabular} \end{table} Table 1: Model architectures with the hyper-parameters used for evaluation. For the XGBClassifier, neg\_pos\_ratio denotes the class-scaling parameter set to the ratio of negative and positive samples. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & AUC & BAC-default & BAC-GT-prior & BAC-model-posterior \\ \hline \multicolumn{5}{|c|}{MNIST} \\ \hline Simulated consensus & \(\mathbf{0.988}(\pm\mathbf{0.002})\) & \(\mathbf{0.911}(\pm\mathbf{0.010})\) & \(0.908(\pm 0.010)\) & \(0.907(\pm 0.011)\) \\ Inferred consensus & \(0.978(\pm 0.001)\) & \(0.870(\pm 0.004)\) & \(0.868(\pm 0.004)\) & \(0.867(\pm 0.005)\) \\ EM & \(0.882(\pm 0.018)\) & \(0.588(\pm 0.033)\) & \(0.589(\pm 0.034)\) & \(0.590(\pm 0.034)\) \\ Majority Voting & \(0.801(\pm 0.008)\) & \(0.405(\pm 0.030)\) & \(0.419(\pm 0.020)\) & \(0.459(\pm 0.024)\) \\ \hline \multicolumn{5}{|c|}{firefighters} \\ \hline Simulated consensus & \(0.979(\pm 0.009)\) & \(0.872(\pm 0.046)\) & \(0.874(\pm 0.042)\) & \(\mathbf{0.875}(\pm\mathbf{0.042})\) \\ Inferred consensus & \(\mathbf{0.985}(\pm\mathbf{0.003})\) & \(0.845(\pm 0.051)\) & \(0.840(\pm 0.047)\) & \(0.842(\pm 0.047)\) \\ EM & \(0.875(\pm 0.027)\) & \(0.647(\pm 0.053)\) & \(0.681(\pm 0.040)\) & \(0.687(\pm 0.036)\) \\ Majority Voting & \(0.798(\pm 0.016)\) & \(0.581(\pm 0.034)\) & \(0.569(\pm 0.041)\) & \(0.573(\pm 0.037)\) \\ \hline \multicolumn{5}{|c|}{cybersec} \\ \hline Simulated consensus & \(\mathbf{0.909}(\pm\mathbf{0.022})\) & \(0.635(\pm 0.054)\) & \(\mathbf{0.887}(\pm\mathbf{0.020})\) & \(0.873(\pm 0.019)\) \\ Inferred consensus & \(0.784(\pm 0.131)\) & \(0.500(\pm 0.001)\) & \(0.556(\pm 0.073)\) & \(0.729(\pm 0.022)\) \\ EM & \(0.876(\pm 0.021)\) & \(0.821(\pm 0.027)\) & \(0.515(\pm 0.029)\) & \(0.827(\pm 0.022)\) \\ Majority Voting & \(0.797(\pm 0.023)\) & \(0.805(\pm 0.025)\) & \(0.789(\pm 0.041)\) & \(0.789(\pm 0.041)\) \\ \hline \multicolumn{5}{|c|}{credit-fraud} \\ \hline Simulated consensus & \(\mathbf{0.869}(\pm\mathbf{0.045})\) & \(0.538(\pm 0.035)\) & \(0.683(\pm 0.178)\) & \(\mathbf{0.838}(\pm\mathbf{0.057})\) \\ Inferred consensus & \(0.747(\pm 0.151)\) & \(0.500(\pm 0.000)\) & \(0.628(\pm 0.159)\) & \(0.769(\pm 0.145)\) \\ EM & \(0.801(\pm 0.061)\) & \(0.616(\pm 0.059)\) & \(0.500(\pm 0.000)\) & \(0.781(\pm 0.052)\) \\ Majority Voting & \(0.807(\pm 0.045)\) & \(0.810(\pm 0.041)\) & \(0.804(\pm 0.051)\) & \(0.804(\pm 0.051)\) \\ \hline \end{tabular} \end{table} Table 2: Results of annotation quality metrics; each dataset has a separate subsection. Each row features the results of one annotation unification method for the corresponding dataset. The first column, named AUC, denotes the area under the ROC curve computed between the obtained probabilities and the true labels for annotated samples. The rest of the columns denote the balanced accuracy between the labels obtained with the thresholding method indicated in the column name and the true labels for annotated samples. Standard deviations across the experiments are shown in brackets next to each value. Bold values indicate the largest value in the AUC column and across all BAC columns for each of the datasets.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Method & MAE & Pearson & Spearman \\ \hline \multicolumn{4}{|c|}{MNIST} \\ \hline Simulated consensus & \(\mathbf{0.045}(\pm\mathbf{0.009})\) & \(\mathbf{0.902}(\pm\mathbf{0.044})\) & \(\mathbf{0.894}(\pm\mathbf{0.051})\) \\ Inferred consensus & \(0.175(\pm 0.009)\) & \(0.775(\pm 0.063)\) & \(0.763(\pm 0.065)\) \\ EM & \(0.090(\pm 0.020)\) & \(0.757(\pm 0.077)\) & \(0.671(\pm 0.114)\) \\ Majority Voting & NA & NA & NA \\ \hline \multicolumn{4}{|c|}{firefighters} \\ \hline Simulated consensus & \(\mathbf{0.083}(\pm\mathbf{0.013})\) & \(\mathbf{0.689}(\pm\mathbf{0.109})\) & \(\mathbf{0.700}(\pm\mathbf{0.096})\) \\ Inferred consensus & \(0.179(\pm 0.012)\) & \(0.608(\pm 0.088)\) & \(0.677(\pm 0.056)\) \\ EM & \(0.122(\pm 0.028)\) & \(0.567(\pm 0.149)\) & \(0.566(\pm 0.190)\) \\ Majority Voting & NA & NA & NA \\ \hline \multicolumn{4}{|c|}{cybersec} \\ \hline Simulated consensus & \(\mathbf{0.065}(\pm\mathbf{0.015})\) & \(\mathbf{0.756}(\pm\mathbf{0.129})\) & \(\mathbf{0.713}(\pm\mathbf{0.220})\) \\ Inferred consensus & \(0.275(\pm 0.073)\) & \(0.358(\pm 0.230)\) & \(0.408(\pm 0.280)\) \\ EM & \(0.101(\pm 0.032)\) & \(0.689(\pm 0.226)\) & \(0.634(\pm 0.175)\) \\ Majority Voting & NA & NA & NA \\ \hline \multicolumn{4}{|c|}{credit-fraud} \\ \hline Simulated consensus & \(\mathbf{0.126}(\pm\mathbf{0.053})\) & \(\mathbf{0.456}(\pm\mathbf{0.261})\) & \(\mathbf{0.448}(\pm\mathbf{0.199})\) \\ Inferred consensus & \(0.268(\pm 0.053)\) & \(0.164(\pm 0.178)\) & \(0.221(\pm 0.147)\) \\ EM & \(0.250(\pm 0.025)\) & \(0.211(\pm 0.162)\) & \(0.253(\pm 0.122)\) \\ Majority Voting & NA & NA & NA \\ \hline \end{tabular} \end{table} Table 3: Results of experts’ quality estimation metrics. Each dataset has a separate subsection. Each row features results for one annotation unification method for the corresponding dataset. The first column, named MAE, denotes the mean absolute error across estimations of true positive rates for experts. Pearson and Spearman indicate values of the corresponding correlation coefficients between the estimated true positive rates and the ground truths assigned during the experiment setup. Standard deviations across the experiments are shown in brackets next to each value. Bold values indicate the smallest MAE or the largest correlation for each of the datasets.
## 7 Conclusions
In this paper, we have addressed the issue of faulty data annotations in the context of active learning for classification. We proposed two novel annotation unification algorithms based on Expectation-Maximization (EM) and machine learning models, which require little to no intersection between samples annotated by different experts. Our experiments on four public datasets showed that the proposed methods outperform the state-of-the-art algorithms and simple majority voting, both in terms of the estimation of annotator reliability and the assignment of actual labels. We also proposed a novel cut-off method to tackle the challenge of imbalanced datasets, which can be used even without prior knowledge about the class distribution. This approach can be useful in many active learning scenarios where the distribution of classes is unknown or changes over time. In conclusion, our proposed methods offer an effective solution to the issue of faulty data annotations. By utilizing unlabeled parts of the sample space and incorporating machine learning models, we can improve the quality of labeled datasets and ultimately enhance the performance of supervised classification algorithms.
We hope that our work will contribute to further advancements in this field and encourage more research on consensus algorithms for data labeling. Moreover, our research opens new and, as far as we know, yet unexplored topics. Namely, one may ask why, for some datasets, label quality does not clearly correlate with the quality of the trained machine learning models. As this has a strong influence on all actively annotated machine learning tasks, it requires additional investigation in the future. Of course, this observation might be a result of the relatively small number of experiments; therefore, to properly confirm the findings of this paper, additional experiment repetitions and validation on new datasets are needed. ## Acknowledgements This research was co-funded by the Smart Growth Operational Programme 2014-2020, financed by the European Regional Development Fund, in the frame of project POIR.01.01.01-00-0213/19, operated by the National Centre for Research and Development in Poland.
2303.16060
Distribution and Kinematics of H I through Raman He II Spectroscopy of NGC 6302
The young planetary nebula NGC 6302 is known to exhibit Raman-scattered He II features at 6545 and 4851 Angstrom. These features are formed through inelastic scattering of He II$\lambda\lambda$ 1025 and 972 with hydrogen atoms in the ground state, for which the cross sections are $1.2 \times 10^{-21}$ and $1.4\times 10^{-22} {\rm\ cm^2}$, respectively. We investigate the spectrum of NGC 6302 archived in the ESO Science Portal. Our Gaussian line fitting analysis shows that the Raman-scattered He II features are broader and more redshifted than the hypothetical model Raman features that would be formed in a cold static H I medium. We adopt a simple scattering geometry consisting of a compact He II emission region surrounded by a H I medium to perform Monte Carlo simulations using the radiative transfer code ${\it STaRS}$. Our simulations show that the H I region is characterized by the H I column density $N_{\rm HI}=3\times 10^{21}{\rm\ cm^{-2}}$ with the random speed component $v_{\rm ran}=10{\rm\ km\ s^{-1}}$ expanding with a speed $v_{\rm exp}= 13{\rm\ km\ s^{-1}}$ from the He II emission region. Based on our best fit parameters, we estimate the H I mass of the neutral medium $M_{\rm HI} \simeq 1.0\times 10^{-2}\ {\rm M_\odot}$, pointing out the usefulness of Raman He II spectroscopy as a tool to trace H I components.
Seok-Jun Chang, Hee-Won Lee, Jiyu Kim, Yeon-Ho Choi
2023-03-28T15:40:33Z
http://arxiv.org/abs/2303.16060v1
# Distribution and Kinematics of H I through Raman He II Spectroscopy of NGC 6302 ###### Abstract The young planetary nebula NGC 6302 is known to exhibit Raman-scattered He II features at 6545 A and 4851 A. These features are formed through inelastic scattering of He II\(\lambda\lambda\) 1025 and 972 with hydrogen atoms in the ground state, for which the cross sections are \(1.2\times 10^{-21}\) and \(1.4\times 10^{-22}\) cm\({}^{2}\), respectively. We investigate the spectrum of NGC 6302 archived in the ESO Science Portal. Our Gaussian line fitting analysis shows that the Raman-scattered He II features are broader and more redshifted than the hypothetical model Raman features that would be formed in a cold static H I medium. We adopt a simple scattering geometry consisting of a compact He II emission region surrounded by a H I medium to perform Monte Carlo simulations using the radiative transfer code _STaRS_. Our simulations show that the H I region is characterized by the H I column density \(N_{\rm HI}=3\times 10^{21}\) cm\({}^{-2}\) with the random speed component \(v_{\rm ran}=10\) km s\({}^{-1}\) expanding with a speed \(v_{\rm exp}=13\) km s\({}^{-1}\) from the He II emission region. Based on our best fit parameters, we estimate the H I mass of the neutral medium \(M_{\rm HI}\simeq 1.0\times 10^{-2}\) M\({}_{\odot}\), pointing out the usefulness of Raman He II spectroscopy as a tool to trace H I components. Radiative transfer -- Planetary nebulae -- Scattering -- Individual NGC 6302 ## 1 Introduction NGC 6302 is a young planetary nebula exhibiting a well-known butterfly morphology. The two main lobes are divided by an equatorial torus composed of atomic, molecular, and dusty material (Matsuura et al., 2005; Kastner et al., 2022). With high helium and nitrogen abundances, NGC 6302 is classified as a Type I planetary nebula according to the classification scheme proposed by Peimbert (1978). It belongs to the highest excitation class with prominent emission lines including N V\(\lambda\lambda\)1238, 1243, C IV\(\lambda\lambda\)1548, 1551 and Ne VI at 7.7 \(\mu\)m (Feibelman, 2001; Pottasch et al., 1985). Many researchers investigated NGC 6302 for the internal kinematics and various components, including the ionized, atomic, molecular and dust components. Meaburn et al. (2008) investigated the proper motions of the outflowing knots to propose that the distance to NGC 6302 is 1.17 kpc. They also derived a kinematic age of 2200 years from their analysis of the Hubble-type expansion (e.g., Szyszka et al., 2011). Dinh-V-Trung et al. (2008) proposed that the molecular torus is expanding with a speed of \(\sim 15\) km s\({}^{-1}\) from their measurement using the Submillimeter Array. Santander-Garcia et al. (2017) conducted a kinematical analysis using ALMA data to confirm the Hubble type expansion. With recent history of significant mass loss, a young planetary nebula is expected to harbor abundant H I behind the ionization front, which is expanding with respect to the central hot source. The hyperfine structure 21 cm line is regarded as currently the most effective spectroscopic tracer of atomic hydrogen. The first successful detection of a neutral component in planetary nebulae was made by Rodriguez & Moran (1982), who conducted radio observations of NGC 6302 using the Very Large Array.
CO and H I components were also detected from radio observations for a number of planetary nebulae (e.g., Gussie & Taylor, 1995). However, severe confusion from the Galactic emission prevents one from investigating the distribution and the kinematics of the H i components in planetary nebulae. For young planetary nebulae, a unique and useful spectroscopic probe is provided by the Raman scattering process of far UV line radiation with atomic hydrogen. In young planetary nebulae, far UV He ii lines can be Raman scattered by atomic hydrogen to form broad features blueward of the hydrogen Balmer lines (Nussbaumer et al., 1989). The first report of Raman-scattered He ii features was made for the young planetary nebula NGC 7027 by Pequignot et al. (1997), who identified the broad feature at 4852 Å as Raman-scattered He ii. Groves et al. (2002) found the same feature in NGC 6302 while they investigated the extinction in the nebula. Subsequently, Raman-scattered He ii at 6545 Å was detected in the young planetary nebulae IC 5117, NGC 6790, NGC 6881, and NGC 6886 (Lee et al., 2006; Kang et al., 2009; Choi & Lee, 2020). Raman-scattered He ii features are clearly detected in the high resolution optical spectrum of NGC 6302 provided by the ESO Science Archive Facility. In this paper, we investigate the physical properties of H i in NGC 6302 using these features. The paper is organized as follows. In Section 2, we briefly explain the basic atomic physics of Raman scattering with atomic hydrogen. In Section 3, we analyze the Raman-scattered He ii features at 4851 Å as well as at 6545 Å. We present the results from our Monte Carlo simulations in Section 4. A brief summary and discussions are presented in the final section. ## 2 Atomic Physics Schmid (1989) proposed that the broad emission features at 6825 Å and 7082 Å that appear in about half of all symbiotic stars are formed through inelastic scattering processes of the far UV resonance doublet lines O VI\(\lambda\lambda\)1032 and 1038. A far UV photon blueward of Ly\(\alpha\) incident on a hydrogen atom in the ground state can be converted into a photon whose energy is lower by 10.2 eV, the energy of a Ly\(\alpha\) photon; in this case, the hydrogen atom makes a final de-excitation into the excited \(2s\) state instead of the initial ground state. Additional examples are provided by far UV He ii lines. He ii \(\lambda\)1025, arising from the transitions \(6\to 2\), is Raman scattered with atomic hydrogen to form an optical spectral feature at 6545 Å, blueward of He ii \(\lambda\)6560, which is associated with the transitions \(6\to 4\). Similarly, Raman scattering of He ii \(\lambda\)972 and \(\lambda\)949 yields spectral features at 4851 Å and 4332 Å, respectively (Nussbaumer et al., 1989; Lee, 2012). Figure 1 shows a schematic illustration of the Raman scattering process expected to operate in young planetary nebulae, when far UV He ii line photons enter the neutral region behind the ionization front. The atomic line center wavelengths for the far UV He ii emission lines and their optical Raman lines are shown in Table 1, where the cross sections \(\sigma_{1s}^{\rm Ray}\) and \(\sigma_{2s}^{\rm Ram}\) for Rayleigh and Raman scattering are also listed (e.g., Lee, 2012; Chang et al., 2015). The energy conservation requires \[\nu_{i}=\nu_{o}+\nu_{\rm Ly\alpha}, \tag{1}\] where \(\nu_{i},~{}\nu_{o}\) and \(\nu_{\rm Ly\alpha}\) are the frequencies of the incident, Raman-scattered and Ly\(\alpha\) photons, respectively.
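As a quick numerical check of Eq. (1), the following minimal sketch converts the far UV He ii wavelengths of Table 1 into the corresponding Raman wavelengths. It recovers the tabulated optical line centers up to the vacuum-to-air conversion (Table 1 quotes air wavelengths, roughly 1-2 Å shorter, for the Raman lines); the printed ratio \(\lambda_{o}/\lambda_{i}\) reappears just below as the broadening factor.

```python
# Vacuum wavelengths in Angstrom.
LAM_LYA = 1215.67                                        # H I Lyman-alpha
HE_II_UV = {"6->2": 1025.28, "8->2": 972.13, "10->2": 949.32}

for transition, lam_uv in HE_II_UV.items():
    # Energy conservation (Eq. 1) written in wavelength form:
    # 1/lam_raman = 1/lam_uv - 1/LAM_LYA.
    lam_raman = 1.0 / (1.0 / lam_uv - 1.0 / LAM_LYA)
    print(f"He II {transition}: {lam_uv:.2f} A -> {lam_raman:.1f} A (vacuum), "
          f"broadening factor {lam_raman / lam_uv:.2f}")
# Output: about 6546.6, 4852.5, and 4332.9 A; factors 6.39, 4.99, 4.56.
```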
One may immediately note that the line widths of the incident and the Raman-scattered radiation are related by \[\frac{\Delta\nu_{i}}{\nu_{i}}=\left(\frac{\nu_{o}}{\nu_{i}}\right)\frac{\Delta \nu_{o}}{\nu_{o}}, \tag{2}\] which shows that the Raman He ii features blueward of the Balmer lines are broadened by the factor \(\nu_{i}/\nu_{o}\). The line broadening effect associated with the inelasticity of Raman scattering allows one to readily identify Raman-scattered features (Schmid, 1989). ## 3 Observation ### Data We retrieved the spectra of NGC 6302 taken with the UV-Visual Echelle Spectrograph (UVES) attached to the ESO Very Large Telescope (VLT) from the ESO Science Archive1 under the ESO programme 65.I-0465 (P.I. Casassus). NGC 6302 was observed on May 24, 2000 through a 0.3 arcsec slit, resulting in a spectral resolution of \(R\sim 107,200\). A total of 4 spectra were obtained. The exposure times are 1200 seconds for the first 2 spectra and 2400 seconds for the other 2 spectra. The median signal-to-noise ratio of each exposure ranges from 7.1 to 11.5. Footnote 1: [http://archive.eso.org/scienceportal/home](http://archive.eso.org/scienceportal/home) The 1-D and wavelength-calibrated spectrum covers the spectral range from 4727 Å to 6835 Å. In Figure 2, we show the full UVES spectrum of NGC 6302. The strong lines of H\(\alpha\), H\(\beta\), [O III]\(\lambda\lambda\)4959, 5007 and [N II]\(\lambda\lambda\)6548, 6583 are all severely saturated in this spectrum. The presence of He ii emission lines at 6560 Å and 4859 Å is barely noticeable at this scale. In the inset of the upper panel of Figure 2, we show the region around H\(\beta\) to clearly display the broad feature at 4851 Å, which is identified as Raman-scattered He ii. In a similar way, the inset of the lower panel shows a part of the spectrum near H\(\alpha\), where we find a broad feature blended with [N II]\(\lambda\)6548. The intrinsic flux ratios \(F_{6583}/F_{6548}\) of [N ii]\(\lambda\lambda 6548,6583\) as well as \(F_{5007}/F_{4959}\) of [O iii]\(\lambda\lambda 4959,5007\) are fixed at 3 because the lines in each pair arise from the same excited state. Hence, if the broad weak emission feature near [N ii]\(\lambda 6548\) were due to N ii, it would imply the presence of a broad feature 3 times more conspicuous near [N ii]\(\lambda 6583\). No such broad feature is apparent near [N II]\(\lambda 6583\), which confirms that the broad feature around [N II]\(\lambda 6548\) is not associated with [N II] but is to be identified with Raman-scattered He ii. ### Gaussian Fitting In this subsection, we provide a quantitative profile analysis of the He ii emission lines and Raman-scattered He ii lines. In addition to He ii \(\lambda 6560\) and He ii \(\lambda 4859\), we also consider He ii \(\lambda 6527\), which is formed from transitions between \(n=14\) and \(n=5\). This emission line is very convenient because Raman He ii \(\lambda 6545\) is located redward of He ii \(\lambda 6527\) with comparable strength. The atomic line centers of these He ii \(\lambda 4859\), \(\lambda 6527\) and \(\lambda 6560\) are 4859.32 Å, 6527.10 Å, 6560.10 Å, respectively (e.g., Lee et al., 2006; Hyung & Feibelman, 2004).
The He ii emission lines and Raman-scattered features of NGC 6302 appear to be well-fitted using a single Gaussian function in the form given by \[G(\lambda)=\frac{F}{\sqrt{2\pi}~{}\sigma_{\lambda}}\exp\left(-\frac{(\lambda- \lambda_{c})^{2}}{2\sigma_{\lambda}^{2}}\right), \tag{3}\] where \(\lambda_{c}\) and \(\sigma_{\lambda}\) are the wavelength of line center and the line width, respectively. In Figure 3, we show the result of our single Gaussian fits. The observational data are shown with the solid gray lines, and the Gaussian fitting results are shown with the black dashed lines. In the two left panels, the results for He ii \(\lambda 6527\) and \(\lambda 6560\) and Raman He ii \(\lambda 6545\) are shown, and the right panels show the results for the counterparts blueward of H\(\beta\). The left lower panel shows the Raman He ii \(\lambda 6545\), which is severely blended with [N ii]\(\lambda 6548\). The right lower panel shows the Raman He ii \(\lambda 4851\). \begin{table} \begin{tabular}{c c c c c} \hline Transition & \(\lambda_{0}\) of He ii Emission\({}^{a}\) [Å] & \(\lambda_{0}\) of Raman He ii \({}^{b}\) [Å] & \(\sigma_{2s}^{\rm Ram}\) [cm\({}^{2}\)] & \(\sigma_{1s}^{\rm Ray}\) [cm\({}^{2}\)] \\ \hline \(n=6\to 2\) & 1025.28 & 6544.70 & \(1.2\times 10^{-21}\) & \(6.2\times 10^{-21}\) \\ \(n=8\to 2\) & 972.13 & 4851.30 & \(1.4\times 10^{-22}\) & \(8.3\times 10^{-22}\) \\ \(n=10\to 2\) & 949.32 & 4331.74 & \(2.9\times 10^{-23}\) & \(1.9\times 10^{-22}\) \\ \hline \end{tabular} \({}^{a}\) vacuum wavelength, \({}^{b}\) air wavelength \end{table} Table 1: Atomic line center wavelength \(\lambda_{0}\) of He ii emission and Raman He ii (first and second columns) and cross sections for Rayleigh scattering and Raman scattering into \(2s\) (third and fourth columns) Figure 1: A schematic illustration to show the formation of Raman-scattered He ii features blueward of H i Balmer lines in the H i region neighboring the He ii emission region. Far UV He ii lines at 1025, 972 and 949 Å are slightly more energetic than H i Ly\(\beta\), Ly\(\gamma\), and Ly\(\delta\), resulting in optical lines at 6545 Å, 4851 Å, and 4332 Å blueward of H\(\alpha\), H\(\beta\) and H\(\gamma\). Table 2 provides the fitting parameters used for the results that are shown with the black dashed lines in Figure 3. The widths \(\sigma_{\lambda}\) of the two He ii emission lines \(\lambda\)6560 and \(\lambda\)6527 near H\(\alpha\) are 0.283 A, and that of He ii \(\lambda\)4859 is 0.212 A. In contrast, the line width \(\sigma_{\lambda}=3.25\) A of Raman He ii \(\lambda\)6545 is 11.5 times wider than those of He ii \(\lambda\)6527 and \(\lambda\)6560. In the case of Raman He ii \(\lambda\)4851, the line width \(\sigma_{\lambda}=1.70\) A is wider than that of He ii \(\lambda\)4859 by a factor of 8.02. These factors exceed the line broadening factor \(\lambda_{o}/\lambda_{i}\) due to the inelasticity of Raman scattering given in Eq. (2). This is consistent with the proposal that the Raman-scattered He ii features of NGC 6302 are formed in expanding neutral regions, where the expansion provides additional line broadening (e.g. Jung & Lee, 2004; Choi et al., 2020). ### Line Ratio of He ii \(\lambda\)6560 & \(\lambda\)4859 The observed flux ratio of H\(\alpha\)/H\(\beta\) is regarded as an extinction indicator, because the Case B recombination theory predicts the flux ratio of \(\sim 2.8\)(e.g. Osterbrock, 1989; Storey & Hummer, 1995). 
However, H\(\alpha\) and H\(\beta\) in the UVES spectra are saturated near the line center, preventing one from estimating the extinction. Instead, the nearby emission lines He ii \(\lambda\)6560 and \(\lambda\)4859 can be used for the same purpose. According to Case B recombination theory for He ii, the intrinsic line ratio of He ii \(\lambda\)6560 and \(\lambda\)4859 is \(\sim 2.5\), whereas it is observed to be \(\sim 6.12\) from the UVES spectra (Storey & Hummer, 1995). By adopting the dust model of our Galactic interstellar medium provided by Draine (2003), the observed line ratio of He ii \(\lambda\)6560 and \(\lambda\)4859 is consistent with \(N_{\rm H}=1.35\times 10^{22}\,{\rm cm}^{-2}\) corresponding to \(A_{V}\sim 7.5\). ### Line Broadening Figure 2: The optical spectrum of NGC 6302 retrieved from the UVES ESO public data archive. Strong emission lines, including H\(\alpha\), H\(\beta\), [O III]\(\lambda\lambda\)4959, 5007, and [N II]\(\lambda\lambda\)6548, 6583 are heavily saturated and He ii emission lines at 4859 Å and 6560 Å are barely noticeable. Due to the heavy saturation, [N II]\(\lambda\)6548 appears comparable to [N ii]\(\lambda\)6583, even though [N ii]\(\lambda\)6548 is in fact 3 times weaker than [N ii]\(\lambda\)6583. The insets of the upper and lower panels zoom in spectral regions around H\(\beta\) and H\(\alpha\), respectively. In the insets, the two broad features at 4851 Å and 6545 Å are clearly seen, which are Raman-scattered He ii features. Raman He ii lines are broadened significantly due to inelasticity of Raman scattering, which is illustrated in Eq. (2). Therefore, a correction factor of \(\lambda_{\rm UV}/\lambda_{\rm Ram}\) is required in order to yield the velocity width in the parent UV spectral space, where \(\lambda_{\rm UV}\) and \(\lambda_{\rm Ram}\) are the wavelengths of the incident UV He ii emission and the corresponding Raman scattered feature, respectively. The velocity width \(\sigma_{v}\) of Raman-scattered He ii is calculated by \[\sigma_{v}=\frac{c\sigma_{\lambda}}{\lambda_{c}}\biggl{(}\frac{\lambda_{\rm UV }}{\lambda_{\rm Ram}}\biggr{)}=\sigma_{v}^{\rm app}\biggl{(}\frac{\lambda_{\rm UV }}{\lambda_{\rm Ram}}\biggr{)}, \tag{4}\] where \(\sigma_{v}^{\rm app}=c\sigma_{\lambda}/\lambda_{c}\) is an apparent velocity width of Raman He ii. For example, the apparent width \(\sigma_{\lambda}=3.25\) A for Raman He ii \(\lambda 6545\) is converted to \(\sigma_{v}=23.4\) km s\({}^{-1}\) instead of \(\sigma_{v}^{\rm app}=150\) km s\({}^{-1}\). \begin{table} \begin{tabular}{c c c c c c} \hline Line & He ii 6560 & Raman He ii 6545 & He ii 6527 & He ii 4859 & Raman He ii 4851 \\ \hline \(\lambda_{c}\) [Å] & 6559.00 & 6546.06 & 6525.98 & 4858.45 & 4851.45 \\ \(\sigma_{\lambda}\) [Å] & 0.283 & 3.25 & 0.283 & 0.212 & 1.70 \\ F [10\({}^{-14}\)erg s\({}^{-1}\) cm\({}^{-2}\)] & 7.08 & 1.17 & 0.292 & 1.16 & 0.121 \\ \(\sigma_{v}\) [ km s\({}^{-1}\) ] & 12.9 & 23.4\({}^{a}\) & 12.9 & 13.09 & 21.03\({}^{a}\) \\ \(\Delta V_{c}\) [ km s\({}^{-1}\) ] & - & 17.86\({}^{b1}\) & - & - & 12.88\({}^{b2}\) \\ \hline \multicolumn{6}{l}{a: corrected for Raman broadening, b1 \& b2: velocity offset from He ii \(\lambda 6560\) \& \(\lambda 4859\)} \\ \end{tabular} \end{table} Table 2: Parameters of Gaussian fitting in Figure 3 Figure 3: Gaussian line profile fits of NGC 6302 near H\(\alpha\) (left panels) and H\(\beta\) (right panels). The observed data are represented by the gray solid lines. 
The black dashed lines show the results of our single Gaussian fit adopting the parameters in Table 2. In the lower two panels, the red dashed lines show hypothetical Raman-scattered features that would be formed in a H i region stationary with respect to the He ii emission region. The observed Raman He ii features are broader and more redshifted than the hypothetical profiles. Our single Gaussian fit analysis shows that \(\sigma_{v}=23.4\) km s\({}^{-1}\) and \(21.0\) km s\({}^{-1}\) for Raman He ii \(\lambda\)6545 and \(\lambda\)4851, respectively. These values of \(\sigma_{v}\) for the Raman He ii lines are larger than those of the He ii emission lines by \(8-10\) km s\({}^{-1}\). In the lower two panels of Figure 3, we show the hypothetical profiles of Raman-scattered features, which would be formed in a static H i medium with \(\sigma_{v}=13\) km s\({}^{-1}\). In Section 4, we use a Monte Carlo approach to find that the difference in \(\sigma_{v}\) between Raman He ii and the He ii emission lines originates from the random motion of the scattering medium. ### Line Center Shift In the bottom row of Table 2, we show the velocity offset \(\Delta V_{c}\) of the Raman He ii lines relative to the nearby optical He ii emission lines. The apparent velocity line shift \(V_{\rm app}\) is given by \[V_{\rm app}=\left(\frac{\lambda_{c}-\lambda_{0}}{\lambda_{0}}\right)c \tag{5}\] where \(\lambda_{0}\) is the atomic line center wavelength. In turn, the velocity offset \(\Delta V_{c}\) of Raman He ii \(\lambda\)6545 is calculated by \[\Delta V_{c}=\frac{\lambda_{0,1025}}{\lambda_{0,6545}}V_{\rm app,6545}-V_{\rm app,6560}. \tag{6}\] The wavelengths \(\lambda_{0,1025},\lambda_{0,6545}\), and \(\lambda_{c,6545}\) are defined analogously. In the case of Raman He ii \(\lambda\)4851, the velocity offset \(\Delta V_{c}\) is obtained with \(\lambda_{0,972}\) and \(\lambda_{0,4859}\) replacing \(\lambda_{0,1025}\) and \(\lambda_{0,6560}\), respectively. From our line fit analysis, \(\Delta V_{c}=10.24\) km s\({}^{-1}\) and \(13.10\) km s\({}^{-1}\) for Raman He ii \(\lambda\)6545 and \(\lambda\)4851, respectively. In the bottom panels of Figure 3, the Raman He ii features are clearly displayed redward of the hypothetical features shown by the red dashed lines. This redward line shift of Raman He ii indicates that the H i medium is moving away from the He ii emission region (e.g. Choi et al., 2020). ### Raman Conversion Efficiency In this subsection, we present the Raman conversion efficiency (RCE), which is defined as the ratio of the number of Raman-scattered photons to that of the incident far UV photons. Explicitly, the RCE of Raman He ii \(\lambda\)6545 is given by \[{\rm RCE}_{6545} = \frac{F_{6545}/E_{6545}}{F_{1025}/E_{1025}} \tag{7}\] \[= \left(\frac{\lambda_{0,6545}}{\lambda_{0,1025}}\right)\left(\frac {F_{6545}}{F_{6560}}\right)\left(\frac{F_{6560}}{F_{1025}}\right),\] where \(E_{\lambda}=hc/\lambda\) is the energy of a photon with wavelength \(\lambda\). The RCE of Raman He ii \(\lambda\)4851 is given in an analogous way by replacing 1025 and 6560 with 972 and 4859. Firstly, the ratio of the Raman and neighboring optical He ii line fluxes, (\(F_{6545}/F_{6560}\)) or (\(F_{4851}/F_{4859}\)), is obtained using the observed values presented in Table 2. Secondly, the flux ratio of the neighboring optical He ii and directly unavailable incident far UV He ii lines, (\(F_{6560}/F_{1025}\)) or (\(F_{4859}/F_{972}\)), is deduced, assuming that Case B recombination is valid.
According to Storey and Hummer (1995), (\(F_{6560}/F_{1025}\)) \(\sim 0.19\) and also (\(F_{4859}/F_{972}\)) \(\sim 0.19\). Combining these results, we obtain \({\rm RCE}_{6545}=0.21\) and \({\rm RCE}_{4851}=0.10\). ## 4 Monte Carlo approach Raman He ii lines carry important physical information about the H i region. The line center, profile width, and strength of Raman-scattered He ii can be used to put strong constraints on the distribution and kinematics of the H i component near the He ii emission nebula (e.g., Nussbaumer et al., 1989). Due to the difference in the scattering cross sections of He ii \(\lambda\) 1025 and He ii \(\lambda\)972, the Raman conversion efficiencies also differ, so that considerably more detailed information can be obtained if both Raman-scattered features are secured with sufficient data quality. In particular, Choi et al. (2020) investigated the line formation of Raman-scattered He ii in an expanding H i region to show that the redward shift of the line center of Raman-scattered He ii features is conspicuously enhanced due to the sharp rise of the cross section toward the H i Lyman line centers (see also Jung and Lee, 2004). In this section, we adopt a Monte Carlo approach to propose a simple scattering geometry consistent with the observed spectrum considered in this work. The Monte Carlo code '_STaRS_' developed for radiative transfer in thick neutral regions by Chang and Lee (2020) is used to simulate the formation of Raman-scattered He ii and find the best fitting parameters. In this simulation, we set the numbers of the incident far UV He ii 1025 and 972 photons to \(10^{8}\). The ratio of the two UV lines is \(\sim 2.5\), in accordance with Case B recombination. Because the optical Raman-scattered He ii features are subject to dust extinction after leaving the H i region, the Raman conversion efficiency is computed using the optical Raman photons before dust extinction. However, the final line fitting of Raman-scattered He ii is carried out after correction for dust extinction. The incident far UV He ii photons are assumed to be described by a single Gaussian profile with a velocity width \(\sigma_{v}=13\) km s\({}^{-1}\), corresponding to the widths of He ii \(\lambda\)6560 and \(\lambda\)4859 illustrated in Table 2. ### Scattering Geometry The central region of NGC 6302 is highly obscured and can be probed with high angular resolution observations achievable by radio interferometry. Peretto et al. (2007) investigated the kinematics of the molecular torus to report an expansion velocity of 8 km s\({}^{-1}\). Wright et al. (2011) presented their 3D photoionization computation to propose that the inner and outer radii of the circumstellar disk are \(r_{\rm in}=1.2\times 10^{16}\) cm and \(r_{\rm out}=3.0\times 10^{17}\) cm, respectively, based on their best-fitting model result. Furthermore, the circumstellar disk is geometrically thin with a half-opening angle \(\sim 10^{\circ}\). In our Monte Carlo simulation, the He ii emission region is assumed to be an unresolved compact source surrounded by a circumstellar disk, where Raman scattering takes place. Figure 4 shows a schematic illustration of the scattering geometry considered in this work. For the sake of simplicity, the neutral region is assumed to take the form of a disk with the inner and outer radii \(R_{\rm i}=10^{16}\) cm and \(R_{\rm o}=5R_{\rm i}\), and is also characterized by the half opening angle \(\theta_{o}\) of the H i disk with respect to the point-like central He ii region.
The neutral region is of uniform H i density \(n_{\rm HI}=N_{\rm HI}/(R_{\rm o}-R_{i})\), where \(N_{\rm HI}\) is the H i column density. Figure 4: Schematic illustration of the scattering geometry considered in this work for Monte Carlo radiative transfer. The scattering geometry is composed of a point-like central He ii emission source (orange), a disk-like H i region (green), and an outer dusty and molecular region (gray). Closely related to the line shift and broadening of Raman He ii, the H i region has two kinematic components: one is a radial expansion with a velocity \(v_{\rm exp}\), and the other is a random motion with a representative speed of \(v_{\rm ran}\). The scattering geometry is specified by the inner and outer radii \(R_{i}\), \(R_{\rm o}\) and the half opening angle \(\theta_{o}\) of the H i region with respect to the central source. Figure 5: Raman conversion efficiency (RCE) of the Raman He II 4851 (right) and 6545 (center) lines, and the ratio of the RCEs of these two Raman He II lines (left), in our model of Figure 4 for various half opening angles \(\theta_{\rm o}\). The black vertical lines represent the values from the Gaussian fitting in Table 2. The H i medium is assumed to move away from the central He ii emission region in the radial direction. In addition, we denote by \(v_{\rm ran}\) the random motion component contributed by the thermal motion. A Hubble-type outflow is chosen in this work in accordance with the observations of the ionized and molecular components by Szyszka et al. (2011) and Santander-Garcia et al. (2017), respectively. Specifically, the radial velocity \(v(r)\) at a distance \(r\) from the He ii source is chosen to follow \[v(r)=v_{\rm exp}\left(\frac{r}{R_{o}}\right), \tag{8}\] where the parameter \(v_{\rm exp}\) is the expansion velocity at the outer radius \(R_{o}\). In our simulation, we collect Raman-scattered photons escaping along the line of sight, which coincides with the direction specified by the polar angle \(\theta\)= 90\({}^{\circ}\), in view of the fact that the central star of NGC 6302 is highly obscured by dust (Kastner et al., 2022). Thus, we consider dust extinction in the line of sight, which is coincident with the equatorial direction of the scattering medium. In Figure 4, a dust component is added outside the neutral region so that optical Raman He ii photons are subject to dust extinction before reaching the detector. The dust optical depth is chosen to be consistent with the reddening found in the line ratio of He ii \(\lambda\)6560 and \(\lambda\)4859 discussed in Section 3.3. The central He ii emission region is assumed to inject far UV He ii line photons with a profile described by a single Gaussian and strengths in accordance with the Case B recombination theory. ### Simulated Raman Conversion Efficiency In Figure 5, the Raman conversion efficiencies for Raman-scattered He ii at 6545 Å and 4851 Å are shown in the middle and right panels for three values of the half opening angle \(\theta_{\rm o}=10\), 20, and 30\({}^{\circ}\) and for a range of H i column densities \(10^{21}-10^{22}\,\rm cm^{-2}\). Here, we fix the expansion speed \(v_{\rm exp}=13\,\rm km\,s^{-1}\) and the random speed \(v_{\rm ran}=10\,\rm km\,s^{-1}\) according to the best fitting result (see Appendix A). The horizontal dotted lines indicate the RCEs of 0.21 and 0.10 for Raman-scattered He ii at 6545 and 4851, respectively, presented in Section 3.6. In the left panel, the ratio of the two Raman conversion efficiencies is shown.
The horizontal dotted line represents the ratio of the two RCEs \(\sim\) 2.1. In the range of H i column density \(N_{\rm HI}\) considered in Figure 5, the Raman conversion efficiency is nearly proportional to the half opening angle \(\theta_{\rm o}\). In contrast, the ratio of the two Raman conversion efficiencies is relatively insensitive to \(\theta_{\rm o}\) and decreases as \(N_{\rm HI}\) increases. In this range of \(N_{\rm HI}\), the cases with \(\theta_{\rm o}=10^{\circ}\) yield RCEs that are too small, while those with \(\theta_{\rm o}=30^{\circ}\) lead to RCEs much larger than the measured values. In view of this, we propose that the measured RCEs are consistent with the scattering geometry characterized by \(N_{\rm HI}=3\times 10^{21}\,\rm cm^{-2}\) and \(\theta_{\rm o}=20^{\circ}\). ### Best Fit Profiles In the left panel of Figure 6, we show the best line fit to the observed spectrum of Raman He ii \(\lambda\)4851, for which the parameters adopted are \(v_{\rm exp}=13\,\rm km\,s^{-1}\) and \(v_{\rm ran}=10\,\rm km\,s^{-1}\) in addition to \(N_{\rm HI}=3\times 10^{21}\,\rm cm^{-2}\) and \(\theta_{\rm o}=20^{\circ}\). The red line shows the best fit and the black dotted line is the Gaussian fit to the observed data. Using our best fit parameters, the H i number density is estimated to be \(n_{\rm HI}=7.5\times 10^{4}\,\rm cm^{-3}\), from which we estimate the total mass of the H i disk \(\simeq 1.0\times 10^{-2}\) M\({}_{\odot}\). Taylor et al. (1990) investigated the H i mass of several planetary nebulae from 21 cm radio observations. For example, they propose that the planetary nebula BD +30\({}^{\circ}\) 3639 has an H i mass of 0.028 M\({}_{\odot}\). The random speed of the H i medium \(\sim 10\,\rm km\,s^{-1}\) corresponds to the thermal speed at \(T=5000\) K, which is also consistent with the excitation temperature deduced from 21 cm radio observations. We use the same set of parameters to show the fitting result for the Raman He ii \(\lambda\)6545 by the blue line in the right panel. The fit quality of Raman He ii \(\lambda\)6545 is slightly poorer than that of Raman He ii at 4851 Å. More specifically, the simulated profile is shifted blueward of the observed data by an amount of \(\sim 6\rm\ km\ s^{-1}\). Because of the severe blending of Raman He ii \(\lambda\)6545 with [N ii]\(\lambda\)6548, we consider it a more appropriate strategy to focus on the Raman He ii \(\lambda\)4851 in the determination of the best fit parameters pertinent to NGC 6302. In Appendix A, we show the dependence of the spectral profiles on the parameters, demonstrating the best fit convincingly. ## 5 Summary and Discussion Using the archived UVES spectrum of NGC 6302, we have carried out profile analyses of the He ii emission lines and Raman-scattered He ii at 6545 Å and 4851 Å. The Raman He ii features are found to be broader and more redshifted than the hypothetical Raman features that would be formed in a static H i medium. The Monte Carlo simulation code _STaRS_ (Chang & Lee, 2020) is used to produce satisfactory line fits to the Raman-scattered He ii features. For our Monte Carlo approach, the He ii emission region is assumed to be compact near the hot central star, surrounded by the H i region with H i column density \(N_{\rm HI}=3\times 10^{21}\,\rm cm^{-2}\) and a half opening angle \(\theta_{o}=20^{\circ}\). The kinematics of the H i region is characterized by the expansion speed \(v_{\rm exp}=13\) km s\({}^{-1}\) and a random speed of \(v_{\rm ran}=10\) km s\({}^{-1}\).
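As a back-of-the-envelope check of the quoted H i mass, the sketch below combines the best fit parameters with the disk geometry of Section 4.1. The wedge volume formula, in which a band within \(\pm\theta_{o}\) of the equator covers a fraction \(\sin\theta_{o}\) of the sphere, is our reading of Figure 4 and should be taken as an assumption.

```python
import math

R_i = 1.0e16                  # inner radius [cm]
R_o = 5.0 * R_i               # outer radius [cm]
theta_o = math.radians(20.0)  # half opening angle
N_HI = 3.0e21                 # best fit H I column density [cm^-2]
m_H = 1.6735e-24              # hydrogen atom mass [g]
M_sun = 1.989e33              # solar mass [g]

n_HI = N_HI / (R_o - R_i)     # uniform density, = 7.5e4 cm^-3
# Volume of a wedge spanning +/- theta_o about the equator (assumed geometry).
volume = (4.0 / 3.0) * math.pi * (R_o**3 - R_i**3) * math.sin(theta_o)
M_HI = n_HI * m_H * volume / M_sun
print(f"n_HI = {n_HI:.1e} cm^-3, M_HI = {M_HI:.1e} M_sun")  # ~1.1e-2 M_sun
```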
The physical properties of the H i region are imprinted on the Raman He ii features via scattering. In this work, dust particles are assumed to be distributed outside the neutral medium, presuming that those inside the neutral region are completely destroyed by the strong UV radiation from the central star. However, additional complications are expected if we introduce a considerable amount of dust in the H i medium. In a sophisticated model involving a dusty neutral medium, the formation of Raman He ii \(\lambda\)4851 would be more suppressed than that of Raman He ii \(\lambda\)6545, because He ii \(\lambda\)972 has a Raman cross section almost an order of magnitude smaller than that of He ii \(\lambda\)1025. A line photon of He ii \(\lambda\)972 therefore has to traverse a longer dusty path to yield an optical Raman photon than one of He ii \(\lambda\)1025, so that dust extinction is more effective for He ii \(\lambda\)972 than for He ii \(\lambda\)1025. We defer the formation of Raman features in a dusty neutral medium to future work.

On the observational side, we expect that the Raman-scattered He ii feature at 4332 Å can be obtained from very deep spectroscopic observations. Raman He ii \(\lambda\)4332 has been reported in symbiotic stars, including V1016 Cygni and RR Telescopii, and also in young planetary nebulae including NGC 7027 (e.g., Lee, 2012; van Groningen, 1993; Pequignot et al., 1997). With the future availability of Raman He ii \(\lambda\)4332, strong constraints will be placed on the scattering geometry and the amount of dust extinction.

We are grateful to an anonymous referee for constructive comments. This research has made use of the services of the ESO Science Archive Facility. H.L. and J.K. were supported by the National Research Foundation of Korea (NRF) grants funded by the Korea government (NRF-2023R1A2C1006984).

## Appendix A Monte Carlo Profile Fitting

The expansion velocity and the random speed of the H i medium are mainly responsible for the center shift and line broadening of Raman He ii, respectively. However, the line profiles of Raman-scattered He ii are determined in a complicated way involving the multiple scattering effect and sharply varying cross sections as a function of wavelength, in addition to the scattering geometry and the kinematics. For example, when the covering factor of the neutral scattering region is significant, far-UV He ii photons that escape through Rayleigh scattering may re-enter the scattering region, introducing an additional line broadening effect to the final Raman-scattered He ii (e.g., Choi et al., 2020). However, the half opening angle of 20\({}^{\circ}\) deduced from Figure 5 is not large enough to enhance the line broadening through this re-entry effect. For this reason, we consider the random speed component of the H i medium to be the main factor responsible for the broadening of Raman-scattered He ii.

Figure 6: Best fit simulation profiles of Raman He ii 4851 (left) and 6545 (right) superimposed on the observed spectrum (gray) and the single Gaussian fits (black). The simulated profiles are shown by red and blue solid lines in the left and right panels, respectively. The model parameters are \(\theta_{o}=20^{\circ}\), \(N_{\rm HI}=3\times 10^{21}\) cm\({}^{-2}\), \(v_{\rm exp}=13\) km s\({}^{-1}\), and \(v_{\rm ran}=10\) km s\({}^{-1}\).

In Figure A1, we show line profiles obtained from our Monte Carlo simulations adopting the scattering geometry illustrated in Figure 4.
In particular, we show the dependence of the Raman line profiles on the parameters: the H i column density \(N_{\rm HI}\), the half opening angle \(\theta_{\rm o}\), the expansion velocity \(v_{\rm exp}\), and the random speed \(v_{\rm ran}\). The upper and lower panels show the results for Raman He ii \(\lambda 4851\) and Raman He ii \(\lambda 6545\), respectively. In the two left panels, we investigate the effect of \(N_{\rm HI}\), where the three cases \(N_{\rm HI}/(10^{21}\ {\rm cm}^{-2})=1,3\), and 5 are shown. The other parameters are fixed to the best fit values, i.e., \(\theta_{\rm o}=20^{\circ}\), \(v_{\rm exp}=13\,{\rm km\,s}^{-1}\), and \(v_{\rm ran}=10\,{\rm km\,s}^{-1}\). Because the best fit value \(N_{\rm HI}=3\times 10^{21}\ {\rm cm}^{-2}\) corresponds to a Raman optical depth exceeding unity for He ii \(\lambda 1025\), the corresponding Raman conversion efficiency increases only slightly with \(N_{\rm HI}\), in contrast to that for He ii \(\lambda 972\). The next two panels show the effect of \(\theta_{\rm o}\), where the three values \(\theta_{\rm o}=10^{\circ},20^{\circ}\), and \(30^{\circ}\) are considered. The Raman conversion efficiency is nearly proportional to \(\theta_{\rm o}\), which determines the covering factor of the scattering region with respect to the He ii emission region. The third pair of panels shows the dependence on \(v_{\rm exp}\), which mainly affects the location of the line center of the Raman-scattered He ii. The line centers move redward as \(v_{\rm exp}\) increases. It is particularly notable that Raman He ii \(\lambda 4851\) becomes stronger as \(v_{\rm exp}\) increases. This is due to the increase of the Raman scattering cross section as He ii photons are redshifted toward the hydrogen resonance, while the Raman optical depth is less than unity (e.g., Jung & Lee, 2004; Choi et al., 2020). No such conspicuous increase is seen for Raman He ii \(\lambda\)6545 because its Raman optical depth exceeds unity. In the two right panels, the simulated line profiles for the three values \(v_{\rm ran}=0,10\), and \(20\) km s\({}^{-1}\) are illustrated. The line profile becomes broader with increasing \(v_{\rm ran}\). Because the He ii emission region is assumed to be characterized by a random velocity \(\sigma_{v}=13\) km s\({}^{-1}\), corresponding to the width of the He ii emission lines in Table 2, the resultant profiles are significantly affected by the choice of \(v_{\rm ran}\) in the range 0-20 km s\({}^{-1}\). In view of the quality of the profile fits, we may safely conclude that the random velocity component in the neutral region is \(v_{\rm ran}\simeq 10\) km s\({}^{-1}\). We conclude that the best fit parameters are obtained by fitting Raman He ii \(\lambda\)4851 together with the blueward part of Raman He ii \(\lambda\)6545. In the third pair of panels, the simulated spectra of Raman He ii \(\lambda\)6545 for \(v_{\rm exp}=13\) and 26 km s\({}^{-1}\) fit the blueward and redward parts well, respectively. We set \(v_{\rm exp}=13\) km s\({}^{-1}\) because the strong [N ii] emission overlying Raman He ii \(\lambda\)6545 can affect the spectral profile redward of 6545 Å. Figure A2 shows the simulated line spectra for various values of the escape direction \(\mu_{z}=\cos\theta_{z}\), where \(\theta_{z}\) is the angle the emergent photon makes with the symmetry axis. The spectra for the line of sight (\(\mu_{z}=0\)) in our simulation are stronger than those for the other directions with \(\mu_{z}\geq 0.8\).
This behavior is explained by the fact that both the Raman and Rayleigh processes prefer scattering in the forward and backward directions to lateral directions, with the phase function of the two scattering processes given by \[\Phi(\mu_{s})=\frac{3}{8}(1+\mu_{s}^{2}). \tag{A1}\] Here, \(\mu_{s}=\hat{\mathbf{k}}_{i}\cdot\hat{\mathbf{k}}_{s}\) is the cosine of the angle between the unit wavevectors \(\hat{\mathbf{k}}_{i}\) and \(\hat{\mathbf{k}}_{s}\) of the incident and scattered photons, respectively (Chang & Lee, 2020).
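To make the mild forward-backward preference of Eq. (A1) concrete, the following minimal Python sketch (ours, not part of the authors' _STaRS_ code) draws scattering-angle cosines from \(\Phi(\mu_{s})\) by rejection sampling; the function name and random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mu_s(n, rng=rng):
    """Draw n values of mu_s = cos(scattering angle) from the
    Rayleigh/Raman phase function Phi(mu) = (3/8)(1 + mu^2)
    by rejection sampling (acceptance rate 2/3)."""
    out = np.empty(n)
    filled = 0
    while filled < n:
        mu = rng.uniform(-1.0, 1.0, size=n - filled)   # proposal
        u = rng.uniform(0.0, 0.75, size=n - filled)    # 0.75 = max of Phi
        acc = mu[u <= 0.375 * (1.0 + mu**2)]           # accept under Phi
        out[filled:filled + acc.size] = acc
        filled += acc.size
    return out

mu = sample_mu_s(100_000)
print("sample mean of mu_s  :", mu.mean())                  # ~0 (fore-aft symmetric)
print("fraction |mu_s| > 0.5:", np.mean(np.abs(mu) > 0.5))  # ~0.59 vs 0.5 if isotropic
```

The sampled fraction with \(|\mu_{s}|>0.5\) comes out near the analytic value of about 0.59, slightly above the 0.5 expected for isotropic scattering, reflecting the modest fore-aft anisotropy of Eq. (A1).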
2305.12188
Transparent and Traceable Food Supply Chain Management
The food supply chain faces a number of challenges, including a lack of transparency and disengagement among stakeholders. By providing a transparent and traceable digital ledger of transactions and movements for all supply chain actors, blockchain technology can resolve these problems. We propose a blockchain-based system for tracking a product's full path, from its raw components to the finished item in the store. The proposed system offers many advantages, including improved quality assessment, increased product transparency and traceability, and sophisticated fraud detection capabilities. By reinventing the way transactions are carried out and enabling stakeholders to obtain a complete record of each product's journey, the system has the potential to completely transform the food supply chain. Moreover, by minimising the inefficiencies, waste, and fraudulent activities that negatively affect the supply chain, its deployment can remove limits imposed by the current supply chain. Overall, the suggested blockchain-based system has the potential to significantly increase the efficiency, transparency, and traceability of the food supply chain.
Narayan Subramanian, Atharva Joshi, Daksh Bagga
2023-05-20T13:27:37Z
http://arxiv.org/abs/2305.12188v1
# Transparent and Traceable Food Supply Chain Management

###### Abstract

The food supply chain faces a number of challenges, including a lack of transparency and disengagement among stakeholders. By providing a transparent and traceable digital ledger of transactions and movements for all supply chain actors, blockchain technology can resolve these problems. We propose a blockchain-based system for tracking a product's full path, from its raw components to the finished item in the store. The proposed system offers many advantages, including improved quality assessment, increased product transparency and traceability, and sophisticated fraud detection capabilities. By reinventing the way transactions are carried out and enabling stakeholders to obtain a complete record of each product's journey, the system has the potential to completely transform the food supply chain. Moreover, by minimising the inefficiencies, waste, and fraudulent activities that negatively affect the supply chain, its deployment can remove limits imposed by the current supply chain. Overall, the suggested blockchain-based system has the potential to significantly increase the efficiency, transparency, and traceability of the food supply chain.

intrusion detection system, federated learning, blockchain, decentralized.

## I Introduction

The food supply chain is an intricate and multifaceted network with many different stakeholders, all of whom are essential to making sure that food products get to consumers. Despite considerable recent technical improvements, there is still a lack of transparency in the food supply chain at the manufacturing and distribution level. The sector faces many difficulties as a result of this lack of transparency, including inefficiencies, waste, and fraudulent actions that may compromise the integrity of the entire supply chain. The lack of comprehensive product traceability back to the original source is one of the biggest problems facing the present food supply chain. Keeping track of the origins, steps taken, and people involved in the handling of food goods as they are transferred from one stakeholder to another can be difficult. As a result, it can be challenging to guarantee product quality and safety, because items may be the target of adulteration, mislabeling, or other fraudulent actions. We therefore suggest a blockchain-based system that offers a transparent and verifiable supply chain network as a solution to these problems. Blockchain technology allows all stakeholders, including customers, to access and examine a transparent and unchangeable digital ledger of transactions and movements. With the help of the suggested system, stakeholders will be able to follow a specific product all the way from the farm where its raw materials, such as tomatoes, are grown to the retail store where it is processed and put up for sale. The suggested blockchain-based solution will transform the food supply chain by providing a number of advantages. First, the technology will enhance how products are evaluated for quality. The harvest includes several quality reports, which will be kept in a blockchain network. Processors might utilise these data as a criterion to judge the calibre of raw materials. Second, the system will improve the items' transparency and traceability. The processor adds the relevant reports (along with a timestamp) to a blockchain network after receiving the raw materials.
This will make it possible for all parties involved to examine all records, including quality, processor, and retailer reports, from the store back to the farmer before buying a product. The suggested method will also improve fraud prevention capabilities. As the system is completely transparent and each step includes a timestamp, any fraud or forgery (even hoarding) can be tracked. Stakeholders may confirm product authenticity and guarantee that goods are not tampered with or compromised in any manner by utilizing blockchain technology.

The remainder of the paper is organized as follows. Section II lists abbreviations and acronyms, Section III describes the existing architecture and related work, and Section IV presents the proposed methodology. Section V lists the hardware and software requirements, and Sections VI, VII, and VIII present the conclusion, future work, and acknowledgements, respectively.

## II Abbreviations and Acronyms

1. IoT - Internet of Things
2. MLAV - Multi-Layer Aggregate Verification
3. SCOR - Supply Chain Operations Reference
4. AHP - Analytic Hierarchy Process
5. FCA - Fuzzy Comprehensive Analysis

## III Existing Architecture

### _Related Works_

The utilization of blockchain technology can address the challenges faced by agricultural producers, particularly in relation to supply chain management. Hegde et al. [1] emphasize the need for a reliable database to transfer knowledge in the agricultural industry. The use of blockchain technology can mitigate the spread of misinformation by providing a trustworthy and incorruptible data ledger. A model incorporating blockchain technology into the agricultural supply chain as a transparent and dependable transaction mechanism is proposed. Smart contracts can be used to ensure that all parties agree and deliver their parts, without marginalizing any one tier. The use of blockchain technology in the Indian agricultural supply chain can lead to increased efficiency, decreased waste, and an overall improvement in the industry.

Tiaobin et al. [2] propose three specific applications of the IoT in fresh agricultural product supply chain management: monitoring of fresh agricultural products, strict quality control to ensure food safety, and the creation of a management information system based on the IoT to increase supply chain integration. The EPC (Electronic Product Code) is used as the foundation of IoT operations to provide unique identities for physical objects, which can improve monitoring levels for commodity production, distribution, warehousing, and sales. The IoT has the potential to completely change supply chain processes and management methods, providing new opportunities for the development of supply chain management in enterprises. Efficient fresh agricultural product supply chain management is key to improving the competitiveness of fresh produce enterprises.

Shen et al. [3] examine the importance of third-party certification agencies in supplier selection for supply chain management, particularly in situations with incomplete market information. Using the signalling game method, a mathematical model is established to analyse the dynamics of the market and discuss the equilibrium. The study finds that in a separating equilibrium, the signal conveyed by the supplier represents their true type, which allows for the efficient selection of certified suppliers by the buyer, while speculative suppliers are discouraged from participating due to prohibitive costs.
The most efficient equilibrium is the separating equilibrium, in which the supplier is compelled to "tell the truth" and the market's performance is maintained. The study highlights the significance of third-party certification agencies in ensuring the credibility of the supplier selection process.

Sihuan et al. [4] focus on the vulnerability of the agricultural product supply chain in the Eastern Area of Hunan Province and propose a risk management approach based on the AHP-FCA method. The methodology consists of two parts: the AHP and the FCA. The AHP-FCA model provides a more reliable and efficient approach for identifying and assessing agricultural product supply chain risks. The conclusion suggests that simple risk assessment is not enough and that a scientific and practical risk tracking system is necessary for effective agricultural product supply chain management.

Wang et al. [5] present decision models to analyze the impact of supply chain coordination on the order quantity and ordering cycle for deteriorating goods with stock-dependent demand rates. Two scenarios, decentralized and centralized supply chains, are considered. In the decentralized supply chain, each entity maximizes its own profit function, while in the centralized supply chain, the order quantity and replenishment cycle are determined to maximize the overall profit incurred by both the retailer and the manufacturer. The efficacy of the proposed models is demonstrated through a numerical study, and a sensitivity analysis is conducted to investigate the impact of the various model parameters on the supply chain profit increase percentages generated by supply chain coordination. The study shows that a centralized policy is always more efficient than a decentralized policy in terms of supply chain profit.

Yuniaristanto et al. [6] aim to address the issues limiting the competitiveness of Vietnamese coffee by proposing a standard coffee supply chain model using the SCOR model. The model is used to investigate external and internal issues that reduce the efficiency of the coffee supply chain in Kontum province, Vietnam, and to measure performance in the coffee supply chain using the SCOR model. The case study methodology was used to extend the SCOR model and validate the suitability of the developed models. The study found that the overall coffee supply chain performance is 68.28, which falls in the average category, and that most processes have a low performance value. The proposed model can assist supply chain members, particularly coffee companies in other countries with coffee supply chains similar to Vietnam's, in mapping and evaluating the supply chain's success.

Harding et al. [7] present an MLAV solution for IoT blockchain devices to improve supply chain management in Agriculture 4.0. The existing blockchain solutions are inadequate, as they only serve large-scale production suppliers and do not facilitate smallholders' participation in the agricultural blockchain. The proposed solution employs a multi-layer architecture to allow smallholders' participation and reduce costs. The methodology involves the use of periodic firmware updates and an aggregate verification method to efficiently verify many signatures at once. The proposed solution significantly reduces network traffic from IoT devices on the blockchain network and shifts computing overhead to aggregator nodes. This framework reduces the workload of all participants and speeds up the agricultural industry's financial processes.
Arora et al. [8] propose a new blockchain implementation for tracking the carbon footprint of food production and transportation stages to mitigate the effects of increased food demand on the environment and climate. The proposed system uses cluster-based record keeping to track the carbon footprint of food processing facilities and transportation parties while protecting their privacy. The system uses a Raft-like consensus algorithm to arbitrate decisions on leader election within a cluster, node addition, and block updates. The carbon footprint chain is divided into six clusters, each representing a stage of the food life cycle, and the blockchain nodes are the facilities within each life cycle stage. The proposed system enables lightweight distributed record keeping for tracking the carbon footprint of food transportation, which occurs when food is transported from one stage to the next in the food life cycle.

Malik et al. [9] highlight the importance of agri-food supply chain traceability and propose a generic framework for a traceability model that includes blockchain as a key component. The framework includes building blocks from the data, storage, application, and blockchain layers, and the paper discusses the challenges and potential benefits of using blockchain to improve supply chain resilience. The methodology takes a holistic design approach and shows which physical entities in the supply chain the technical components must be linked to. The paper emphasizes the need for blockchain-assisted agri-food supply chain traceability systems to guarantee data authenticity and immutability and to address challenges such as compromised food safety, data modification fraud, counterfeiting, and food waste.

### _The Existing System_

In India, inefficiency and losses are observed in the supply chain, caused by its fragmented structure. India's agricultural productivity has increased by 40% to 500% in the last 40 years. Food availability, however, still remains a major concern due to loopholes in the supply chain. Quality control is a significant challenge due to the massive size of the supply chain. The absence of technology has a significant impact on the supply chain lead time in India. The lack of organised logistics impedes the transportation of produce from farms to consumers. In India, some compliance is lost in the traditional route of buying from traders and wholesalers. People frequently misunderstand the relationship between sustainability and traceability. Traceability refers to the ability to track materials from the beginning of the supply chain to the customer who purchases a product. Traceability provides visibility across the value chain on inputs and processes, as well as the source information for their origins and sustainability certifications. Traceability using a blockchain-based system is still in a nascent stage and has not yet been addressed properly.

## IV Proposed Methodology

The proposed implementation process for blockchain technology in agricultural supply chain management entails a number of steps. First, the farmer and the buyer will agree on a smart contract defining the specifics of the transaction. After that, the commodities will be transported from one place to another with an RFID tag attached to the produce consignment. As a product reaches a warehouse, the RFID tag will be read, updating the position and allowing for real-time tracking of the product as it moves through the supply chain.
The blockchain will be used to securely store the location information as well as other quality indicators like temperature and humidity. The RFID tag will continue to update the location and quality data on the blockchain as the product moves through the supply chain. The smart contract will be satisfied and the transaction will be finished once the item arrives at its destination. The proposed solution will revolutionize the entire supply chain by transforming the current one through the following implementations:

* A lot's quality reports (part of the harvest) are stored in a blockchain network. These reports can be used by processors as a metric to assess the quality of raw materials.
* After receiving raw materials, the processor adds the respective reports (along with a timestamp) to a blockchain network.
* Before purchasing a product, the customer can review all the reports from retailer to farmer (quality reports -> processor report -> retailer report).
* The system is completely open. Because the timestamp is recorded at each step, any forgery or other fraudulent activity (hoarding) can be traced.

Overall, the suggested methodology will offer a safe and open system for following agricultural products along the supply chain, making sure they are of excellent quality and get to where they are supposed to. Farmers and buyers may increase their trust in the supply chain and make sure that the products are handled appropriately throughout the entire process by leveraging blockchain technology and smart contracts (a minimal illustration of the record-keeping principle is sketched at the end of this section).

### _A. Novelty in Concept_

The use of blockchain technology in agriculture supply chain management is a novel approach that brings several unique benefits to the industry. Here are some of the key ways in which blockchain technology is novel in the context of agriculture supply chain management:

1. Trust and transparency: Blockchain technology provides a highly secure and transparent system for tracking the movement of agricultural products from the farm to the consumer.
2. Smart contracts: The use of smart contracts in agriculture supply chain management is a novel approach; by automating the execution of contracts, smart contracts can reduce the need for intermediaries, increase the speed of transactions, and improve the overall efficiency of the supply chain.
3. Provenance tracking: Helps to ensure that products are authentic and have been produced in compliance with relevant standards and regulations. It can help to reduce the risk of fraud and increase the trustworthiness of the supply chain.
4. Supply chain optimization: Reduces inefficiencies and minimizes waste. By providing real-time data on the movement and quality of products, it can help to optimize logistics and reduce the need for manual interventions.

### _B. Feasibility_

Making this idea a feasible solution requires immense government support and a heavy capital investment to set up the entire ecosystem. From a technical viewpoint, the project is feasible. One obstacle we may face is that the blockchain might become overloaded with data, because data are fed into the blockchain at every point as the package travels. From an economic viewpoint, the system will be expensive to set up, but assuming that the government invests in and supports this project, the idea can be implemented. Another problem we may encounter is that it is hard to convince people at first, because blockchain is very new.
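To make the tamper-evidence claim above concrete, here is a minimal, self-contained Python sketch of a hash-chained ledger of RFID scan events. It illustrates the record-keeping principle only and is not the Ethereum/MetaMask/Truffle stack listed in Section V; the actor names and tag ID are made up.

```python
import hashlib, json, time

# Each RFID scan (farm, processor, warehouse, retailer) appends a timestamped
# block whose hash covers the previous block, so any later tampering with a
# stored report breaks the chain and is detectable by every stakeholder.

def make_block(prev_hash, payload):
    block = {"time": time.time(), "prev": prev_hash, "data": payload}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash; returns False if any record was altered."""
    for i, b in enumerate(chain):
        body = {k: b[k] for k in ("time", "prev", "data")}
        if b["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"actor": "farm", "tag": "RFID-001",
                               "quality": "grade A"})]
for actor in ("processor", "warehouse", "retailer"):
    chain.append(make_block(chain[-1]["hash"],
                            {"actor": actor, "tag": "RFID-001"}))

print(verify(chain))                       # True: intact provenance trail
chain[1]["data"]["quality"] = "grade C"    # attempted forgery...
print(verify(chain))                       # False: tampering is detected
```

Because each block's hash covers the previous block's hash, altering any stored report invalidates every later block, which is what lets any stakeholder audit the full farm-to-retailer trail.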
India has come a long way in the last decade in terms of food traceability, thanks to private and public sector initiatives. Food traceability, once implemented across supply chains, will have the following effects:

* Meet consumer demand for transparency in food production.
* Improve the ability to recognise, respond to, and even prevent food safety issues.
* Help to optimise the supply chain and reduce food waste.
* Validate sourcing claims to achieve sustainability goals.

## V Requirements and Outputs

* Hardware: RFID tag, RFID scanner, Arduino, servo motor, jumper cables, and LEDs
* Software: Ethereum, MetaMask, Truffle, NodeJS, Infura

## VI Conclusion and Inferences

The food supply chain can be revolutionized by a blockchain-based system through increased efficiency, transparency, and traceability, which will have a favorable effect on public health, the environment, and the economy. Each stage of the supply chain may be tracked by the system, allowing early detection of potential quality problems, preventing foodborne diseases, decreasing food waste, and informing consumers about the country of origin of their food. Additionally, the use of a blockchain-based system can aid in the prevention of fraudulent activities like forging, mislabeling, and tampering by spotting any discrepancies in the supply chain and enabling quick remedial action. This project has a bright future ahead of it, with a wide range of opportunities for growth and collaboration with other new technologies, governmental organizations, and international marketplaces that will result in a more ethical and sustainable global supply chain. Ultimately, a blockchain-based system for the food supply chain could be advantageous to all parties and help it become more efficient, transparent, and sustainable.

Fig 2: Circuit connections
Fig 3: Red LED blinking when the wrong RFID tag is scanned
Fig 4: Green LED blinking when the right RFID tag is scanned

## VII Future works

The future scope of a blockchain-based system for the food supply chain is vast and promising. Here are some potential areas where this project can be expanded and developed further:

* Integration with other technologies
* Expansion to other industries
* Collaboration with government agencies
* Implementation of smart contracts

## VIII Acknowledgement

We extend our sincere appreciation to our project mentor and professor, Dr. Jothi R., Professor at the School of Computer Science Engineering at the Vellore Institute of Technology in Chennai. We are deeply grateful for his unwavering support and insightful guidance during the course of our project. We would also like to express our gratitude to the Head of Department, the Dean of Academics, and the University Dean for their encouragement and for imparting their wisdom to us. We are thankful for the opportunity to take this course and work on this project. Lastly, we would like to thank our loved ones for their constant support and for being there for us during the project. Their unwavering encouragement and support meant a great deal to us.
2302.01635
Rate-limiting recovery processes in neurotransmission under sustained stimulation
At chemical synapses, an arriving electric signal induces the fusion of vesicles with the presynaptic membrane, thereby releasing neurotransmitters into the synaptic cleft. After a fusion event, both the release site and the vesicle undergo a recovery process before becoming available for reuse again. Of central interest is the question of which of the two restoration steps acts as the limiting factor during neurotransmission under high-frequency sustained stimulation. In order to investigate this question, we introduce a novel non-linear reaction network which involves explicit recovery steps for both the vesicles and the release sites, and includes the induced time-dependent output current. The associated reaction dynamics are formulated by means of ordinary differential equations (ODEs), as well as via the associated stochastic jump process. While the stochastic jump model describes a single release site, the average over many release sites is close to the ODE solution and shares its periodic structure. The reason for this can be traced back to the insight that the recovery dynamics of vesicles and release sites are statistically almost independent. A sensitivity analysis on the recovery rates based on the ODE formulation reveals that neither the vesicle nor the release site recovery step can be identified as the essential rate-limiting step, but rather that the rate-limiting feature changes over the course of stimulation. Under sustained stimulation, the dynamics given by the ODEs exhibit transient changes leading from an initial depression of the postsynaptic response to an asymptotic periodic orbit, while the individual trajectories of the stochastic jump model lack the oscillatory behavior and asymptotic periodicity of the ODE solution.
Ariane Ernst, Nathalie Unger, Christof Schütte, Alexander Walter, Stefanie Winkelmann
2023-02-03T10:05:52Z
http://arxiv.org/abs/2302.01635v1
# Rate-limiting recovery processes in neurotransmission under sustained stimulation

###### Abstract

At chemical synapses, an arriving electric signal induces the fusion of vesicles with the presynaptic membrane, thereby releasing neurotransmitters into the synaptic cleft. After a fusion event, both the release site and the vesicle undergo a recovery process before becoming available for reuse again. Of central interest is the question of which of the two restoration steps acts as the limiting factor during neurotransmission under high-frequency sustained stimulation. In order to investigate this question, we introduce a novel non-linear reaction network which involves explicit recovery steps for both the vesicles and the release sites, and includes the induced time-dependent output current. The associated reaction dynamics are formulated by means of ordinary differential equations (ODEs), as well as via the associated stochastic jump process. While the stochastic jump model describes a single release site, the average over many release sites is close to the ODE solution and shares its periodic structure. The reason for this can be traced back to the insight that the recovery dynamics of vesicles and release sites are statistically almost independent. A sensitivity analysis on the recovery rates based on the ODE formulation reveals that neither the vesicle nor the release site recovery step can be identified as the essential rate-limiting step, but rather that the rate-limiting feature changes over the course of stimulation. Under sustained stimulation, the dynamics given by the ODEs exhibit transient changes leading from an initial depression of the postsynaptic response to an asymptotic periodic orbit, while the individual trajectories of the stochastic jump model lack the oscillatory behavior and asymptotic periodicity of the ODE solution.

**Key words:** nonlinear reaction networks, neurotransmission models, vesicle fusion dynamics, sustained stimulation

## 1 Introduction

Communication in the nervous system relies on chemical transmission across synapses. For this, neurotransmitters are released from presynaptic neurons by the fusion of transmitter-containing synaptic vesicles with the plasma membrane. The liberated transmitter is detected by postsynaptic receptors, which induces a response. At the presynapse, evoked transmitter release is limited to so-called release sites, at which synaptic vesicles attach to the plasma membrane (a process referred to as _vesicle docking_) and functionally mature to become responsive to presynaptic stimulation (a process referred to as _vesicle priming_) [1, 2, 3, 4]. Transmitter release is typically induced by presynaptic action potentials, i.e., brief de- and re-polarisations of the cellular membrane potential that lead to the opening of voltage-gated Ca\({}^{2+}\) ion channels [5, 6, 7]. The resulting elevation of the presynaptic Ca\({}^{2+}\) concentration following Ca\({}^{2+}\) influx through these channels triggers synaptic vesicle fusion by activating the vesicular Ca\({}^{2+}\)-sensing protein Synaptotagmin [5, 8, 9]. During high-frequency sustained stimulation, most synapses exhibit a depression of the initial postsynaptic response to a plateau [10, 11, 12, 13]. Continued activity puts a great demand on the cell to replenish the active zone in time with synaptic vesicles as well as release site proteins. Both of these are finite resources that may be expended during continued exocytosis and therefore need to be replenished for sustained activity.
The observed depression of the postsynaptic current during prolonged stimulation may thus be explained by the refractory recycling of the release sites and/or the depletion of available, fusion-competent synaptic vesicles and by the time it takes to replenish those. The membrane of synaptic vesicles that underwent fusion is taken up by endocytosis, and further processing and neurotransmitter re-uptake is required to regenerate a new synaptic vesicle [14]. Initially, it was thought that endocytosis itself was slow (on the timescale of tens of seconds [15]), but more recently it became clear that endocytosis can occur much faster ("ultrafast" millisecond timescale) at presynaptic membranes [16, 17]. However, the full process of synaptic vesicle reformation is thought to take longer [18, 19, 20, 21], which is why vesicle replenishment is often assumed to be the limiting step during sustained stimulation [1, 22, 23, 24, 25, 26, 27]. On the other hand, there is usually a large supply of reserve vesicles in a synaptic bouton and rapid replenishment from a large reserve pool could counteract synaptic depression (or even cause facilitation), and recent experimental data indicate that vesicle replenishment may commence much faster than previously thought, within milliseconds [1, 28, 29, 30, 31]. Apart from vesicle resupply, the release sites themselves may set constraints on further presynaptic activity, for instance, if they need to undergo some form of clearance and/or recycling before taking up another vesicle. Experiments in Drosophila, where the endocytosis machinery was acutely blocked, demonstrated that impairing endocytosis affected repetitive synaptic activity on a timescale of milliseconds, indicating that fast site clearance by endocytosis could be a major factor to maintain synaptic activity [32]. Accordingly, release site recycling was estimated to happen on very short timescales, within tenths of a second [29]. It is currently not known which of the two reactions - vesicle replenishment or release site resupply - is limiting sustained synaptic activity (as pointed out earlier [29]). Experimentally this is difficult to distinguish as most read-outs with sufficient temporal resolution (e.g. electrophysiology, live imaging) quantify the downstream neurotransmitter release and cannot directly report on upstream processes. More recently, rapid, high pressure freeze fixation shortly after synaptic stimulation provided insight into the morphological changes and provided a first account of the kinetics of vesicle reformation at neurotransmitter release sites [16]. Yet, even such approaches cannot resolve whether this reformation is limited by the vesicular association to the sites or the availability of the sites to receive a vesicle. Thus, to date it is not known to which degree either (or both) of the reactions limit continued synaptic output, or whether this can even be distinguished. An insight into this would be valuable, for example to understand which effects to expect if pathogenic or environmental factors selectively affect them. In this paper, we set out to investigate to which extent either vesicle replenishment or release site recycling limit neural activity during sustained activation. Based on the unpriming model investigated in prior work [33, 34], we introduce a novel non-linear model that includes the combined recycling dynamics of vesicles and release sites. 
Given the underlying reaction network, we first describe the dynamics by a set of ordinary differential equations (ODEs) and include the postsynaptic output by convolving the fusion events with a characteristic postsynaptic response evoked by a single vesicle [33]. The parameter values have been estimated based on the literature, with the aim that the ODE solution describes the average postsynaptic response signal at the Drosophila neuromuscular junction. Typical solutions of the ODE model under sustained stimulation exhibit transient dynamics leading from an initial depression of the postsynaptic response to an asymptotic periodic orbit. We demonstrate that the asymptotic periodic solution oscillates around a unique steady state given by the running average of the parameters under sustained stimulation. The ODE model with these parameters is then used to investigate the impact of the two recovery rates (vesicles vs. release sites) on the dynamics. As a measure for the influence of the two recovery steps, we choose the sensitivity of the output current with respect to the recycling rates. We show that the identity of the rate-limiting process depends on the point in time during stimulation: with the investigated parameter values, the neural output during 100 Hz stimulation is initially more sensitive to the release site replenishment. Later (once vesicles have accumulated in the recycling state), this shifts to a high sensitivity to the vesicle replenishment. We observe that this behaviour is conserved over a large range of parameters.

Next, we extend our analysis to the stochastic jump process model given by the associated reaction network. We observe that the characteristic transient behavior and asymptotic periodicity of the ODE dynamics under sustained stimulation is not visible for an individual stochastic trajectory. This is no surprise, since the stochastic jump process describes a single release site and its discrete stochastic response to stimulation. However, the transient dynamics and asymptotic periodicity return when considering the junction current averaged over many release sites. This averaged current converges to the first-order moment of the stochastic output current, which is demonstrated to be in very close agreement with the ODE solution. We trace this similarity back to the statistical independence of the recovery processes and a resulting small correlation between release site and vesicle supply. The agreement is independent of the model parameters and therefore supports the validity and applicability of the ODE-based sensitivity analysis.

Our paper is organized as follows. In Sec. 2 we introduce the recovery model as a reaction network including explicit recovery steps for vesicles and release sites. Next, we numerically analyze the system response to sustained stimulation, including a sensitivity analysis of the two recovery processes. In Sec. 3 we extend our analysis to stochastic dynamics. The total junction current induced by several release sites is simulated and compared with the ODE solution. Finally, the system's first- and second-order moments and its correlation function are investigated.

## 2 Recovery dynamics of vesicles and release sites

In this section, we introduce the recovery model for the interaction dynamics of vesicles and release sites. The dynamics are formulated in terms of an ODE (more precisely, the reaction rate equation), which is solved numerically in order to study the temporal evolution of the system's response to sustained stimulation.
By a sensitivity analysis, we investigate the impact of the recovery rates on the output current.

### 2.1 ODE model for many release sites

Based on the unpriming model introduced by Kobbersmed et al. [33] (see Sec. A.2 for a short summary), we introduce the following recovery model for the combined recycling dynamics of a large number of vesicles and release sites; see Fig. 1 for an illustration of the underlying reaction network. Note that experimentally measured currents are also the result of the combined activity of numerous release sites. Later, in Sec. 3 below, we will consider individual release sites and discuss the differences and similarities between the two cases.

In the model, each release site can be in three different states: it can be freely available (state \(P\)), or a vesicle can be attached to it (both together forming the complex \(R\)), or the release site can be in a recovery state \(W_{P}\). Similarly, there are three states for each vesicle: it can be freely available (state \(V\)), or attached to a release site (joint state \(R\)), or in recovery (state \(W_{V}\)). A freely available vesicle can dock with a certain rate \(k_{R}>0\) to a freely available release site, which is expressed by the second-order reaction \[V+P\xrightarrow{k_{R}}R. \tag{1}\] This reaction is reversible via an unpriming reaction of the form \[R\xrightarrow{k_{U}(t)}V+P, \tag{2}\] meaning that the vesicle detaches from the release site again. This happens at a time-dependent rate \(k_{U}(t)\geq 0\). The docked vesicle may fuse with the membrane, thereby transferring both itself and the release site into the recovery state, \[R\xrightarrow{k_{F}(t)}W_{V}+W_{P}, \tag{3}\] for a time-dependent fusion rate \(k_{F}(t)\geq 0\). Independently of each other, the vesicle and the release site recover according to the reactions \[W_{V}\xrightarrow{g_{V}}V,\quad W_{P}\xrightarrow{g_{P}}P, \tag{4}\] for time-independent rates \(g_{V},g_{P}>0\), respectively.

The cumulative state of the system at time \(t\geq 0\) is given by \[\mathbf{X}(t)=\left(X_{i}(t)\right)_{i=1,\ldots,5}=\left(V(t),W_{V}(t),W_{P}(t),R(t),P(t)\right)^{\top} \tag{5}\] where \(X_{i}(t)\) stands for the (average) number of vesicles or release sites in the respective state. Additionally, there is the counting process \((F(t))_{t\geq 0}\), with \(F(t)\) referring to the number of fusion events (3) up to time \(t\). The dynamics are described by the reaction rate equation \[\dot{\mathbf{X}}(t)=h(\mathbf{X}(t),t) \tag{6}\] with \[h(\mathbf{X}(t),t):=\begin{pmatrix}-k_{R}V(t)P(t)+g_{V}W_{V}(t)+k_{U}(t)R(t)\\ k_{F}(t)R(t)-g_{V}W_{V}(t)\\ k_{F}(t)R(t)-g_{P}W_{P}(t)\\ k_{R}V(t)P(t)-k_{F}(t)R(t)-k_{U}(t)R(t)\\ -k_{R}V(t)P(t)+g_{P}W_{P}(t)+k_{U}(t)R(t)\end{pmatrix}. \tag{7}\] It is straightforward to see that both the total number of release sites and the total number of vesicles are conserved in the course of time, i.e., given that the initial states fulfill \(R(0)+P(0)+W_{P}(0)=n_{\text{sites}}\) and \(R(0)+V(0)+W_{V}(0)=n_{\text{ves}}\) for \(n_{\text{sites}},n_{\text{ves}}\in\mathbb{N}_{+}\), we have the two conservation laws \[R(t)+P(t)+W_{P}(t)=n_{\text{sites}},\quad R(t)+V(t)+W_{V}(t)=n_{\text{ves}} \tag{8}\] for all \(t\geq 0\), making the system effectively three-dimensional. The number \(F(t)\) of fusion events is set to fulfill \(F(0)=0\) and \[\frac{d}{dt}F(t)=k_{F}(t)R(t). \tag{9}\]
The postsynaptic _response current_ (output signal) induced by the dynamics of the process \((\mathbf{X}(t))_{t\geq 0}\) is given by the convolution of the derivative \(\dot{F}(t)=\frac{d}{dt}F(t)\) of \(F(t)\) with an impulse response function \(g\) [33, 34]: \[C(t):=(\dot{F}*g)(t)=\int_{-\infty}^{\infty}\dot{F}(s)g(t-s)ds, \tag{10}\] where \(g\) is specified by Eq. (39) in the Appendix.

**Symmetry of the model.** At first sight, the recovery model seems to be symmetric in the sense that the recovery dynamics of the release sites have the same structure as those of the vesicles. It is not directly evident why the roles of release sites and vesicles should not be interchangeable. However, this seemingly symmetric situation is broken by the fact that the numbers of vesicles and release sites are different and the values of the corresponding recovery rates differ, too: per release site there are typically several vesicles, each of which needs more time for recovery than the release site itself; see the parameter estimation in Sec. A.2.

Figure 1: **The recovery model.** A freely available vesicle \(V\) may attach to (and detach from) a freely available release site \(P\). From the resulting joint state \(R\), fusion can occur, transferring both the vesicle and the release site into their recovery states \(W_{V}\) and \(W_{P}\), respectively, and increasing the number \(F\) of fusion events. By recovery, the vesicles and release sites turn back into the states \(V\) and \(P\), respectively.

**Steady state.** When assuming all rates to be time-independent (especially \(k_{U}(t)=\mathrm{const}\) and \(k_{F}(t)=\mathrm{const}\)), the function \(h\) defined in (7) also becomes explicitly independent of time, and the process \(\mathbf{X}(t)\) given by the reaction rate equation \(\dot{\mathbf{X}}(t)=h(\mathbf{X}(t))\) has a unique steady state, which depends continuously on the values of the reaction rates and the numbers \(n_{\mathrm{sites}}\) and \(n_{\mathrm{ves}}\), as demonstrated in Sec. A.4. That is, there is exactly one state \(\hat{\mathbf{x}}\in\mathbb{R}_{+}^{5}\) with \(h(\hat{\mathbf{x}})=0\), and the process will approach this state asymptotically, no matter where it starts from (as long as the initial state is non-negative).

### 2.2 System response to sustained stimulation

Fig. 2 demonstrates the temporal evolution of the model system and the current \(C\) in response to sustained stimulation for \(1\,\mathrm{s}\) at frequency \(f_{\mathrm{stim}}=100\,\mathrm{Hz}\), represented by the fusion rate \(k_{F}(t)\) and the unpriming rate \(k_{U}(t)\) (see Fig. 2, second plot). Both functions depend on the intracellular \(\mathrm{Ca}^{2+}\) concentration dynamics (shown in the top plot of Fig. 2), which were determined using the _CalC_ modeling tool [35] at a physiological external \(\mathrm{Ca}^{2+}\) concentration of \(1.5\,\mathrm{mM}\) and a distance of \(118\,\mathrm{nm}\) from the \(\mathrm{Ca}^{2+}\) channel. Based on the \(\mathrm{Ca}^{2+}\) flow, we estimated the asymptotically periodic fusion rate \(k_{F}\) as a weighted average of the fusion rates in the Kobbersmed model [33] (see Sec. A.2 for details). Also, following [33], we adapted the sigmoidal shape of \(k_{U}(t)\), with the specific parameter values of this function and the priming rate constant \(k_{R}\) chosen such that the facilitation effect was reproduced proportionally. For more details on the estimation of these rates and the remaining rate constants, see Sec. A.2.
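For concreteness, a minimal Python sketch of integrating the reaction rate equation (6)-(7) together with the fusion counter (9) is given below. All rate constants and the simple pulsed forms of \(k_{F}(t)\) and \(k_{U}(t)\) are placeholder assumptions: the actual Ca\({}^{2+}\)-driven rates are estimated in Sec. A.2, which is not part of this excerpt.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of integrating the reaction rate equation (6)-(7) together
# with the fusion counter (9). All rate constants and the pulsed forms of
# k_F(t) and k_U(t) below are placeholder assumptions, not the paper's
# Ca(2+)-driven estimates from Sec. A.2.
k_R, g_V, g_P = 10.0, 2.0, 20.0
f_stim, t_start = 100.0, 0.05         # 100 Hz train starting at t = 0.05 s

def k_F(t):                           # crude periodic fusion-rate pulses
    phase = ((t - t_start) * f_stim) % 1.0
    return 200.0 if (t >= t_start and phase < 0.1) else 0.0

def k_U(t):                           # unpriming suppressed during stimulation
    return 5.0 if t < t_start else 0.1

def rhs(t, y):                        # Eq. (7) plus dF/dt = k_F(t) R, Eq. (9)
    V, WV, WP, R, P, F = y
    prime = k_R * V * P               # second-order priming reaction (1)
    return [-prime + g_V * WV + k_U(t) * R,
            k_F(t) * R - g_V * WV,
            k_F(t) * R - g_P * WP,
            prime - (k_F(t) + k_U(t)) * R,
            -prime + g_P * WP + k_U(t) * R,
            k_F(t) * R]

# Arbitrary initial state with one docked vesicle (the paper instead starts
# from the no-stimulation steady state); n_ves = 10, n_sites = 1 as in Sec. 2.2.
y0 = [9.0, 0.0, 0.0, 1.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 1.0), y0, max_step=1e-4)
print("fusion events per site after 1 s:", sol.y[5, -1])
```

The response current \(C(t)\) of Eq. (10) would then follow by convolving the increments of \(F\) with the impulse response \(g\), whose form (Eq. (39)) is not reproduced in this excerpt.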
The numbers of release sites and vesicles were set to \(n_{\mathrm{sites}}=1\) and \(n_{\mathrm{ves}}=10\), respectively, which means that we consider the average dynamics per release site, assuming that the number of vesicles per release site is 10. The system was initialized in steady state at \(t=0\), i.e., \(\mathbf{X}(0)=\hat{\mathbf{x}}\) given by \(h(\hat{\mathbf{x}},0)=0\), with reaction rates referring to no stimulation. The initial number of fusion events was set to \(F(0)=0\). For the starting time of stimulation we chose \(t_{\mathrm{start}}=0.05\,\mathrm{s}\). The quantity analogous to experimentally measured currents is \(C(t)\), shown in the bottom plot of Fig. 2. The signal exhibits distinct phases: an initial large response (the first two stimuli, including a facilitation effect), a fast depression to a plateau lasting for about \(0.1\,\mathrm{s}\), and then a second, slower decay to a final periodic orbit with significantly smaller amplitude. This behavior can be explained qualitatively: in the initial state (given by the steady state related to the initial parameter values), release sites are distributed between \(R\) and \(P\) due to the balance between the priming and unpriming reactions. The surplus of vesicles is accumulated in \(V\). After the first stimulus, the unpriming rate drops to a very low value and release sites in \(P\) can quickly bind to vesicles in \(V\), which explains the initial facilitation and the strong response that is weakened quickly as release sites accumulate in \(W_{P}\). Afterwards, the large vesicle supply in \(V\) immediately provides recovered release sites with a vesicle and is thus gradually vacated while the signal plateaus. Due to the low vesicle recovery rate, vesicles start to accumulate in \(W_{V}\). Once the number of vesicles in \(V\) approaches low values, increasingly many release sites start to collect in \(P\) again, and the system converges to a periodic orbit with a small signal amplitude.

**Periodically forced system.** The final periodic orbit stems from sustained stimulation, in which the rate \(k_{F}\) depends on time periodically, at least in an asymptotic sense: there is some time \(t_{0}\) after which the dependence of \(k_{F}\) can be considered periodic with period \(T\) given by the stimulation frequency, \[k_{F}(t)=k_{F}(t+T),\quad\forall t\geq t_{0},\] while \(k_{U}\) is constant for \(t\geq t_{0}\), and all other rates are time-independent. Consequently, the right-hand side function \(h\) in (6) is also \(T\)-periodic via its dependence on \(k_{F}\). In this case, dynamical systems theory [36, 37, 38] tells us that, as long as the amplitude of the periodic forcing is not too large, there is at least one asymptotically \(T\)-periodic solution \(\mathbf{X}_{\rm per}=\mathbf{X}_{\rm per}(t)\) of (6) that oscillates around a certain fixed point \(\mathbf{X}_{0}\).

Figure 2: **System response to sustained stimulation.** Temporal evolution of the species' average numbers and the current \(C\) (blue lines in the third to ninth plots from the top; see the respective labels on the \(y\)-axes) in response to changes in the time-dependent rates (\(k_{F}\) and \(k_{U}\), shown as yellow and green lines in the second plot) representing a stimulus train of length \(1\,\mathrm{s}\) at frequency \(f_{\rm stim}=100\,\mathrm{Hz}\); see Sec. 2.2. Both rates depend on the intracellular \(\mathrm{Ca}^{2+}\) concentration dynamics (grey line in the first plot), which were determined using the _CalC_ modeling tool [35] at a physiological external \(\mathrm{Ca}^{2+}\) concentration of \(1.5\,\mathrm{mM}\) and a distance of \(118\,\mathrm{nm}\) from the \(\mathrm{Ca}^{2+}\) channel.
By continuation theory and averaging, we know that this fixed point \(\mathbf{X}_{0}\) is the unique steady state of the averaged right-hand side function [36] \[\bar{h}(\mathbf{X}):=\frac{1}{T}\int_{t_{0}}^{t_{0}+T}h(\mathbf{X},t)dt=\begin{pmatrix}-k_{R}VP+g_{V}W_{V}+\bar{k}_{U}R\\ \bar{k}_{F}R-g_{V}W_{V}\\ \bar{k}_{F}R-g_{P}W_{P}\\ k_{R}VP-\bar{k}_{F}R-\bar{k}_{U}R\\ -k_{R}VP+g_{P}W_{P}+\bar{k}_{U}R\end{pmatrix}, \tag{11}\] with \[\bar{k}_{F}=\frac{1}{T}\int_{t_{0}}^{t_{0}+T}k_{F}(t)dt,\quad\bar{k}_{U}=k_{U}(t_{0}).\] That is, \(\mathbf{X}_{0}\) is the unique solution of \(\bar{h}(\mathbf{X}_{0})=0\). Moreover, if \(k_{F}(t)=\bar{k}_{F}+\lambda\tilde{k}_{F}(t)\) for a \(T\)-periodic \(\tilde{k}_{F}\) with running average \(0\), then the periodic solution \(\mathbf{X}_{\text{per}}\) converges to \(\mathbf{X}_{0}\) for \(\lambda\to 0\) and stays in a \(\lambda\)-wide neighborhood of \(\mathbf{X}_{0}\) for not too large \(\lambda\). That is, under sustained periodic stimulation the system will asymptotically show periodic behavior, with oscillations around the steady state given by the time-averaged rates. According to Eqs. (9) and (10), this also holds for the current \(C\), which oscillates around the fixed point \(C_{0}\) given by \[C_{0}=(\bar{k}_{F}R_{0}*g)(t)=\bar{k}_{F}R_{0}\int_{-\infty}^{\infty}g(s)ds. \tag{12}\]

With the investigated parameter values, the asymptotic behavior can already be observed after less than \(1\,\mathrm{s}\) of stimulation, as depicted in Fig. 3. The final periodic orbit as well as the transient system response are dictated by the balance between the two recovery processes. In the following, we will consider the signal sensitivity in order to determine which process is more influential (i.e., 'rate-limiting' or 'rate-determining').

Figure 3: **Asymptotic behavior.** Temporal evolution of the current \(C\) (blue) and asymptotic oscillation around the fixed point \(C_{0}\) (orange). The grey inset shows a zoom-in of the last \(0.2\,\mathrm{s}\). The temporal averaging was done for \(T=\frac{1}{100\,\mathrm{Hz}}=0.01\,\mathrm{s}\) and \(t_{0}=0.99\,\mathrm{s}\).

### 2.3 Sensitivity analysis

A characteristic property of the limiting process is that the signal \(C(t)\) should be particularly sensitive to changes in its rate constant: for example, if vesicle recovery is more impactful than release site recovery, small changes in \(g_{V}\) should result in a greater change in \(C\) than small changes in \(g_{P}\). We therefore introduce the notation \(C(t,p(t))\) to emphasize that the output \(C\) depends on the (partially) time-varying parameter values \(p(t):=(k_{R},k_{U}(t),k_{F}(t),g_{V},g_{P})\), and consider the _sensitivity_ \(Z_{C}\) as a measure of the influence of the two recovery processes on the output signal \(C\): \[Z_{C}^{g_{V}}(t):=\left.\frac{\partial C}{\partial g_{V}}(t,p(t))\right|_{p(t)=p^{*}(t)},\qquad Z_{C}^{g_{P}}(t):=\left.\frac{\partial C}{\partial g_{P}}(t,p(t))\right|_{p(t)=p^{*}(t)}, \tag{13}\] where \(p^{*}\) refers to the parameter values given by the parameter estimation; see Sec. A.2.
Defining the sensitivities \(Z_{F}^{g_{V}}(t)\), \(Z_{F}^{g_{P}}(t)\) in analogy to (13), we observe that \[\frac{\partial}{\partial g_{V}}C(t)=\frac{\partial}{\partial g_{V}}\int_{-\infty}^{\infty}\dot{F}(s)g(t-s)ds \tag{14}\] \[\overset{(*)}{=}\int_{-\infty}^{\infty}\frac{\partial}{\partial g_{V}}\dot{F}(s)g(t-s)ds \tag{15}\] \[=\int_{-\infty}^{\infty}Z_{F}^{g_{V}}(s)g(t-s)ds=(Z_{F}^{g_{V}}*g)(t), \tag{16}\] where we suppressed the dependence on \(p(t)\) and used the Leibniz rule in \((*)\). That is, we have \[Z_{C}^{g_{V}}(t)=\left(Z_{F}^{g_{V}}*g\right)(t), \tag{17}\] and analogously for \(g_{P}\). Here, we used that the impulse response function \(g\) does not depend on the parameter values \(p\). The quantity \(Z_{C}^{g_{V}}(t)\) captures the change in \(C(t)\) induced by increasing the rate constant \(g_{V}\) by an infinitesimal amount for all times, and likewise for \(g_{P}\). A closed system of ordinary differential equations can be derived for the sensitivities of all model components, which we can solve simultaneously with the reaction rate equation (6) in order to compute the sensitivities in \(C\) at any time \(t\geq 0\) [39]. Further details can be found in Sec. A.1.

We finally normalize the sensitivity coefficients and define [40] \[z_{C}^{g_{V}}(t):=Z_{C}^{g_{V}}(t)\cdot\frac{g_{V}}{C(t)},\quad z_{C}^{g_{P}}(t):=Z_{C}^{g_{P}}(t)\cdot\frac{g_{P}}{C(t)}. \tag{18}\] Hereby, we obtain sensitivity values _relative_ to the rates \(g_{V},g_{P}\) and to the signal \(C(t)\). This is especially important because \(g_{V}\) is much smaller than \(g_{P}\) in our parameter estimation, so that the absolute sensitivities \(Z_{C}^{g_{V}}(t),Z_{C}^{g_{P}}(t)\) would deliver a distorted impression of the parameters' impact.

The temporal evolution of these normalized sensitivities for the system response pictured in Fig. 2 is shown in Fig. 4. At the very beginning, the sensitivities are almost zero for both recovery processes. This is because the system is initially at steady state without stimulation, where both \(W_{P}\) and \(W_{V}\) have very low numbers and recovery is of low significance for the resulting signal. The further temporal evolution in Fig. 4 matches well with the above qualitative discussion of Fig. 2: for \(t\leq 0.2\,\mathrm{s}\) (during the plateau), the sensitivity to changes in \(g_{V}\) (blue) is low and only slowly increasing, due to the fact that most vesicles are stored in \(V\). Once more vesicles start to accumulate in \(W_{V}\), \(z_{C}^{g_{V}}\) grows and finally approaches a constant positive value once the system reaches the asymptotic periodic orbit. The sensitivity in \(g_{P}\) (orange) initially rises quickly, because the release site abundance increases in \(W_{P}\) while it simultaneously decreases in \(P\). As there are sufficient vesicles available in \(V\) during the plateau for \(t\leq 0.2\,\mathrm{s}\), faster release site recovery increases the signal and the sensitivity is positive. However, this also means that the supply in \(V\) is emptied faster and the signal decreases at an earlier point in time. This is why the sensitivity \(z_{C}^{g_{P}}\) becomes negative after the plateau, at around \(t=0.25\,\mathrm{s}\). With the system settling into its final periodic orbit, \(z_{C}^{g_{P}}\) approaches a constant low value. This is due to the fact that an increase in \(g_{P}\) leads to a time-shift of the transient phase (from plateau to asymptotic orbit) to an earlier time period, but not to a significant change in the final orbit itself.
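As a cruder alternative to the closed sensitivity ODE system of Sec. A.1, the sensitivities can also be approximated by central finite differences, as in the sketch below. It reuses the placeholder rates from the previous sketch (not the estimated parameters \(p^{*}\)) and differentiates the fusion count \(F(1\,\mathrm{s})\) rather than \(C(t)\), since the kernel \(g\) is not given in this excerpt; the normalization mimics Eq. (18).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Central finite-difference approximation of the sensitivities (13), applied
# to the terminal fusion count instead of C(t). Rates are the placeholder
# assumptions from the previous sketch, not the paper's estimates.
k_R, f_stim, t_start = 10.0, 100.0, 0.05

def fusion_count(g_V, g_P, t_end=1.0):
    def k_F(t):
        phase = ((t - t_start) * f_stim) % 1.0
        return 200.0 if (t >= t_start and phase < 0.1) else 0.0
    def k_U(t):
        return 5.0 if t < t_start else 0.1
    def rhs(t, y):
        V, WV, WP, R, P, F = y
        prime = k_R * V * P
        return [-prime + g_V * WV + k_U(t) * R, k_F(t) * R - g_V * WV,
                k_F(t) * R - g_P * WP, prime - (k_F(t) + k_U(t)) * R,
                -prime + g_P * WP + k_U(t) * R, k_F(t) * R]
    y0 = [9.0, 0.0, 0.0, 1.0, 0.0, 0.0]
    return solve_ivp(rhs, (0.0, t_end), y0, max_step=1e-4).y[5, -1]

g_V, g_P, h = 2.0, 20.0, 1e-2          # h: relative perturbation size
Z_gV = (fusion_count(g_V * (1 + h), g_P)
        - fusion_count(g_V * (1 - h), g_P)) / (2 * h * g_V)
Z_gP = (fusion_count(g_V, g_P * (1 + h))
        - fusion_count(g_V, g_P * (1 - h))) / (2 * h * g_P)
F0 = fusion_count(g_V, g_P)
# Normalized, dimensionless sensitivities in the spirit of Eq. (18):
print("z_F^{g_V} =", Z_gV * g_V / F0)
print("z_F^{g_P} =", Z_gP * g_P / F0)
```

Such a finite-difference check is useful for validating an implementation of the sensitivity ODEs, at the cost of two extra ODE solves per parameter.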
In order to determine which of the recovery processes is more influential, one needs to compare the two sensitivities' absolute values (colored bar at the bottom of Fig. 4). Initially, for \(t_{\mathrm{start}}\leq t\leq 0.2\,\mathrm{s}\), the sensitivity \(z_{C}^{g_{P}}\) with respect to \(g_{P}\) clearly exceeds the value of \(z_{C}^{g_{V}}\), which means that release site recovery is the limiting process. Near the shift from positive to negative values in \(z_{C}^{g_{P}}\), the sensitivity with respect to \(g_{V}\) temporarily dominates, but then it is again surpassed by the negative impact that a permanent increase in the recovery rate \(g_{P}\) has on the signal in the time frame \(0.3\,\mathrm{s}\leq t\leq 0.6\,\mathrm{s}\). After time \(t\approx 0.6\,\mathrm{s}\), with the system approaching the final periodic orbit, the sensitivity with respect to \(g_{V}\) clearly dominates, while the impact of the recovery rate \(g_{P}\) may be neglected. That is, in the long run, vesicle recovery is the limiting process. This behavior leads us to an important general insight independent of the specific model and parameters: the answer to the question of the rate-limiting process is not necessarily binary but can depend on the point in time during stimulation. **Dependence on parameter values.** Of course, the sensitivity evolution in Fig. 4 is only the result for the specific set of parameter values estimated in Sec. A.2. However, parameter studies in which we varied either \(k_{R}\), \(g_{V}\), or \(g_{P}\) from one twentieth to twenty times the original value demonstrate that the identity of the rate-limiting process is indeed a time-varying quantity over a large section of parameter space (details and images included in Sec. A.3). In all cases examined, upon stimulation, the dominant limiting process is initially release site recovery. The determining process then switches to vesicle recovery for some time unless \(g_{V}\) is very large or \(g_{P}\) is very small, that is, unless vesicle recovery is very fast by comparison. Figure 4: **Normalized sensitivities under sustained stimulation.** Temporal evolution of the normalized sensitivity (given in Eq. (18)) of the current \(C\) to vesicle and release site recovery rate. The system parameters were chosen as in Fig. 2. The colored bar (bottom) indicates which of the two sensitivities dominates in absolute values. Afterwards, site recovery may become limiting again for a period of time, but the system eventually switches back to higher sensitivity in \(g_{V}\) within the \(1\,\mathrm{s}\) of stimulation in almost all cases (again, unless \(g_{P}\) is very small). In summary, for a significant part of parameter space, the identity of the limiting process initially starts out as release site recovery but changes to vesicle recovery by the end of \(1\,\mathrm{s}\) of stimulation time, with the possibility of an additional switch to site recovery and back in between. This behavior can be ascribed to the initial surplus of vesicles in \(V\), which leads to very low sensitivity to \(g_{V}\). If \(V\) is emptied and vesicles are accumulating in the recovery state, changes in \(g_{V}\) have much higher impact and \(z_{C}^{g_{V}}\) dominates. Again, this will happen unless vesicle recovery is very fast by comparison. By choosing \(n_{\mathrm{sites}}=1,n_{\mathrm{ves}}=10\), the considered ODE is used to describe the _average_ dynamics at a _single_ release site to which 10 vesicles have access. 
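The dominance bar of Fig. 4 reduces to a pointwise comparison of absolute values; a tiny sketch (assuming \(z_{C}^{g_{V}}\) and \(z_{C}^{g_{P}}\) are sampled on a common grid, as above) is:

```python
import numpy as np

# Dominance bar of Fig. 4: at each time step the limiting process is the one
# with the larger normalized sensitivity in absolute value.
def dominant_process(z_C_gV, z_C_gP):
    return np.where(np.abs(z_C_gV) >= np.abs(z_C_gP), "vesicle", "site")
```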
Augmenting the values \(n_{\mathrm{sites}}\) and \(n_{\mathrm{ves}}\) would mean considering an active zone of several release sites which all access the same vesicle pool of size \(n_{\mathrm{ves}}\). Typically, there are only very few release sites per active zone, which justifies sticking to small values of \(n_{\mathrm{sites}}\), as done in this work. In this case, stochastic effects in the dynamics may play an important role, which motivates extending the analysis to stochastic dynamics. ## 3 Stochastic dynamics of individual release sites It is well-known that an ODE-description in terms of the reaction rate equation (as given by Eq. (6)) delivers a good approximation of the average reaction dynamics in the case of large particle numbers. For a single release site, however, the number of partaking vesicles is rather small and deviations from the ODE-behavior are to be expected. Furthermore, experimentally measured postsynaptic currents exhibit noise and irregularities even though multiple release sites are involved and summed over in the neurotransmission process. This motivates investigating stochastic effects and variances of the recovery dynamics introduced in Sec. 2.1 by formulating and analyzing the corresponding stochastic reaction jump process. ### The reaction jump process The Markov process describing the stochastic recovery dynamics is denoted by \[\boldsymbol{\mathcal{X}}(t)=(\mathcal{X}_{i}(t))_{i=1,\ldots,5}=(\mathcal{V}(t),\mathcal{W}_{V}(t),\mathcal{W}_{P}(t),\mathcal{R}(t),\mathcal{P}(t))^{\top} \tag{19}\] with \(\boldsymbol{\mathcal{X}}(t)\in\mathbb{N}_{0}^{5}\) for all \(t\geq 0\). Here, \(\mathcal{X}_{i}(t)\in\mathbb{N}_{0}\) is the (random) number of vesicles or release sites in the respective states at time \(t\), see again Fig. 1 for the underlying model. These numbers change by discrete jumps induced by individual reaction events (given by the reactions (1)-(4)), which occur after exponentially distributed waiting times. The associated probability distribution is characterized by the corresponding chemical master equation, see [41] and references therein. In analogy to Eq. (10), the _stochastic output current_ is given by \[\mathcal{C}(t):=(f*g)(t), \tag{20}\] where \(g\) is again the impulse response function defined in Eq. (39) and \(f\) is the (distributional) time derivative of the trajectories of the stochastic process \((\mathcal{F}(t))_{t\geq 0}\) counting the number of fusion events given by reaction (3). The latter is a monotonically increasing Markov jump process on the natural numbers, starting with \(\mathcal{F}(0)=0\) and augmenting by one whenever a fusion event happens. Denoting the random jump times of \((\mathcal{F}(t))_{t\geq 0}\) by \(T_{1},T_{2},...\), the derivative \(f\) is a sum of Dirac delta functions shifted by the times \(T_{i}\). An individual trajectory of the stochastic reaction jump process \(\boldsymbol{\mathcal{X}}(t)\) is plotted in Fig. 5, including the counting process \(\mathcal{F}(t)\) of fusion events and the induced stochastic output current \(\mathcal{C}(t)\). This trajectory refers to the random dynamics at _one single_ release site. Its characteristics drastically deviate from the transient oscillatory and asymptotically periodic dynamics of the ODE-solution shown in Fig. 2, although all parameter values coincide. The components \(\mathcal{W}_{P}\), \(\mathcal{R}\) and \(\mathcal{P}\) switch between the discrete states \(0\) and \(1\), and the output current \(\mathcal{C}\) consists of a few peaks occurring at random points in time. 
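The jump process (19) can be simulated with a Gillespie-type algorithm. Since \(k_{F}(t)\) and \(k_{U}(t)\) are time-dependent, the sketch below handles them by thinning: events are proposed at a constant upper bound on the total propensity and accepted with the ratio of the true to the bounding rate. The rate functions, the bounds, and the parameter values here are illustrative stand-ins, not the fitted quantities of Sec. A.2.

```python
import numpy as np

rng = np.random.default_rng(0)

k_R, g_V, g_P = 12.9, 0.4, 50.0
K_F_MAX, K_U_MAX = 5e4, 334.0          # assumed global bounds on k_F(t), k_U(t)

def k_F(t): return 400.0               # stand-in for the fitted fusion rate
def k_U(t): return 1e-8                # stand-in for the fitted unpriming rate

def propensities(x, t):
    V, W_V, W_P, R, P = x
    return np.array([k_R * V * P,      # (1) priming        V + P -> R
                     k_U(t) * R,       # (2) unpriming      R -> V + P
                     k_F(t) * R,       # (3) fusion         R -> W_V + W_P (one F event)
                     g_V * W_V,        # (4a) vesicle recovery  W_V -> V
                     g_P * W_P])       # (4b) site recovery     W_P -> P

STOICH = np.array([[-1,  0,  0,  1, -1],   # state order: (V, W_V, W_P, R, P)
                   [ 1,  0,  0, -1,  1],
                   [ 0,  1,  1, -1,  0],
                   [ 1, -1,  0,  0,  0],
                   [ 0,  0, -1,  0,  1]])

def simulate(x0, t_end):
    x, t, fusion_times = np.array(x0), 0.0, []
    while t < t_end:
        V, W_V, W_P, R, P = x
        a_max = k_R * V * P + (K_U_MAX + K_F_MAX) * R + g_V * W_V + g_P * W_P
        if a_max == 0.0:
            break
        t += rng.exponential(1.0 / a_max)          # propose next event time
        a = propensities(x, t)
        if rng.uniform() < a.sum() / a_max:        # thinning acceptance step
            j = rng.choice(5, p=a / a.sum())
            x = x + STOICH[j]
            if j == 2:
                fusion_times.append(t)             # jump time of F(t)
    return x, np.array(fusion_times)

x_end, T_i = simulate([10, 0, 0, 0, 1], 1.0)       # n_ves = 10, n_sites = 1
```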
In particular, there is no obvious periodicity in the stochastic dynamics for such a single release site. However, the periodicity reappears by either considering the total junction current triggered by several release sites, which will be done in the following Sec. 3.2, or by calculating the dynamics' first-order moments, see Sec. 3.3. ### Total junction current Experimental measurements are typically given by the joint output signal of several release sites (in Ref. [34] we took \(N=180\) release sites). The analogous quantity in our setting is given by the sum of \(N\in\mathbb{N}\) independent realizations \(\mathcal{C}_{i}\) of \(\mathcal{C}\): \[\mathcal{C}_{\text{total}}(t)=\sum_{i=1}^{N}\mathcal{C}_{i}(t). \tag{21}\] Figure 5: **Random dynamics at a single release site.** One realization of the stochastic reaction jump process \(\boldsymbol{\mathcal{X}}(t)\), the counting process \(\mathcal{F}(t)\) and the stochastic output current \(\mathcal{C}(t)\) for the same parameter values as used in Fig. 2. The initial state is drawn randomly from the initial distribution, which is the steady state distribution under no stimulation. Fig. 6 shows random realizations of the scaled total output current \(\mathcal{C}_{\mathrm{total}}/N\) for different numbers \(N\) of release sites. For all \(N\) one can observe that releases become scarcer in the course of time, which is due to the fact that the reserve of vesicles is depleted. A periodicity in the dynamics is only perceptible for large \(N\) (\(N=50\), \(N=180\)) during the first \(0.4\,\mathrm{s}\) of stimulation. The characteristics pass from apparently non-periodic, randomly occurring peaks for small \(N\) to periodic dynamics that appear to be close to the ODE-solution for large \(N\). This can be explained as follows: By the law of large numbers, the scaled total output \(\mathcal{C}_{\mathrm{total}}(t)/N\) converges to the mean \(\mu_{\mathcal{C}}(t)\) of \(\mathcal{C}(t)\), which in turn is close to the ODE-solution, as we will show in the following Sec. 3.3. For small \(N\), the periodicity is hidden in the time-dependent fusion rate \(k_{F}(t)\) and only becomes visible when looking at statistical averages. In general, the stochastic dynamics show large variations, which gradually decrease when increasing the number \(N\) of release sites. The following investigation of the system's first- and second-order moments clarifies this issue. ### First- and second-order moments The first-order moment \(\mu_{\mathcal{C}}(t):=\mathbb{E}(\mathcal{C}(t))=\frac{1}{N}\mathbb{E}(\mathcal{C}_{\mathrm{total}}(t))\) of the stochastic signal \(\mathcal{C}(t)\) and of the scaled total output current \(\mathcal{C}_{\mathrm{total}}(t)/N\) is plotted in Fig. 7, together with the time-dependent standard deviations \(\sigma_{\mathcal{C}}(t)\) and \(\sigma_{\mathcal{C}_{\mathrm{total}}/N}(t)\) for \(N=180\), all estimated from \(10^{4}\) MC simulations. We observe a periodicity in the first- and second-order moments as well as a close agreement of the mean \(\mu_{\mathcal{C}}(t)\) with the ODE-solution \(C(t)\) from Fig. 2. This similarity is surprising because nonlinear reaction systems typically show a significant deviation of the stochastic mean from the ODE-solution, at least for small particle numbers [41], which would imply the inequality \(\mu_{\mathcal{C}}(t)\neq C(t)\). 
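A sketch of how the stochastic current (20), the total current (21), and the Monte Carlo moment estimates just described can be assembled is given below; it reuses the `simulate()` and `g()` stand-ins from the previous sketches, so it is an illustration of the workflow rather than the actual analysis code.

```python
import numpy as np

# Sketch of Eqs. (20)-(21) and the MC moment estimates, reusing the
# simulate() and g() stand-ins from the sketches above.
def current(fusion_times, t_grid):
    # C(t) = (f * g)(t) with f a sum of Dirac deltas at the fusion times
    C = np.zeros_like(t_grid)
    for T_i in fusion_times:
        C += g(t_grid - T_i)
    return C

def mc_moments(M, N, t_grid, t_end=1.0):
    # M Monte Carlo runs of the scaled total current over N independent sites
    runs = np.array([sum(current(simulate([10, 0, 0, 0, 1], t_end)[1], t_grid)
                         for _ in range(N)) / N
                     for _ in range(M)])
    return runs.mean(axis=0), runs.std(axis=0)   # estimates of mu and sigma

t_grid = np.linspace(0.0, 1.0, 2001)
mu, sigma = mc_moments(100, 180, t_grid)         # e.g. M = 100 runs, N = 180 sites
```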
However, further numerical experiments on the reaction system under investigation show that the high-level similarity \(\mu_{\mathcal{C}}(t)\approx C(t)\) holds independently of the population size and the chosen parameter values. Indeed, the source of the similarity mainly lies in the independence of the recovery processes of vesicles and release sites, which implies a small covariance of their dynamics, as we will explain in the following. Figure 6: **Scaled total output for different numbers of release sites.** One realization of the scaled total junction current \(\mathcal{C}_{\mathrm{total}}(t)/N\) for \(N=1,10,50,180\). **Independent recovery processes.** After a fusion event (which carries both the vesicle and the release site into their recovery states \(W_{V}\) and \(W_{P}\), respectively), the recovery dynamics given by reaction (4) happen independently of each other. That is, the time it takes a recovering vesicle to increase the number of available vesicles \(V\) again does not affect the time it takes a recovering release site to add to the number of available sites \(P\), and vice versa. This stands in contrast to the unpriming reaction (2), which simultaneously augments both \(V\) and \(P\). However, unpriming happens at a very small rate (after a short initial phase), so that the increase in \(V\) or \(P\) mainly results from independent recovery reactions. Thereby, we obtain a certain degree of independence in the distributions of \(V\) and \(P\), meaning that we have (with respect to the law of the jump process) \[\mathbb{P}\left[\mathcal{V}(t)=n,\mathcal{P}(t)=m\right]\approx\mathbb{P}\left[\mathcal{V}(t)=n\right]\cdot\mathbb{P}\left[\mathcal{P}(t)=m\right], \tag{22}\] for \(n,m\in\mathbb{N}_{0}\) and \(t>0\), as well as \[\mathbb{E}\left[\mathcal{V}(t)\cdot\mathcal{P}(t)\right]\approx\mathbb{E}\left[\mathcal{V}(t)\right]\cdot\mathbb{E}\left[\mathcal{P}(t)\right] \tag{23}\] for most times \(t\geq 0\), and consequently \(\mathbb{E}(\boldsymbol{\mathcal{X}}(t))\approx\boldsymbol{X}(t)\) and \(\mathbb{E}(\mathcal{C}(t))\approx C(t)\). This similarity is equivalent to a small covariance \[\mathrm{cov}(\mathcal{V}(t),\mathcal{P}(t)):=\mathbb{E}\left[\mathcal{V}(t)\cdot\mathcal{P}(t)\right]-\mathbb{E}\left[\mathcal{V}(t)\right]\cdot\mathbb{E}\left[\mathcal{P}(t)\right] \tag{24}\] or a small correlation \[\mathrm{corr}(t):=\frac{\mathrm{cov}(\mathcal{V}(t),\mathcal{P}(t))}{\sigma_{\mathcal{V}}(t)\sigma_{\mathcal{P}}(t)}, \tag{25}\] where \(\sigma_{\mathcal{V}}(t)\) and \(\sigma_{\mathcal{P}}(t)\) denote the standard deviations of the processes \(\mathcal{V}(t)\) and \(\mathcal{P}(t)\), respectively. The correlation function is plotted in Fig. 8. One observes a rapid decrease of the correlation towards zero, which corresponds to an increase in the degree of independence in our system. In Sec. A.5 we study the correlation for two reduced reaction systems in order to clarify this effect of the independent recovery dynamics. Figure 7: **First- and second-order moments of the output current.** ODE-solution \(C(t)\) (grey line) and first-order moment \(\mu_{\mathcal{C}}(t)=\mu_{\mathcal{C}_{\mathrm{total}}}(t)/N\) (black dotted line), together with the standard deviations \(\sigma_{\mathcal{C}}(t)\) and \(\sigma_{\mathcal{C}_{\mathrm{total}}/N}(t)\) (green lines) of the stochastic output current \(\mathcal{C}(t)\) and the rescaled total current \(\mathcal{C}_{\mathrm{total}}(t)/N\) for \(N=180\), respectively. 
The zoom-in shows a high level of agreement between \(C(t)\) (grey line) and \(\mu_{\mathcal{C}}(t)\) (black dotted line). The relative standard deviation decreases substantially when considering the total current of several release sites. Estimated from \(10^{4}\) MC simulations. Parameter values were chosen as in Fig. 2. ## 4 Concluding Remarks In this work, we have introduced a non-linear reaction network for the presynaptic dynamics of signal processing at chemical synapses, including explicit reactions for recovery processes. Modeled by a second-order reaction, a freely available vesicle attaches to a freely available release site. Afterwards, the vesicle either detaches again or fuses with the membrane, thereby triggering an output current. Both the unpriming rate and the fusion rate depend on time, accounting for the level of stimulation. After a fusion event, both the vesicle and the release site enter recovery states, where they stay until they return independently of each other and become available again. The goal of this work was (1) to understand the effect of the recovery processes on the total signal output under sustained stimulation of the system, and (2) to investigate how single release site dynamics may be related to the output current measured for many release sites, as is typical in experiments. We have analyzed the signaling process by numerically solving the associated reaction rate equation and by simulating the associated stochastic reaction jump process. The main findings may be summarized as follows. * During the initial phase of stimulation, there is a relatively stable response caused by the combined effect of fast release site recycling and a high supply of vesicles available for binding to the sites. With sustained activity, the vesicle supply is depleted, which leads to a deep depression of the output signal. This is caused by the small recovery rate of the vesicles. Finally, the signal reaches a periodic orbit with a small amplitude in the vicinity of a uniquely determined steady state of the averaged system. * Sensitivity analysis for the recovery rates reveals a considerable time-dependence of the normalized sensitivity coefficients. While the output current is dominantly influenced by the recovery rate \(g_{P}\) of release sites at the start of stimulation, at later times, the vesicle recovery rate \(g_{V}\) becomes more decisive. This result is in good agreement with previous discussions on the subject [29] and extends them by the idea that the identity of the rate-determining step of neurotransmission under sustained activity depends on the time during stimulation. Since the evolution of the sensitivities depends on the parameter values, we conducted a parameter study and observed that the characteristic structure is conserved over a large range of values. Figure 8: **Temporal evolution of the correlation between \(\mathcal{V}(t)\) and \(\mathcal{P}(t)\).** The correlation function \(\mathrm{corr}(t)\) is defined in Eq. (25) and was estimated from \(10^{4}\) MC simulations. Parameter values were chosen as in Fig. 2. * By simulating single release sites by means of the stochastic reaction jump process we uncovered that the characteristics of an individual random trajectory, which contains separate spikes of the output signal occurring at random points in time, drastically deviate from the oscillatory and asymptotically periodic ODE-trajectory. 
At the same time, the first-order moments of the random dynamics showed a surprisingly close agreement with the ODE-solution. We have traced this closeness back to the small correlation between vesicle supply and release site supply, which results from the near independence of the recovery processes. Moreover, we found that the periodic pattern of the ODE-solution is recovered for the stochastic dynamics when one considers the total junction current obtained by averaging over larger numbers of release sites. Overall, the model introduced in this paper allows one to study the effects of recovery processes within presynaptic neurotransmission dynamics on different levels. The results presented herein guide the way towards qualitatively and quantitatively understanding the role of different recovery rates and stimulation frequencies on the overall output signal. In this respect, central questions of interest for future investigations include the following: Is it possible to differentiate between the effects caused by release site recovery in contrast to vesicle recovery? More concretely, may the same postsynaptic current arise from different combinations of recovery speeds? This question is of interest because it reveals to which extent experimental data may be used to uniquely identify the recovery rates. Answering this question would mean quantitatively comparing the system's responses at different parameter values and stimulation frequencies with experimental measurements. #### Acknowledgements This research has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through grant CRC 1114/3 and under Germany's Excellence Strategy - The Berlin Mathematics Research Center MATH+ (EXC-2046/1 project ID: 390685689). #### Code availability The code used to generate the results in this paper is available at [https://doi.org/10.5281/zenodo.7551439](https://doi.org/10.5281/zenodo.7551439). ## Appendix A ### Sensitivity equation for recovery model Given the recovery model of Sec. 2.1 we consider the extension \[\mathbf{Y}(t):=\left(V(t),W_{V}(t),W_{P}(t),R(t),P(t),F(t)\right)^{\top} \tag{26}\] of the cumulative state \(\mathbf{X}(t)\) defined in (5), which satisfies the extended RRE of the form \[\dot{\mathbf{Y}}(t)=\begin{pmatrix}-k_{R}V(t)P(t)+g_{V}W_{V}(t)\\ k_{F}(t)R(t)-g_{V}W_{V}(t)\\ k_{F}(t)R(t)-g_{P}W_{P}(t)\\ k_{R}V(t)P(t)-k_{F}(t)R(t)\\ -k_{R}V(t)P(t)+g_{P}W_{P}(t)\\ k_{F}(t)R(t)\end{pmatrix}=:\tilde{h}(\mathbf{Y}(t),t). \tag{27}\] We emphasize the dependency on the time-varying parameters \(p(t)=(k_{R},k_{F}(t),g_{V},g_{P})\) by writing \(\mathbf{Y}(t,p(t))\) and \(\dot{\mathbf{Y}}(t,p(t))\). We introduce the sensitivities \[Z_{Y_{i}}^{g_{V}}(t):=\frac{\partial Y_{i}}{\partial g_{V}}(t,p(t))\Big{|}_{p(t)=p^{*}(t)},\quad Z_{Y_{i}}^{g_{P}}(t):=\frac{\partial Y_{i}}{\partial g_{P}}(t,p(t))\Big{|}_{p(t)=p^{*}(t)}, \tag{28}\] with \(p^{*}(t)\) being the reference parameter set from the parameter estimation, see Sec. A.2. 
Let \[\mathbf{Z}^{g_{V}}(t):=\begin{pmatrix}Z_{V}^{g_{V}}(t)\\ Z_{W_{V}}^{g_{V}}(t)\\ Z_{W_{P}}^{g_{V}}(t)\\ Z_{R}^{g_{V}}(t)\\ Z_{P}^{g_{V}}(t)\\ Z_{F}^{g_{V}}(t)\end{pmatrix},\quad\mathbf{Z}^{g_{P}}(t):=\begin{pmatrix}Z_{V}^{g_{P}}(t)\\ Z_{W_{V}}^{g_{P}}(t)\\ Z_{W_{P}}^{g_{P}}(t)\\ Z_{R}^{g_{P}}(t)\\ Z_{P}^{g_{P}}(t)\\ Z_{F}^{g_{P}}(t)\end{pmatrix}.\] In accordance with [39], the ODE-systems for the sensitivities in matrix notation are then \[\dot{\mathbf{Z}}^{g_{V}}(t) =\frac{\partial\dot{\mathbf{Y}}}{\partial g_{V}}(t,p(t))\Big{|}_{p(t)=p^{*}(t)}+\mathcal{J}\big{(}t,p^{*}(t)\big{)}\mathbf{Z}^{g_{V}}(t), \tag{29}\] \[\dot{\mathbf{Z}}^{g_{P}}(t) =\frac{\partial\dot{\mathbf{Y}}}{\partial g_{P}}(t,p(t))\Big{|}_{p(t)=p^{*}(t)}+\mathcal{J}(t,p^{*}(t))\mathbf{Z}^{g_{P}}(t), \tag{30}\] where \[\frac{\partial\dot{\mathbf{Y}}}{\partial g_{V}}(t,p(t))=\begin{pmatrix}W_{V}(t,p(t))\\ -W_{V}(t,p(t))\\ 0\\ 0\\ 0\\ 0\end{pmatrix},\quad\quad\quad\frac{\partial\dot{\mathbf{Y}}}{\partial g_{P}}(t,p(t))=\begin{pmatrix}0\\ 0\\ -W_{P}(t,p(t))\\ 0\\ W_{P}(t,p(t))\\ 0\end{pmatrix}, \tag{31}\] while \(\mathcal{J}(t,p(t))\) denotes the Jacobian matrix of \(\dot{\mathbf{Y}}(t,p(t))\) with respect to \(\mathbf{Y}\), so \(\mathcal{J}_{ij}(t,p(t)):=\frac{\partial\tilde{h}_{i}}{\partial Y_{j}}(\mathbf{Y},t)\) for \(\tilde{h}\) given in (27), such that \[\mathcal{J}(t,p(t))=\begin{pmatrix}-k_{R}P(t,p(t))&g_{V}&0&0&-k_{R}V(t,p(t))&0\\ 0&-g_{V}&0&k_{F}(t)&0&0\\ 0&0&-g_{P}&k_{F}(t)&0&0\\ k_{R}P(t,p(t))&0&0&-k_{F}(t)&k_{R}V(t,p(t))&0\\ -k_{R}P(t,p(t))&0&g_{P}&0&-k_{R}V(t,p(t))&0\\ 0&0&0&k_{F}(t)&0&0\end{pmatrix}. \tag{32}\] We can solve numerically for the sensitivities of all model components. As the system is assumed to start in the parameter-dependent steady state \(\hat{\mathbf{x}}=\hat{\mathbf{x}}(p)\) (see A.4 for its calculation), the initial values of the sensitivities are given by \[Z_{X_{i}}^{g_{P}}(0)=\frac{\partial\hat{x}_{i}}{\partial g_{P}}(p(0))\Big{|}_{p(0)=p^{*}(0)},\quad Z_{X_{i}}^{g_{V}}(0)=\frac{\partial\hat{x}_{i}}{\partial g_{V}}(p(0))\Big{|}_{p(0)=p^{*}(0)}\quad\text{for}\quad i=1,...,5 \tag{33}\] for \((X_{i})_{i=1,...,5}=(V,W_{V},W_{P},R,P)\). Moreover, as \(F(0)=0\) holds independently of the parameter values, we know that \(Z_{F}^{g_{P}}(0)=Z_{F}^{g_{V}}(0)=0\). Finally, the sensitivity in \(\dot{F}\) can then be found by interchanging the partial derivatives, where one has to apply Schwarz's theorem: \[Z_{\dot{F}}^{g_{V}}(t)=\frac{\partial\dot{F}}{\partial g_{V}}(t)=\frac{\partial}{\partial g_{V}}\,\frac{\partial F}{\partial t}(t)=\frac{\partial}{\partial t}\frac{\partial F}{\partial g_{V}}(t)=\frac{\partial}{\partial t}Z_{F}^{g_{V}}(t).\] The sensitivity \(Z_{\dot{F}}^{g_{P}}(t)\) can be found in an analogous manner. ### Estimation of parameter values In order to yield realistic model behavior, the rate constants were estimated both from the literature and from the _unpriming model_ by Kobbersmed et al. [33]. In this model, there are several states that each release site can attain: It can be either empty (state \(P_{0}\)), or there is a vesicle attached to it, which itself has zero to five \(\text{Ca}^{2+}\) ions bound to its fusion sensor (states \(R_{0},...,R_{5}\), respectively). Switches between these states happen by random jump events, where the jump propensities partially depend on the \(\text{Ca}^{2+}\) concentration. 
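Before the individual rates are estimated below, the augmented system (27), (29)-(33) above can be sketched in code as follows. The rate \(k_{F}(t)\) and the parameter values are placeholders, and the initial sensitivities are simplified to zero, whereas the paper uses the derivatives of the parameter-dependent steady state from Eq. (33).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the augmented system (27), (29)-(32): state Y plus the two
# sensitivity vectors Z^{g_V}, Z^{g_P}. Parameters are placeholders.
k_R, g_V, g_P = 12.9, 0.4, 50.0
def k_F(t): return 400.0                      # stand-in for Eq. (34)

def rhs(t, u):
    Y, Z_V, Z_P = u[:6], u[6:12], u[12:18]
    V, W_V, W_P, R, P, F = Y
    dY = np.array([-k_R * V * P + g_V * W_V,
                   k_F(t) * R - g_V * W_V,
                   k_F(t) * R - g_P * W_P,
                   k_R * V * P - k_F(t) * R,
                   -k_R * V * P + g_P * W_P,
                   k_F(t) * R])                               # Eq. (27)
    J = np.array([[-k_R * P, g_V, 0, 0, -k_R * V, 0],
                  [0, -g_V, 0, k_F(t), 0, 0],
                  [0, 0, -g_P, k_F(t), 0, 0],
                  [k_R * P, 0, 0, -k_F(t), k_R * V, 0],
                  [-k_R * P, 0, g_P, 0, -k_R * V, 0],
                  [0, 0, 0, k_F(t), 0, 0]])                   # Eq. (32)
    dF_dgV = np.array([W_V, -W_V, 0, 0, 0, 0])                # Eq. (31)
    dF_dgP = np.array([0, 0, -W_P, 0, W_P, 0])
    dZ_V = dF_dgV + J @ Z_V                                   # Eq. (29)
    dZ_P = dF_dgP + J @ Z_P                                   # Eq. (30)
    return np.concatenate([dY, dZ_V, dZ_P])

Y0 = np.array([10, 0, 0, 0, 1, 0], dtype=float)
u0 = np.concatenate([Y0, np.zeros(12)])   # zero initial sensitivities (simplification)
sol = solve_ivp(rhs, (0.0, 1.0), u0, method="LSODA", max_step=1e-3)
```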
Besides the _priming_ reaction \(P_{0}\to R_{0}\), which represents the binding of a vesicle to the empty release site, there is the reverse _unpriming_ reaction, which describes the process of a docked vesicle detaching from the release site again. The central event of a docked vesicle fusing with the membrane can happen from each of the states \(R_{0},...,R_{5}\) and turns the release site into the empty state \(P_{0}\) again. This is modeled by the reaction \(R_{m}\to P_{0}+F\) (\(m=0,...,5\)), where \(F\) refers to the cumulative number of fusion events. Our model, as described in Sec. 2.1, merges the states \(R_{0},...,R_{5}\) of Ref. [33] into one state \(R\). On the other hand, we add the recovery states \(W_{V}\) and \(W_{P}\) as well as the state \(V\) of available vesicles, thereby turning the first-order priming reaction \(P_{0}\to R_{0}\) of the Kobbersmed model into a second-order reaction \(P+V\to R\) between available release sites and available vesicles. These relations between the models are used in the following to estimate the fusion rate as well as the priming and unpriming rates for our model. An overview of the parameter values as well as the used method of estimation is shown in Table 1; the details will be discussed in the following. **Release site recovery rate \(g_{P}\gtrsim 50\,\text{s}^{-1}\).** According to Kawasaki et al. [32], repeated stimulation in mutants with inhibited vesicle recovery induced synaptic fatiguing within \(20\,\text{ms}\). Thus, release site recovery is estimated to operate at a rate of \(g_{P}=\frac{1}{20\,\text{ms}}=50\,\text{s}^{-1}\). **Vesicle recovery rate \(g_{V}\gtrsim 0.4\,\text{s}^{-1}\).** Watanabe et al. [42] observed and timed a succession of steps for vesicle recovery: endocytosis of a large vesicle (\(50\,\text{ms}-100\,\text{ms}\)), transition to an endosome (\(1\,\text{s}\)), coating (\(3\,\text{s}\)) and separation of the endosome into approximately \(4\) synaptic vesicles (\(6\,\text{s}\)). We therefore estimated the vesicle recovery rate to be \(g_{V}=[(100\,\text{ms}+1\,\text{s}+3\,\text{s}+6\,\text{s})/4]^{-1}\approx(2.5\,\text{s})^{-1}=0.4\,\text{s}^{-1}\). **Fusion rate \(k_{F}(t)\).** We estimated the fusion reaction propensity \(k_{F}(t)\) during stimulation by calculating a weighted average of the fusion rates in the Kobbersmed model [33] for \(1\,\mathrm{s}\) of stimulation at \(100\,\mathrm{Hz}\) with an external \(\mathrm{Ca}^{2+}\) concentration of \(1.5\,\mathrm{mM}\) and a distance from the \(\mathrm{Ca}^{2+}\) channel of \(118\,\mathrm{nm}\). The weights result from truncating the states \(P_{0}\) and \(F\) from the model and observing the distribution of the release sites \(R_{0},...,R_{5}\) in the truncated model in response to the stimulus train. In the Kobbersmed model, which is based on the allosteric fusion model by Lou et al. [43], the dynamic behavior results from time-dependent changes in the intracellular \(\mathrm{Ca}^{2+}\) concentration that directly enters the model's reaction rates. The behavior of this \(\mathrm{Ca}^{2+}\) concentration was determined using the _CalC_ modeling tool [35] in accordance with the stimulation frequency and the number of applied stimuli (see top plot in Fig. 2). 
In order to let \(k_{F}\) be a continuously differentiable function, we approximate the resulting time-dependent weighted average \(\bar{k}_{F}^{\mathrm{Kob}}(t)\) as the sum of a baseline rate \(f_{\mathrm{baseline}}\) and a number of Gaussians \(f_{\mathrm{Gaussians}}\): \[k_{F}(t)=f_{\mathrm{baseline}}(t)+f_{\mathrm{Gaussians}}(t), \tag{34}\] where \[f_{\mathrm{baseline}}(t) =\frac{m_{0}}{1+e^{-m_{1}(t-m_{2})}}, \tag{35}\] \[f_{\mathrm{Gaussians}}(t) =\sum_{i=1}^{100}a_{i}e^{-\frac{0.5(t-t_{\mathrm{stim},i})^{2}}{\sigma^{2}}}, \tag{36}\] with \(m_{0},m_{1},m_{2},\sigma,t_{\mathrm{stim},i},a_{i}\in\mathbb{R}_{+}\) for \(i=1,..,100\), see Table 1 for the values. The parameter values for the baseline function were found by fitting \(f_{\mathrm{baseline}}\) to the troughs of the weighted average \(\bar{k}_{F}^{\mathrm{Kob}}\). The parameter \(m_{0}\) denotes the supremum of the logistic function \(f_{\mathrm{baseline}}\), while \(m_{1}\) regulates the steepness and \(m_{2}\) the time at which it assumes its midpoint. The peak times of the fusion rate \(t_{\mathrm{stim},i}\) and amplitudes \(a_{i}\) with respect to the troughs were taken directly from \(\bar{k}_{F}^{\mathrm{Kob}}\), while the peak width \(\sigma\) was approximated as the average peak width in \(\bar{k}_{F}^{\mathrm{Kob}}\). The resulting function \(k_{F}(t)\) is plotted in the upper panel of Fig. 2. **Priming rate \(k_{R}\) and unpriming rate \(k_{U}(t)\).** In order to preserve the paired-pulse ratio from the Kobbersmed model, we optimized both the time-dependent unpriming rate \(k_{U}(t)\) and the priming rate \(k_{R}\) in the following way. In the interest of keeping the number of optimization parameters low, we assumed that \(k_{U}(t)\) was of the same general shape as in Ref. [33], which can approximately be described with the following continuously differentiable sigmoid function: \[k_{U}(t)=k_{U}^{\mathrm{max}}\left(1-\frac{1}{1+e^{-m_{3}(t-m_{4})}}\right)+k_{U}^{\mathrm{min}}. \tag{37}\] The parameters \(m_{3}\), \(m_{4}\), \(k_{U}^{\mathrm{min}}\in\mathbb{R}_{+}\) were estimated directly by fitting this function to the unpriming rate from the Kobbersmed model and adopting the same values, see Table 1. The remaining parameters \(k_{R}\), \(k_{U}^{\mathrm{max}}\in\mathbb{R}_{+}\) were then found by minimizing the parameter-dependent loss function \[L(p)=\Bigg{|}\frac{\dot{F}(t_{\mathrm{peak},2},p)}{\dot{F}(t_{\mathrm{peak},1},p)}-\frac{\dot{F}^{\mathrm{Kob}}(t_{\mathrm{peak},2}^{\mathrm{Kob}},p)}{\dot{F}^{\mathrm{Kob}}(t_{\mathrm{peak},1}^{\mathrm{Kob}},p)}\Bigg{|}, \tag{38}\] fixing the previously estimated parameter values of \(p(t)=(k_{R},k_{U}(t),k_{F}(t),g_{V},g_{P})\). Here, \(t_{\mathrm{peak},j}\) denotes the point in time of the \(j\)-th peak in \(\dot{F}\) in the recovery model\({}^{1}\) and, accordingly, \(t_{\mathrm{peak},j}^{\mathrm{Kob}}\) is the point in time of the \(j\)-th peak in the unpriming model with \(j=1,2\). The total number of vesicles and release sites were set to \(n_{\mathrm{ves}}=10\) and \(n_{\mathrm{sites}}=1\), respectively. Footnote 1: Note that these times are not the same as \(t_{\mathrm{stim},i}\) from the previous section since there is some latency between the rise of the fusion rate \(k_{F}\) and the evocation of a signal. All parameter values are listed in Tables 1 and 2 and determine the reference values \(p^{*}(t)\) of the rate functions. **Impulse response function.** The impulse response function was taken from Ref. 
[33]: \[g(t)=A\left(1-e^{-\frac{t-t_{0}}{\tau_{\mathrm{r}}}}\right)\left(Be^{-\frac{t-t_{0}}{\tau_{\mathrm{df}}}}+(1-B)e^{-\frac{t-t_{0}}{\tau_{\mathrm{ds}}}}\right), \tag{39}\] where \(t_{0}=3\,\mathrm{ms}\) is the onset, \(A=7.21\,\mu\mathrm{A}\) is the full amplitude (if there were no decay), \(B=2.7\times 10^{-9}\) is the fraction of the fast decay, and \(\tau_{\mathrm{r}}=10.6928\,\mathrm{s}\), \(\tau_{\mathrm{df}}=1.5\,\mathrm{ms}\), \(\tau_{\mathrm{ds}}=2.8\,\mathrm{ms}\) are the time constants of rise, fast decay and slow decay, respectively. ### Parameter studies In order to contextualize our findings on the sensitivities as depicted in Fig. 4, we need to evaluate the range of possible system behaviors in response to different parameter values. For clarity, we limit our focus to alterations of the priming rate \(k_{R}\), the value of which was previously found via optimization, and the two recovery rates \(g_{V}\) and \(g_{P}\), which were estimated from the literature in our example (see Sec. A.2). \begin{table} \begin{tabular}{l|l|l} **parameter** & **value** & **method of estimation** \\ \hline \(t_{\mathrm{start}}\) & \(0.05\,\mathrm{s}\) & chosen freely \\ \hline \(g_{P}\) & \(50\,\mathrm{s}^{-1}\) & literature [32] \\ \hline \(g_{V}\) & \(0.4\,\mathrm{s}^{-1}\) & literature [42] \\ \hline \(m_{0}\) & \(397\,\mathrm{s}^{-1}\) & fitting \(f_{\mathrm{baseline}}\) to troughs of \(\bar{k}_{F}^{\mathrm{Kob}}\) \\ \(m_{1}\) & \(33.3\,\mathrm{s}^{-1}\) & -- \\ \(m_{2}\) & \(t_{\mathrm{start}}+0.174\,\mathrm{s}\) & -- \\ \(a_{i}\) & see Table 2 & peak amplitudes with respect to troughs from \(\bar{k}_{F}^{\mathrm{Kob}}\) \\ \(t_{\mathrm{stim},i}\) & \(t_{\mathrm{start}}+i\cdot 0.01\,\mathrm{s}\) & peak times from \(\bar{k}_{F}^{\mathrm{Kob}}\) \\ \(\sigma\) & \(9.53\times 10^{-4}\,\mathrm{s}\) & avg. peak width from \(\bar{k}_{F}^{\mathrm{Kob}}\) \\ \hline \(m_{3}\) & \(27\,318\,\mathrm{s}^{-1}\) & fitting \(k_{U}\) to \(k_{U}^{\mathrm{Kob}}\) \\ \(m_{4}\) & \(t_{\mathrm{start}}-1.4\times 10^{-3}\,\mathrm{s}\) & -- \\ \(k_{U}^{\mathrm{min}}\) & \(1.02\times 10^{-8}\,\mathrm{s}^{-1}\) & -- \\ \(k_{U}^{\mathrm{max}}\) & \(334\,\mathrm{s}^{-1}\) & minimizing \(L(p)\) \\ \hline \(k_{R}\) & \(12.9\,\mathrm{s}^{-1}\) & minimizing \(L(p)\) \\ \end{tabular} \end{table} Table 1: **Parameter estimation results.** Overview of parameter values and the used method of estimation. #### A.3.1 Varying the docking rate The result of varying \(k_{R}\) from \(1/20\) to \(20\) times its original value while keeping all other parameter values as in our example is depicted in Fig. 9. Note the logarithmic spacing between different values of \(k_{R}\) (dark - low values, light - high values) and that the crimson color (line No. 5) corresponds to the parameter values used in the example in Fig. 4. The top two graphs show the temporal evolution of the sensitivities, while the bar plots beneath give the behavior of the sensitivities' absolute values. At stimulation onset (\(t_{0}=3\,\mathrm{ms}\)), the dominant sensitivity is \(z_{C}^{g_{P}}\) for all values of \(k_{R}\) under consideration, and after \(1\,\mathrm{s}\) of stimulation, \(z_{C}^{g_{V}}\) always dominates, i.e. the identity of the rate-determining process is time-dependent for all examined values of \(k_{R}\). 
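As an aside on the rate functions used throughout these studies, the following sketch assembles \(k_{F}(t)\) from Eqs. (34)-(36), \(k_{U}(t)\) from Eq. (37), and \(g(t)\) from Eq. (39) with the Table 1 values; since the individual peak amplitudes \(a_{i}\) of Table 2 are not reproduced here, a constant stand-in value is used for them.

```python
import numpy as np

# Fitted rate functions with Table 1 values; the amplitudes a_i are a
# constant placeholder because Table 2 is not reproduced here.
t_start = 0.05
m0, m1, m2 = 397.0, 33.3, t_start + 0.174
sigma = 9.53e-4
t_stim = t_start + 0.01 * np.arange(1, 101)        # 100 stimuli at 100 Hz
a = np.full(100, 4000.0)                           # hypothetical amplitudes a_i

def k_F(t):                                        # Eqs. (34)-(36), scalar t
    baseline = m0 / (1.0 + np.exp(-m1 * (t - m2)))
    gaussians = (a * np.exp(-0.5 * (t - t_stim) ** 2 / sigma**2)).sum()
    return baseline + gaussians

m3, m4 = 27318.0, t_start - 1.4e-3
k_U_min, k_U_max = 1.02e-8, 334.0

def k_U(t):                                        # Eq. (37); numerically crude near t = 0
    return k_U_max * (1.0 - 1.0 / (1.0 + np.exp(-m3 * (t - m4)))) + k_U_min

t0, A, B = 3e-3, 7.21, 2.7e-9
tau_r, tau_df, tau_ds = 10.6928, 1.5e-3, 2.8e-3

def g(t):                                          # Eq. (39), vectorized, zero before onset
    s = t - t0
    rise = 1.0 - np.exp(-s / tau_r)
    decay = B * np.exp(-s / tau_df) + (1.0 - B) * np.exp(-s / tau_ds)
    return np.where(s > 0, A * rise * decay, 0.0)
```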
Since the value of \(k_{R}\) regulates the speed of the priming reaction for most of the stimulation time (as the unpriming rate \(k_{U}\) falls to a negligible value after the first peak), one might naively expect a simple temporal compression (elongation) of the crimson system evolution in response to an increase (decrease) in \(k_{R}\). While the sensitivity plots (top) do show this general behavior, interestingly, for high \(k_{R}\), we also observe the formation of peaks of increased magnitude in the sensitivity graphs. The plots can be explained as follows: For small values of \(k_{R}\), the priming reaction happens so slowly that neither release site nor vesicle recovery can develop much impact on the resulting weak signal within \(1\,\mathrm{s}\) and both sensitivities are generally small. Due to the availability of the vesicle reserve, release site recovery is the limiting process until \(W_{V}\) has filled up sufficiently. At increased \(k_{R}\), the vesicle supply \(V\) is emptied at a greater speed that is dictated mainly by the recovery rate \(g_{P}\). The amplitude of the resulting current \(C\) is large as long as there are still vesicles available and exhibits a sharp decay after vesicle depletion; the higher \(k_{R}\) is, the steeper the decay. This is why a small increase in \(g_{P}\) can lead to a strong relative attenuation of \(C\), i.e. large negative peak values of \(z_{C}^{g_{P}}\), at the end of vesicle depletion: increasing the recovery rate \(g_{P}\) slightly shifts the time of vesicle depletion to the left, and the resulting relative difference in \(C\) in the decay region is larger for a more steeply-decaying signal, leading to larger and sharper peaks for increased \(k_{R}\). Table 2: **Fusion rate amplitudes.** Estimated peak heights of \(f_{\mathrm{Gaussians}}\). The peak emergence in \(z_{C}^{g_{V}}\) can be explained in an analogous manner; however, since a slight increase in \(g_{V}\) does not alter the vesicle depletion process much, the peaks are much smaller in magnitude (note the different scaling of the vertical axes). The combination of these effects leads to the formation of a second domain in which \(z_{C}^{g_{P}}\) is the dominant sensitivity for a majority of the examined parameter space. After most vesicles have accumulated in \(W_{V}\), the system is most sensitive to changes in \(g_{V}\). #### A.3.2 Varying the vesicle recovery rate Fig. 10 shows the impact of varying the vesicle recovery rate \(g_{V}\) from 1/20 to 20 times its original value while keeping all other parameters as in the example. Increasing \(g_{V}\) induces a flattening of the sensitivity \(z_{C}^{g_{V}}\) curve to lower values, while the indentation in the course of \(z_{C}^{g_{P}}\) gradually becomes less negative and finally levels out to an almost constant positive value. This is due to the fact that raising \(g_{V}\) decreases vesicle depletion and thereby alleviates the effects discussed in the previous subsection. Vice versa, lowering \(g_{V}\) increases vesicle depletion and its impact on the sensitivities. As a result, the identity of the rate-limiting process keeps behaving in a way similar to our example (again, lines/bar No. 5) for low values of \(g_{V}\). 
Interestingly, when increasing \(g_{V}\), the second domain of higher absolute sensitivity \(z_{C}^{g_{P}}\) initially disappears and then the first domain starts to expand significantly. Thus, the total amount of time spent in the site-limited state actually first decreases before increasing! This is especially relevant since it means that the same percentage of time spent in the site-limited state can result from different vesicle recovery rates. (If there were a way to distinguish the states experimentally, the short intermediate vesicle-limited domain might be too small to resolve.) Finally, when vesicles are replenished at very high speeds, the system is fully site-limited during the stimulation time. Figure 9: **Impact of changing \(k_{R}\) on the sensitivity time course.** The priming rate is varied from 1/20 to 20 times its original value while keeping all other parameter values as in our example. Note the logarithmic spacing between different values of \(k_{R}\) (dark - low values, light - high values) and that the crimson color (line No. 5) corresponds to the parameter values used in the example in Fig. 4. #### A.3.3 Varying the release site recovery rate The effect of varying the release site recovery rate \(g_{P}\) from \(1/20\) to \(20\) times its original value while keeping all other parameters as in the example is depicted in Fig. 11. An increase in \(g_{P}\) results in temporal compression of both sensitivities. For the sensitivity \(z_{C}^{g_{V}}\), a decrease in \(g_{P}\) simply has the opposite effect, a temporal stretching of the time course. This is because the release site recovery rate determines how fast the vesicle supply is emptied and \(W_{V}\) is filled, and the earlier this happens, the earlier the sensitivity to vesicle recovery rises. For \(z_{C}^{g_{P}}\), raising \(g_{P}\) also brings on a strong amplitude diminution, while lowering \(g_{P}\) has the opposite effect. At high release site recovery rates, vesicles are depleted quickly and the resulting signal \(C\) shows an exponential decay. A small increase in \(g_{P}\) steepens the slope of this decay; however, there is a limit to this steepening since the vesicle depletion process is still constrained by the amount of priming and fusions that happen. Thus, at high \(g_{P}\), the slope changes only very slightly, which is why the relative change in the current \(C\), and therefore also the sensitivity \(z_{C}^{g_{P}}\), is small. At low values of \(g_{P}\), as vesicle depletion happens very slowly, site recovery speed has the greatest impact on signal strength and even small increases in \(g_{P}\) can result in a lasting increase in signal strength. The combination of these effects results in the behavior of the limiting process that is depicted on the bottom of Fig. 11: Except for very low values of \(g_{P}\), the identity switches at least once and always begins as site-limiting at stimulation onset, before \(W_{V}\) has filled sufficiently for vesicle recovery to have an impact. The two domains from our example where the system is site-limited are conserved within a range of \(g_{P}\) but are compressed with increasing \(g_{P}\). For very high \(g_{P}\), the second domain disappears completely and the system is only site-limited for a short amount of time at stimulation onset. Only at very low site recovery rates is site recovery always the rate-determining process. 
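The parameter studies of this section all follow the same loop structure; a short sketch (with `solve_system` as a hypothetical helper standing in for the augmented ODE and convolution pipeline sketched earlier) is:

```python
import numpy as np

# Parameter-study loop of Sec. A.3: scale one rate by log-spaced factors
# between 1/20 and 20, recompute the normalized sensitivities, and record
# which process dominates over time. solve_system is a hypothetical helper.
factors = np.logspace(np.log10(1 / 20), np.log10(20), 9)

for f in factors:
    z_gV, z_gP = solve_system(g_V=0.4 * f, g_P=50.0)   # placeholder interface
    dominant = np.where(np.abs(z_gV) >= np.abs(z_gP), "vesicle", "site")
```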
Figure 10: **Impact of changing \(g_{V}\) on the sensitivity time course.** The vesicle recovery rate is varied from \(1/20\) to \(20\) times its original value while keeping all other parameter values as in our example. Note the logarithmic spacing between different values of \(g_{V}\) (dark - low values, light - high values) and that the crimson color (line No. 5) corresponds to the parameter values used in the example in Fig. 4. ### Steady state investigation In the following we show that for constant \(k_{F}>0\) and constant \(k_{U}>0\) the system given by Eq. (6) has a unique steady state. Assume also \(g_{V},g_{P}>0\). The steady state \(\hat{\mathbf{x}}=(\hat{V},\hat{W}_{V},\hat{W}_{P},\hat{R},\hat{P})\) is given by the fixed point equations \[0 =-k_{R}\hat{V}\hat{P}+g_{V}\hat{W}_{V}+k_{U}\hat{R} \tag{40}\] \[0 =k_{F}\hat{R}-g_{V}\hat{W}_{V}\] (41) \[0 =k_{F}\hat{R}-g_{P}\hat{W}_{P}\] (42) \[0 =k_{R}\hat{V}\hat{P}-k_{U}\hat{R}-k_{F}\hat{R}\] (43) \[0 =-k_{R}\hat{V}\hat{P}+g_{P}\hat{W}_{P}+k_{U}\hat{R} \tag{44}\] with \[\hat{R}+\hat{P}+\hat{W}_{P} =n_{\rm sites}, \tag{45}\] \[\hat{V}+\hat{R}+\hat{W}_{V} =n_{\rm ves}. \tag{46}\] Note that (43) follows from (40) and (41), while (44) follows from (42) and (43), so both (43) and (44) are redundant. From (41) it follows that \(\hat{W}_{V}=\frac{k_{F}}{g_{V}}\hat{R}\) and from (42) it follows that \(\hat{W}_{P}=\frac{k_{F}}{g_{P}}\hat{R}\). Inserting into (45) and (46) we get \[\hat{P}=n_{\rm sites}-\left(1+\frac{k_{F}}{g_{P}}\right)\hat{R} \tag{47}\] and \[\hat{V}=n_{\rm ves}-\left(1+\frac{k_{F}}{g_{V}}\right)\hat{R}, \tag{48}\] respectively. Figure 11: **Impact of changing \(g_{P}\) on the sensitivity time course.** The release site recovery rate is varied from \(1/20\) to \(20\) times its original value while keeping all other parameter values as in our example. Note the logarithmic spacing between different values of \(g_{P}\) (dark - low values, light - high values) and that the crimson color (line No. 5) corresponds to the parameter values used in the example in Fig. 4. Set \(\alpha:=1+\frac{k_{F}}{g_{P}}>1\), \(\beta:=1+\frac{k_{F}}{g_{V}}>1\) and \(\gamma:=\frac{k_{F}+k_{U}}{k_{R}}>0\). Inserting into (40) we obtain \[0=\hat{R}^{2}\underbrace{-\left(\frac{n_{\text{ves}}}{\beta}+\frac{n_{\text{sites}}}{\alpha}+\frac{\gamma}{\alpha\beta}\right)}_{=:p}\hat{R}+\underbrace{\frac{n_{\text{ves}}n_{\text{sites}}}{\alpha\beta}}_{=:q}. \tag{49}\] This yields two solutions for \(\hat{R}\): \[\hat{R}_{1,2}=\underbrace{-\frac{p}{2}}_{=:R_{L}}\pm\underbrace{\sqrt{\frac{p^{2}}{4}-q}}_{=:R_{R}}=R_{L}\pm R_{R}. \tag{50}\] We will now show that one of these solutions can be discarded as it leads to negative values for \(\hat{P}\). From (47) we have \[\hat{P}_{1,2}=n_{\text{sites}}-\alpha\hat{R}_{1,2}=n_{\text{sites}}-\alpha(R_{L}\pm R_{R})=n_{\text{sites}}-\alpha R_{L}\mp\alpha R_{R}.\] For \(\alpha R_{R}>n_{\text{sites}}-\alpha R_{L}\) this will give \(\hat{P}_{1}<0\). Let us therefore compare the two expressions in the following. 
It holds that \[n_{\text{sites}}-\alpha R_{L}\] \[=n_{\text{sites}}+\alpha\frac{p}{2}\] \[=\frac{\beta n_{\text{sites}}-\alpha n_{\text{ves}}-\gamma}{2\beta}\] \[=\sqrt{\frac{(\beta n_{\text{sites}}-\alpha n_{\text{ves}}-\gamma)^{2}}{(2\beta)^{2}}}\] \[=\sqrt{\frac{(\beta n_{\text{sites}})^{2}-2\beta n_{\text{sites}}(\alpha n_{\text{ves}}+\gamma)+(\alpha n_{\text{ves}}+\gamma)^{2}}{(2\beta)^{2}}} \tag{51}\] (if \(n_{\text{sites}}-\alpha R_{L}<0\), the inequality \(\alpha R_{R}>n_{\text{sites}}-\alpha R_{L}\) holds trivially since \(\alpha R_{R}\geq 0\), so we may assume non-negativity when taking the square root above) and \[\alpha R_{R}\] \[=\sqrt{\alpha^{2}\left(\frac{p^{2}}{4}-q\right)}\] \[=\sqrt{\frac{(\beta n_{\text{sites}}+\alpha n_{\text{ves}}+\gamma)^{2}-4\alpha\beta n_{\text{ves}}n_{\text{sites}}}{(2\beta)^{2}}}\] \[=\sqrt{\frac{(\beta n_{\text{sites}})^{2}+2\beta n_{\text{sites}}(\alpha n_{\text{ves}}+\gamma)+(\alpha n_{\text{ves}}+\gamma)^{2}-4\alpha\beta n_{\text{ves}}n_{\text{sites}}}{(2\beta)^{2}}}\] \[=\sqrt{\frac{(\beta n_{\text{sites}})^{2}-2\beta n_{\text{sites}}(\alpha n_{\text{ves}}-\gamma)+(\alpha n_{\text{ves}}+\gamma)^{2}}{(2\beta)^{2}}}. \tag{52}\] Since \(\gamma>0\), comparing (51) to (52) proves that indeed \(\alpha R_{R}>n_{\text{sites}}-\alpha R_{L}\) and we need to choose \(\hat{R}_{2}\): \[\hat{R} =\hat{R}_{2}=-\frac{p}{2}-\sqrt{\frac{p^{2}}{4}-q}\] \[=\frac{\beta n_{\rm sites}+\alpha n_{\rm ves}+\gamma}{2\alpha\beta}-\sqrt{\frac{(\beta n_{\rm sites}+\alpha n_{\rm ves}+\gamma)^{2}}{(2\alpha\beta)^{2}}-\frac{n_{\rm ves}n_{\rm sites}}{\alpha\beta}}. \tag{53}\] We note that the term under the square root is always non-negative since \[\frac{(\beta n_{\rm sites}+\alpha n_{\rm ves}+\gamma)^{2}}{(2\alpha\beta)^{2}}-\frac{n_{\rm ves}n_{\rm sites}}{\alpha\beta}\] \[=\frac{1}{(2\alpha\beta)^{2}}\left[(\beta n_{\rm sites}+\alpha n_{\rm ves})^{2}+2(\beta n_{\rm sites}+\alpha n_{\rm ves})\gamma+\gamma^{2}-4\alpha\beta n_{\rm ves}n_{\rm sites}\right]\] \[=\frac{1}{(2\alpha\beta)^{2}}\left[(\beta n_{\rm sites}-\alpha n_{\rm ves})^{2}+2(\beta n_{\rm sites}+\alpha n_{\rm ves})\gamma+\gamma^{2}\right]\] \[\geq 0. \tag{54}\] In summary, this means that for each choice of (positive) parameter values there exist two fixed points, but only one of them has physically relevant (non-negative) values, while the other one has negative values. That is, there is a unique steady state of the system. Due to the stoichiometric structure of the system, this steady state will be approached in the course of time, no matter which initial state (with non-negative values) is chosen. For time-dependent, periodic rates \(k_{F}(t)\) the system will be pulled towards the time-dependent steady state, thereby itself showing periodic behavior, see Fig. 2. ### Reduced reaction systems In Sec. 3.3 we have argued that the similarity of the ODE-solution with the stochastic mean stems from the independence of the recovery steps. We now clarify this issue by comparing two reduced reaction systems: 1. Standard binding and unbinding given by the reactions \[A+B\stackrel{{\alpha}}{{\longrightarrow}}W,\quad W\stackrel{{\beta}}{{\longrightarrow}}A+B\] (55) with the associated ODEs given by \[\frac{d}{dt}A(t)=\frac{d}{dt}B(t)=-\alpha A(t)B(t)+\beta W(t)=-\frac{d}{dt}W(t).\] (56) 2. 
Binding with independent return given by the reactions \[A+B\stackrel{{\alpha^{\prime}}}{{\longrightarrow}}W_{A}+W_{B}, \quad W_{A}\stackrel{{ g_{A}}}{{\longrightarrow}}A,\quad W_{B} \stackrel{{ g_{B}}}{{\longrightarrow}}B\] (57) with the associated ODEs given by \[\frac{d}{dt}A(t) =-\alpha^{\prime}A(t)B(t)+g_{A}W_{A}(t)=-\frac{d}{dt}W_{A}(t),\] (58) \[\frac{d}{dt}B(t) =-\alpha^{\prime}A(t)B(t)+g_{B}W_{B}(t)=-\frac{d}{dt}W_{B}(t).\] (59) We note that system (I) arises from our full reaction system (as depicted in Fig. 1) by setting \(g_{V}=g_{P}=\infty\), \(k_{U}=0\), \(k_{F}=\beta\), \(k_{R}=\alpha\) (with the species being related by \(V=A\), \(P=B\), \(R=W\)), while the second system (II) results from setting \(k_{F}=\infty\) and \(k_{U}=0\), \(k_{R}=\alpha^{\prime}\), \(g_{V}=g_{A}\), \(g_{P}=g_{B}\) (with the species being related by \(V=A\), \(P=B\), \(W_{V}=W_{A}\), \(W_{P}=W_{B}\)). It can easily be shown that for \(\alpha=\alpha^{\prime}\), \(\beta=g_{A}=g_{B}\) and appropriate initial states (satisfying \(W(0)=W_{A}(0)=W_{B}(0)\)), the ODE-solutions of the two systems (I) and (II) fully agree. However, the first-order moments of the corresponding stochastic jump processes \(\mathcal{A}(t),\mathcal{B}(t)\) are not the same. In fact, the first-order moments \(\mu_{\mathcal{A}}(t),\mu_{\mathcal{B}}(t)\) of the second system (II) of independent return are closer to the ODE-solution \(A(t),B(t)\), see Fig. 12(a), where we plot the relative errors. Denote the corresponding standard deviations of \(\mathcal{A}(t)\) and \(\mathcal{B}(t)\) by \(\sigma_{\mathcal{A}}(t),\sigma_{\mathcal{B}}(t)\), respectively. Then Fig. 12(b) shows the correlation function \[\mathrm{corr}(t):=\frac{\mathrm{cov}(\mathcal{A}(t),\mathcal{B}(t))}{\sigma_{\mathcal{A}}(t)\sigma_{\mathcal{B}}(t)} \tag{60}\] for both reduced systems, with significantly smaller values for the second system (II). This confirms our hypothesis that independent recovery improves the quality with which the ODE-system approximates the mean of the stochastic dynamics.
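The two reduced ODE systems are small enough to compare directly; a minimal sketch, assuming \(\alpha=\alpha^{\prime}\), \(\beta=g_{A}=g_{B}\) and matching initial states as above, is:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the reduced systems of Sec. A.5; their ODE-solutions coincide for
# matching parameters and initial states, the stochastic means do not.
alpha, beta = 1.0, 1.0

def rhs_I(t, u):                       # Eq. (56): A + B <-> W
    A, B, W = u
    r = -alpha * A * B + beta * W
    return [r, r, -r]

def rhs_II(t, u):                      # Eqs. (58)-(59): independent return
    A, B, W_A, W_B = u
    bind = alpha * A * B
    return [-bind + beta * W_A, -bind + beta * W_B,
            bind - beta * W_A, bind - beta * W_B]

sol_I = solve_ivp(rhs_I, (0.0, 10.0), [5.0, 5.0, 0.0], max_step=0.01)
sol_II = solve_ivp(rhs_II, (0.0, 10.0), [5.0, 5.0, 0.0, 0.0], max_step=0.01)
# With W(0) = W_A(0) = W_B(0), sol_I.y[0] and sol_II.y[0] agree up to solver error.
```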
2310.15451
Critical dehydrogenation steps of perhydro-N-ethylcarbazole on Ru(0001) surface
Understanding of the critical atomistic steps during the dehydrogenation process of liquid organic hydrogen carriers (LOHCs) is important to the design of cost-efficient, high-performance LOHC catalysts. Based on the density functional theory (DFT) we studied the thermodynamics and kinetics of the complete dehydrogenation path of perhydro-N-ethylcarbazole (12H-NEC) on Ru(0001) surface, involving the adsorption of 12H-NEC, the discharge of H ions onto Ru surface, and the desorption of H2 and hydrogen-lean NEC. It was found that the bonding of nH-NEC is significantly strengthened for n $\le$ 4 because of the flat aromatic ring. Although the whole dehydrogenation process is endothermic, the release of H from nH-NEC, with H adsorbed onto the Ru surface, was found to be exothermic. The desorption of flat, hydrogen-lean NEC, which costs ~255 kJ/mol, was identified as the most energy demanding step. In addition, the effect of surface morphology on adsorption was studied based on an amorphous surface model. Overall, the results imply more efficient dehydrogenation could be achieved from relatively weak bonding of NEC to catalysts, either through engineering catalyst surface (such as surface defects or smaller catalyst particles) or different catalyst materials. Our calculations also revealed possible dealkylation at elevated temperatures.
Chunguang Tang, Preetham Permude, Shunxin Fei, Terry J. Frankcombe, Sean C. Smith, Yun Liu
2023-10-24T01:48:28Z
http://arxiv.org/abs/2310.15451v1
# Critical dehydrogenation steps of perhydro-\(N\)-ethylcarbazole on Ru(0001) surface ###### Abstract Understanding of the critical atomistic steps during the dehydrogenation process of liquid organic hydrogen carriers (LOHCs) is important to the design of cost-efficient, high-performance LOHC catalysts. Based on the density functional theory (DFT) we studied the thermodynamics and kinetics of the complete dehydrogenation path of perhydro-\(N\)-ethylcarbazole (12H-NEC) on Ru(0001) surface, involving the adsorption of 12H-NEC, the discharge of H ions onto the Ru surface, and the desorption of H\({}_{2}\) and hydrogen-lean NEC. It was found that the bonding of \(n\)H-NEC is significantly strengthened for \(n\)\(\leq\)4 because of the flat aromatic ring. Although the whole dehydrogenation process is endothermic, the release of H from \(n\)H-NEC, with H adsorbed onto the Ru surface, was found to be exothermic. The desorption of flat, hydrogen-lean NEC, which costs \(\sim\)255 kJ/mol, was identified as the most energy demanding step. In addition, the effect of surface morphology on adsorption was studied based on an amorphous surface model. Overall, the results imply more efficient dehydrogenation could be achieved from relatively weak bonding of NEC to catalysts, either through engineering the catalyst surface (such as surface defects or smaller catalyst particles) or through different catalyst materials. Our calculations also revealed possible dealkylation at elevated temperatures. Published version at Computational Materials Science, 229, 112373 (2023) ## I Introduction Liquid organic hydrogen carriers (LOHCs) have attracted extensive research interest [1; 2; 3] as a potential alternative hydrogen storage approach to traditional approaches such as liquid H\({}_{2}\), compressed H\({}_{2}\) gas, and even competitive circular carriers [4]. Nevertheless, the large-scale application of LOHCs faces challenges such as the availability of abundant, cost-effective LOHCs as well as suitable catalysts. Currently a number of compounds, including benzene, toluene, naphthalene and \(N\)-ethylcarbazole (NEC), have been identified as potential LOHCs and their properties were discussed in review or perspective articles [3; 4; 5]. Exploration of new LOHCs, such as those from natural products, has also been carried out recently [6]. For a given LOHC material, catalysts significantly impact its (de)hydrogenation process and hence represent a key parameter for LOHC performance. To date, a number of catalysts have been explored for the above-mentioned LOHCs, as summarized in recent reviews [3]. For example, for dehydrogenation of cyclohexane, the performance of Ni-based [7; 8], Ag-based [9], and Pt-based [10; 11] catalysts with various supports has been investigated, and a synergistic effect from the addition of a second metal has been reported. As an ideal tool for investigating atomistic processes and complementary to experiments, first-principles modelling based on density functional theory (DFT) has often been used to provide fundamental insights into LOHC-catalyst reactions [12; 13; 14]. For example, DFT calculations have been used to examine the structural features of tetrahydrocarbazole adsorption on Pd surfaces and its preferred dehydrogenation pathways [15]. The calculations of dodecahydro(12H)-NEC, 12H-carbazole, and 12H-fluorene on Pd(111) surface [16] have revealed a linkage between the dehydrogenation rate and the adsorption strength. 
Some researchers have explored the adsorption of NEC and its hydrogenated states on different Ru surfaces [17] and suggested that the intermediate 8H-NEC, which is kinetically stable, prefers surface sites of low coordination (such as edges). To date most studies have focused on the adsorption of LOHCs on catalyst surfaces, but the atomistic-scale (de)hydrogenation process involves multiple subprocesses, such as (for dehydrogenation) the adsorption of hydrogen-rich LOHCs on the catalyst surface, the release of H onto the catalyst surface, the recombination of H ions into H\({}_{2}\) molecules, and the desorption of H\({}_{2}\) molecules and hydrogen-lean LOHCs. Therefore, understanding of the full picture of the dehydrogenation process is necessary for identifying the critical steps, which is important for designing cost-efficient, high-performance catalysts for LOHCs. In this work we compare the adsorption/desorption energetics as well as kinetics of both NEC-based molecules and hydrogen species on Ru(0001) in an attempt to identify the energetically critical steps for dehydrogenating 12H-NEC. We also examine the impact of imperfect surfaces on the adsorption of the molecules based on an amorphous Ru structure. In addition, a critical challenge for controlling dehydrogenation is the potential fluctuation of temperature in a large reactor, which may cause the decomposition of LOHCs [18], especially when the LOHCs contain weak bonds. In this work we also address the dealkylation of NEC and compare its energetics with that of the other events. ## II Methods Geometry optimization of all the relevant structures in this work was performed using the code VASP [19] with the GGA-PBE exchange-correlation functional and a plane wave basis set with an energy cutoff of 300 eV. For molecule-only structures, a simulation cell separating the molecules from their periodic images by more than 10 Å was used, and the \(k\)-point sampling was performed at the \(\Gamma\) point. For bulk Ru, an orthogonal cell (\(\sim\)2.7\(\times\)4.69\(\times\)4.28 Å\({}^{3}\)) with 11\(\times\)7\(\times\)7 \(k\)-point sampling was used for structural optimization. To represent the catalyst surface, we built an orthogonal Ru(0001) (7\(\times\)8) slab out of the optimized bulk Ru structure with 3 atomic layers in thickness, in view of the balance of accuracy and computational cost. The molecule/slab structure was then contained in a simulation cell of \(\sim\)18.8\(\times\)18.9\(\times\)25.5 Å\({}^{3}\), which separates the structure from its periodic images along the surface normal by \(\sim\)15 Å and is coupled with \(k\)-point sampling at the \(\Gamma\) point. During the calculations the bottom Ru layer was fixed. The geometry optimization of the system was stopped once the forces on the atoms were below 0.01 eV/Å. To examine the effect of an imperfect Ru surface on adsorption, we first constructed an amorphous bulk Ru based on classical molecular dynamics, with the atomic interactions described by an embedded atom method (EAM) potential [20]. To this end, a bulk Ru of 224 atoms (similar to the slab structure mentioned above but with 4 layers) was well liquefied at 2500 K and then quenched to 100 K at 50 K/ps. The simulations were carried out under an NPT (constant particle number, pressure, and temperature) ensemble using the code LAMMPS [21] with a Nosé-Hoover thermostat and barostat, and the timestep was set to 2 fs. 
The obtained amorphous structure, after being optimized using DFT, was used to create a slab by inserting a vacuum layer of \(\sim\)15 Å into the simulation cell, with the final cell size being \(\sim\)19.4\(\times\)18.7\(\times\)25.5 Å\({}^{3}\). The \(n\)H-NEC molecules were then placed at 9 different surface sites of the slab for geometry optimization, with the bottom Ru layer of thickness \(\sim\)3.5 Å being fixed.

A previous study on natural LOHCs [6] indicates that DFT calculations using a PBE-based hybrid functional (HSE03) plus thermal corrections, which include the zero-point energy and the enthalpy of the molecules, can predict the dehydrogenation enthalpy reasonably well. However, the thermal corrections result in extra computational cost and, more importantly, we sometimes experienced difficulties in converging the hybrid-functional calculations, especially for metals. To examine the feasibility of studying the dehydrogenation reactions using the PBE functional alone, we tested the performance of PBE and HSE03, with and without thermal and van der Waals corrections, on several example molecules following the parameters in reference [6]. As shown in Table 1 and Fig. 1, although the PBE functional alone overestimates the average and stepwise dehydrogenation enthalpies by \(>\)10 and \(\sim\)9\(-\)14 kJ/mol-H\({}_{2}\), respectively, it provides very good estimates of the relative energetics. Hence, in this work we address the dehydrogenation energetics based only on the PBE functional at zero temperature.

Figure 1: Comparison of the accuracy of various approximation methods used for dehydrogenation energy calculation. (top) Calculated dehydrogenation energy (kJ/mol-H\({}_{2}\)) of various molecules with different levels of approximation, as compared with experimental values. The D3 method with Becke-Johnson damping was used for the van der Waals (VDW) correction. (bottom) Stepwise dehydrogenation energy for perhydro-trisphaeridine. Data from Table 1.

For the dehydrogenation process, we assume the initial state (denoted as state A) to be a 12H-NEC molecule far from the Ru surface and the final state (state F) to be a NEC molecule plus 6 H\({}_{2}\) molecules far from the Ru surface, as schematically shown in Fig. 2. For the energetics study, we consider the following intermediate states: 12H-NEC adsorbed onto the Ru surface (state B), all of the 12 H ions released (state C), the H ions combined into 6 H\({}_{2}\) molecules on the surface (state D), and the H\({}_{2}\) molecules desorbed (state E); the system then reaches state F upon NEC desorption. As an alternative to the process C\(\rightarrow\)D\(\rightarrow\)E, direct desorption of H ions into gaseous H\({}_{2}\) (C\(\rightarrow\)E) is also possible, of which the energetics is straightforward. We note that the above order of states is assumed for convenience (for example, experimentally H\({}_{2}\) desorption may start while some remaining H ions are not yet released, or NEC desorption may occur before H\({}_{2}\) desorption is finished) and it does not affect the conclusions of this work. To identify the effects of the various reaction processes and of the catalyst on the dehydrogenation energetics, we computed the system energies along four reaction paths, as detailed below. Reaction 1 (state A to state F) was considered as a reference scenario in which Ru is not involved in the dehydrogenation process.
In this case we treated the molecules and the Ru slab separately, and at each reaction step the system energy is the sum of the separate parts, which can be written as

\[E(n)=E_{\text{nH-NEC}}+\frac{12-n}{2}E_{\text{H}_{2}}+E_{\text{Ru}} \tag{1}\]

where \(n\) decreases from 12 for the initial state A to 0 for the final state F.

For reaction 2, we considered the transition from state B to state E, with the released H ions directly forming gaseous H\({}_{2}\) molecules far from the Ru surface. This hypothetical case is similar to reaction 1 except that the \(n\)H-NEC molecule is adsorbed on the Ru surface, and so it allows for identifying the effect of \(n\)H-NEC adsorption alone on the dehydrogenation energetics. In this case, the system energy is

\[E(n)=E_{\text{nH-NEC/Ru}}+\frac{12-n}{2}E_{\text{H}_{2}} \tag{2}\]

Similarly, we considered transitions from B to D (reaction 3) and from B to C (reaction 4), with the released H ions forming adsorbed H\({}_{2}\) molecules and adsorbed H ions, respectively. This allows for identifying the adsorption effects of H\({}_{2}\) and of H ions. Here we treat the H\({}_{2}\) (or H ions) and the \(n\)H-NEC separately, and hence for reaction 3 the system energy can be written as

\[E(n)=E_{\text{nH-NEC/Ru}}+\frac{12-n}{2}\mu_{\text{H}_{2}} \tag{3}\]

where the chemical potential \(\mu_{\text{H}_{2}}=E_{\text{H}_{2}/\text{Ru}}-E_{\text{Ru}}\) represents the energy of H\({}_{2}\) adsorbed on the Ru surface. \(E(n)\) for reaction 4 can be defined similarly.

\begin{table} \begin{tabular}{c c c c c c c c} reaction & \(n=12\) & \(n=10\) & \(n=8\) & \(n=6\) & \(n=4\) & \(n=2\) & \(n=0\) \\ \hline 1 & 0 & 97.55 & 120.22 & 248.55 & 258.59 & 392.61 & 401.48 \\ 1 (wB97XD) & 0 & 122.18 & 157.39 & 309.08 & 332.70 & 488.99 & 509.77 \\ 2 & -66.45 & 51.07 & 76.55 & 197.93 & 91.11 & 225.52 & 146.50 \\ 3 & -66.45 & -5.76 & -37.12 & 27.43 & -136.21 & -58.63 & -254.98 \\ 4 & -66.45 & -71.56 & -168.92 & -170.27 & -399.81 & -388.14 & -589.89 \\ \hline \(E_{d}^{\text{nH-NEC}}\) & 66.45 & 46.48 & 43.67 & 50.62 & 167.48 & 167.09 & 254.98 \\ \end{tabular} \end{table} Table 2: System energy (kJ/mol) calculated with the PBE functional at each step (\(n\)) of the four reactions shown in Fig. 2. For reaction 1, we also computed the energy with the wB97XD functional and the cc-pVTZ basis set, as implemented in the code Gaussian, for comparison. \(E_{d}\) represents the desorption energy of \(n\)H-NEC.
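To make the bookkeeping behind Eqs. (1)-(3) concrete, the sketch below evaluates \(E(n)\) for the different reaction paths from a set of DFT total energies. All numerical values are placeholders rather than the energies computed in this work; only the combination rules follow the equations above.

```python
# Placeholder DFT total energies in eV (NOT the values of this study).
E_H2 = -6.77            # isolated H2 molecule
E_Ru = -1500.00         # clean Ru slab
E_H2_on_Ru = -1507.36   # slab with one adsorbed H2
E_gas = {12: -250.0, 10: -240.1}    # gas-phase nH-NEC
E_ads = {12: -1751.0, 10: -1741.2}  # nH-NEC adsorbed on the slab

# Chemical potential of adsorbed H2, as defined after Eq. (3).
mu_H2 = E_H2_on_Ru - E_Ru

def E_reaction1(n):
    # Eq. (1): molecule and slab treated separately.
    return E_gas[n] + (12 - n) / 2 * E_H2 + E_Ru

def E_reaction2(n):
    # Eq. (2): nH-NEC adsorbed; released H forms gaseous H2.
    return E_ads[n] + (12 - n) / 2 * E_H2

def E_reaction3(n):
    # Eq. (3): released H forms H2 adsorbed on the Ru surface.
    return E_ads[n] + (12 - n) / 2 * mu_H2

# Energies are reported relative to state A (reaction 1 at n = 12).
ref = E_reaction1(12)
for n in (12, 10):
    print(n, E_reaction2(n) - ref, E_reaction3(n) - ref)
```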
\begin{table} \begin{tabular}{l c c c c c c c c} Hydrogen-rich system & \(\Delta H_{\text{expt}}^{\alpha}\) & \(\Delta E_{\text{PBE}}\) & \(\Delta E_{\text{HSE}}\) & \(\Delta E_{\text{HSE+VDW}}\) & \(\Delta E_{\text{PBE}}^{\text{thermal}}\) & \(\Delta E_{\text{HSE}}^{\text{thermal}}\) & \(\Delta E_{\text{HSE+VDW}}^{\text{thermal}}\) & \(\delta E\) \\ \hline Cyclohexane & 68.7 & 79.36 & 92.29 & 96.26 & 54.62 & 67.56 & 71.52 & 11.8 \\ Decalin & 66.55 & 84.08 & 97.88 & 102.52 & 57.13 & 70.93 & 75.57 & 13.2 \\ Perhydro-NEC & 53.2 & 63.94 & 77.82 & 83.13 & 38.14 & 52.02 & 57.33 & 11.9 \\ Methylcyclohexane & 68.3 & 87.28 & 100.85 & 105.25 & 61.25 & 74.82 & 79.22 & 12.5 \\ Perhydro-trisphaeridine & & 66.40 & 81.11 & & 39.83 & 54.54 & & 12.8 \\ \end{tabular} \begin{tabular}{c c c c c c c c} \multicolumn{8}{l}{Stepwise dehydrogenation energy for perhydro-trisphaeridine} \\ Energy & step 1 & step 2 & step 3 & step 4 & step 5 & step 6 & step 7 \\ \hline \(\Delta E_{\text{PBE}}\) & 60.94 & 106.98 & -7.34 & 103.52 & 42.54 & 129.10 & 29.08 \\ \(\Delta E_{\text{HSE}}^{\text{thermal}}\) & 47.77 & 96.24 & -21.41 & 94.67 & 31.12 & 117.14 & 16.25 \\ \(\delta E\) & 13.17 & 10.74 & 14.07 & 8.85 & 11.42 & 11.96 & 12.83 \\ \end{tabular} \end{table} Table 1: Calculated dehydrogenation energy (kJ/mol-H\({}_{2}\)) based on the PBE and hybrid HSE03 functionals, with and without van der Waals (VDW) and thermal corrections. Thermal corrections and experimental data are for standard conditions (1 atm and 298 K). More details are in Ref. [6] and references therein. \(\delta E=\Delta E_{\text{PBE}}-\Delta E_{\text{HSE03}}^{\text{thermal}}\). We note that \(\Delta E_{\text{PBE}}\) for perhydro-NEC in this table (63.9 kJ/mol-H\({}_{2}\)) is slightly different from that in Table 2 (401.48/6=66.9 kJ/mol-H\({}_{2}\)). The difference may arise from different parameters used in the previous work [6] and this study, such as the perhydro-NEC isomer chosen and the plane-wave cutoff energy.

The structures of the \(n\)H-NEC species we considered at the various steps are shown at the bottom of Fig. 2; they were suggested by previous studies [22; 23; 24]. For 12H-NEC, there are six possible isomers [17], labelled a-f in Fig. 2, and we chose the one with the lowest calculated energy at 0 K. The chosen isomer is also one of the experimentally observed species [17].

## III Results and discussion

### Dehydrogenation thermodynamics

The computed system energies for the four reactions, all referenced to that of state A, are listed in Table 2 and shown in Fig. 2 (right). For reaction 1, the calculated system energy increases as dehydrogenation proceeds, with an average energy increase of \(\sim\)67 kJ/mol-H\({}_{2}\). For reaction 2, the system energies are reduced because of the adsorption of \(n\)H-NEC. The desorption energy (\(E_{d}\)) of \(n\)H-NEC, which is the \(E\) difference between reactions 1 and 2, ranges from \(\sim\)44 to \(\sim\)66 kJ/mol for \(n\)\(\geq\)6 and from \(\sim\)167 to \(\sim\)255 kJ/mol for \(n\)\(\leq\)4. We attribute the significantly higher \(E_{d}\) for \(n\)\(\leq\)4 to the formation of the aromatic ring, whose flat structure allows for stronger C-Ru bonding with the flat Ru(0001) surface. To illustrate this we compare the structures of 8H-NEC and 4H-NEC as an example. The former molecule adsorbs to Ru mainly via H atoms, and hence its carbon rings are not as well aligned with the surface Ru atoms as the aromatic carbon ring in 4H-NEC (top view in Fig. 3).
The charge density isosurfaces in Fig. 3 show that more electrons transfer from the surface Ru towards 4H-NEC than towards 8H-NEC because of the Ru-C bonding. To illustrate this quantitatively, we performed a Bader charge analysis. As shown in Table 3, the three Ru atoms beneath the aromatic ring of 4H-NEC lose \(\sim\)0.6 more electrons compared to their counterparts beneath 8H-NEC. Fig. 3 also compares the projected density of states (PDOS) of the H and C atoms as well as of the three surface Ru atoms beneath the aromatic ring of 4H-NEC and the corresponding ring of 8H-NEC. By integrating over the PDOS, one can compute the electron energy of the corresponding orbitals (per atom) according to

\[E_{e}=\frac{1}{N_{a}}\int_{-\infty}^{E_{F}}n(\varepsilon)\varepsilon d\varepsilon \tag{4}\]

where \(n(\varepsilon)\) is the PDOS density, \(E_{F}\) is the Fermi energy, and \(N_{a}\) is the number of H, C, or Ru atoms contributing to the PDOS. We found that the \(E_{e}\) values of \(s_{\rm H}\), \(p_{\rm C}\), and \(d_{\rm Ru}\) for 4H-NEC/Ru are lower than those for 8H-NEC/Ru by \(\sim\)0.3, \(\sim\)1.2, and \(\sim\)1.3 eV, respectively. This indicates the stronger bonding between 4H-NEC and Ru, consistent with the charge density analysis.

Figure 3: Electronic structures of 8H-NEC and 4H-NEC on Ru(0001). (left) Structure and isosurface of the differential charge density, i.e., the charge density minus the superposition of atomic charge densities. The isosurfaces represent charge density levels of 10 \(\mu\)\(e\)/Å\({}^{3}\) (red) and \(-\)10 \(\mu\)\(e\)/Å\({}^{3}\) (blue). The brown and green spheres represent different Ru layers, and C, N, and H atoms are shown in grey, white, and blue, respectively. (right) Projected density of states of the H and C atoms as well as of three surface Ru atoms (labelled 1-3 in the left panel).

\begin{table} \begin{tabular}{c c c c c} System & atom 1 & atom 2 & atom 3 & Surface Ru \\ \hline 8H-NEC/Ru & \(+\)0.11 & 0.00 & \(+\)0.04 & \(+\)0.07 \\ 4H-NEC/Ru & \(-\)0.11 & \(-\)0.18 & \(-\)0.15 & \(+\)0.06 \\ \end{tabular} \end{table} Table 3: Bader charges of selected surface Ru atoms (refer to Fig. 3) and the average Bader charge of the top-surface Ru atoms. The charges, in units of \(e\), are referenced to neutral Ru atoms. The analysis shows that the top Ru layer gains electrons from the Ru layer beneath, resulting in a positive average charge.
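Eq. (4) is a weighted integral of the PDOS up to the Fermi level. The sketch below shows the corresponding numerical evaluation, assuming the PDOS \(n(\varepsilon)\) has already been parsed onto an energy grid; the Gaussian curve used here is a toy stand-in, not a computed PDOS.

```python
import numpy as np

def electron_energy(energies, pdos, e_fermi, n_atoms):
    """Per-atom electron energy of the projected states, Eq. (4)."""
    occ = energies <= e_fermi
    return np.trapz(pdos[occ] * energies[occ], energies[occ]) / n_atoms

# Toy energy grid (eV) and a Gaussian stand-in for a d-band PDOS.
E = np.linspace(-10.0, 5.0, 1501)
pdos_d_Ru = np.exp(-((E + 2.5) ** 2))

# Three surface Ru atoms contribute, as in the comparison above.
print(electron_energy(E, pdos_d_Ru, e_fermi=0.0, n_atoms=3))
```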
For \(n\)H-NEC adsorption we tested the effect of the ethyl group state (pointing down towards the surface or pointing up) by rotating the ethyl group around the neighboring N-C bond. We found that for \(n\leq 4\) the up state is highly favorable (by an energy difference of more than 70 kJ/mol), while for the other \(n\)H-NEC structures the two states are nearly equally favorable (with an energy difference of less than 10 kJ/mol). For 4H- and 2H-NEC, pointing down the ethyl group reduces the bonding of the molecule with Ru by displacing the aromatic ring from its favorable position (Fig. 4). On the other hand, for \(n\)=0, the pointing-down ethyl group relaxes into a flat position, allowing the aromatic rings to stay parallel to the Ru surface, but we found that a pointing-up position is energetically more favorable. For \(n\)\(\geq\)6, the charged H atoms increase the space between the carbon rings and the Ru surface, making the up/down state of the ethyl group relatively insignificant. For gaseous \(n\)H-NEC molecules we did not find significant energy differences between the ethyl group states. We note that our results differ from a previous study [17], which reported a negligible adsorption energy for 8H-NEC with the ethyl group facing down and \(\sim\)30 kJ/mol for facing up. However, both studies predict that the adsorption of 8H-NEC on a flat Ru surface is relatively weak, which could result in the desorption of the 8H-NEC molecule. It was found that 8H-NEC prefers low-coordination sites, such as surface steps, over a flat surface [17].

Figure 4: Effect of ethyl group orientation on the adsorption of 2H-NEC on Ru(0001). The preferred ethyl-up state enables a better match of the aromatic ring to the surface Ru.

If the released hydrogen atoms are adsorbed onto the Ru surface, their chemical potential is lower than that of a gaseous H\({}_{2}\) molecule, which reduces the system energy. We studied the adsorption of hydrogen molecules and ions on various Ru surface sites (Fig. 5). It was found that hydrogen molecules are stable on the atop sites, while two hydrogen ions could stay on hcp sites, fcc sites, or their combination, with the fcc sites being favorable. Assuming a hydrogen chemical potential based on the most favorable hydrogen molecule/ion adsorption, we obtained the system energies for reactions 3 and 4, respectively, as shown in Fig. 2. These two reactions reveal an exothermic hydrogen release process, with the path via hydrogen ions being more favorable.

Fig. 2 reveals not only the most probable path for 12H-NEC dehydrogenation but also the contributions of the different factors (\(n\)H-NEC desorption, H\({}_{2}\) desorption, and H ion recombination) to the dehydrogenation energetics, which allows for identifying the energetically critical step. For example, for the dehydrogenation path A\(\rightarrow\)B\(\rightarrow\)C\(\rightarrow\)D\(\rightarrow\)E\(\rightarrow\)F, the first two steps are exothermic and the final three steps cost 335, 401, and 255 kJ/mol, respectively. Given that C\(\rightarrow\)D and D\(\rightarrow\)E involve 6 H\({}_{2}\) molecules while E\(\rightarrow\)F is the desorption of a single NEC molecule, it is clear that NEC desorption is the energetically critical step for the dehydrogenation. Because of the reversibility of (de)hydrogenation, it is easy to see that, for NEC hydrogenation (assuming the same catalyst), the endothermic step C\(\rightarrow\)B costs 523 kJ/mol, or on average \(\sim\)44 kJ/mol for charging each H atom, and the critical desorption of 12H-NEC costs \(\sim\)66 kJ/mol. As the lowest-energy point in Fig. 2, state C is also critical, since too strong an adsorption of H ions on the catalyst surface could act as an energy trap that slows the reactions down.

From Fig. 2 one can also predict the kinetically stable intermediate products, 8H-NEC and 4H-NEC, as observed in experiments [23; 26]. Along path B\(\rightarrow\)C, the system energies of \(n\)=8 and \(n\)=4 are close to those of \(n\)=6 and \(n\)=2, respectively, which makes 8H-NEC and 4H-NEC kinetically stable in view of the activation energy for the reactions. On the other hand, the large thermodynamic driving force makes the other intermediate products (\(n\)=10, 6, 2) much less stable. In particular, we note that the formation of aromatic sextets (transitions \(n\)=6\(\rightarrow\)\(n\)=4 and \(n\)=2\(\rightarrow\)\(n\)=0) is associated with a large driving force (\(\sim\)200 kJ/mol). Similar phenomena were observed in our previous studies of bio-based LOHCs [6].

Figure 5: Adsorption of an H\({}_{2}\) molecule and 2 H atoms on the Ru(0001) surface. In (a) and (b), the red squares schematically represent metastable adsorption sites. In (a), the adsorption energies (\(E_{\rm H_{2}/Ru}-E_{\rm Ru}-E_{\rm H_{2}}\)) were calculated to be \(-\)56.82, \(-\)2.47, and \(+\)0.78 kJ/mol for the atop\({}_{\rm F}\), atop\({}_{\rm V}\), and hcp\({}_{\rm V}\) sites, respectively. The subscript F or V indicates a flat or vertical position of the H\({}_{2}\). In (b), the adsorption energies (\(E_{\rm 2H/Ru}-E_{\rm Ru}-2\times E_{\rm H}\)) are \(-\)548.73, \(-\)528.06, and \(-\)557.05 kJ/mol for the hcp-hcp, hcp-fcc, and fcc-fcc sites, respectively, which are close to previous calculations [25]. The adsorption energies become \(-\)114.36, \(-\)93.69, and \(-\)122.69 kJ/mol, respectively, if an H\({}_{2}\) molecule is assumed in the initial state (i.e., 2\(\times E_{\rm H}\) is replaced by \(E_{\rm H_{2}}\) in the formula). In (c), for the unstable sites, the squares/arrows represent the initial/final positions of hydrogen.

Due to the strong NEC-Ru bonding and the relatively weak nitrogen-alkyl bonding, it is possible for dealkylation to occur before the desorption of NEC. Here we considered the dealkylation reaction by comparing the energy of NEC/Ru with two additional H atoms adsorbed on two fcc sites and that of a carbazole and an ethane adsorbed on Ru. It was found that the latter is higher in energy by \(\sim\)98.5 kJ/mol, much smaller than the desorption energy of NEC, implying the possibility of dealkylation under suitable conditions. Further, we note that the desorption of the resulting carbazole and ethane costs \(\sim\)212 kJ/mol, while the desorption energies of NEC and 2H (from two fcc sites to gaseous H\({}_{2}\)) sum to \(\sim\)255+123=378 kJ/mol. This indicates that the dealkylation route is thermodynamically favorable by 378\(-\)212\(-\)98.5\(\approx\)67 kJ/mol, which is essentially the energy difference between (NEC+H\({}_{2}\)) and (carbazole+ethane) in their gaseous states. Experimentally, dealkylation of NEC on the Pt(111) surface [18] was reported for temperatures above 390 K.

### Dehydrogenation kinetics

The above discussion reveals the thermodynamics of the (de)hydrogenation process of NEC on Ru. We also studied the kinetics of the process by computing the energy barriers of the relevant steps on the Ru surface using the climbing image nudged elastic band method [27]. The steps considered here include H discharge from \(n\)H-NEC, H diffusion, the combination of 2 H into an H\({}_{2}\) molecule, and the desorption of H\({}_{2}\) and NEC. For hydrogen discharge, although it is proper to consider H pairs in the above thermodynamics calculations, and in theory it is possible for H to be discharged in pairs, here for the kinetics we consider a single H atom, consistent with experimental observations [15]. The H atoms on \(n\)H-NEC can be classified into three types, with the C-H bond roughly pointing down towards the Ru surface, parallel to the surface, or pointing up.
We computed the activation energy for discharging an H atom from 12H-NEC, 4H-NEC, and 2H-NEC as examples, with the corresponding C-H bond either pointing down or roughly parallel to the surface. As shown in Fig. 6(a), the activation energies for these cases range from \(\sim\)50 to \(\sim\)100 kJ/mol, which is similar to the calculated barriers (\(\sim\)39 to \(\sim\)92 kJ/mol) for dehydrogenating tetrahydrocarbazole to carbazole on a palladium surface [15]. We note that for 4H-NEC\(\rightarrow\)3H-NEC and 2H-NEC\(\rightarrow\)1H-NEC, the final state is higher in energy than the initial state (Fig. 6(a)), partially because the relaxation of the system is limited when the discharged H atom sits below the \(n\)H-NEC. For 2H-NEC\(\rightarrow\)1H-NEC as an example, we found that once the discharged H atom diffuses away, the system decreases in energy by \(\sim\)130 kJ/mol with stronger bonding to the Ru surface.

Figure 6: Activation energies for dehydrogenation. (a) Hydrogen discharge from \(n\)H-NEC, with the reaction coordinate indicating the initial (0), intermediate, and final (1) states; for each discharge event, the system energy is referenced to the initial state. The structure pictures are for the initial/final state of each discharge event. For 2H-NEC to 1H-NEC, the picture also shows further structural relaxation after the discharged H diffuses away. (b) Activation energies for other relevant surface events.

For most 12H-NEC isomers (see Fig. 2, bottom) adsorbed on the Ru surface, some C-H bonds of the five-membered ring point up. The activation energy for discharging an upward H atom to the Ru surface was found to be higher than 300 kJ/mol. Alternatively, we considered rotating the upward C-H bonds downwards, which would allow easier H discharge later, and found a similar activation energy. In view of the relatively low desorption energy of \(n\)H-NEC for \(n\)\(\geq\)6, a more probable mechanism for discharging these upward H atoms is that the \(n\)H-NEC molecule flips over on the Ru surface and thereby turns them downwards. In the presence of a solvent that accepts hydrogen, it is also possible to discharge the upward H atoms via the solvent [15].

We also computed the activation energies for other relevant events on the Ru surface. For hydrogen migrating from an fcc site to a neighboring hcp site, the activation energy was found to be only \(\sim\)14.9 kJ/mol, which is consistent with the high mobility of H on the Ru surface. For the combination of two neighboring fcc-site H atoms into an atop H\({}_{2}\) molecule, \(\sim\)68.9 kJ/mol is necessary to activate the process. Finally, we examined the desorption of H\({}_{2}\) and NEC. For these events, we set the final states to be local energy minima corresponding to the H\({}_{2}\) and NEC molecules being \(\sim\)4 and \(\sim\)6 Å, respectively, above the Ru surface. We note that these final states are slightly lower in energy than the states with the molecules infinitely far from the surface (by \(\sim\)4.2 and \(\sim\)5.5 kJ/mol for H\({}_{2}\) and NEC, respectively). Based on the selected final states, we computed the activation energies for H\({}_{2}\) and NEC desorption to be 54.3 and 249.5 kJ/mol, respectively, which are close to the desorption energies. In the above discussion of H\({}_{2}\) desorption, we assumed a path in which two H atoms recombine into an atop H\({}_{2}\) before desorption. The intermediate atop H\({}_{2}\) tends to decompose back into H atoms due to the small activation energy.
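To put these barriers in perspective, they can be converted into rough rate estimates with a simple Arrhenius expression. The sketch below assumes a standard attempt frequency of \(10^{13}\) s\(^{-1}\) and a temperature of 450 K; both are illustrative assumptions, not quantities computed in this work.

```python
import math

R = 8.314462618e-3  # gas constant in kJ/(mol K)

def arrhenius_rate(Ea, T=450.0, nu=1e13):
    """Rate (1/s) for a barrier Ea in kJ/mol; nu is an assumed prefactor."""
    return nu * math.exp(-Ea / (R * T))

barriers = {
    'H hopping (fcc -> hcp)': 14.9,
    '2 H -> atop H2': 68.9,
    'H2 desorption': 54.3,
    'NEC desorption': 249.5,
}
for event, Ea in barriers.items():
    print(f'{event}: {arrhenius_rate(Ea):.2e} 1/s')
# NEC desorption comes out many orders of magnitude slower than the
# hydrogen-related events, consistent with it being the critical step.
```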
Given the tendency of the atop H\({}_{2}\) to decompose, we also studied the direct desorption of two H atoms from neighboring fcc sites into a gaseous H\({}_{2}\) molecule, as shown in Fig. 6(b). We found the activation energy to be \(\sim\)122 kJ/mol, close to the desorption energy. The calculated desorption energy in this work compares well with the experimental value, \(\sim\)120 kJ/mol, reported for low H coverage on the Ru(0001) surface [28]. Overall, the kinetics calculations also identify the desorption of NEC as the critical step for dehydrogenation. On the other hand, the discharged H ions are found to be very mobile on the Ru surface.

### Effect of surface morphology

Above we have discussed the dehydrogenation thermodynamics and kinetics of perhydro-\(N\)-ethylcarbazole on the perfect Ru(0001) surface. In practice, a crystal surface may have structural defects, and these defects may impact the behavior of adsorbed molecules. For example, a previous study [17] has shown that surface steps enable more stable adsorption of 8H-NEC as compared to a flat surface. Here, in a more general way, we use an amorphous Ru surface to represent an imperfect surface in view of its rich local structural features. Fig. 7 shows the average system energy for reaction 2 on the amorphous surface, as compared to that on Ru(0001). It can be seen that the adsorption energies of 12H- and 10H-NEC are similar for the two cases, but the energies for 8H- and 6H-NEC on the amorphous surface are about 50 kJ/mol lower than on Ru(0001). This highlights the impact of surface morphology and is qualitatively consistent with the stabilization effect of a surface step on 8H-NEC [17]. The adsorption of 4H- and 2H-NEC may prefer a relatively flat surface due to the formation of the flat aromatic ring, and consequently the system energy for the amorphous case is slightly higher. This effect is more significant for NEC, whose adsorption energy on the amorphous surface is about 100 kJ/mol higher than on Ru(0001). Overall, however, the energy trends for reaction 2 in the two cases are similar, and the energy differences observed here do not alter the conclusions based on the crystalline phase.

Figure 7: System energy for reaction 2 on the amorphous Ru surface and on Ru(0001), with the energies referenced to state A for both cases. The energy for the amorphous case is averaged over nine different adsorption sites.

## IV Summary

Based on first-principles calculations, we studied the thermodynamics and kinetics of the complete dehydrogenation path of perhydro-\(N\)-ethylcarbazole (12H-NEC) on the Ru(0001) surface, involving the adsorption of 12H-NEC, the release of H ions, H ion diffusion and recombination into H\({}_{2}\), and the desorption of H\({}_{2}\) and hydrogen-lean NEC. During the dehydrogenation process, the adsorption of \(n\)H-NEC on Ru(0001) is significantly strengthened upon the formation of the aromatic ring, whose flat structure allows for stronger C-Ru bonding. Although the whole dehydrogenation process is endothermic, the release of H from \(n\)H-NEC was found to be exothermic because of H adsorption onto the Ru surface. The desorption of flat, hydrogen-lean NEC, which costs \(\sim\)255 kJ/mol, was identified as the most energy-demanding step. Based on an amorphous model, we also showed that the surface morphology has a large impact on the stability of molecular adsorption.
Overall, the results imply that more efficient dehydrogenation could be achieved by weakening the bonding of NEC to the catalyst, either through engineering the catalyst surface (such as introducing surface defects or using smaller catalyst particles) or through different catalyst materials. Our calculations also revealed possible dealkylation at elevated temperatures.

**Acknowledgments**

CT acknowledges financial support from the Australian National University Grand Challenge program (Zero-Carbon Energy for the Asia-Pacific). This work was supported by computational resources provided by the Australian Government through the National Computational Infrastructure (NCI) under the ANU Merit Allocation Scheme.

**Data Availability**

The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.
2306.08480
Combining piano performance dimensions for score difficulty classification
Predicting the difficulty of playing a musical score is essential for structuring and exploring score collections. Despite its importance for music education, the automatic difficulty classification of piano scores is not yet solved, mainly due to the lack of annotated data and the subjectiveness of the annotations. This paper aims to advance the state-of-the-art in score difficulty classification with two major contributions. To address the lack of data, we present Can I Play It? (CIPI) dataset, a machine-readable piano score dataset with difficulty annotations obtained from the renowned classical music publisher Henle Verlag. The dataset is created by matching public domain scores with difficulty labels from Henle Verlag, then reviewed and corrected by an expert pianist. As a second contribution, we explore various input representations from score information to pre-trained ML models for piano fingering and expressiveness inspired by the musicology definition of performance. We show that combining the outputs of multiple classifiers performs better than the classifiers on their own, pointing to the fact that the representations capture different aspects of difficulty. In addition, we conduct numerous experiments that lay a foundation for score difficulty classification and create a basis for future research. Our best-performing model reports a 39.47% balanced accuracy and 1.13 median square error across the nine difficulty levels proposed in this study. Code, dataset, and models are made available for reproducibility.
Pedro Ramoneda, Dasaem Jeong, Vsevolod Eremenko, Nazif Can Tamer, Marius Miron, Xavier Serra
2023-06-14T12:49:59Z
http://arxiv.org/abs/2306.08480v2
# Combining piano performance dimensions for score difficulty classification

###### Abstract

Predicting the difficulty of playing a musical score is essential for structuring and exploring score collections. Despite its importance for music education, the automatic difficulty classification of piano scores is not yet solved, mainly due to the lack of annotated data and the subjectiveness of the annotations. This paper aims to advance the state of the art in score difficulty classification with two major contributions. To address the lack of data, we present the _Can I Play It? (CIPI)_ dataset, a machine-readable piano score dataset with difficulty annotations obtained from the renowned classical music publisher Henle Verlag. The dataset is created by matching public domain scores with difficulty labels from Henle Verlag, then reviewed and corrected by an expert pianist. As a second contribution, we explore various input representations, from score information to pre-trained ML models for piano fingering and expressiveness, inspired by the musicological definition of performance. We show that combining the outputs of multiple classifiers performs better than the classifiers on their own, pointing to the fact that the representations capture different aspects of difficulty. In addition, we conduct numerous experiments that lay a foundation for score difficulty classification and create a basis for future research. Our best-performing model reports a 39.47% balanced accuracy and a 1.13 median square error across the nine difficulty levels proposed in this study. Code, dataset, and models are made available for reproducibility.

keywords: Performance Difficulty Prediction, Education Technology, Music Information Retrieval

## 1 Introduction

Music corpora classification is a well-studied topic in Music Information Retrieval (MIR). It is frequently addressed from the perspective of listeners who explore, find, and receive song recommendations based on a search term, a listening profile, or their search history (Ghosal & Kolekar, 2018; Weiss & Muller, 2015; Fukayama & Goto, 2016; Yang & Chen, 2010). However, there is a need to reframe this topic from the artist's perspective (Ferraro et al., 2021). As artists often browse sound or score collections for creative or educational reasons, ongoing advancements in research related to this area (Lerch et al., 2019) and the growing popularity of music education technologies (Kim et al., 2022; Eremenko et al., 2020; Can Tamer et al., 2022) could potentially enhance their effectiveness in the future. In this paper, we address the task of classifying music scores with respect to performance difficulty, a challenging and subjective task that remains largely unsolved (Chiu & Chen, 2012; Sebastien et al., 2012; Ramoneda et al., 2022b). More precisely, we try to answer the question pianists ask when browsing an extensive collection of musical scores: "Can I play it?". This allows structuring and exploring large pedagogical score databases and building personalized score recommendation systems, which is helpful for both individual instrument learners and music teachers.

Predicting the performance difficulty of pieces is beneficial from both music education and MIR perspectives. Firstly, current piano performance repertoires rely on a limited number of musical works (Karlsen & Westerlund, 2015). The preference of institutions and teachers for popular or familiar pieces leads to a lesser focus on composers who are not as well known. This manifests the long-tail problem, as described by Levy & Bosteels (2010).
Simultaneously, the lack of tools for exploring extensive piano score collections limits the diversity of teaching and performance curricula. This fact leads to several groups of composers, such as women and Eastern European composers, being historically under-represented and under-played. Furthermore, involving students in creating their curriculum can increase their motivation, as they may not know whether they are able to play a given music piece. Lastly, from a MIR perspective, score difficulty can be used to incorporate human expertise when designing more efficient curriculum-learning strategies on symbolic music data. Thus, it might benefit convergence times on other MIR tasks such as automatic music generation (Hernandez-Olivan & Beltran, 2022), audio-to-MIDI transcription (Benetos et al., 2018), MIDI-to-score conversion (Liu et al., 2022), and optical music recognition (Calvo-Zaragoza et al., 2020).

The datasets for piano difficulty classification are scarce and limited in their coverage and label quality. Sebastien et al. (2012) collected 50 scores from an online forum, while Chiu & Chen (2012) collected 300 crowd-sourced scores from two popular score repositories, where different users provided the difficulty grades without any coherent criteria. In a more controlled scenario, Ramoneda et al. (2022b) introduced a dataset of 147 works from a single composer, Bela Bartok, with a difficulty ranking provided by the composer. Although composer-derived labels are of higher quality than the previous community-driven approach, this method is biased towards a single composer and composition style. Moreover, since difficulty is a subjective term with multiple dimensions such as expressivity, rhythm, tone quality, and technique (Neuhaus, 2008), Bela Bartok's difficulty concepts may not extrapolate to other composers, and the insights from Ramoneda et al. (2022b) may not generalize. In addition, the three previous proposals separate the difficulty space with very low granularity, with 3 (Ramoneda et al., 2022b) and 4 (Sebastien et al., 2012; Chiu & Chen, 2012) difficulty classes, respectively.

The number of publicly available machine-readable scores has notably increased in recent years. For example, the community with the most MusicXML scores, MuseScore (musescore), contains over one million scores, and IMSLP (IMSLP) contains over half a million PDF scores. However, the quality of these scores is often heterogeneous, and the metadata is unstructured and in different languages. To overcome these issues, matching data from different sources using metadata has previously been used in dataset creation (Jeong et al., 2019; Kong et al., 2020). We propose a methodology to create music datasets by matching metadata from different sources based on the extended metadata offered by the open-source MusicBrainz project. After an expert pianist manually corrected the matches, we derived _Can I play it? (CIPI)_1, a dataset that may be used for various MIR tasks, in our particular case score difficulty classification. It contains 652 classical piano pieces, spanning 9 difficulty levels and 29 composers ranging from the Baroque to the 20th Century, gathered from the renowned classical music publisher Henle Verlag (Henle web). To the best of our knowledge, this is the first open dataset containing public-domain piano scores and reliable difficulty annotations on nine levels from a well-established source.
Alongside the data, we also provide a thorough description of the corpus based on different statistical features of the scores to deepen understanding of the score difficulty task.

Footnote 1: Dataset available at: [https://doi.org/10.5281/zenodo.6564421](https://doi.org/10.5281/zenodo.6564421)

Drawing from the musicology literature, our machine learning methodologies are designed to capture the multidimensional nature of musical performance. Although the score is a rich representation, some studies suggest that performance is a more complex phenomenon involving multiple dimensions not explicitly represented in the score. A recent musicological definition of music performance, as outlined by Cook (1999), states that "musical performance involves negotiating between the demands of physical gesture and sound and those of notation and its associated verbal traditions." To that extent, music performance can be considered a bidirectional path connecting the multiple aspects of music performance: notation (score information), verbal tradition, physical gesture (technique), and sound (expressivity). Following Cook's (1999) study, we model the performance with the interlinked concepts shown in Figure 1, and we study the possibility of enriching the score information with pre-trained embeddings aimed at performance-related tasks, i.e., technique and expressivity. Leveraging pre-trained embeddings as powerful input features for related tasks in shallow models can be particularly effective when data is scarce (Wang et al., 2019). In this study, we use the embeddings from (Jeong et al., 2019) and (Ramoneda et al., 2022) as proxies of expressivity and technique, respectively. The former yields features related to the realization of music performance expressivity through dynamics and temporal variations (Jeong et al., 2019). The latter involves identifying the hand and finger physical movements necessary for executing a musical performance. The main reason behind using such a model is that playing more difficult pieces requires various localized hand- and finger-moving strategies, which pianists develop over the years. To that extent, we leverage the embeddings obtained from a state-of-the-art auto-regressive graph neural network which we published recently (Ramoneda et al., 2022). On top of that, we build models that take as input the score information together with the technique and expressivity embeddings. To model the temporal dependency between the features at various time steps, the models comprise gated recurrent units with an attention mechanism. Similar models have been previously used to classify sequences (Ramoneda et al., 2022; Maghoumi and LaViola, 2019). Since score difficulty datasets comprise scores of different lengths with difficulty annotated at the piece level, segments of the same piece may contain different difficulty levels (i.e., the labels are weak). To account for that, we use context attention to aggregate the patterns the recurrent neural network analyzes. In addition, we extensively compare different losses suited to the ordinal regression nature of the task and ways of using all the backbone representations together through feature fusion and ensemble learning. Finally, we conduct experiments helpful for further work and present music examples.
The contributions of this article can be summarised as follows: (1) we present the _CIPI_ dataset, a performance difficulty classification dataset with 652 classical piano pieces spanning 9 difficulty levels and 29 composers ranging from the Baroque to the 20th Century; (2) we train models based on different dimensions of music performance inspired by the musicological background, with an extensive comparison on the previous _Mikrokosmos-difficulty_ dataset; (3) inspired by Cook (1999), we propose joining the predictions of multiple models trained on various musical performance dimensions, and we show that this outperforms the individual models with a 39.47% balanced accuracy and a median square error of 1.13 across the nine difficulty levels of the CIPI dataset; (4) we show the importance of the choice of the loss, given the ordinal regression nature of the task, when training on the CIPI dataset; (5) we carry out extensive experiments helpful for further research, training with fragments of the pieces instead of the full pieces, with different methods for feature fusion, with only 3 classes on the _CIPI_ dataset, or only with the shortest pieces; and (6) finally, we provide the code, models, and dataset as open source to promote research on the topic 2.

Figure 1: Diagram showing the relationship between notation (score information), physical gesture (technique), and sound (expressiveness), Cook’s dimensions (Cook, 1999), which we use as inspiration for the input representations of the score difficulty classification models.

Footnote 2: Code and models available at: [https://github.com/PRamoneda/difficulty-prediction-CIPI](https://github.com/PRamoneda/difficulty-prediction-CIPI)

The remainder of this paper is organized as follows: in Section 2, we describe the relation with previous work. In Section 3, we introduce the _CIPI_ dataset. Subsequently, in Section 4, we propose our difficulty classification methods, and we present the results in Section 5. Finally, in Section 6, we discuss future research avenues, and in Section 7, we state the main takeaways from the present research.

## 2 Relation with previous work

In this section, we review the main computational methods for modeling difficulty in the piano repertoire (Sebastien et al., 2012; Chiu & Chen, 2012; Nakamura et al., 2014). Sebastien et al. (2012) propose a list of different descriptors for difficulty classification, further extended by Chiu & Chen (2012). Subsequently, Nakamura et al. (2014, 2020) propose measuring the concept of difficulty based on automatic piano fingering models. Finally, in Ramoneda et al. (2022b), we collected a small dataset, _Mikrokosmos-difficulty_, and proposed using neural networks based on score information and finger representations.

In Sebastien et al. (2012), one of the initial works on score difficulty analysis, they proposed seven descriptors to characterize the difficulty of piano scores. The authors used principal component analysis to project the descriptors onto two axes and evaluated the system's performance through expert perception. The list of descriptors was further extended by Chiu & Chen (2012) up to 17 features for characterizing score difficulty. However, neither of these proposals approached the task as an ordinal regression, which is important to consider given the duality of the score difficulty estimation task, which has both classification and regression aspects.
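One standard way to encode this ordinal structure, sketched below for illustration, is to decompose a difficulty level into cumulative binary targets ("is the piece harder than level \(k\)?") and train one binary classifier per threshold. This is only an example of the ordinal regression framing, not necessarily one of the losses compared later in this paper.

```python
import torch
import torch.nn.functional as F

def ordinal_targets(y, num_classes):
    # y: (batch,) integer levels in {0, ..., num_classes - 1}
    # returns (batch, num_classes - 1) cumulative binary targets
    thresholds = torch.arange(num_classes - 1, device=y.device)
    return (y.unsqueeze(1) > thresholds.unsqueeze(0)).float()

def ordinal_loss(logits, y, num_classes=9):
    # logits: (batch, num_classes - 1), one logit per threshold
    return F.binary_cross_entropy_with_logits(
        logits, ordinal_targets(y, num_classes))

y = torch.tensor([0, 4, 8])     # three pieces at levels 1, 5, and 9
logits = torch.zeros(3, 8)      # untrained model: loss is ln(2) ~ 0.693
print(ordinal_loss(logits, y))
```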
One of the limitations of both feature-engineering methods is the lack of a clear definition of the concept of difficulty. Despite several difficulty rankings, designing features that accurately capture the concept of difficulty is challenging. For instance, a study (Chiu & Chen, 2012) found the average pitch, calculated as the average of all the notes' pitches, to be relevant for predicting difficulty. However, it is not clear how this average pitch relates to the concept of difficulty. In addition, most of the features proposed by Sebastien et al. (2012) and Chiu & Chen (2012) are instrument-agnostic and focused on score information. We think score difficulty analysis is a complex process that depends on the specific instrument and on how the musician performs the music score (Cook, 1999). To overcome this limitation, our approach aims to design instrument-specific methodologies and to work with representations derived from performance rather than just musical structure. Finally, neither of the approaches open-sources its code, dataset, or models, or provides sufficient information on creating the features, making replication difficult.

Another work targeting piano difficulty, proposed by Nakamura et al. (2014, 2020), deals with fingering frequencies and playing rate. The rationale for this proposal is that piano fingerings which occur less often lead to increased difficulty. However, the proposal was not evaluated because of the lack of available data. This method was extended to other tasks such as polyphonic transcription (Nakamura et al., 2018), rhythm transcription (Nakamura et al., 2017), and score reduction (Nakamura & Sagayama, 2015; Nakamura & Yoshii, 2018), making clear the importance of piano technique in the creation of music technology systems.

In our previous work (Ramoneda et al., 2022b), we proposed a score difficulty classification method and introduced hand-crafted piano technique feature representations based on different piano fingering algorithms: pianoplayer and the approach of Nakamura et al. (2014). We used these features as input for two classifiers: a gated recurrent unit (GRU) neural network with an attention mechanism and gradient-boosted trees trained on score segments. Our results show that incorporating fingering-based features into the score difficulty estimation task can improve performance compared to using only note-based features. This highlights the importance of considering the physical demands of playing the score when evaluating difficulty. Furthermore, the results demonstrate the potential of our proposed dataset, _Mikrokosmos-difficulty_, for evaluating and comparing different score difficulty estimation approaches. Although it provides a valuable benchmark for further research in this area and offers insights into the influence of fingering on the perceived difficulty of piano scores, we think the dataset is small and focused on one composer's work. To significantly impact the music community, we need to provide a more extensive and diverse dataset.

## 3 Can I play it? (_CIPI_) dataset

We explored multiple sources comprising difficulty-labeled scores. However, not all sources were equally reliable, and not all contained public domain machine-readable scores. Publishers, examination boards, and online repositories often classify scores based on performance difficulty. However, the easiest levels frequently include 20th Century music scores whose copyright has not expired. Websites like 8note also distribute difficulty levels annotated by users. However, assessing the quality of these crowd-sourced annotations, recorded without any standard criteria, can be challenging.
After considering all alternatives, we selected Henle Verlag, a renowned publisher, as our source of difficulty labels. Their piano difficulty rankings range from 1 to 9 and are annotated by Prof. Rolf Koenen (R. Koenen). Henle Verlag has an excellent reputation in the piano education community for producing high-quality and authoritative editions (Jensen-Abbott, 2020; Howat, 2013). This section describes the methodology used to obtain difficulty labels for 2830 piano works from the Henle Verlag website. Following open science practices, we aim to release the dataset and pair the difficulty labels with music scores from the public domain. Note that Henle editions cover numerous pieces, but public domain scores may not be available for many of them. Finally, we discuss statistics about the dataset.

### Dataset creation methodology

As a first step, we automatically match composer names and work titles of Henle works to composer names and titles of pieces openly shared by users of the public domain sources MuseScore (musescore), the Craig Sapp Humdrum collection (Chopin; Bach Inventions; Beethoven; Haydn; Mozart; Scarlatti; Devaney et al., 2015; Bach WTC), and the Mutopia Project (Mutopia). We downloaded pre-collected metadata (xmander), previously used by the MIR community (Edirisooriya et al., 2021). More than a million MuseScore, two thousand Craig Sapp, and one thousand Mutopia entries are considered. We use the Lucene (lucene) indexing and search engine to facilitate matching. Since there are no strict rules for crowd-sourced musical work naming, to improve the results, we index the names and aliases from MusicBrainzDB (Swartz, 2002) for each work and composer, in addition to the Henle-given titles. For example, for _Clair de lune D flat major, Suite Bergamasque, Claude Debussy_ there are more than 500 translations or aliases (MB example). Consequently, we assign each of the 949 Musescore files, 390 Craig Sapp files, and 299 Mutopia files a title from Henle, such that each title has at least one assignment (7 assignments per title on average, ordered by Lucene hit score). The three crowd-sourced collections used are in machine-readable formats, allowing access to all the information engraved in the musical score without transcription. Note that we use musicXML as the standard representation for the final dataset because it is the most interoperable and widespread format. Musescore music scores are converted to musicXML format without errors. In contrast, Craig Sapp and Mutopia music scores are manually validated to guarantee the proper conversion to musicXML from the Humdrum and LilyPond formats, respectively. However, crowd-sourced data may result in collected scores of uncertain quality and inconsistent metadata. To that extent, automatic matching produces two types of problems: false positives (e.g., matching the metadata of a piece with the wrong score) and the inclusion of scores of doubtful quality. The retrieval system produces false positives because (a) the metadata is inconsistent, (b) the score is adapted for a different instrument, such as an adaptation of a piano piece for two violins, or (c) the score is arranged by a third author and the difficulty annotation is not valid. A typical error emerges when the score is automatically generated from audio, MIDI, or PDF without manual correction, or when only a score fragment is available. Another typical error occurs when the Henle annotation is for the whole musical work but only a movement is retrieved, or vice versa.
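As an illustration of the matching idea (the actual pipeline indexes the Henle titles and MusicBrainz aliases with Lucene), the simplified sketch below ranks crowd-sourced titles against a work's known aliases using a generic string-similarity ratio; all titles here are example data, not entries of the real index.

```python
from difflib import SequenceMatcher

def similarity(query, candidate):
    return SequenceMatcher(None, query.lower(), candidate.lower()).ratio()

# Known aliases of one Henle work (example data).
aliases = [
    'Suite bergamasque: Clair de lune',
    'Clair de lune D flat major, Suite Bergamasque',
    'Suite bergamasque, L. 75: III. Clair de lune',
]

# Crowd-sourced score titles to be matched (example data).
crowd_titles = ['clair de lune - debussy', 'fur elise',
                'claire de lune easy piano']

# Rank candidates by their best similarity against any alias.
ranked = sorted(crowd_titles,
                key=lambda t: max(similarity(t, a) for a in aliases),
                reverse=True)
print(ranked)
```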
To account for these problems, as an additional step, a manual best-candidate selection or title rejection was performed by a human expert: a classical pianist with more than 20 years of piano playing, a professional degree, and teaching experience. To facilitate dataset validation and correction by the expert pianist, we created an Electron (electron) desktop interface, shown in Figure 2. This interface displays each piece, the associated metadata, and the annotated difficulty from the Henle Verlag publisher. In addition, we include a link to the Henle website and a YouTube link queried using the score metadata. The interface allows (i) moving across the retrieved musicXML scores with the horizontal computer keyboard arrows, and (ii) moving across the different pieces with the vertical arrows. It also allows (iii) annotating the best option with the enter key, (iv) indicating whether any musicXML score corresponds to the Henle metadata, and (v) recording a confidence rating concerning the score quality on a scale of 1-4: (v.1) complete score, (v.2) a few minor engraving errors, (v.3) some signs of automatic creation, and (v.4) not all the movements of the musical work. The interface allowed the expert pianist to manually review and correct all the pieces resulting from automatic matching in more than 100 hours of work and to discard 58% of the automated matches in the case of Musescore files, 42% in the case of Craig Sapp matches, and 90% in the case of Mutopia matches. The final CIPI dataset therefore comprises 394 Musescore files, 228 Craig Sapp files, and 30 Mutopia files, totalling 652 well-engraved symbolic scores in musicXML format with annotations from the established publisher Henle Verlag.

Figure 2: Annotation interface for reviewing and fixing automatic matches.

### Dataset Analysis

The _CIPI_ dataset comprises 652 classical piano pieces, spanning 9 difficulty levels and 29 composers ranging from the Baroque to the 20th Century. The distribution of composers and grades is shown in Figure 3. In comparison to the piano score difficulty dataset we previously released, _Mikrokosmos-difficulty_, _CIPI_ is more diverse, as shown in Table 1.

\begin{table} \begin{tabular}{|l|l|l|} \cline{2-3} \multicolumn{1}{c|}{} & Mikrokosmos-difficulty & CIPI \\ \hline No. composers & 1 & 29 \\ \hline No. levels & 3 & 9 \\ \hline No. pieces & 147 & 652 \\ \hline No. notes & 42699 & 1672699 \\ \hline No. measures & 5041 & 115523 \\ \hline \end{tabular} \end{table} Table 1: Comparison between Mikrokosmos-difficulty and CIPI, showing the number of composers, levels, pieces, notes, and measures in both datasets.

The _CIPI_ data distribution presents some challenges. The distribution of the composers is skewed towards the most famous authors, similar to other MIR datasets and real-world corpora (Levy & Bosteels, 2010) and commonly known as the long tail. To that extent, some composers and styles are over-represented, as shown in Figure 3. Note that we used all the available scores at our disposal, since the works of famous composers like Chopin have been digitized more than those of less popular ones. These trends may bias the creation of personalized curricula based on difficulty analysis research.

Figure 3: Heatmap displaying composers’ distribution across the nine difficulty levels in the _CIPI_ dataset.

Figure 4: Heatmap displaying the distribution of the lengths (number of notes) across the nine levels of difficulty in the _CIPI_ dataset.
In addition, the grades have a bell-shaped distribution, with more pieces accumulating in the central grades than at the extremes. All these biases must be considered when using the CIPI dataset. Shorter pieces are over-represented in our dataset, while longer pieces are few. We display a heatmap of the pieces and the corresponding difficulty levels in Figure 4. Although the correlation between length and difficulty level is noticeable, with a Stuart's tau-c coefficient of 0.48, length is not the only feature important for characterizing the difficulty level; e.g., some pieces with high difficulty levels are short. Therefore, for future use of the _CIPI_ dataset, we recommend paying special attention to biases related to the length of the pieces. We distribute the _CIPI_ dataset for research purposes under the Creative Commons 4.0 license, with access to the data granted upon request via the Zenodo platform. In addition, we distribute links to all the source pieces and the composer and work metadata we used in creating _CIPI_.

## 4 Methodology

We introduce input representations based on score information, expressive performance modeling of piano scores (Jeong et al., 2019), and automatic piano fingering (Ramoneda et al., 2022), as detailed in Section 4.1. Furthermore, we employ a machine learning classification approach, discussed in Section 4.2, to address automatic score difficulty classification on the _CIPI_ dataset. We also explore various methods for combining the musicology-inspired representations, as described in Sections 4.3 and 4.4, and losses that capture the ordinal regression nature of the task.

### Backbone models

Feature representations derived from the inner-layer activations of pre-trained neural networks, commonly known as embeddings, may serve as powerful input features for downstream tasks (Wang et al., 2019; Raffel et al., 2020; Alonso-Jimenez et al., 2020). In Section 1, we emphasized the importance of fingering and expressiveness features for capturing information about piano performance (Cook, 1999). Moreover, in Section 2, we discussed previous approaches that employ automatic piano fingering to indicate technical difficulty. In this context, we use the current state of the art in automatic piano fingering (Ramoneda et al., 2022) and an expressive piano performance generation model (Jeong et al., 2019) as input features for neural networks that model piano difficulty. Both tasks are trained at the note level, yielding thousands of samples for each score, in contrast with _CIPI_, which has only a global annotation for each score.

**Automatic Piano Fingering - ArGNN backbone.** Predicting the physical hand and finger movements executed by a pianist based on a score could potentially serve as an indicator for assessing the difficulty of a piece (Ramoneda et al., 2022; Nakamura et al., 2014). Fingering may be correlated with difficulty classifications, since piano students learn to move their hands progressively during the early years of their careers while playing increasingly challenging pieces. The objective of piano fingering is to replicate a pianist's movement of the fingers on the piano keyboard while performing a particular piece of music. It assigns a finger number to each note in the score from either the right or left hand: thumb (1), index (2), middle (3), ring (4), and pinky (5). According to (Palmer, 1997), piano fingering is among the most demanding human activities, usually requiring years of intensive practice.
## 4 Methodology

We introduce input representations based on the score information, expressive performance modeling of piano scores (Jeong et al., 2019), and automatic piano fingering (Ramoneda et al., 2022), as detailed in Section 4.1. Furthermore, we employ a machine learning classification approach, discussed in Section 4.2, to address automatic score difficulty classification on the _CIPI_ dataset. We also explore various methods for combining the musicology-inspired representations, as described in Sections 4.3 and 4.4, and losses that capture the ordinal regression nature of the task.

### Backbone models

Feature representations derived from inner-layer activations of pre-trained neural networks, commonly known as embeddings, may serve as powerful input features for downstream tasks (Wang et al., 2019; Raffel et al., 2020; Alonso-Jimenez et al., 2020). In Section 1, we emphasize the importance of fingering and expressiveness features for capturing information about piano performance (Cook, 1999). Moreover, in Section 2, we discuss previous approaches that employ automatic piano fingering to indicate technical difficulty. In this context, we use the current state of the art in automatic piano fingering (Ramoneda et al., 2022) and an expressive piano performance generation model (Jeong et al., 2019) as input features for neural networks that model piano difficulty. Both tasks are trained at the note level, providing thousands of samples for each score, in contrast with _CIPI_, which has only a global annotation per score.

**Automatic Piano Fingering - ArGNN backbone.** Predicting the physical hand and finger movements executed by a pianist based on a score could potentially serve as an indicator for assessing the difficulty of a piece (Ramoneda et al., 2022; Nakamura et al., 2014). Fingering may be correlated with difficulty classifications since piano students learn to move their hands progressively during the early years of their training while playing increasingly challenging pieces. The objective of piano fingering is to replicate a pianist's movement of the fingers on the piano keyboard while performing a particular piece of music. It assigns a finger number to each note in the score for either the right or left hand: thumb (1), index (2), middle (3), ring (4), and pinky (5). According to Palmer (1997), piano fingering is among the most demanding human activities, usually requiring years of intensive practice. Piano players can improve their technique by adjusting their finger placement on the keys for an adequate musical interpretation. This involves anticipating the finger movements needed for the following sequence and adjusting accordingly, as fingerings are not always clearly marked in the score. We utilize a pre-trained autoregressive graph neural network from our recent publication (Ramoneda et al., 2022) to produce embeddings that serve as input features for the subsequent difficulty classification task. In this previous work, we trained a model that predicts finger movements with near-human precision. The intermediate layers retain information about the input score and the predicted fingering movements, representing the music score (the musical structure) and the physical movements associated with the technique. These are two main dimensions of music performance described in Cook (1999). The ArGNN backbone has an encoder-decoder architecture, as shown in Figure 5. The encoder is a graph neural network (GNN), while an autoregressive recurrent neural network acts as the decoder. The input of the model is a sequence of notes containing information only about the pitch, similar to the previous literature on automatic piano fingering (Nakamura et al., 2014, 2020; Guan et al., 2022). The GNN encodes the polyphonic relation between notes, while the decoder ensures the sequential consistency of the fingering. We employ the embeddings of the last LSTM decoder layer as features for classifying scores by performance difficulty. We chose this embedding because, due to the model's autoregressive nature, the intermediate representation includes information from the preceding layers as well as from previous temporal finger-label predictions.

**Expressive Performance Modelling.** Musical expression and interpretation might be associated with difficulty. They require the musician's understanding of the pieces to convey the emotional intent or meaning behind the music, and these skills take time and practice to develop. As a result, music students gradually cultivate an understanding of music and its interpretation in terms of dynamics and agogics, the so-called expressiveness. We utilize the intermediate features from a neural network trained for expressive piano performance modeling (Jeong et al., 2019). To accurately estimate the performance features, the model must process the semantics of the music score, such as which note needs to be played with higher intensity or which part of a musical phrase, typically the ending, should be played slowly. Therefore, the embeddings learned by models trained for performance modeling may be useful for other tasks, such as difficulty classification. We rely on VirtuosoNet (Jeong et al., 2019), a neural network model trained for expressive performance modeling, and use the activations of one of its upper layers as input features for another neural network that predicts difficulty. VirtuosoNet takes a sequence of note-level score features and predicts note-level performance features. It consists of three modules: score encoder, performance encoder, and performance decoder. Here, we adapt the score encoder of a pre-trained VirtuosoNet to obtain note-level representations for the _CIPI_ dataset. The score encoder includes specialized RNN layers for each musical hierarchy: note, voice, beat, and measure.
The hidden representation of each RNN is broadcast to the note level and then concatenated into note-level representations with 64 dimensions.

Figure 5: Encoder-decoder diagram of the autoregressive graph neural network for automatic piano fingering (Ramoneda et al., 2022), which we use as a proxy of Cook (1999)’s technique dimension.

### Classifier Architecture

We propose a straightforward architecture for summarizing the performance difficulty of musical pieces from the backbone features described in Section 4.1. Similar architectures have previously been employed to benchmark and analyze the language understanding of proposed representations (Wang et al., 2019). In previous research (Ramoneda et al., 2022b), we used DeepGRU (Maghoumi and LaViola, 2019), a multivariate time series classification model. The model comprises three stacked gated recurrent units (GRU), two fully connected (FC) layers, and a global attention layer. The final two FC layers with ReLU activations take the attention module's output and produce the probability distribution of the class labels using a softmax classifier. The attention mechanism selectively focuses on the note onsets most critical for difficulty assessment, while the stacked GRU layers model time dependencies. Finally, the attention layer can be employed to visualize and comprehend critical notes (Ramoneda et al., 2022b).

In this work, we propose to change the previous architecture (Ramoneda et al., 2022b) by incorporating a different attention mechanism, context attention. It was first proposed for summarizing the semantic meaning of a sentence or a paragraph for document classification (Yang et al., 2016) and was later adapted for modeling the hierarchical structure of music scores in Jeong et al. (2019).

Figure 6: Hierarchical RNN-based diagram of the model for expressive piano performance generation (Jeong et al., 2019), which we use as a proxy of Cook (1999)’s expressiveness dimension.

In this work, we use hierarchical context attention to summarize the sequence of note-level hidden states of a piece of arbitrary length into a single vector, as shown in Figure 7. For a given sequence of note-level hidden states \(\mathbf{x}=[x_{1},x_{2},\ldots,x_{T}]\), hierarchical context attention summarizes it as \(y=\sum_{t=1}^{T}\alpha_{t}x_{t}\), where \(\alpha_{t}=\text{Softmax}(\tanh(\mathbf{W}x_{t}+b)^{\top}c)\) and \(c\) is a trainable context vector. In other words, the weight of each note representation is decided by its dot product with the context vector. Since the context vector is a trainable parameter, context attention can learn which notes are more important for predicting the difficulty level of the piece. While the attention module of DeepGRU (Maghoumi and LaViola, 2019) uses the hidden state of the last time step to calculate attention weights, context attention does not explicitly use the last hidden state as a designated vector for attention calculation. Using the last hidden state can benefit gesture recognition, the task for which DeepGRU was first proposed; the difficulty of a musical piece, however, does not need an explicit focus on the beginning or the end of the piece. Therefore, we employ context attention instead of DeepGRU attention. The final layer, shown in Figure 7, is a linear layer (FC) followed by a loss-dependent layer, discussed in Section 4.5, to predict the difficulty level.
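The following is a minimal PyTorch sketch of the context-attention pooling described above; the layer sizes are illustrative, and the module is a reconstruction under our reading of the formula, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ContextAttention(nn.Module):
    """Summarizes note-level hidden states into a single vector following
    y = sum_t alpha_t * x_t, with alpha_t = softmax(tanh(W x_t + b)^T c)
    computed over the time axis and c a trainable context vector."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.projection = nn.Linear(hidden_size, hidden_size)   # W, b
        self.context = nn.Parameter(torch.randn(hidden_size))   # c

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, hidden)
        keys = torch.tanh(self.projection(x))        # (batch, time, hidden)
        scores = keys @ self.context                 # (batch, time)
        alpha = torch.softmax(scores, dim=1)         # attention weights
        return (alpha.unsqueeze(-1) * x).sum(dim=1)  # (batch, hidden)

# Usage: pool a batch of 4 pieces, 128 notes each, 64-dim hidden states.
pooled = ContextAttention(64)(torch.randn(4, 128, 64))  # -> shape (4, 64)
```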
Figure 7: Diagram of the classifier architecture we use for score difficulty classification from the precomputed performance embeddings computed with the backbone models.

The automatic piano fingering backbone comprises two models, one for the right hand and the other for the left hand, which are trained independently. As the embeddings for each hand have different origins, we duplicate the GRUs and attention layers before the final output layer to accommodate each hand's features, as shown in Figure 8.

### Feature Fusion

We propose five strategies for combining the virtuoso and ArGNN features to classify scores by performance difficulty. These strategies are based on early and late fusion approaches, which have been shown to improve different tasks (Toselli et al., 2011; Gao et al., 2020), including music-related tasks (Alfaro-Contreras et al., 2022). Early fusion is applied as _sync-fusion_, which concatenates the right- and left-hand embeddings from the ArGNN representation with the virtuoso representation at each time frame. This technique only modifies the input to the classifier architecture. On the other hand, late fusion methods modify the classifier architecture itself and include four different strategies: _sum-fusion_, _concat-fusion_, _att-fusion_, and _int-fusion_. The simplest of these, _sum-fusion_ and _concat-fusion_, involve either summing or concatenating the outputs of the last layers from separate branches of the classifier, each dedicated to processing one of the input representations. The more complex methods, _att-fusion_ and _int-fusion_, add a posterior architecture to the classifier that summarizes the outputs of both branches using different attention mechanisms, providing a more sophisticated way of integrating information from the virtuoso and ArGNN embeddings. The _att-fusion_ strategy combines the _virtuoso_ and _argnn_ branch outputs using the attention mechanism of Section 4.2. _int-fusion_ uses the existing AutoInt (Song et al., 2019) feature fusion attention mechanism to combine the branches, automatically learning high-order feature interactions and mapping them into a low-dimensional space with a multi-head self-attentive neural network and residual connections.

Figure 8: Comparison of classifier architectures showing (left) precomputed representations of _virtuoso_, _virtuoso_enc_, or _pitch_ and a single-branch classifier architecture, and (right) the precomputed representation of _argnn_ and a classifier architecture with two branches.

These five fusion strategies showcase the flexibility of the proposed architecture in combining different aspects of musical performance to predict score difficulty. By considering both a piece's expressiveness and technical complexity, our model can provide a more comprehensive assessment of its difficulty, ultimately benefiting students, teachers, and performers in understanding and mastering the challenges presented by various compositions.

### Ensemble classifier

Ensemble methods typically improve the robustness and generalization of a classifier by reducing overfitting and bias (Opitz & Maclin, 1999). We propose a deep learning ensemble classifier that combines multiple models trained on different modalities to improve overall performance. This is done by training multiple models on different representations of the same dataset and averaging their predictions to make the final decision.
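As a concrete illustration, here is a minimal sketch of the prediction-averaging ensemble, assuming each modality-specific model outputs class probabilities; the inputs are hypothetical.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class probabilities and pick the arg-max class.
    prob_list: list of arrays of shape (num_pieces, num_classes)."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=1)

# Hypothetical softmax outputs of three models (argnn, virtuoso, pitch)
# for two pieces and nine difficulty classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(9), size=2) for _ in range(3)]
print(ensemble_predict(probs))  # one predicted level index per piece
```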
### Loss functions

The target variable we aim to predict represents increasing difficulty levels; it is therefore ordinal. However, standard classification algorithms do not consider the order relation between classes, which may yield inconsistent predictions for ordinal classes; e.g., consider a machine learning model that assigns the probabilities 0.5, 0.2, and 0.9 to the consecutive difficulty levels 1, 2, and 3: the dip at level 2 is inconsistent with an ordinal scale. To adapt the neural network to the ordinal nature of the problem, we propose a wide range of solutions, from embedding ordinality into the loss functions to using regression instead of classification.

**The negative log-likelihood loss (NLLLoss).** As a simple baseline, we use the NLLLoss, frequently applied to multi-class classification. The last layer of the classifier architecture uses a logarithmic softmax function to output the probability distribution of the neural network, with the size of the number of classes; the categorical index with the highest probability indicates the predicted class. Because our dataset is imbalanced, having more low-difficulty pieces, we use the weighted version of the loss, assigning a higher weight to less frequent difficulty levels. The log-probability of the correct class then enters the weighted average:

\[\text{NLLLoss}(x,y)=-\frac{1}{N}\sum_{i=1}^{N}w_{i}y_{i}\log(x_{i}) \tag{1}\]

where \(N\) is the number of samples in the dataset, \(y_{i}\) is the label of the sample encoded as a one-hot vector, \(x_{i}\) is the predicted probability of the sample belonging to each class, also encoded as a vector, and \(w_{i}\) is a weighting factor for each sample, varying as a function of the dataset imbalance.

**Mixed loss: regression and classification (RegClassLoss).** We combine classification and regression losses to better model the data distribution and to avoid converging to sub-optimal minima. To take advantage of the dual nature of difficulty score classification, we combine the NLLLoss with a standard regression loss, in this case the Mean Square Error (MSE) loss:

\[\text{MSELoss}(l_{x},l_{y})=\frac{1}{N}\sum_{i=1}^{N}w_{i}(l_{y_{i}}-l_{x_{i}})^{2} \tag{2}\]

where \(N\) is the number of samples, \(l_{x}\) are the predictions and \(l_{y}\) the ground-truth difficulty levels as scalar values, \(l_{x_{i}}\) and \(l_{y_{i}}\) are, respectively, the predicted value and the true value of the \(i\)-th sample, and \(w_{i}\) is a weighting factor for each sample, varying as a function of the dataset imbalance. This loss therefore minimizes the mean square error of an estimator, i.e., the squared difference between the ground-truth values and the estimated values. To combine both losses, we add a projection layer that maps the classifier's last hidden state to a scalar value. The MSELoss thus takes the scalar \(l_{x}\) from the classifier's last layer as input, while the NLLLoss takes \(x\), with as many entries as classes. Finally, we combine both losses using a correction factor \(\alpha\):

\[\text{RegClassLoss}(x,l_{x},y,l_{y})=\text{NLLLoss}(x,y)+\alpha\cdot\text{MSELoss}(l_{x},l_{y}) \tag{3}\]
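A minimal PyTorch sketch of this mixed objective, assuming a classifier whose last hidden state feeds both a log-softmax head (for \(x\)) and a scalar projection head (for \(l_x\)); the per-class weighting stands in for the per-sample factor \(w_i\):

```python
import torch
import torch.nn as nn

class RegClassLoss(nn.Module):
    """NLL classification loss plus alpha-weighted MSE regression loss,
    mirroring RegClassLoss(x, l_x, y, l_y) = NLLLoss + alpha * MSELoss."""

    def __init__(self, class_weights: torch.Tensor, alpha: float = 0.1):
        super().__init__()
        self.nll = nn.NLLLoss(weight=class_weights)
        self.mse = nn.MSELoss()
        self.alpha = alpha

    def forward(self, log_probs, scalar_pred, labels):
        # log_probs: (batch, num_classes) log-softmax output (x)
        # scalar_pred: (batch,) regression head output (l_x)
        # labels: (batch,) integer difficulty levels (y / l_y)
        return self.nll(log_probs, labels) + self.alpha * self.mse(
            scalar_pred, labels.float()
        )

# Usage with hypothetical outputs for a batch of 4 pieces, 9 levels.
loss_fn = RegClassLoss(class_weights=torch.ones(9), alpha=0.1)
loss = loss_fn(torch.log_softmax(torch.randn(4, 9), dim=1),
               torch.randn(4), torch.randint(0, 9, (4,)))
```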
**Multilabel smoothed loss (MSLoss).** We argue that difficulty is a subjective concept: it can change depending on how a piece is played and who plays it. Thus, we use label smoothing on the BCELoss, previously used to model subjectivity problems (Lukasik et al., 2020). The BCELoss is usually applied to binary classification problems, with the predictions and the ground-truth labels encoded as one-hot vectors. To compute the smoothed labels \(\hat{y}_{i}\), we process the labels with a Gaussian smoothing function with \(\sigma=0.5\) to train a multilabel prediction. Smoothing the one-hot label vector with a Gaussian blur gives a slight weight to the neighboring difficulty levels and zero weight to the rest. This way, the model may account for difficulty-level subjectivity and produce more accurate predictions. For example, the one-hot label \([0,0,0,0,1,0,0,0,0]\) is smoothed and normalized into \([0,0,0,0.1,1,0.1,0,0,0]\).

\[\text{MSLoss}(x,y)=-\frac{1}{N}\sum_{i=1}^{N}w_{i}(\hat{y}_{i}\log(x_{i})+(1-\hat{y}_{i})\log(1-x_{i})) \tag{4}\]

where \(N\) is the number of samples in the dataset, \(\hat{y}_{i}\) is the smoothed label, \(x_{i}\) is the predicted probability of the sample belonging to each class, and \(w_{i}\) is a weighting factor for each sample, varying as a function of the dataset imbalance.

**Ordinal loss (OrdinalLoss).** The ordinal loss proposed by Cheng et al. (2008) considers that the predicted labels have an ordinal relation between them. The proposal is grounded in an ordinal encoding, shown in Figure 9, in contrast to the one-hot encoding used in the previous approaches. In ordinal encoding, the model is forced to learn an ordered structure in which the prediction of one class implies that all the previous classes, following the defined order, are also predicted. Therefore, if the model predicts a class with a higher encoded integer value, for example difficulty level 3, it also implies that difficulty levels 2 and 1 are predicted, as shown in Figure 9. To force the ordinal structure of the predictions, we use the mean squared error (MSE):

\[\text{OrdinalLoss}(x,y)=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-x_{i})^{2} \tag{5}\]

where \(N\) is the number of samples in the dataset, \(y_{i}\) is the ordinally encoded ground-truth value of the sample, and \(x_{i}\) is the prediction for the sample. Both \(y_{i}\) and \(x_{i}\) are ordinally encoded and therefore have the same size as the number of classes.

Figure 9: Comparison of the probability distribution and the ground-truth label encoding for class level 5. (left) One-hot (maximum likelihood) encoding. (right) Ordinal encoding.

Note that at inference time, a class is predicted if it reaches a certain threshold, in our case 0.5. Consequently, if the ordinal encoding structure is not satisfied in the prediction, i.e., the indices exceeding the threshold are not contiguous from the first class, the predicted class is not defined, and the evaluation metrics count it as an error. For example, if the model prediction is \([1,0,0,0,1,0,0,0,0]\), the predicted label is not defined.
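A minimal NumPy sketch of the ordinal encoding and the 0.5-threshold decoding described above; the examples mirror the nine difficulty levels:

```python
import numpy as np

def ordinal_encode(level: int, num_classes: int = 9) -> np.ndarray:
    """Level k (1-based) -> k ones followed by (num_classes - k) zeros."""
    return (np.arange(num_classes) < level).astype(float)

def ordinal_decode(pred: np.ndarray, threshold: float = 0.5):
    """Count leading entries above the threshold; the prediction is
    undefined if the above-threshold indices are not a contiguous prefix."""
    above = pred > threshold
    k = int(above.argmin()) if not above.all() else len(pred)
    if above[k:].any():   # a gap followed by another activation
        return None       # predicted label is not defined
    return k              # predicted difficulty level (1-based)

print(ordinal_encode(3))                                     # [1,1,1,0,...]
print(ordinal_decode(np.array([.9, .8, .7, .2, .1, 0, 0, 0, 0])))  # 3
print(ordinal_decode(np.array([1, 0, 0, 0, 1, 0, 0, 0, 0.])))      # None
```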
**Rank-consistent Ordinal Regression (CoralLoss).** The CoralLoss is a loss function for training neural networks on ranking tasks, introduced by Cao et al. (2020). It uses the same ground-truth encoding as the ordinal loss, shown in Figure 9. The key distinction compared with OrdinalLoss is that the model is forced to learn cumulative probabilities: a monotonic relationship between the class index and the class probabilities, such that the predicted probabilities are non-increasing along the class index. As described in Cao et al. (2020), the weight parameters of the neural network, excluding the final layer's bias units, are denoted as \(\mathbf{W}\). The output of the penultimate layer, \(g(x_{i},\mathbf{W})\), is shared by all nodes in the final output layer, with \(K-1\) independent bias units added to \(g(x_{i},\mathbf{W})\). The predicted empirical probability for class \(k\) is given by \(\widehat{P}(y_{i}^{(k)}=1)=\sigma(g(x_{i},\mathbf{W})+b_{k})\), where \(\sigma(z)\) is the logistic sigmoid function. The model is trained by minimizing the weighted cross-entropy loss function, which, following Cao et al. (2020), is defined as:

\[L(\mathbf{W},\mathbf{b})=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K-1}\lambda^{(k)}\left[y_{i}^{(k)}\log\sigma\big(g(x_{i},\mathbf{W})+b_{k}\big)+\big(1-y_{i}^{(k)}\big)\log\big(1-\sigma(g(x_{i},\mathbf{W})+b_{k})\big)\right]\]

where \(N\) is the number of samples, \(\lambda^{(k)}\) denotes the weight of the loss associated with the \(k\)-th binary classifier, and \(K-1\) is the number of binary classifiers. Class 0 is implicitly encoded. During inference, the binary labels for rank prediction are obtained by \(f_{k}(x_{i})=\mathbb{1}\{\widehat{P}(y_{i}^{(k)}=1)>\text{tr}\}\), where \(\text{tr}\) is a fixed threshold.
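A minimal PyTorch sketch of a CORAL-style output layer and loss under these definitions (one shared linear output plus \(K-1\) bias units); the sizes and names are illustrative, not the reference implementation of Cao et al. (2020):

```python
import torch
import torch.nn as nn

class CoralHead(nn.Module):
    """Shared scalar output g(x, W) plus K-1 independent biases b_k,
    giving cumulative logits g(x, W) + b_k for the K-1 binary tasks."""

    def __init__(self, hidden_size: int, num_classes: int = 9):
        super().__init__()
        self.g = nn.Linear(hidden_size, 1, bias=False)           # shared W
        self.bias = nn.Parameter(torch.zeros(num_classes - 1))   # b_1..b_{K-1}

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.g(h) + self.bias   # (batch, K-1) cumulative logits

def coral_loss(logits, levels, lam=None):
    """Weighted binary cross-entropy over the K-1 cumulative tasks.
    levels: integer labels in 0..K-1; target y_i^(k) = 1 iff level > k."""
    k = logits.size(1)
    targets = (levels.unsqueeze(1) > torch.arange(k)).float()
    bce = nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    if lam is not None:    # optional per-task weights lambda^(k)
        bce = bce * lam
    return bce.sum(dim=1).mean()

# Usage: batch of 4 hidden states, 64-dim, nine difficulty levels.
logits = CoralHead(64)(torch.randn(4, 64))
loss = coral_loss(logits, torch.randint(0, 9, (4,)))
```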
## 5 Experiments and results

We conduct a series of experiments to better understand the effect of the input features, datasets, and losses on difficulty classification. Section 5.1 overviews the base experimental framework that serves as the foundation for the subsequent experiments. In Section 5.2, we present the primary results, while Section 5.3 presents additional experiments. Finally, in Section 5.4, we analyze the output of the proposed models on a selection of music examples to better comprehend the results.

### Experimental setup

We base our experiments on two datasets containing scores and difficulty labels: _Mikrokosmos-difficulty_ (Ramoneda et al., 2022) and the newly introduced _CIPI_ dataset, which we detail in Section 3. Given the limited size of both datasets, we evaluate our models using five pseudo-random splits to ensure reproducibility. For each split, we designate 60% of the dataset for training, 20% for validation, and reserve the remaining 20% for testing. The reported results represent the mean and standard deviation across the splits. Our dataset stratification strategy considers both the target difficulty level and the length of the piece, discretized into intervals of 1000 notes. As a result, we partition the dataset into pairs consisting of the difficulty label and the corresponding length interval. It should be noted that some pairs of difficulty labels and lengths are represented by fewer than five scores, which is less than the number of folds in our experimental setup; we randomly allocate these music scores to the training, validation, or testing sets.

In the _CIPI_ dataset experiments, multiple metrics are used to evaluate the difficulty classification task as an ordinal regression problem: nine-class balanced accuracy (Acc-9), three-class balanced accuracy (Acc-3), relaxed class-boundary accuracy (Acc\(\pm\)1), and mean square error (MSE). The dataset is highly unbalanced, so the metrics are macro-averaged to account for this. The Acc-3 metric groups the 9 levels into three groups of three levels and computes balanced accuracy. The Acc\(\pm\)1 metric relaxes the class boundaries; consequently, predicted label mismatches with neighboring classes are not penalized. Finally, the MSE is used to analyze the regression potential of the task. The first two metrics, Acc-9 and Acc-3, are widely used classification metrics, while the latter two, Acc\(\pm\)1 and MSE, are commonly used in regression and relate to the ordinal nature of the task. The balanced accuracy, Acc-9 and Acc-3, is defined as

\[\text{Acc-n}=\frac{1}{n}\sum_{j=1}^{n}\frac{TP_{j}+TN_{j}}{TP_{j}+TN_{j}+FP_{j}+FN_{j}} \tag{6}\]

where \(TP_{j}\), \(TN_{j}\), \(FP_{j}\), and \(FN_{j}\) are the true positive, true negative, false positive, and false negative rates for the \(j\)-th class, respectively, and \(n\) is the total number of classes, i.e., 3 or 9. The unbalanced mean square error is defined as:

\[\text{UMSE}=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2} \tag{7}\]

where \(y_{i}\) is the ground-truth value for the \(i\)-th sample, \(\hat{y}_{i}\) is the predicted value for the \(i\)-th sample, and \(N\) is the total number of samples. The unbalanced relaxed class-boundary accuracy is defined as

\[\text{UAcc}\pm 1=\frac{1}{N}\sum_{i=1}^{N}\begin{cases}1&\text{if }|y_{i}-\hat{y}_{i}|\leq 1\\ 0&\text{otherwise}\end{cases} \tag{8}\]

where \(y_{i}\) is the ground-truth value for the \(i\)-th sample, \(\hat{y}_{i}\) is the predicted value for the \(i\)-th sample, and \(N\) is the total number of samples. This way, it only penalizes predictions whose label mismatch is larger than one; otherwise, the prediction is considered a correct classification. We also macro-average both the UMSE and UAcc\(\pm\)1 metrics over the classes to account for the datasets' imbalance. Therefore,

\[\text{MSE}=\frac{1}{n}\sum_{j=1}^{n}\text{UMSE}_{j} \tag{9}\]

\[\text{Acc}\pm 1=\frac{1}{n}\sum_{j=1}^{n}\text{UAcc}\pm 1_{j} \tag{10}\]

where \(\text{UMSE}_{j}\) or \(\text{UAcc}\pm 1_{j}\) is the value of the metric, UMSE or UAcc\(\pm\)1, restricted to the \(j\)-th class, and \(n\) is the total number of classes. Because the _Mikrokosmos-difficulty_ dataset contains only 3 levels of difficulty, we evaluate the difficulty classification task there with 3-class balanced accuracy, whereas the annotations of the _CIPI_ dataset are structured in 9 levels. Since _Mikrokosmos-difficulty_ has only 3 classes and regression errors are less critical, we do not include the regression-related metrics Acc\(\pm\)1 and MSE for it.

The machine learning models are trained using early stopping. The metrics used on the validation set to determine the stopping point are Acc-9 and MSE, as in Ramoneda et al. (2022), and we use Acc-9 and MSE on the evaluation set to select the best-performing model from the early stopping. We train the deep learning methods using mini-batch stochastic gradient descent with the Adam optimizer, a dropout of 0.2 between the GRU layers, gradient clipping with the value of \(1\cdot 10^{-4}\), a batch size of 64, and a learning rate of \(1\cdot 10^{-4}\). We use a balanced sampler, which retrieves a uniform distribution of samples in each batch, together with class weights in the losses, to address the unbalanced data in the _CIPI_ dataset. In the experiments on the _Mikrokosmos-difficulty_ dataset, we use the NLLLoss criterion, as there are fewer classes and ranking errors are less critical.

### Basic Evaluation

In this section, we present the main results of the paper. In Section 5.2.1, we run experiments on the _Mikrokosmos-difficulty_ dataset (Ramoneda et al., 2022). We compare the performance of various feature representations and explore the potential of feature fusion. We repeat these experiments for the _CIPI_ dataset and present the results in Section 5.2.2. However, as we increase the number of classes, the ranking nature of the task becomes essential; consequently, in Section 5.2.2 we also evaluate the impact of various loss functions related to ordinal regression.
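Before turning to the results, here is a minimal NumPy sketch of the macro-averaged Acc\(\pm\)1 and MSE metrics defined above; the label arrays are illustrative:

```python
import numpy as np

def macro_acc_pm1(y_true, y_pred, num_classes=9):
    """Per-class mean of 1{|y - y_hat| <= 1}, macro-averaged (Acc+-1)."""
    per_class = [np.mean(np.abs(y_true[y_true == c] - y_pred[y_true == c]) <= 1)
                 for c in range(num_classes) if np.any(y_true == c)]
    return float(np.mean(per_class))

def macro_mse(y_true, y_pred, num_classes=9):
    """Per-class mean squared error, macro-averaged over the classes."""
    per_class = [np.mean((y_true[y_true == c] - y_pred[y_true == c]) ** 2)
                 for c in range(num_classes) if np.any(y_true == c)]
    return float(np.mean(per_class))

y_true = np.array([0, 1, 4, 8, 8, 3])   # hypothetical ground-truth levels
y_pred = np.array([1, 1, 6, 8, 7, 3])   # hypothetical predictions
print(macro_acc_pm1(y_true, y_pred), macro_mse(y_true, y_pred))
```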
#### 5.2.1 Mikrokosmos-difficulty evaluation

In the following paragraphs, we describe the experiments conducted with the _Mikrokosmos-difficulty_ dataset. We explore (a) the distinct input representations introduced in this paper, (b) alternatives to the _argnn_ representation, (c) the representations from experiment (a) using the previous architecture of Ramoneda et al. (2022b), (d) feature fusion experiments, and (e) an ensemble combining all the previous representations. The results of these experiments are summarized in Table 2.

**Representations for score difficulty classification (a).** This section provides a detailed assessment of the different deep-learning input representations and their efficacy for difficulty analysis. The model we train on these features is the front-end architecture introduced in Section 4.2. In a previous study, a feature-engineering representation, velocity, outperformed other input representations (Ramoneda et al., 2022b). However, this algorithm fails to process over 20% of the pieces in the _CIPI_ dataset. Therefore, our analysis only compares the velocity and _argnn_ representations within the _Mikrokosmos-difficulty_ dataset, both proxies for piano technique. In addition, we examine the score-information input of the _argnn_ backbone, _pitch_. Similarly, we evaluate the _virtuoso_ representation as a proxy of piano expressiveness and its input representation _virtuoso_enc_. The results from this initial comparison are presented in section (a) of Table 2. The _argnn_ representation performs slightly better, by 1.1%, than _pitch_ and _velocity_. However, these findings must be analyzed considering the high standard deviations.

**Deep learning representation of automatic piano fingering (APF) (b).** We evaluate the performance of the front-end architecture when trained with the embeddings obtained from the decoder of the state-of-the-art neural network for automatic piano fingering, referred to as _ArGNN-model_ (Ramoneda et al., 2022b).

\begin{table}
\begin{tabular}{l l}
\hline
\multicolumn{2}{c}{(a) Representations} \\
\hline
Ours (velocity) & **74.2(9.4)** \\
Ours (argnn) & **75.3(6.1)** \\
Ours (virtuoso) & 65.7(7.8) \\
Ours (pitch) & **74.2(5.3)** \\
Ours (virtuoso\_enc) & 61.5(9.2) \\
\hline
\multicolumn{2}{c}{(b) APF representations} \\
\hline
Ours (_ArGNN-model_ - encoder) & 66.2(8.4) \\
Ours (_ArGNN-model_ - decoder) & **75.3(6.1)** \\
Ours (_ArLSTM-model_ - encoder) & 66.9(12.8) \\
Ours (_ArLSTM-model_ - decoder) & 68.4(12.1) \\
\hline
\multicolumn{2}{c}{(c) DeepGRU comparison} \\
\hline
DeepGRU (velocity) & **68.6(13.1)** \\
DeepGRU (virtuoso) & 61.5(8.3) \\
DeepGRU (pitch) & 54.2(12.1) \\
DeepGRU (virtuoso\_enc) & 67.7(10.0) \\
\hline
\multicolumn{2}{c}{(d) Feature Fusion} \\
\hline
Ours (sync - fusion) & **73.4(12.5)** \\
Ours (concat - fusion) & 70.2(7.3) \\
Ours (sum - fusion) & 70.5(13.4) \\
Ours (int - fusion) & 65.0(4.1) \\
Ours (att - fusion) & 66.1(4.3) \\
\hline
\multicolumn{2}{c}{(e) Ensemble Classifier} \\
\hline
Ours (ensemble) & **76.4(2.3)** \\
\hline
\end{tabular}
\end{table}
Table 2: _Mikrokosmos-difficulty_ experiments. (a) The different input representations presented in this paper. (b) Alternatives to the _argnn_ representation. (c) The representations of experiment (a) with the previous architecture of Ramoneda et al. (2022b). (d) Feature fusion experiments. (e) Combining all the previous representations with an ensemble.
As an alternative, the embedding can be extracted from the encoder of another architecture, _ArLSTM-model_, which we also proposed in a previous paper (Ramoneda et al., 2022b). We compare these alternatives in section (b) of Table 2. The representation used as input is indicated within parentheses, with the two parts separated by a hyphen: the left part corresponds to the original architecture from which the embedding is extracted, either ArGNN or ArLSTM, and the right part indicates whether the embedding was extracted from the encoder or the decoder. We observe that the ArGNN decoder embedding outperforms the rest of the representations, with 75.3% balanced accuracy, numerically in all the subsets and especially on the test set. For the sake of simplicity, the _ArGNN-decoder_ representation is denoted as _argnn_ throughout the rest of the paper.

**Comparison of the proposed architecture with the DeepGRU architecture of Ramoneda et al. (2022b) (c).** In the present work, we replaced the _DeepGRU_ architecture with a more robust architecture for summarizing performance, referred to as _Ours_. In this section, we evaluate the representations _velocity_, _argnn_, _pitch_, and _virtuoso_enc_ as different inputs for _DeepGRU_. As presented in section (c) of Table 2, the representation with the highest balanced accuracy is _velocity_ (68.6%), in line with the results of the original paper (Ramoneda et al., 2022b). The _virtuoso_ representation also performs reasonably well (61.5%). However, both representations, _velocity_ and _virtuoso_, underperform the _Ours_ results, with differences of more than 6% and 4%, respectively. Moreover, _pitch_ and _virtuoso_enc_ yield poor results on _Mikrokosmos-difficulty_ compared with _Ours_ on training, validation, and test.

**Feature Fusion (d).** We investigate the performance of the feature fusion methodologies described in Section 4.3. Specifically, we contrast the early feature fusion _sync-fusion_ and the four late-stage strategies for fusing virtuoso and _argnn_: _sum-fusion_, _concat-fusion_, _att-fusion_, and _int-fusion_. Section (d) of Table 2 presents the results of this experiment. The more straightforward approaches, _concat-fusion_ and _sum-fusion_, work better than the other two late feature fusion strategies. However, they are less effective than training the representations independently. On the other hand, _sync-fusion_ demonstrates a mean accuracy of 73.4%, close to the best experiments of (a). The lack of data may be an essential drawback when training the more complex representations proposed in this feature fusion experiment.

**Ensemble Classification (e).** The experiment in section (e) investigates combining the models of (a) by averaging their predictions to achieve better results. The results show a slight performance increase, reaching an accuracy of 76.4%. The weaker performance of _pitch_ and _virtuoso_enc_, combined with the limited data in the _Mikrokosmos-difficulty_ dataset, could explain the marginal improvement. Another factor that might explain it is the composition of the _Mikrokosmos-difficulty_ dataset itself. Being designed with pedagogical motivations, all its dimensions progressively increase in difficulty: there is no dimension with a steep difficulty increase while others remain flat, which could occur in _CIPI_ pieces. Further musicological research is required to understand this phenomenon fully.

#### 5.2.2 CIPI evaluation

**Representation experiments.**
Table 3 compares the experiments conducted with the representations proposed in Section 4.1 and the losses of Section 4.5. Interpreting the results of the experiments comparing input representations, especially when considering the classification metrics Acc-9 and Acc-3, is challenging. We observe that _argnn_, _virtuoso_, and _pitch_ can have comparable results depending on the loss. In the subsequent experiments, we analyze whether the information of each representation is complementary. Furthermore, we note that the model trained with _virtuoso_enc_ underperforms compared to the rest of the experiments, leading to its exclusion from further analyses of results on the _CIPI_ dataset. The _virtuoso_ embedding representation achieves better results than the input of the original virtuoso model (Jeong et al., 2019), i.e., the _virtuoso_enc_ representation. At the same time, the _argnn_ representation performs similarly to the original input of the _ArGNN-model_ (Ramoneda et al., 2022), the _pitch_ representation. The _virtuoso_enc_ representation appears more complex, while _pitch_ is a simpler representation that assists the model in learning. Furthermore, the _virtuoso_ and _argnn_ representations contain information beyond the plain score information. We delve into each representation experiment in the following lines.

The _argnn_ representation results vary significantly depending on the loss. It yields the best-performing model on the Acc-3 metric, with MSLoss, registering a value of 70.88 with \(\sigma=3.87\). The _argnn_ representation with CoralLoss achieves the best Acc-9 for this representation, which is also the second-best result across all representation experiments, with a value of 34.54 and \(\sigma=3.65\). The _virtuoso_ representation performs slightly better with OrdinalLoss than with the other losses for this input representation: the MSE and Acc\(\pm\)1 values are 2.07 and 73.57, respectively, with Acc-9 showing the best numerical performance at 35.20, albeit with a high standard deviation of 7.32. The _pitch_ representation with MSLoss has notable results in the classification metrics, Acc-9 and Acc-3, near the best numerical results, with values of 33.64 and 69.64. However, OrdinalLoss outperforms the other experiments in Acc\(\pm\)1 and MSE for all representations: for _pitch_, it achieves the best Acc\(\pm\)1 overall, 76.44 with \(\sigma=2.82\), and its MSE of 1.88 is likewise the best across all experiments. The _virtuoso_enc_ representation underperforms compared to the rest; we observe that the model trained with OrdinalLoss predicts conflicting classes without identifying the correct ones.

**Loss experiments.** Because _CIPI_ comprises more classes and has a higher granularity of difficulty, the ordinal nature of the task becomes essential when training and evaluating on this dataset. Therefore, the choice of loss also becomes essential. In Section 4.5, we introduce several losses related to either regression or ordinal classification. The architecture is trained using these losses, and the resulting models are evaluated with the ordinal-related metrics Acc\(\pm\)1 and MSE introduced in Section 5.1. Additional classification metrics, Acc-9 and Acc-3, are also presented. The results are displayed in Table 3 across the four input representations. The OrdinalLoss stands out when compared to other losses.
Looking at Acc-9 and Acc-3, it is challenging to claim that any loss outperforms the rest. However, the OrdinalLoss demonstrates superior performance in the Acc\(\pm\)1 and MSE metrics. This suggests that OrdinalLoss more effectively models the ordinal regression problem across all input representations while delivering comparable results in the classification metrics. Models trained with OrdinalLoss exhibit better MSE, outperforming the other experiments across the four representations, with differences to the nearest loss experiment on each representation varying from 0.39 to 1.4 points. It is important to note that the models using OrdinalLoss use mean square error as the criterion to optimize the model. Nevertheless, the models trained with OrdinalLoss also perform better in the other ranking-related metric, Acc\(\pm\)1, with differences to the second-best loss varying from 2.25% to 6.95%. For all these reasons, we conclude that OrdinalLoss best fits our task, and we use it in the subsequent experiments.

The other losses included in the comparison help to understand the differences between pure classification and ordinal regression. The NLLLoss's failure to capture the ordinality of the task prevents it from performing better than the other losses, even in the pure classification metrics. Considering the standard deviations, MSLoss is comparable with the other losses on these metrics, with Acc-3 differences of less than 1% compared with the best-performing experiment in each representation. On the other hand, the models trained with RegClassLoss underperform when using the precomputed embeddings, _argnn_ and _virtuoso_, as input. Lastly, CoralLoss and MSLoss do not achieve better results in the pure classification metrics, Acc-9 and Acc-3, for all input representations except _virtuoso_, where OrdinalLoss is the best on all metrics except Acc-3; MSLoss also shows promising results in Acc-3. Ultimately, the loss results indicate the importance of early stopping to avoid overfitting given the limited data for the task.

**Feature fusion experiments.** The combination of features is essential in multimodal scenarios. In this experiment, we combine the representation embeddings _argnn_ and _virtuoso_ with multiple techniques to learn a joint representation of piano performance. We carry out experiments on _sync-fusion_ and the late-fusion strategies _concat-fusion_, _sum-fusion_, _att-fusion_, and _int-fusion_. The results of the different feature fusion experiments are presented in Table 4. The outcomes are generally not substantially better than the models trained on isolated representations, given the standard deviation across the different splits. The _concat-fusion_ experiment performs best in the Acc\(\pm\)1 and MSE metrics, with scores of 75.29 and 1.87, respectively. Further, the _int-fusion_ experiment performs similarly in the ordinal regression metrics Acc\(\pm\)1 and MSE.
\begin{table}
\begin{tabular}{l|l l l l l}
\hline
loss & Acc-9 & Acc-3 & Acc\(\pm\)1 & MSE & loss \\
\hline
\multicolumn{6}{c}{argnn} \\
\hline
MSLoss & 30.42(6.76) & **70.88(3.87)** & 66.06(3.31) & 2.64(0.50) & 0.88(0.07) \\
NLLLoss & 26.62(7.22) & 63.87(3.81) & 63.67(6.76) & 3.55(2.16) & 2.65(0.44) \\
CoralLoss & **34.54(3.65)** & 69.62(5.78) & 69.73(5.61) & 3.50(0.99) & 3.38(0.41) \\
RegClassLoss & 25.15(3.41) & 60.36(4.26) & 65.45(7.38) & 4.16(1.80) & 5.86(0.67) \\
OrdinalLoss & 32.68(2.86) & 69.22(3.63) & **71.98(5.35)** & **2.10(0.20)** & 27.96(2.85) \\
\hline
\multicolumn{6}{c}{virtuoso} \\
\hline
MSLoss & 30.68(6.04) & 64.63(5.45) & 66.62(4.41) & 3.17(0.51) & 0.94(0.09) \\
NLLLoss & 26.51(4.81) & 55.84(3.78) & 55.02(4.31) & 6.55(1.02) & 2.02(0.14) \\
CoralLoss & 27.21(1.56) & 64.60(3.17) & 59.96(3.73) & 5.51(1.37) & 4.05(0.33) \\
RegClassLoss & 30.21(3.34) & 56.92(3.50) & 59.88(2.49) & 5.50(0.72) & 5.41(0.08) \\
OrdinalLoss & **35.20(7.32)** & **67.02(3.09)** & **73.57(3.91)** & **2.07(0.22)** & 24.86(1.81) \\
\hline
\multicolumn{6}{c}{pitch} \\
\hline
MSLoss & **33.64(4.50)** & **69.64(3.17)** & 69.84(4.33) & 2.27(0.44) & 0.87(0.06) \\
NLLLoss & 27.36(7.08) & 62.91(2.88) & 51.81(6.57) & 3.88(0.71) & 1.98(0.12) \\
CoralLoss & 30.01(2.37) & 65.83(2.73) & 61.66(3.95) & 4.62(1.53) & 3.80(0.36) \\
RegClassLoss & 33.47(4.76) & 62.21(5.15) & 51.02(10.23) & 4.12(0.99) & 5.16(0.06) \\
OrdinalLoss & 32.19(5.94) & 67.91(4.06) & **76.44(2.82)** & **1.88(0.24)** & 23.59(2.30) \\
\hline
\multicolumn{6}{c}{virtuoso\_enc} \\
\hline
MSLoss & **25.64(8.88)** & **57.51(5.20)** & 60.25(9.57) & 3.63(0.90) & 1.00(0.08) \\
NLLLoss & 19.77(5.61) & 47.91(9.14) & 19.38(10.31) & 13.18(5.08) & 2.96(1.08) \\
CoralLoss & 13.41(8.97) & 38.70(13.39) & 42.00(10.23) & 7.28(2.13) & 6.45(1.15) \\
RegClassLoss & 16.73(8.75) & 44.45(9.23) & 24.98(15.64) & 12.11(5.62) & 5.90(0.30) \\
OrdinalLoss & 12.92(0.29) & 37.90(4.34) & **63.33(6.14)** & **2.81(0.69)** & 39.45(12.16) \\
\hline
\end{tabular}
\end{table}
Table 3: Comparison of experimental results of models trained on all input representations (_argnn_, _virtuoso_, _virtuoso_enc_, and _pitch_) and all proposed losses (MSLoss, NLLLoss, CoralLoss, RegClassLoss, and OrdinalLoss).

Additionally, the _int-fusion_ experiment performs best in the pure classification metrics Acc-9 and Acc-3, with scores of 34.51 and 69.49. The limited amount of data may dampen the potential of feature fusion. The loss achieved with all methods is similar to that of the models trained on each representation, which may indicate that they reach similar local minima. The comparison of feature fusion methods on the _CIPI_ and _Mikrokosmos-difficulty_ datasets reveals that on _Mikrokosmos-difficulty_, the simpler feature fusion methods such as _sync-fusion_, _concat-fusion_, and _sum-fusion_ tend to perform better, which could be attributed to the limited data in this dataset. On the other hand, on the _CIPI_ dataset, which has more data, more complex methods like _int-fusion_ model difficulty better.

**Ensemble classification experiments.** In contrast to feature fusion, where the features are combined before the front-end module, the ensemble classifiers integrate the predictions from multiple models to limit generalization errors. In Table 5, we ensemble the models trained with the three top-performing representations, _argnn_, _virtuoso_, and _pitch_, using OrdinalLoss.
As detailed in Section 5.1, we compute the ensemble classification for each split, reporting the mean and standard deviation of the metrics on the test subset. We present the results of the _ensemble_ model, i.e., _virtuoso_, _argnn_, and _pitch_ combined, along with all the possible pairwise combinations. As shown in Table 5, the _ensemble_ results surpass all previous results. Both Acc-9 and Acc\(\pm\)1 show significant improvements, with average values of 39.47(3.36) and 87.27, more than 4 and 11 points, respectively, compared to the best-performing single model. Simultaneously, Acc-3 and MSE also show minor improvements. These results exceed our expectations, suggesting that the information in the different features is complementary. The standard deviation, in contrast, remains similar to the original experiments.

Analyzing all the combinations of ensemble models shows the complementary nature of the proposed representations. Both the MSE and Acc\(\pm\)1 metrics improve when combining any pair of models. Moreover, observing the Acc-9 metric, the combination of the _argnn_ and _pitch_ models produces better results than the models trained only on _argnn_ or _pitch_. However, the other two pairwise ensembles do not show such a significant performance increase. Finally, the combination of all three representations, _argnn_, _virtuoso_, and _pitch_, leads to the best performance.

\begin{table}
\begin{tabular}{l l l l l l}
& Acc-9 & Acc-3 & Acc\(\pm\)1 & MSE & loss \\
\hline
sync-fusion & 30.58(4.51) & 66.42(5.16) & 70.89(6.73) & 2.09(0.24) & 27.89(1.49) \\
concat-fusion & 32.68(3.00) & 68.45(5.17) & **75.29(2.38)** & **1.87(0.29)** & 26.28(3.60) \\
sum-fusion & 30.91(5.43) & 65.01(4.95) & 72.27(3.09) & 2.21(0.20) & 28.97(3.09) \\
att-fusion & 27.49(4.79) & 64.68(4.75) & 71.51(4.52) & 2.20(0.39) & 27.98(3.00) \\
int-fusion & **34.51(4.87)** & **69.49(2.28)** & 74.53(3.77) & 1.94(0.26) & 24.94(2.61) \\
\hline
\end{tabular}
\end{table}
Table 4: Comparison of models trained with different feature fusion strategies: _sync-fusion_, _concat-fusion_, _sum-fusion_, _att-fusion_, and _int-fusion_.

This underscores the importance of combining the three dimensions of piano performance (technique, expressiveness, and score information) to achieve the best results, inspired by the musicological research of Cook (1999).

#### 5.2.3 Mikrokosmos-difficulty and CIPI: summary of the results

To summarize our findings from the experiments on _CIPI_ and _Mikrokosmos-difficulty_, we present Table 6, which details the results for the best-performing models for each representation, _argnn_, _virtuoso_, _pitch_, and _virtuoso_enc_, and their ensemble. The ensemble classifier's results demonstrate how combining these representations reduces errors and outperforms the individual models on all metrics. On the _Mikrokosmos-difficulty_ dataset, the balanced accuracy for all classes is 76.4, while on the _CIPI_ dataset, it is 39.47. This shows that the ensemble model performs better than models trained on individual representations, indicating that the representations are complementary. The performance improvement is more significant on the _CIPI_ dataset. Moreover, the ensemble classifier's ranking metrics on the _CIPI_ dataset, with Acc\(\pm\)1 = 87.27 and MSE = 1.13(0.16), are impressive. These results are appropriate for applications requiring difficulty understood as a regression task, such as exploring large music libraries or curriculum learning.
\begin{table}
\begin{tabular}{l l l l|l|l}
\cline{2-6}
& \multicolumn{4}{c|}{CIPI} & \multicolumn{1}{c}{MKD} \\
\cline{2-6}
& Acc-9 & Acc-3 & Acc\(\pm\)1 & MSE & Acc-3 \\
\hline
argnn & 32.68(2.86) & 69.22(3.63) & 71.98(5.35) & 2.10(0.20) & 75.3(6.1) \\
virtuoso & 35.20(7.32) & 67.02(3.09) & 73.57(3.91) & 2.07(0.22) & 65.7(7.8) \\
pitch & 32.19(5.94) & 67.91(4.06) & 76.44(2.82) & 1.88(0.24) & 74.2(9.2) \\
virtuoso\_enc & 25.64(8.88) & 57.51(5.20) & 60.25(9.57) & 3.63(0.90) & 61.5(9.2) \\
ensemble & **39.47(3.36)** & **71.3(3.23)** & **87.27(2.21)** & **1.13(0.16)** & **76.4(2.3)** \\
\hline
\end{tabular}
\end{table}
Table 6: Experiment comparison of all the proposed input representations and the ensemble on _CIPI_ and _Mikrokosmos-difficulty_.

\begin{table}
\begin{tabular}{c l l l l}
\hline
rep combinations & Acc-9 & Acc-3 & Acc\(\pm\)1 & MSE \\
\hline
argnn & 32.68(2.86) & 69.22(3.63) & 71.98(5.35) & 2.10(0.20) \\
virtuoso & 35.20(7.32) & 67.02(3.09) & 73.57(3.91) & 2.07(0.22) \\
pitch & 32.19(5.94) & 67.91(4.06) & 76.44(2.82) & 1.88(0.24) \\
argnn and virtuoso & 35.47(6.66) & 66.99(0.87) & 80.9(4.35) & 1.48(0.26) \\
argnn and pitch & 33.33(4.36) & 68.47(4.69) & 78.44(2.8) & 1.59(0.26) \\
virtuoso and pitch & 33.36(3.23) & 66.48(5.17) & 80.82(4.64) & 1.51(0.17) \\
\hline
ensemble & **39.47(3.36)** & **71.3(3.23)** & **87.27(2.21)** & **1.13(0.16)** \\
\hline
\end{tabular}
\end{table}
Table 5: Ensemble ablation study: the table displays the individual models first, followed by the models grouped in pairs, and finally the outcome of the full ensemble.

### Other experiments

In the following experiments, we show further results that may provide valuable insights for future research on performance difficulty analysis. First, in Section 5.3.1, we show the challenge of training on fragments of the pieces. Consequently, in Section 5.3.2, we train the models at a coarser granularity on the _CIPI_ dataset. In Section 5.3.3, we investigate the influence of different ways of stratifying the dataset, and in Section 5.3.4, the influence of the length of the music works on the task.

#### 5.3.1 Training on fragments of the pieces

We investigate the feasibility of training a model on small fragments of music pieces instead of the entire pieces. Our assumption is that not all fragments share the same difficulty level and that the provided annotation only corresponds to the overall difficulty; consequently, in the previous experiments, we trained the models on whole pieces. To verify this, we split each piece into fragments of 256 notes, including both hands, with a 25% overlap. We conduct the same experimental setup described in Section 5.1 using the best-performing representations: _argnn_, _virtuoso_, and _pitch_. If local fragments had a difficulty similar to that of the whole piece, the performance of the classifier trained on fragments should be comparable to that of the model trained on the full-length pieces. The number of samples in the dataset increases significantly when the fragments are considered, from 660 samples to 12769 samples. Therefore, if our assumption is not valid, performance may increase significantly.
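A minimal sketch of this fragmenting scheme (256-note windows with 25% overlap); the note list is a hypothetical stand-in for a parsed score:

```python
def fragment_piece(notes, window=256, overlap=0.25):
    """Split a note sequence into fixed-size windows with fractional overlap.
    Trailing notes that do not fill a whole window are dropped in this sketch."""
    step = int(window * (1 - overlap))   # 192-note hop for 25% overlap
    return [notes[i:i + window]
            for i in range(0, max(len(notes) - window, 0) + 1, step)]

# Hypothetical piece of 1000 notes -> windows starting every 192 notes.
fragments = fragment_piece(list(range(1000)))
print(len(fragments), [f[0] for f in fragments])  # 4 windows: 0, 192, 384, 576
```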
The results, shown in Table 7, reveal that the classifier's performance is not comparable with the previous experiments on _CIPI_, with a difference of more than 10 points in Acc-9 and 1 point in the MSE metric. Nine-class accuracy barely exceeds 20% for any of the representations, and the Acc-3 metric also underperforms. This indicates that the difficulty level of each piece fragment does not necessarily correspond to the overall piece difficulty level.

\begin{table}
\begin{tabular}{l l l l l l}
& Acc-9 & Acc-3 & Acc\(\pm\)1 & MSE & loss \\
\hline
argnn & 15.29(1.38) & 42.40(2.50) & 65.86(0.36) & 2.57(0.09) & 20.51(0.18) \\
virtuoso & 23.37(2.75) & 51.76(2.07) & 68.63(3.34) & 2.30(0.27) & 20.53(1.62) \\
pitch & 20.01(1.38) & 49.73(2.79) & 68.29(4.45) & 2.38(0.25) & 21.08(1.95) \\
\hline
\end{tabular}
\end{table}
Table 7: Results of the models trained on fragments of the pieces instead of whole pieces on the _CIPI_ dataset.

#### 5.3.2 Training on three classes

In Section 5.2, we found that the Acc-3 of models trained on the _CIPI_ dataset is comparable with the _Mikrokosmos-difficulty_ Acc-3 performance. In this section, we explore on the _CIPI_ dataset whether the performance of the model can be improved by training it on fewer classes. Specifically, we consider whether training on only three classes can enhance the Acc-3 metric. The predicted classes are divided into \(class\,1=\{1,2,3\}\), \(class\,2=\{4,5,6\}\), and \(class\,3=\{7,8,9\}\), i.e., the same division on which Acc-3 is evaluated. We compare the better-performing representations (_argnn_, _pitch_, and _virtuoso_) while optimizing the NLLLoss, because the ordinal ranking nature of the task is less critical when only three classes are predicted. The results of training on only three classes are presented in Table 8. It can be observed that the _argnn_ representation outperforms the _pitch_ and _virtuoso_ representations by more than 3%. Furthermore, the models perform worse than the Acc-3 reported in Table 6. Thus, it can be concluded that training on the nine classes, as shown in Table 6, results in better Acc-3 performance than training on three classes directly: the accuracy in this experiment is reduced by between 1.14 points in the case of _argnn_ and 7.01 points in the case of _pitch_. In other words, training on nine classes does not negatively impact the Acc-3 metric.

#### 5.3.3 What is the effect of stratifying?

In Section 5.1, we decided to stratify the dataset by lengths and difficulty levels. In this experiment, we compare this decision with an alternative way of stratifying: by difficulty levels and composer. We do not employ a combination of the two previous approaches because, when considering distinct sets of difficulty levels, composers, and lengths, more than half of the dataset cannot be stratified, as there are fewer than five samples in those sets. Table 9 compares the stratification of the dataset by length and difficulty levels versus that by composer and difficulty levels. More sensitivity can be observed when using the latter method: comparing models trained on the same representation, we observe a drop of 3 or more points in Acc-9. This decrease in performance may occur because some subsets, such as the training, validation, or test sets, have sequences of a certain length grouped in a specific class. The model seems more sensitive to lengths than to the composer. Further research is needed to understand this phenomenon.

#### 5.3.4 How much do lengths influence the results?

We examine whether the variable length of music pieces in the _CIPI_ dataset affects performance.
In Section 3, we illustrate the differences in length between pieces in _CIPI_ using Figure 4, which shows that some pieces have fewer than 500 notes while others have more than 30,000 notes. In the first experiment, we remove the pieces exceeding 3500 notes from the original splits, while in the second experiment, we remove the pieces exceeding 7000 notes.

\begin{table}
\begin{tabular}{l l l}
\cline{2-3}
& Acc-3 & MSE \\
\hline
argnn & 68.08(6.08) & 0.26(0.04) \\
pitch & 60.90(4.65) & 0.37(0.05) \\
virtuoso & 59.07(5.21) & 0.47(0.10) \\
\hline
\end{tabular}
\end{table}
Table 8: Results of the models trained on three classes on the _CIPI_ dataset.

The splits were calculated by stratifying by composers and lengths, as outlined in Section 5.1. Consequently, after removing pieces greater than 3500 and 7000 notes, we preserve the same proportions of the evaluation sets (train: 60%, validation: 20%, test: 20%) and the stratification. We therefore train and evaluate the models on pieces of all possible lengths (_full_), pieces with fewer than 7000 notes (_7000_), and pieces with fewer than 3500 notes (_3500_). We compare the top three representations, _argnn_, _virtuoso_, and _pitch_, with OrdinalLoss. The results are presented in Table 10. The classification metrics, Acc-9 and Acc-3, are lower in the 7000 and 3500 experiments, with decrements ranging from 2% to 11%. In contrast, the ordinal regression metrics, MSE and Acc\(\pm\)1, remain comparatively stable. Importantly, the computational time cost sees a substantial reduction: close to 60% in the 7000-note experiments and 80% in the 3500-note experiments. These results offer valuable insights for future research, highlighting the trade-off between performance and piece length and underscoring potential avenues for speeding up computations in future studies.

### Case study

The case study explores particular examples and their difficulty predictions from the models trained on _argnn_, _virtuoso_, and _pitch_, and the final predictions of the combination of the three models, _ensemble_. To carry out the case study, we inspected the examples with the largest standard deviation between the predictions of the models trained on _argnn_, _virtuoso_, and _pitch_, shown in Table 6. The study provides a comprehensive analysis of music scores, using multiple examples to demonstrate the behavior of the models. The objective of the case study is to gain insights into how these models process and interpret musical information and how we can design accurate score difficulty classification systems by combining the multiple dimensions of music performance. Furthermore, the study also identifies common errors detected during the analysis. These errors provide valuable information for future research and can help researchers improve the models' performance.
The results of this case study can contribute to the advancement of music information retrieval and can significantly impact the development of more sophisticated music difficulty prediction models.

\begin{table}
\begin{tabular}{l l l l l}
\cline{2-5}
& Acc-9 & Acc-3 & Acc\(\pm\)1 & MSE \\
\hline
\multicolumn{5}{c}{stratify by length and difficulty level} \\
\hline
argnn & 32.68(2.86) & 69.22(3.63) & 71.98(5.35) & 2.10(0.20) \\
virtuoso & 35.20(7.32) & 67.02(3.09) & 73.57(3.91) & 2.07(0.22) \\
pitch & 32.19(5.94) & 67.91(4.06) & 76.44(2.82) & 1.88(0.24) \\
\hline
\multicolumn{5}{c}{stratify by composer and difficulty level} \\
\hline
argnn & 29.71(6.66) & 65.83(4.53) & 68.63(5.39) & 2.41(0.22) \\
virtuoso & 26.64(7.45) & 65.46(5.95) & 75.64(1.72) & 1.92(0.22) \\
pitch & 26.92(1.07) & 65.18(4.93) & 74.79(3.25) & 1.98(0.35) \\
\hline
\end{tabular}
\end{table}
Table 9: Results of the models trained with an alternative stratification on the _CIPI_ dataset.

We have occasionally observed that the _pitch_ and _argnn_ models give a lower prediction than the models trained on _virtuoso_. We illustrate this with the examples in Figure 10. The first example is _Winter solstice song, Bela Bartok_: the prediction of the ensemble and the ground truth is level 3, while the _argnn_ and _pitch_ predictions are 1 and the _virtuoso_ prediction is 6. The second example is _Children's Dance, no. 10, Bela Bartok_. The ground truth label is 2, the _ensemble_ prediction is 3, and the predictions of _argnn_ and _pitch_ are the same as the ground truth. In contrast, the _virtuoso_ model outputs level 6. The difficulty of the piece lies in the constant changes in dynamics and articulation, even though the fingering is simple and there is limited use of cross-fingerings and note patterns. In addition, keeping the 4/4 time and the local tempo also requires skill and precision. However, although it is difficult to assess whether the _argnn_ or _virtuoso_ predictions are accurate, the score information, analyzed by _pitch_, may indeed be simpler.

\begin{table}
\begin{tabular}{l l l l l l}
\hline
lengths & Acc-9 & Acc-3 & Acc\(\pm\)1 & MSE & epoch time \\
\hline
\multicolumn{6}{c}{argnn} \\
\hline
full & 32.68(2.86) & 69.22(3.63) & 71.98(5.35) & 2.10(0.20) & 11s \\
7000 & 25.38(4.95) & 62.04(6.37) & 73.66(2.80) & 2.04(0.21) & 4s \\
3500 & 21.90(4.19) & 57.38(6.06) & 74.48(4.82) & 2.25(0.38) & 2s \\
\hline
\multicolumn{6}{c}{virtuoso} \\
\hline
full & 35.20(7.32) & 67.02(3.09) & 73.57(3.91) & 2.07(0.22) & 9s \\
7000 & 32.08(7.52) & 62.84(7.34) & 69.63(4.31) & 2.33(0.31) & 4s \\
3500 & 26.64(7.45) & 57.98(5.97) & 72.34(4.01) & 3.22(0.39) & 2s \\
\hline
\multicolumn{6}{c}{pitch} \\
\hline
full & 32.19(5.94) & 67.91(4.06) & 76.44(2.82) & 1.88(0.24) & 10s \\
7000 & 25.62(4.61) & 65.78(4.92) & 73.17(3.32) & 1.99(0.17) & 4s \\
3500 & 28.15(5.72) & 59.54(4.96) & 76.65(2.59) & 1.86(0.27) & 2s \\
\hline
\end{tabular}
\end{table}
Table 10: Experiment results of models trained on _argnn_, _virtuoso_, and _pitch_ on the _CIPI_ dataset for pieces of any length (full), pieces with fewer than 7000 notes (7000), and pieces with fewer than 3500 notes (3500).

Figure 10: Examples of pieces for which the model trained on _virtuoso_ overestimates the difficulty. (a) Fragment of the piece _Winter solstice song, Bela Bartok_. (b) Fragment of the piece _Children’s Dance, no. 10, Bela Bartok_.

In Figure 11, we show two examples where the _virtuoso_ model estimates a lower difficulty than the other two models of the ensemble. The first piece is _Prelude E major op. 28,9, Frederic Chopin_.
The ground truth label is 5, and the ensemble model's prediction is 5. The prediction made by the _argnn_ model is 8, while the _pitch_ model predicts 5 and the _virtuoso_ model predicts 2. The second piece is _La Cathedrale engloutie, Preludes, Claude Debussy_. The ground truth label is 5, and the ensemble model's prediction is 8. The prediction made by the _argnn_ model is 9, while the _pitch_ model predicts 7 and the _virtuoso_ model predicts 3. Both pieces present unique and challenging finger sequences that can prove difficult for many pianists, and the _argnn_ model may capture those patterns. However, it is uncertain if their technical demands are as simple as predicted, or if this may be a bias due to the pieces' relatively slow tempo.

Figure 11: Examples of pieces where the model trained on _virtuoso_ has a low estimated prediction. (a) Fragment of the piece _Prelude E major op. 28,9, Frederic Chopin_. (b) Fragment of the piece _La Cathedrale engloutie, Preludes, Claude Debussy_.

We have observed some scores engraved without the articulations, as shown in Figure 12. However, some editions or composers do not provide the articulations, and we think it is important to highlight this case. The piano piece is _Wichtige Begebenheit op. 15,6, Robert Schumann_. The ground truth label is 4, and the ensemble model's prediction is 5. The prediction made by the _argnn_ model is 7, while the _pitch_ model predicts 5 and the _virtuoso_ model predicts 2. We can observe the large performance deterioration of the _virtuoso_ model caused by the lack of articulations. This is reasonable because simpler pieces generally have fewer articulations. However, it may induce a biased piece classification in some instances.

Figure 12: Example of under-performance of the model trained on _virtuoso_. Fragment of _Wichtige Begebenheit op. 15,6, Robert Schumann_.

Lastly, in Figure 13, we highlight the limitations of the _pitch_ representation. The first example is _3rd movement (Rondo) from Piano Sonata (Facile), C major KV 545, W. A. Mozart_. The ground truth label is 4, and the ensemble model's prediction is 5. The prediction made by the _argnn_ model is 4, while the _pitch_ model predicts 3, and the _virtuoso_ model predicts 7. The second example is _Exercise 2a WoO 6,2a, Johannes Brahms_. The ground truth label is 5, and the ensemble model's prediction is 5. The prediction made by the _argnn_ model is 8, while the _pitch_ model predicts 3 and the _virtuoso_ model predicts 5. In both cases, we can observe that the model trained on _pitch_ predicts lower grades than the other models. We think this is because the _pitch_ representation has limitations in understanding polyphony, as we state in Ramoneda et al. (2022). Finally, in the second example, we want to emphasize the strong prediction of the _argnn_ model. Considering the technical nature of _Exercise 2a WoO 6,2a, Johannes Brahms_, this is reasonable.

Figure 13: Example of under-performance of the model trained on _pitch_. (a) Fragment of the piece _3rd movement (Rondo) from Piano Sonata (Facile), C major KV 545, W. A. Mozart_. (b) Fragment of the piece _Exercise 2a WoO 6,2a, Johannes Brahms_.

The case study showed that the three main input representations contribute to the final _ensemble_ model (a schematic of this prediction-level fusion is sketched below). However, the annotations of Henle Verlag only provide information about the general difficulty of a piece, not the technical, expressiveness, or structural difficulty, which makes the predictions of models trained on a particular dimension less reliable. The study exposed some limitations of the proposed methods and demonstrated the relationship between piano performance and input representations.
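To make the fusion step concrete, a minimal sketch is given below. It assumes each model exposes per-grade probabilities and simply averages them; this is one natural choice for prediction-level fusion, not necessarily the exact combination rule behind the _ensemble_ results above.

```python
# Sketch of prediction-level fusion over the nine Henle grades (1..9).
import numpy as np

def ensemble_grade(prob_argnn, prob_virtuoso, prob_pitch):
    """Average the per-class probabilities of the three models."""
    probs = np.stack([prob_argnn, prob_virtuoso, prob_pitch])  # shape (3, 9)
    return int(probs.mean(axis=0).argmax()) + 1                # grade in 1..9

# Toy example with three hypothetical probability vectors for one piece.
rng = np.random.default_rng(0)
p_argnn, p_virtuoso, p_pitch = rng.dirichlet(np.ones(9), size=3)
print(ensemble_grade(p_argnn, p_virtuoso, p_pitch))
```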
## 6 Challenges

Our results confirm a correlation between the difficulty of piano performance and various dimensions of musical performance, guided by prior musicology research Cook (1999). This leads to several avenues for further research.

**Training with fragments of pieces.** The scarcity of labeled data at the piece level for score difficulty analysis has led us to explore training with smaller fragments instead of the entire piece in Section 5.3.1. Although this yields many weakly annotated segments, using fragments to train the model is not straightforward. Further research in musicology and computational musicology is necessary to understand how fragments contribute to the overall difficulty of a piece. This understanding will be crucial for future advancements in difficulty performance analysis.

**Better performance generation models.** We have modeled performance through automatic piano fingering Ramoneda et al. (2022) and expressive piano performance generation Jeong et al. (2019). However, both of these tasks are yet to be fully resolved, and improvements in their performance may enhance the analysis of performance from the score, particularly for the task of piano difficulty classification.

**Data augmentation.** The application of data augmentation is fundamental in tasks with very little data, such as the ones presented in the present research. However, data augmentation techniques traditionally used on symbolic music tasks Lopez et al. (2021); Yang et al. (2022) cannot be directly applied to the difficulty classification task. For instance, transposition augmentation could alter the distances between the black and white notes of the piano, creating a very different technical difficulty. Also, a random combination of fragments of the pieces may cause significant changes of difficulty at the junctions between the fragments. Furthermore, the exclusion of parts of the pieces can lead to the omission of valuable information about the difficulty of the musical work.

**Learning with noisy labels.** In future work, it is crucial to utilize crowd-sourced annotations about difficulty from websites such as 8notes to improve performance on the _CIPI_ dataset. Additionally, exploring how to expand the _CIPI_ dataset domain through self-supervision on large corpora and semi-supervision is a promising research avenue.

**Using all the multi-modal data available.** The information of classical performances can be represented in multiple modalities: symbolic piano-roll, symbolic scores (used in _Mikrokosmos-difficulty_ and _CIPI_), PDF scores, audio, and video of the performance. Although the symbolic score modality is a good starting point because of its interpretability, the other modalities have other advantages, such as implicitly containing the technique and expressive information. Exploring how to analyze the performance difficulty in other modalities may be helpful.

**Multi-ranking.** Rankings of the difficulty of musical pieces can be found in various sources such as other music systems, publishers, or examination boards, in addition to Henle Verlag. The most time-consuming task of the present research was compiling a high-quality collection of symbolic music. However, searching other sources can clarify the ranking of the collected _CIPI_ musical works.
We believe that having multiple perspectives on the concept of difficulty can aid in finding a more objective view of musical performance difficulty.

## 7 Conclusions

In this work, we introduced a new dataset of symbolic piano scores with difficulty level annotations from the recognized classical music publisher Henle Verlag, together with a new methodology to create MIR datasets. We curated the _CIPI_ dataset after evaluating and rectifying the automatic matching between public domain scores and Henle Verlag annotations by an expert pianist. We trained models based on various dimensions of musical performance on the _CIPI_ dataset, inspired by prior musicology research and in comparison with the _Mikrokosmos-difficulty_ dataset. Following the approach outlined in Cook (1999), we combined the predictions of multiple models trained on different musical performance dimensions, which resulted in improved performance compared to individual models. Our models achieved a balanced accuracy of 39.47% and a median square error of 1.13 across the nine difficulty levels in the _CIPI_ dataset. We emphasized the importance of choosing the appropriate loss function for the ordinal regression task in training on the _CIPI_ dataset. Additionally, we conducted extensive experiments to inform further research, including training with fragments of pieces instead of whole pieces, using different methods for feature fusion, limiting the classes in the _CIPI_ dataset to only 3, and training only with the shortest pieces.

We conclude that difficulty analysis is a very challenging and complex task involving many dimensions. With this paper, we want to lay the foundations for research on difficulty analysis of the piano repertoire from a MIR perspective. This research advances the better structuring of extensive collections of classical music, which can increase the diversity of the piano curriculum in music education and enhance students' participation in the choice of that curriculum. Furthermore, studying difficulty analysis through computational approaches contributes to research on automatic music arrangement systems and other automatic composition tools for music education. In addition, the task we want to establish with this paper may help design curriculum learning strategies for other tasks, such as automatic piano fingering, automatic music generation, or expressive performance. In future work, we plan to create more explainable representations for computational musicology-oriented research and explore other data sources to classify difficulty.

## Acknowledgements

The authors would like to thank Craig Sapp, Luca Chiantore, and Pedro d'Avila for their insightful comments. This work is supported in part by the project Musical AI - PID2019-111403GB-I00/AEI/10.13039/501100011033, funded by the Spanish Ministerio de Ciencia, Innovacion y Universidades (MCIU) and the Agencia Estatal de Investigacion (AEI), and by the Sogang University Research Grant of 202110035.01.
2301.03572
Non-oscillating Early Dark Energy and Quintessence from Alpha-Attractors
Early dark energy (EDE) is one of the most promising possibilities in order to resolve the Hubble tension: the discrepancy between early and late-Universe measurements of the Hubble constant. In this paper we propose a model of a scalar field which can explain both EDE and late Dark Energy (DE) in a joined manner without additional fine-tuning. The field features kinetic poles as with alpha-attractors. Our model provides an injection of EDE near matter-radiation equality, and redshifts away shortly after via free-fall, later refreezing to become late-time DE at the present day. Using reasonable estimates of the current constraints on EDE from the literature, we find that the parameter space is narrow but viable. As such our model is readily falsifiable. In contrast to other work in EDE, our model is non-oscillatory, which causes its decay to be faster than that of the usual oscillatory EDE, thereby achieving better agreement with observations.
Lucy Brissenden, Konstantinos Dimopoulos, Samuel Sánchez López
2023-01-09T18:50:02Z
http://arxiv.org/abs/2301.03572v3
# Non-oscillating Early Dark Energy and Quintessence from \(\alpha\)-Attractors

###### Abstract

Early dark energy (EDE) is one of the most promising possibilities in order to resolve the Hubble tension: the discrepancy between early and late-Universe measurements of the Hubble constant. In this paper we propose a model of a scalar field which can explain both EDE and late Dark Energy (DE) in a joined manner without additional fine-tuning. The field features kinetic poles as with \(\alpha\)-attractors. Our model provides an injection of EDE near matter-radiation equality, and redshifts away shortly after via free-fall, later refreezing to become late-time DE at the present day. Using reasonable estimates of the current constraints on EDE from the literature, we find that the parameter space is narrow but viable. As such our model is readily falsifiable. In contrast to other work in EDE, our model is non-oscillatory, which causes its decay to be faster than that of the usual oscillatory EDE, thereby achieving better agreement with observations.

## 1 Introduction

In the last few decades cosmological observations of the early and late Universe have converged into a broad understanding of the history of our Universe from the very first seconds of its existence until today. Thus, cosmology has developed a standard model called the concordance model, or in short \(\Lambda\)CDM. However, the latest data might imply that the celebrated \(\Lambda\)CDM model is not that robust after all. In particular, there is a 5-\(\sigma\) discrepancy between the measurements of the current expansion rate, the Hubble constant \(H_{0}\), as inferred by early Universe observations compared with late Universe observations. This Hubble tension has undermined our confidence in \(\Lambda\)CDM and as such it is investigated intensely at present. In this work we study a toy model that can simultaneously solve the Hubble tension and explain the current accelerated expansion with no more tuning than in \(\Lambda\)CDM. Our model introduces a scalar field which plays both the role of early dark energy (EDE) and quintessence. In contrast to most other works in the literature which consider scalar fields as EDE, ours is not an oscillating scalar field. We use natural units with \(c=\hbar=1\), the reduced Planck mass \(m_{\rm P}=1/\sqrt{8\pi G}=2.43\times 10^{18}\,\)GeV and consider a positive signature metric \((-1,+1,+1,+1)\) throughout the present work.

### The Hubble tension

Measurements in observational cosmology can broadly be classified into two groups. These are measurements of quantities which depend only on the early-time history of our Universe (such as the cosmic microwave background (CMB) radiation at redshift \(z\simeq 1100\), or Baryon Acoustic Oscillations (BAO)) and measurements of quantities which depend on present-day observations (the primary example of this is the cosmic distance ladder, which measures the redshift of observable astrophysical objects such as Cepheid stars and type-1a supernovae, at redshift \(z=\mathcal{O}(1)\)). The value of the Hubble constant \(H_{0}\) can in principle be inferred from both early and late-time measurements. However, it has been found that while early-time measurements are in good agreement with each other, they disagree with current late-time data.
The latest analysis of the CMB temperature anisotropies' data gives the value inferred from the Planck satellite [1],

\[H_{0}=67.44\pm 0.58\ \text{km s}^{-1}\text{Mpc}^{-1}, \tag{1}\]

while a distance-scale measurement using Cepheid-SN 1a data from the SH0ES collaboration [2] gives

\[H_{0}=73.04\pm 1.04\ \text{km s}^{-1}\text{Mpc}^{-1}. \tag{2}\]

This is a \(5\sigma\) tension which includes estimates of all systematic errors and which the SH0ES team conclude has "no indication of arising from measurement uncertainties or analysis variations considered to date". It is becoming increasingly apparent with successive measurements that this tension is likely to have a theoretical resolution [3; 4; 5; 6], which can have many possible sources [7; 8] but increasingly favours early-time modifications [9; 10].

### Early Dark Energy

One proposed class of solutions to the Hubble tension is models of Early Dark Energy (EDE), whose early works include references [11; 12; 13; 14], followed by many others, e.g. see Refs. [7; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. These involve an injection of energy in the dark energy sector at around the time of matter-radiation equality, which then dilutes or otherwise decays away faster than the background energy density, such that it becomes negligible before it can be detected in the CMB. As briefly reviewed below, such models result in a slight change in the expansion history of the Universe, bumping up the value of the Hubble parameter at the present day. It has previously been concluded [3; 7; 8] that EDE models are the most likely to source a theoretical resolution to the Hubble tension. One reason for this is that EDE can effect substantial modifications to \(H_{0}\) without significant effect on other cosmological parameters which are tightly constrained by observations.1 In particular, EDE models can be incorporated into existing scalar-field models of inflation and late-time dark energy; one example of the latter is the model detailed in this work.

Footnote 1: Models which modify other cosmological parameters are often unable to reconcile their changes with current observational constraints on said parameters (see Ref. [7] for a comprehensive review).

However, precisely because EDE models exist so close in time to existing observational data, they have significant constraints, the primary consideration being that EDE must be subdominant at all times and must decay away fast enough to be essentially negligible at the time of last scattering, which translates to its density redshifting faster than radiation [12]. So far, in previous works on EDE, this has been achieved by considering first or second-order phase transitions (e.g. [27], [33]). These abrupt events might have undesirable side-effects such as inhomogeneities from bubble collisions or topological defects. Other proposed models [7; 11; 12; 27; 28; 29; 30; 31; 32; 33; 34] typically feature oscillatory behaviour to achieve the rapid decay rate necessary for EDE to be negligible at last scattering. As with the original proposal in Ref. [11], the EDE field is taken to oscillate around its Vacuum Expectation Value (VEV) in a potential minimum which is tuned to be of order higher than quartic. As a result, its energy density decays on average as \(\propto a^{-n}\), with \(4<n<6\).
In contrast, in our model, the EDE scalar field experiences a period of kinetic domination, where the field is in non-oscillatory free-fall and its density decreases as \(\propto a^{-6}\), exactly rather than approximately.

Before continuing, we briefly explain how EDE manages to increase the value of \(H_{0}\) as inferred from CMB observations. Measurements of the CMB temperature anisotropies provide very tight constraints on the cosmological parameters. One would therefore think that this severely limits models which alter the Universe content and dynamics at this time. However, there are certain classes of models for which this is not the case. These are models that affect both the Hubble parameter and \(r_{s}\), the comoving sound horizon2 (in this case during the drag epoch, shortly after recombination), given by

\[r_{s}=\int_{z_{d}}^{\infty}\frac{c_{s}(z)}{H(z)}dz, \tag{3}\]

where \(c_{s}(z)\) is the sound speed and \(H(z)\) is the Hubble parameter, both as functions of redshift.

Footnote 2: This is the characteristic scale of BAO, typically approximately proportional to the value of the cosmological horizon at that point by \(r_{s}=\frac{1}{\sqrt{3}}r_{H}\), assuming spatial flatness.

An additional amount of dark energy in the Universe increases the total density, which in turn increases the Hubble parameter because of the Friedmann equation \(\rho\propto H^{2}\). Therefore, EDE considers such a brief increase at or before decoupling, which lowers the value of the sound horizon because it increases \(H(z)\) in Eq. (3). However, there is a way to avoid this being evident in, and therefore disproved by, current CMB measurements. This is because BAO and CMB measurements do not constrain the value of the sound horizon directly. For example, BAO measurements do not constrain the sound horizon alone, but the combination \(H(z)r_{s}\) [41]. The observations of the Planck satellite measure the quantity \(\theta_{*}\equiv\frac{r_{*}}{D_{*}}\) [42], the angular scale of the sound horizon, given by the ratio of the comoving sound horizon to the angular diameter distance at which we observe fluctuations. Both of these measurements entail an assumption of \(\Lambda\)CDM cosmology and can be shown to be equally well satisfied by other models, provided that these make only small modifications which simultaneously lower the value of \(r_{s}\) and increase \(H_{0}\) (a rough numerical illustration of this effect on Eq. (3) is given below).

EDE may have a significant drawback, however, in that it does not alleviate the \(\sigma_{8}\) tension (associated with matter clustering) and may in fact exacerbate it [43; 44; 3; 45]. As with many others, our model does not attempt to solve this problem.
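The following back-of-the-envelope sketch illustrates the mechanism on Eq. (3). It is not a Boltzmann-code computation: the flat-\(\Lambda\)CDM background values, the constant sound speed \(c_{s}=c/\sqrt{3}\) (baryon loading ignored), and the Gaussian-in-\(\ln(1+z)\) shape of the EDE bump are all simplifying assumptions.

```python
# Toy computation of the comoving sound horizon r_s of Eq. (3),
# with and without an EDE-like bump in H(z) around equality.
import numpy as np
from scipy.integrate import quad

c = 299792.458                       # km/s
H0, Om, Or = 67.4, 0.315, 9.2e-5     # illustrative flat-LCDM parameters

def H(z, f_ede=0.0, z_c=3500.0):
    rho = Om*(1.0+z)**3 + Or*(1.0+z)**4 + (1.0 - Om - Or)
    # crude EDE bump: a fraction f_ede of the total, localized near z_c
    bump = (f_ede/(1.0-f_ede)) * rho * np.exp(-0.5*np.log((1.0+z)/(1.0+z_c))**2)
    return H0*np.sqrt(rho + bump)    # km/s/Mpc

def r_s(f_ede=0.0, z_d=1060.0):
    integrand = lambda z: (c/np.sqrt(3.0)) / H(z, f_ede)
    return quad(integrand, z_d, 5e4)[0]   # in Mpc; 5e4 stands in for infinity

print(r_s(0.0), r_s(0.1))   # the EDE case yields a smaller sound horizon
```

Since the bump only adds to \(H(z)\), the EDE value of \(r_{s}\) is strictly smaller, which is precisely the effect exploited above.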
### \(\alpha\)-attractors

Our model unifies EDE with late DE in the context of \(\alpha\)-attractors. An earlier attempt at such unification in the same theoretical context can be seen in Ref. [34]. However, this proposal is also one of oscillatory EDE. \(\alpha\)-attractors [46; 47; 48; 49; 50; 51; 52; 53; 54], which appear naturally in conformal field theory or supergravity theories, are a class of models whose inflationary predictions continuously interpolate between those of chaotic inflation [55] and those of Starobinsky [56] and Higgs inflation [57]. In supergravity, introducing curvature to the internal field-space manifold can give rise to a non-trivial Kahler metric, which results in kinetic poles for some of the scalar fields of the theory. The free parameter \(\alpha\) is inversely proportional to said curvature.

It is also worth clarifying what is meant by the word "attractor". It is not only used in the usual sense (_i.e._, field trajectories during inflation flowing to a unique one, regardless of the initial conditions), but also to refer to the fact that the inflationary predictions are largely insensitive to the specific characteristics of the model under consideration. Such an attractor behaviour is seen for sufficiently large curvature (small \(\alpha\)) of the internal field-space manifold. In practical terms, the scalar field has a non-canonical kinetic term, featuring two poles, which the field cannot traverse. To aid our intuition, the field can be canonically normalised via a field redefinition, such that the finite poles for the non-canonical field are transposed to infinity for the canonical one. As a result, the scalar potential is "stretched" near the poles, resulting in two plateau regions, which are useful for modelling inflation, see _e.g._ Refs. [58, 59, 60, 61, 62, 63], or quintessence [64], or both, in the context of quintessential inflation [64, 65, 66]. Following the standard recipe, we introduce two poles at \(\varphi=\pm\sqrt{6\alpha}\,m_{\rm P}\) by considering the Lagrangian

\[\mathcal{L}=\frac{-\frac{1}{2}(\partial\varphi)^{2}}{(1-\frac{\varphi^{2}}{6 \alpha\,m_{\rm P}^{2}})^{2}}-V(\varphi)\,, \tag{4}\]

where \(\varphi\) is the non-canonical scalar field and we use the short-hand notation \((\partial\varphi)^{2}\equiv g^{\mu\nu}\partial_{\mu}\varphi\,\partial_{\nu}\varphi\). We then redefine the non-canonical field in terms of the canonical scalar field \(\phi\) as

\[\mathrm{d}\phi=\frac{\mathrm{d}\varphi}{1-\frac{\varphi^{2}}{6\alpha m_{\rm P} ^{2}}}\quad\Rightarrow\quad\varphi=m_{\rm P}\sqrt{6\alpha}\,\tanh\left(\frac{ \phi}{\sqrt{6\alpha}\,m_{\rm P}}\right). \tag{5}\]

It is obvious that the poles \(\varphi=\pm\sqrt{6\alpha}\,m_{\rm P}\) are transposed to infinity (a short numerical illustration is given at the end of this section). In terms of the canonical field, the Lagrangian now reads

\[\mathcal{L}=-\frac{1}{2}(\partial\phi)^{2}-V(\phi). \tag{6}\]

### Quintessence

"Early" Dark Energy is so named in order to make it distinct from "late" Dark Energy, which is the original source of the name (and often just called Dark Energy (DE)). In cosmological terms the latter is just beginning to dominate the Universe at present, making up approximately 70% of the Universe's energy density [67]. This is the mysterious unknown substance that is responsible for the current accelerating expansion of the Universe and has equation-of-state (barotropic) parameter \(w=-1.03\pm 0.03\) [1]. Late DE that is due to an (as-yet-undiscovered) scalar field is called _quintessence_ [68], so-named because it is the "fifth element" making up the content of the Universe 3. In this case, the Planck-satellite bound on the barotropic parameter of DE is \(-1\leq w<-0.95\) [1]. Quintessence is distinct from other explanations for DE because a scalar field has a variable barotropic parameter and can therefore exhibit completely different behaviour in different periods of the Universe's history. In order to get it to look like late-time DE, a scalar field should be dominated by its potential density, making its barotropic parameter sufficiently close to \(-1\). It is useful to consider the CPL parametrization, which is obtained by Taylor expanding \(w(z)\) near the present as [69; 70]

\[w(z)=w_{0}+w_{a}\frac{z}{z+1}\,, \tag{7}\]

where \(w_{a}\equiv-(\mathrm{d}w/\mathrm{d}a)_{0}\). The Planck satellite observations impose the bounds [1]

\[-1\leq w<-0.95\,,\qquad w_{a}=-0.29^{+0.32}_{-0.26}\,. \tag{8}\]
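Before moving on to the model, the short sketch below (our own illustration, with the example parameter point found later in Section 4 and \(m_{\rm P}=1\)) evaluates the field redefinition of Eq. (5) together with a potential of the \(\exp(-\lambda e^{\kappa\varphi/m_{\rm P}})\) form introduced in the next section, making the pole-stretching of Sec. 1.3 explicit.

```python
# The canonical field phi sees the poles varphi = +/- sqrt(6 alpha) m_P
# pushed to infinity, so V(phi) develops plateaus (cf. Fig. 1). m_P = 1.
import numpy as np

alpha, kappa, lam = 5e-4, 145.0, 8.125e-3   # example point of Section 4
A = np.sqrt(6.0*alpha)                      # pole position sqrt(6 alpha)

def varphi(phi):                            # Eq. (5)
    return A*np.tanh(phi/A)

def V(phi, V_X=1.0):                        # V = V_X exp(-lam e^{kappa varphi})
    return V_X*np.exp(-lam*np.exp(kappa*varphi(phi)))

for phi in [0.0, 0.1, 0.3, 1.0, 10.0]:
    print(f"phi={phi:5.1f}  varphi/pole={varphi(phi)/A:.6f}  V/V_X={V(phi):.3e}")
# varphi hugs the pole almost immediately, while V interpolates between
# V_X exp(-lam) at the origin and the much smaller plateau value.
```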
## 2 The Model

### Lagrangian and Field Equations

Consider a potential of the form

\[V(\varphi)=V_{X}\exp\Bigl{(}-\lambda e^{\kappa\varphi/m_{\rm P}}\Bigr{)}, \quad\text{with}\ \ V_{\Lambda}\equiv\exp\Bigl{(}-\lambda e^{\kappa\sqrt{6\alpha}}\Bigr{)}V_{X}\,, \tag{9}\]

where \(\alpha,\kappa,\lambda\) are dimensionless model parameters, \(V_{X}\) is a constant energy density scale and \(\varphi\) is the non-canonical scalar field with kinetic poles given by the typical \(\alpha\)-attractors form (see [50]), with Lagrangian density given by Eq. (4).4 In the above, \(V_{\Lambda}\) is the vacuum density at present. To assist our intuition, we switch to the canonically normalised (canonical) scalar field \(\phi\), using the transformation in Eq. (5). In terms of the canonical scalar field, the Lagrangian density is then given by Eq. (6), where the scalar potential is

Footnote 4: The model parameter is \(V_{X}\) and not \(V_{\Lambda}\), the latter being generated by \(V_{X}\) and the remaining model parameters as shown in Eq. (9).

\[V(\phi)=\exp\Bigl{(}\lambda e^{\kappa\sqrt{6\alpha}}\Bigr{)}V_{\Lambda}\exp \Bigl{[}-\lambda e^{\kappa\sqrt{6\alpha}\tanh\bigl{(}\phi/\sqrt{6\alpha}\,m_{ \rm P}\bigr{)}}\Bigr{]}\,. \tag{10}\]

As usual, the Klein-Gordon equation of motion for the homogeneous canonical field is

\[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)=0\,, \tag{11}\]

where the dot and prime denote derivatives with respect to the cosmic time and the scalar field respectively, and we have assumed that the field was homogenised by inflation, when the latter overcame the horizon problem.

### Shape of Potential and Expected Behaviour

Henceforth we will discuss the behaviour of the field in terms of the variation, i.e. movement in field space, of the canonical field.

### Asymptotic forms of the scalar potential

We are interested in two limits for the potential above: \(\phi\to 0\) (\(\varphi\to 0\)) and \(\phi\to+\infty\) (\(\varphi\to\sqrt{6\alpha}\,m_{\rm P}\)). The first limit would correspond to matter-radiation equality. In this limit, the potential is

\[V_{\rm eq}\simeq\exp\Bigl{[}\lambda(e^{\kappa\sqrt{6\alpha}}-1)\Bigr{]}V_{ \Lambda}\exp(-\kappa\lambda\,\phi_{\rm eq}/m_{\rm P})\,, \tag{4}\]

where the subscript 'eq' denotes the time of matter-radiation equality, when the field unfreezes. It is assumed that the field was originally frozen there. We discuss and justify this assumption in Sec. 5. After unfreezing, it is considered that the field has not varied much, for the above approximation to hold, i.e.

\[0\lesssim\phi_{\rm eq}\ll\sqrt{6\alpha}\,m_{\rm P}\,. \tag{5}\]

This is a reasonable assumption given that the field begins shortly before matter-radiation equality frozen at the origin, unfreezing at some point during this time 5.

Footnote 5: There is no suggestion in the EDE literature [7; 11; 12; 27; 28; 29; 30; 31; 32; 33; 34] that the field has to unfreeze at any particular time, as long as it does not grow to larger than the allowed fraction and its energy density is essentially negligible by the time of decoupling.

At large \(\phi\) (\(\phi\to\infty\)), the non-canonical field is near the kinetic pole (\(\varphi\to\sqrt{6\alpha}\,m_{\rm P}\)). Then the potential in this limit is

\[V_{0}\simeq V_{\Lambda}\left[1+2\kappa\lambda e^{\kappa\sqrt{6\alpha}}\sqrt{6 \alpha}\,\exp\left(-\frac{2\phi_{0}}{\sqrt{6\alpha}\,m_{\rm P}}\right)\right], \tag{6}\]

which, even for sub-Planckian total field excursion in \(\phi\), should be a good approximation for sufficiently small \(\alpha\).
The subscript '0' denotes the present time.6

Footnote 6: Note that, as the field becomes very large, the potential approaches the positive constant \(V_{\Lambda}\), which corresponds to non-zero vacuum density with \(w=-1\), as in \(\Lambda\)CDM. Thus, our model outperforms pure quintessence (with \(-1<w<-0.95\) [1]), which can push \(H_{0}\) to lower instead of higher values [71; 72].

Figure 1: Graph of the canonical potential and its two approximations for small and large field values, given in Eqs. (4) and (6) respectively. These approximations are useful because they are simple exponential potentials with known attractors, so we know the type of behaviour the field should exhibit when each approximation is valid. It can be readily seen that, after leaving the origin, the field jumps off a potential plateau and is free-falling as a result.

The above approximations describe well the scalar potential near equality and the present time, as shown in Fig. 1. As we explain below, in between these regions the scalar field free-falls and becomes oblivious of the scalar potential, as the term \(V^{\prime}(\phi)\) in its equation of motion (11) becomes negligible.

#### 2.3.1 Expected Field Behaviour

Here we explain the rationale behind the mechanism envisaged. We make a number of crude approximations, which enable us to follow the evolution of the scalar field, but which need to be carefully examined numerically. We do so in the next section.

First, we consider that originally the field is frozen at zero (for reasons explained in Sec. 5). Its energy density is such that it remains frozen there until equality, when it thaws following the appropriate exponential attractor, since \(V_{\rm eq}\) in Eq. (4) is approximately exponential [73]. Assuming that this is the subdominant attractor requires that the strength of the exponential is [74, 75]

\[Z\equiv\kappa\lambda>\sqrt{3}\,. \tag{7}\]

The subdominant exponential attractor dictates that the energy density of the rolling scalar field mimics the dominant background energy density. Thus, the density parameter of the field is constant, given by the value [73, 74, 75]

\[\Omega_{\phi}^{\rm eq}\simeq\frac{3}{Z^{2}}=\frac{3}{(\kappa\lambda)^{2}}<1\,. \tag{8}\]

This provides an estimate of the moment when the originally frozen scalar field unfreezes and begins rolling down its potential. Unfreezing happens when \(\Omega_{\phi}\) (which is growing while the field is frozen, because the background density decreases with the expansion of the Universe) obtains the above value. However, after unfreezing, the field soon experiences the full \(\exp(\exp)\), steeper than exponential, potential; so it does not follow the subdominant attractor any more but free-falls,7 such that its density scales as \(\rho_{\phi}\simeq\frac{1}{2}\dot{\phi}^{2}\propto a^{-6}\), until it refreezes at a larger value \(\phi_{F}\). This value is estimated as follows.

Footnote 7: i.e. its energy density is dominated by its kinetic energy density only.

In free-fall, the slope term in the equation of motion (11) of the field is negligible, so that the equation is reduced to \(\ddot{\phi}+3H\dot{\phi}\simeq 0\), where \(H=2/(3t)\) after equality. The solution is

\[\phi(t)=\phi_{\rm eq}+\frac{C}{t_{\rm eq}}\left(1-\frac{t_{\rm eq}}{t}\right)\,, \tag{9}\]

where \(C\) is an integration constant. From the above, it is straightforward to find that \(\dot{\phi}=Ct^{-2}\).
Thus, the density parameter at equality is

\[\Omega_{\phi}^{\rm eq}=\left.\frac{\rho_{\phi}}{\rho}\right|_{\rm eq}=\frac{ \frac{1}{2}C^{2}t_{\rm eq}^{-4}}{\frac{4}{3}(m_{\rm P}/t_{\rm eq})^{2}}=\frac{ 3}{8}\frac{C^{2}}{(m_{\rm P}t_{\rm eq})^{2}}\quad\Rightarrow\quad C=\sqrt{ \frac{8}{3}\Omega_{\phi}^{\rm eq}}\,m_{\rm P}t_{\rm eq}=\frac{\sqrt{8}}{\kappa \lambda}\,m_{\rm P}\,t_{\rm eq}\,, \tag{10}\]

where we used Eq. (8), \(\rho_{\phi}\simeq\frac{1}{2}\dot{\phi}^{2}\) and that \(\rho=1/(6\pi Gt^{2})=\frac{4}{3}(m_{\rm P}/t)^{2}\). Thus, the field freezes at the value

\[\phi_{0}=\phi_{\rm eq}+C/t_{\rm eq}=\phi_{\rm eq}+\frac{\sqrt{8}}{\kappa \lambda}\,m_{\rm P}\;, \tag{11}\]

where we considered that \(t_{\rm eq}\ll t_{\rm freeze}<t_{0}\). Using that \(t_{\rm eq}\sim 10^{4}\,{\rm y}\) and \(t_{0}\sim 10^{10}\,{\rm y}\), we can estimate

\[\frac{V_{\rm eq}}{V_{0}}\simeq\frac{\Omega_{\phi}^{\rm eq}\rho_{\rm eq}}{0.7 \,\rho_{0}}\simeq\frac{30}{7(\kappa\lambda)^{2}}\left(\frac{t_{0}}{t_{\rm eq}} \right)^{2}\simeq\frac{3}{7(\kappa\lambda)^{2}}\times 10^{13}\,. \tag{12}\]

Now, from Eqs. (4) and (6) we find

\[\frac{V_{\rm eq}}{V_{0}}\simeq\frac{e^{\lambda(e^{\kappa\sqrt{6\alpha}}-1)} \exp(-\kappa\lambda\,\phi_{\rm eq}/m_{\rm P})}{1+2\kappa\lambda\,e^{\kappa \sqrt{6\alpha}}\sqrt{6\alpha}\exp\bigl{(}-2\phi_{0}/\sqrt{6\alpha}\,m_{\rm P} \bigr{)}}\,. \tag{13}\]

In view of Eqs. (5) and (11), the above can be written as

\[\frac{V_{\rm eq}}{V_{0}}\simeq\frac{e^{\lambda(e^{\kappa\sqrt{6\alpha}}-1)}}{ 1+2\kappa\lambda\,e^{\kappa\sqrt{6\alpha}}\sqrt{6\alpha}\,e^{-2\sqrt{8}/ \kappa\lambda\sqrt{6\alpha}}}\,. \tag{14}\]

Taking \(\Omega_{\phi}^{\rm eq}\simeq 0.1\) as required by EDE, Eq. (8) suggests

\[\kappa\lambda\simeq\sqrt{30}\,. \tag{15}\]

Combining this with Eq. (12) we obtain

\[e^{\frac{\sqrt{30}}{\kappa}(e^{\kappa\sqrt{6\alpha}}-1)}\sim 10^{12}/7\,, \tag{16}\]

where we have ignored the 2nd term in the denominator of the right-hand side of Eq. (14). From the above we see that \(\kappa\) is large when \(\alpha\) is small. Taking, as an example, \(\alpha=0.01\) we obtain \(\kappa\simeq 18\) and \(\lambda\simeq 0.30\) (from Eq. (15)); a short numerical check of this estimate is given at the end of this subsection. With these values, the second term in the denominator of the right-hand side of Eq. (14), which was ignored above, amounts to the value 3.2. This forces a correction to the ratio \(V_{\rm eq}/V_{0}\) of order unity, which means that the order-of-magnitude estimate in Eq. (16) is not affected. Using the selected values, Eq. (11) suggests that the total excursion of the field is

\[\Delta\phi=\phi_{0}-\phi_{\rm eq}=\frac{\sqrt{8}}{\kappa\lambda}\,m_{\rm P} \simeq 0.5\,m_{\rm P}\,, \tag{17}\]

i.e. it is sub-Planckian. In the approximation of Eq. (4), we see that the argument of the exponential becomes \(\kappa\lambda\Delta\phi/m_{\rm P}\simeq 2.7>1\), where we used Eq. (15). This means that the approximation breaks down and the \(\exp(\exp)\) potential is felt as considered, as depicted also in Fig. 1. For small \(\alpha\) the eventual exponential potential in Eq. (6) is steep, which suggests that the field rushes towards the minimum at infinity and the barotropic parameter is \(w\approx-1\), because the potential is dominated by the constant \(V_{\Lambda}\).
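This estimate is straightforward to verify numerically. The sketch below (our own check, not the paper's code) solves Eq. (16) for \(\kappa\) with \(\lambda=\sqrt{30}/\kappa\) fixed by Eq. (15):

```python
# Solve exp[(sqrt(30)/kappa)(e^{kappa sqrt(6 alpha)} - 1)] = 1e12/7 for kappa.
import numpy as np
from scipy.optimize import brentq

target = np.log(1e12/7.0)

def f(kappa, alpha=0.01):
    return (np.sqrt(30.0)/kappa)*(np.exp(kappa*np.sqrt(6.0*alpha)) - 1.0) - target

kappa = brentq(f, 5.0, 50.0)
print(kappa, np.sqrt(30.0)/kappa)   # ~18 and ~0.30, as quoted in the text
```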
### Tuning requirements

Our model addresses in a single shot two cosmological problems: firstly, the Hubble tension between inferences of \(H_{0}\) using early and late-time data; and secondly, the reason for the late-time accelerated expansion of the Universe, late DE. However, it is subject to some tuning, namely of the two free parameters \(\kappa\) and \(\lambda\), the intrinsic field-space curvature dictated by \(\alpha\), and the scale of the potential introduced by \(V_{\Lambda}\). As we have seen, \(\kappa\) and \(\lambda\) seem to take natural values, not too far from order unity. Regarding \(\alpha\), we only need that it is small enough to lead to a rapid decrease of the exponential contribution in the scalar potential in Eq. (6), leaving the constant \(V_{\Lambda}\) to dominate at present. We show in the next section that \(\alpha\sim 10^{-4}\) is sufficient for this task. This leaves \(V_{\Lambda}\) itself. The required tuning of this parameter is given by \(V_{\Lambda}=\left(\frac{H_{0}^{\rm Planck}}{H_{0}^{\rm SH0ES}}\right)^{2}V_{ \Lambda}^{\rm Planck}\), where \(V_{\Lambda}^{\rm Planck}\) is given by the Planck 2018 [1] estimate of \(\rho_{0}\), the density today, multiplied by \(\Omega_{\Lambda}\), the estimate of the density parameter of dark energy today, i.e. \(V_{\Lambda}^{\rm Planck}=\Omega_{\Lambda}\rho_{0}\). Since \(\left(\frac{H_{0}^{\rm Planck}}{H_{0}^{\rm SH0ES}}\right)^{2}\simeq(\frac{67.44}{73.04})^{2}=0.8525\), we see that the required fine-tuning of our \(V_{\Lambda}\) is not different from the fine-tuning introduced in \(\Lambda\)CDM, but, in contrast to \(\Lambda\)CDM, our proposal addresses two cosmological problems; not only late DE but also the Hubble tension.8

Footnote 8: In our simulations we use \(V_{\Lambda}=10^{-120.068}\,m_{\rm P}^{4}\), as assumed also in Fig. 1.

## 3 Numerical Simulation

In order to numerically solve the dynamics of the system, it is enough to solve for the scale factor \(a(t)\), the field \(\phi(t)\) and the background fluid densities \(\rho_{\rm m}(t)\) and \(\rho_{\rm r}(t)\), as every other quantity depends on these. They are governed by the Friedmann equations, the Klein-Gordon equation and the continuity equations respectively. Of course, the Klein-Gordon equation is a second-order ODE, while the continuity equations are first order, so that we need the initial value and velocity of \(\phi\) and just the initial values of \(\rho_{\rm m}\) and \(\rho_{\rm r}\) as initial conditions. As described above, the field starts frozen and unfreezes around matter-radiation equality. Effectively, this means using \(\phi_{\rm ini}=0\) and \(\dot{\phi}_{\rm ini}=0\) as initial conditions, a few e-folds before matter-radiation equality, while the initial radiation and matter energy densities are chosen to satisfy the bounds obtained by Planck [1] at matter-radiation equality, _i.e._, \(\rho_{\rm m}(t_{\rm eq})=\rho_{\rm r}(t_{\rm eq})=1.27\times 10^{-110}m_{\rm P}^{4}\). For convenience, we rewrite the equations in terms of the logarithmic energy densities \(\tilde{\rho}_{m}(t)=\ln\left(\rho_{m}(t)/m_{\rm P}^{4}\right)\) and \(\tilde{\rho}_{r}(t)=\ln\left(\rho_{r}(t)/m_{\rm P}^{4}\right)\). Plugging the first Friedmann equation into the Klein-Gordon equation gives

\[\ddot{\phi}(t)+\frac{\sqrt{3\rho(t)}}{m_{\rm P}}\;\dot{\phi}(t)+\frac{dV}{d \phi}=0, \tag{10}\]

\[\dot{\tilde{\rho}}_{m}(t)+\frac{\sqrt{3\rho(t)}}{m_{\rm P}}=0, \tag{11}\]

\[\dot{\tilde{\rho}}_{r}(t)+\frac{4}{3}\frac{\sqrt{3\rho(t)}}{m_{\rm P}}=0, \tag{12}\]

where \(3m_{\rm P}^{2}H^{2}(t)=\rho(t)=[\exp(\tilde{\rho}_{m}(t))+\exp(\tilde{\rho}_{r }(t))]m_{\rm P}^{4}+\rho_{\phi}(t)\) and \(\rho_{\phi}(t)=K(\phi(t))+V(\phi(t))\), where \(K(\phi(t))=\frac{1}{2}(\dot{\phi}(t))^{2}\) and \(V(\phi(t))\) is given by Eq. (2).
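A minimal transcription of this system into code is sketched below, rewritten in e-folds \(N=\ln a\) and reduced-Planck units (\(m_{\rm P}=1\)). The paper's own solver is not shown, so the integrator, tolerances, and the numerical derivative of the potential are our choices; the initial data follow Table 2 and the parameters are the example point of Section 4.

```python
# Sketch of Eqs. (10)-(12) recast in e-folds N = ln a, with m_P = 1.
import numpy as np
from scipy.integrate import solve_ivp

alpha, kappa, lam = 5e-4, 145.0, 8.125e-3
V_L = 10.0**-120.068                       # V_Lambda in m_P^4 (footnote 8)
A = np.sqrt(6.0*alpha)

def V(phi):                                # canonical potential of Sec. 2
    return np.exp(lam*np.exp(kappa*A))*V_L*np.exp(-lam*np.exp(kappa*A*np.tanh(phi/A)))

def dV(phi, h=1e-4):                       # numerical slope is fine for a sketch
    return (V(phi + h) - V(phi - h))/(2.0*h)

def rhs(N, y):
    phi, dphi, lrm, lrr = y                # dphi = dphi/dt; lr* = ln(rho)
    rho = np.exp(lrm) + np.exp(lrr) + 0.5*dphi**2 + V(phi)
    H = np.sqrt(rho/3.0)
    return [dphi/H, -3.0*dphi - dV(phi)/H, -3.0, -4.0]

y0 = [0.0, 0.0, np.log(3.84e-109), np.log(1.24e-108)]   # Table 2, z_ini = 1e4
sol = solve_ivp(rhs, [0.0, 12.0], y0, method="Radau", rtol=1e-8,
                atol=[1e-10, 1e-70, 1e-6, 1e-6], dense_output=True)

N = np.linspace(0.0, 12.0, 2000)
phi, dphi, lrm, lrr = sol.sol(N)
rho_phi = 0.5*dphi**2 + V(phi)
Omega_phi = rho_phi/(np.exp(lrm) + np.exp(lrr) + rho_phi)   # cf. Figs. 6 and 8
```

The "events" described in the next paragraph (equality, decoupling, today) can be attached via the `events` argument of `solve_ivp`, terminating the run once the present-day density parameter is reached.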
As mentioned above, we assume the field to be frozen at an enhanced symmetry point (ESP), such that it could have been the inflaton or a spectator field at earlier times. The time of unfreezing is then controlled only by the parameters of the model's potential.9 The densities of matter and radiation are scaled back to find initial conditions at an arbitrary redshift, \(z_{\rm ini}=10^{4}\), before equality.

Footnote 9: Although we could use an estimate for the initial time, it turns out that it makes no difference to the numerical results or the behaviour of the field and simply offsets the differential equations.

The differential solver records three "events" during solving: matter-radiation equality, triggered by the obvious condition; decoupling, triggered by the total energy density taking the correct value; and the present day, triggered by the field making up the correct fraction of the total energy density (as estimated by the Planck satellite [1]). These values are saved to an association so that they can later be searched to identify points which fulfill the necessary constraints, in order to find a viable parameter space. Once the final event is recorded, the solver is terminated. If a field point does not meet the conditions for the final event (i.e. the present day), this indicates that the field began the simulation as the dominant component and will never reach the correct energy density. The point is thrown away. Finally, reasonable observational and theoretical constraints on the parameter space are applied to the data collected, which are outlined in Table 4.

| Present-day Densities | Calculation | Value |
| --- | --- | --- |
| Matter | \(\rho_{m}=3\Omega_{m,0}^{\rm Planck}m_{\rm P}^{2}(H_{0}^{\rm SH0ES})^{2}\) | \(3.84\times 10^{-121}\,m_{\rm P}^{4}\) |
| Radiation | \(\frac{\pi^{2}}{30}g_{*}(T_{\rm CMB,\ 0}^{\rm Planck})^{4}\) | \(9.56\times 10^{-125}\,m_{\rm P}^{4}\) |

Table 1: Table of present-day densities, where the present matter density parameter is \(\Omega_{m,0}^{\rm Planck}=0.3111\), \(T_{\rm CMB,\ 0}^{\rm Planck}=2.7255\) K and the effective relativistic degrees of freedom of radiation are \(g_{*}=3.36\), calculated by taking the photon and neutrino contributions into account (see section 5 of [76]).

| Variable | Initial Value | Source |
| --- | --- | --- |
| Redshift | \(z_{\rm initial}=10^{4}\) | chosen to be shortly before matter-radiation equality |
| Time | \(t_{\rm ini}=0.1\,m_{\rm P}^{-1}\) | chosen to be close to zero (see footnote 9) |
| Field value | \(\phi(t_{\rm ini})=0\) | simplified initial conditions |
| Rate of change of field value | \(\dot{\phi}(t_{\rm ini})=0\) | simplified initial conditions |
| Density of matter | \(\rho_{m}(t_{\rm ini})=3.84\times 10^{-109}\,m_{\rm P}^{4}\) | \(\rho_{m}(t_{0})_{\rm Planck}(z_{\rm ini}+1)^{3}\) |
| Density of radiation | \(\rho_{r}(t_{\rm ini})=1.24\times 10^{-108}\,m_{\rm P}^{4}\) | \(\rho_{r}(t_{0})_{\rm Planck}(z_{\rm ini}+1)^{4}\) |
| E-folds elapsed | \(N_{\rm ini}=0\) | chosen for convenience |

Table 2: Table detailing the initial conditions for the differential equations.

## 4 Results and analysis

### Parameter Space

As evident from Figs. 2, 3 and 4, we find that \(\kappa\sim 10^{2}\) and \(\lambda\sim 10^{-3}\), which are rather reasonable values.
In particular, the value of \(\kappa\) suggests that the mass-scale which suppresses the non-canonical field \(\varphi\) in the original potential in Eq. (1) is near the scale of grand unification \(\sim 10^{-2}\,m_{\rm P}\). Regarding the curvature of field space we find \(\alpha\sim 10^{-4}\), which again is not unreasonable. The viable parameter space suggests that \(\kappa\lambda<\sqrt{3}\), which contradicts our assumption in Eq. (7). This implies that, unlike the analytics in Sec. 2.3.1, the field does not adopt the subdominant exponential scaling attractor but the slow-roll exponential attractor, which leads to domination [73, 75]. As the field thaws and starts following this attractor, the approximation in Eq. (4) breaks down as the field experiences the full \(\exp(\exp)\) potential, which is steeper than exponential (see Fig. 1). Consequently, instead of becoming dominant, the field free-falls. This contradiction with our discussion in Sec. 2.3.1 is not very important.

| Parameter to be constrained | Source | Description | Constraint |
| --- | --- | --- | --- |
| Density parameter of the field at equality | EDE literature [29] | Upper limit governed by the maximum value that does not impede structure formation; lower limit is so that EDE actually has an effect | \(0.015\leq\Omega_{\phi}^{\rm eq}<0.107\) |
| Density parameter of the field at last scattering | EDE literature [12] | This is the upper limit that ensures EDE cannot currently be detected in the CMB | \(\Omega_{\phi}^{\rm ls}<0.015\) |
| Density parameters of the field at last scattering and equality | Theoretical | Achieves desired behaviour of the field | \(\Omega_{\phi}^{\rm eq}>\Omega_{\phi}^{\rm ls}\) |
| Density parameter of the field today | Planck 2018 [1] | Observational constraint | \(0.6833\leq\Omega_{\phi}^{0}\leq 0.6945\) |
| Barotropic parameter of the field today | Planck 2018 [1] | Observational constraint | \(-1\leq w_{\phi}^{0}\leq-0.95\) |
| Running of the barotropic parameter today | Planck 2018 [1] | Observational constraint | \(-0.55\leq w_{\phi}^{a}\leq 0.03\) |
| Hubble constant | SH0ES | Observational constraint | \(72.00\leq\frac{H_{0}}{\rm km\,s^{-1}\,Mpc^{-1}}\leq 74.08\) |
| Total field excursion | Theoretical | From analytical estimates, the total excursion of the field should ideally be sub-Planckian | \(\phi_{0}-\phi_{\rm eq}<m_{\rm P}\) |

Table 4: Table describing and justifying constraints used to identify the viable parameter space. In the above, \(w_{\phi}^{a}=-\left.\frac{\mathrm{d}w_{\phi}}{\mathrm{d}a}\right|_{0}\), c.f. Eq. (8).

The existence of the scaling attractor provided an easy analytic estimate for the moment when the field unfreezes. It turns out that, because the scaling attractor has been substituted by the slow-roll attractor, the field unfreezes because its potential energy density becomes comparable to the total energy density, going straight into free-fall. It is much harder to analytically estimate when exactly this takes place, but the eventual result (free-fall) is the same. The redshift of matter-radiation equality occurs earlier than usual, at \(z_{\rm eq}\simeq 4000\). However, equality occurs well before last scattering, \(z_{\rm eq}>z_{\rm ls}\), and its redshift is only indirectly inferred by observations. In contrast, the redshift of last scattering is where we would expect it, at \(z_{\rm ls}\simeq 1087\).
Theoretical constraints suggest \(z_{\rm ls}\simeq 1090\) [77], and the observations of the Planck satellite suggest \(z_{\rm ls}=1089.80\pm 0.21\) [1].

Figure 2: Parameter space slice in the \(\kappa-\alpha\) plane with \(0<\lambda<0.027\) and \(V_{\Lambda}=10^{-120.068}m_{\rm P}^{4}\). The blue dotted line is the boundary of the region that produces non-inflationary results (see below), while the orange region is constituted by the successful points, _i.e._, those for which the constraints detailed in Table 4 are satisfied. Note that the region bounded in blue is not equal to the range of the scan, which goes from \(0\leq\kappa\leq 700,\ 0\leq\alpha\leq 0.00071\). This is because points with potential larger than a certain starting value result in the field beginning the simulation dominant, which means that the Universe goes into inflation which cannot terminate and will never meet the numerical end condition for the present day. These points are very close to the viable parameter space for these two parameters and therefore must be thrown away.

### Field Behaviour

The field behaves as expected, with the mild modification of the attractor solution at unfreezing (slow-roll instead of scaling), which leads to free-fall. The evolution is depicted in Figs. 5, 6, 7 and 8 for the example point at \(\alpha=0.0005,\ \kappa=145,\ \lambda=0.008125\), and \(V_{\Lambda}\) tuned to the SH0ES cosmological constant [2]. The observables obtained in this case (i.e. the values of \(H_{0}\), \(w_{0}\) and \(w_{a}\)) are shown in Table 5. The behaviour of the Hubble parameter as a function of redshift can be seen in Fig. 7. As mentioned in Table 4, the maximum allowed value of the EDE density parameter at equality is just over 0.1. However, it is possible that this is too lenient a constraint because, unlike the models for which this constraint was developed, our model has a true free-fall period, which means it redshifts away _exactly_ as \(a^{-6}\), rather than below this rate as in oscillatory behaviour (see Figs. 5 and 8). A full MCMC analysis may provide a more accurate constraint for non-oscillatory models. At present, the exponential contribution to the potential density in Eq. (6) is largely subdominant to \(V_{\Lambda}\), so the contribution of the scalar field to the total density budget is almost constant, as in \(\Lambda\)CDM. Its barotropic parameter is, therefore, \(w_{\phi}\approx-1\) (see Fig. 5). Technically, it is not exactly \(-1\), but its running is negligible, with the viable parameter space for \(w_{a}\) fitting easily within the constraint in Eq. (8) by some ten orders of magnitude (see Table 5).

Figure 3: Parameter space slice in the \(\lambda-\alpha\) plane with \(0<\kappa<700\) and \(V_{\Lambda}=10^{-120.068}m_{\rm P}^{4}\). The orange region is constituted by the successful points, _i.e._, those for which the constraints detailed in Table 4 are satisfied.

## 5 Initial Conditions

Our model accounts for both EDE and late-time dark energy in a non-oscillatory manner (in contrast to Ref. [34]). The field is frozen at early times, thawing just before matter-radiation equality when its density grows to nearly \(0.1\) of the total value (see Fig. 6), as set by constraints in Ref. [29]. A steep \(\exp(-\exp)\) potential then forces the field into free-fall, causing its energy density to dilute away as \(\rho_{\phi}\propto a^{-6}\). After this, the field hits the asymptote of the exponential decay and refreezes, becoming dominant at present (see Fig. 8).
Thus, we achieve DE-like behaviour at the present day by ensuring that the field refreezes after its period of free-fall, therefore remaining at a constant energy density equal to the value of the potential density at that point. Although this constant potential density is initially negligible, the expansion of the Universe causes the density of matter to decrease. Because the field refreezes at a potential density that is comparable to the density of matter at present, the field starts to become dominant at the present day. Once it begins to dominate the Universe, the field thaws again, but the density of the Universe is dominated by a constant contribution \(V_{\Lambda}\), as with \(\Lambda\)CDM.

The obvious question is why our scalar field finds itself frozen at the origin in the first place. One compelling explanation is the following. We assume that the origin is an enhanced symmetry point (ESP) such that, at very early times, an interaction of \(\varphi\) with some other scalar field \(\chi\) traps the rolling of \(\varphi\) at zero. The idea follows the scenario explored in Ref. [78].

Figure 4: Parameter space slice in the \(\lambda-\kappa\) plane with \(0<\alpha<0.00071\) and \(V_{\Lambda}=10^{-120.068}m_{\rm P}^{4}\). The orange region is constituted by the successful points, _i.e._, those for which the constraints detailed in Table 4 are satisfied.

In this scenario, the scalar potential includes the interaction

\[\Delta V=\frac{1}{2}g^{2}\varphi^{2}\chi^{2}\,, \tag{5.1}\]

where the coupling \(g<1\) parametrises the strength of the interaction.

Figure 5: Barotropic parameter of the scalar field (dotted green), of the background perfect fluid (full blue) and of the sum of both components (full black), for \(\alpha=0.0005,\ \kappa=145,\ \lambda=0.008125\), and \(V_{\Lambda}=10^{-120.068}m_{\rm P}^{4}\).

Figure 6: The density parameter of the scalar field, for \(\alpha=0.0005,\ \kappa=145,\ \lambda=0.008125\), and \(V_{\Lambda}=10^{-120.068}m_{\rm P}^{4}\), as a function of the redshift (top) and e-folds (bottom) elapsed since the beginning of the simulation.

Figure 7: The Hubble parameter (in units of km s\({}^{-1}\)Mpc\({}^{-1}\)) of a Universe with the modelled scalar field (green), a classical \(\Lambda\)CDM simulation (black), and one with only matter and radiation (blue), as a function of the redshift (top) and the e-folds (bottom) elapsed since the beginning of the simulation.

Figure 8: The logarithmic densities of matter (dot-dashed red), radiation (dotted orange), the sum of both (solid blue) and the scalar field (dashed green), as a function of the redshift (top) and the e-folds (bottom) elapsed since the beginning of the simulation. The horizontal full line represents the (SH0ES) energy density of the Universe at present.

We assume that initially \(\varphi\) is rolling down its steep potential.10 Then, the interaction in Eq. (5.1) provides a modulated effective mass-squared \(m_{\rm eff}^{2}=g^{2}\varphi^{2}\) to the scalar field \(\chi\). When \(\varphi\) crosses the origin, this effective mass becomes momentarily zero. If the variation of the \(\varphi\) field (i.e. the speed \(|\dot{\varphi}|\) in field space) is large enough, then there is a window around the origin when \(|\dot{m}_{\rm eff}|\gg m_{\rm eff}^{2}\) (because \(|\dot{\varphi}|\gg\varphi^{2}\simeq 0\)).
This violates adiabaticity and leads to copious production of \(\chi\)-particles [78].11

Footnote 10: Far away from the origin, the scalar potential \(V(\varphi)\) does not have to be of the form in Eq. (1). In fact, it is conceivable that \(\varphi\) might play the role of the inflaton field too (see Appendix).

Footnote 11: Near the origin, when \(\varphi\simeq 0\), the \(\varphi\)-field is approximately canonically normalised, as suggested by Eq. (5), so the considerations of Ref. [78] are readily applicable.

As the field moves past the ESP, the produced \(\chi\) particles become heavy, which takes more energy from the \(\varphi\) field, producing an effective potential incline in the direction the \(\varphi\) field is moving. Indeed, the particle production generates an additional linear potential \(\sim g|\varphi|n_{\chi}\) [78], where \(n_{\chi}\) is the number density of the produced \(\chi\)-particles. This number density is constant because the duration of the effect is much smaller than a Hubble time, so that we can ignore dilution from the Universe expansion. The rolling \(\varphi\) field climbs up the linear potential until its kinetic energy density is depleted. Then the field momentarily stops and afterwards reverses its motion (variation) back to the origin. When crossing the origin again, there is another bout of \(\chi\)-particle production, which increases \(n_{\chi}\) and makes the linear potential steeper to climb. This time, the \(\varphi\) variation halts at a value closer to the origin. Then, the field reverses its motion and rushes through the origin again. Another outburst of \(\chi\)-particle production steepens the linear potential further. The process continues until the \(\varphi\)-field is trapped at the origin [75, 78].

| Constraint | Field Value |
| --- | --- |
| \(0.015\leq\Omega_{\phi}^{\rm eq}<0.107\) | 0.05178 |
| \(\Omega_{\phi}^{\rm ls}<0.015\) | 0.001722 |
| \(\Omega_{\phi}^{\rm eq}>\Omega_{\phi}^{\rm ls}\) | YES |
| \(0.6833\leq\Omega_{\phi}^{0}\leq 0.6945\) | 0.6889 |
| \(-1\leq w_{\phi}^{0}\leq-0.95\) | \(-1.000\) |
| \(-0.55\leq w_{\phi}^{a}\equiv-\left.\frac{\mathrm{d}w_{\phi}}{\mathrm{d}a}\right|_{0}\leq 0.03\) | \(-4.850\times 10^{-11}\) |
| \(72.00\leq\frac{H_{0}}{\mathrm{km\,s^{-1}\,Mpc^{-1}}}\leq 74.08\) | **73.27** |
| \(\kappa\lambda\) | 1.178 |
| \((\phi_{0}-\phi_{\rm eq})/m_{\rm P}<1\) | 0.4274 |

Table 5: Table giving the constraints and their corresponding values for an example point, \(\alpha=0.0005,\ \kappa=145,\ \lambda=0.008125\), and \(V_{\Lambda}\) tuned to the SH0ES cosmological constant, in the viable parameter space. The Hubble constant obtained in this example is \(H_{0}=73.27\) km s\({}^{-1}\) Mpc\({}^{-1}\).

The trapping of a rolling scalar field at an ESP can take place only if the \(\chi\)-particles do not decay before trapping occurs. If they did, \(n_{\chi}\) would decrease and the potential \(g|\varphi|n_{\chi}\) would not be able to halt the motion (variation) of the \(\varphi\)-field. The end result of this process is that all the kinetic energy density of the rolling \(\varphi\) has been given to the \(\chi\)-particles. Now, since \(\varphi\) is trapped at the origin, the effective mass of the \(\chi\)-particles is zero, which means that they are relativistic matter, with density scaling as \(\rho_{\chi}\propto a^{-4}\).
As far as \(\varphi\) is concerned, it is trapped at the origin and its density is only \(\rho_{\varphi}=V(\varphi=0)=e^{-\lambda}V_{X}=\) constant (cf. Eq. (1)). After some time, it may be assumed that the \(\chi\)-particles do eventually decay into the standard model particles, which comprise the thermal bath of the hot Big Bang. The confining potential, which is proportional to \(n_{\chi}\), disappears, but we expect the \(\varphi\)-field to remain frozen at the origin because the scalar potential \(V(\varphi)\) in Eq. (1) is flat enough there. As we have discussed, the \(\varphi\)-field unfreezes again at matter-radiation equality.

Figure 9: Schematic log-log plot depicting the evolution of the density of the scalar field \(\rho_{\phi}\) (solid blue line) and the density of radiation and matter \(\rho_{r}+\rho_{m}\) (dashed red line) in the case when the decay of the kinetic energy density of the trapped scalar field generates the thermal bath of the hot Big Bang (as in Ref. [79]). Originally the \(\phi\)-field is rushing towards the minimum of the potential, dominated by its kinetic density, so that \(\rho_{\phi}\propto a^{-6}\) (free-fall). When it crosses the enhanced symmetry point (ESP), its interaction with the \(\chi\)-field (cf. Eq. (5.1)) traps the rolling \(\phi\)-field at the ESP, while all its kinetic energy is given to \(\chi\)-particles, which soon decay into the radiation and matter of the hot Big Bang (the decay is assumed to be quick, just after trapping). Afterwards, the \(\phi\)-field stays frozen, with energy density \(V(\phi=0)=e^{-\lambda}V_{X}\) (cf. Eq. (1)), until much later, when its potential density is comparable to the background. Then it unfreezes before dominating, acting as early dark energy at the time near matter-radiation equality, and subsequently free-falls to its value \(\phi_{0}\), with potential density approximately \(V_{\Lambda}=\) constant. The field stays there until the present, when it dominates the Universe and becomes late dark energy.

The above scenario is depicted in Fig. 9. For simplicity, we have considered that, apart from the obvious violation of adiabaticity at the ESP, the \(\chi\) direction is otherwise approximately flat and the \(\chi\)-field has a negligible bare mass compared to the \(\varphi\) field. It would be more realistic to consider a non-zero bare mass for the \(\chi\)-particles, which, when they become non-relativistic (much later than the trapping of \(\varphi\)), can safely decay to the thermal bath of the hot Big Bang, thereby reheating the Universe, e.g. in a manner not dissimilar to Ref. [79]. The above scenario is one possible explanation of the initial condition considered and not directly relevant to the scope of this work: numerical simulations simply assume that the field begins frozen at the origin. Other possibilities to explain our initial condition exist, for example considering a thermal correction of the form \(\delta V\propto T^{2}\varphi^{2}\), which would make the origin an effective minimum of the potential at high temperatures and drive the \(\varphi\)-field there.

## 6 Conclusions

In conclusion, we have studied in detail a non-oscillatory model of unified early and late dark energy, which resolves the Hubble tension and simultaneously explains the observed current accelerated expansion with no more fine tuning than \(\Lambda\)CDM. Our model considers a single scalar field in the context of \(\alpha\)-attractors, as in Ref.
[34], but in our case the field is not oscillating; instead, after equality, it free-falls with energy density decreasing as \(a^{-6}\), faster than most early dark energy (EDE) proposals and the fastest possible.12 Footnote 12: Causality implies that the barotropic parameter \(w\) of a perfect fluid cannot be larger than unity because the speed of sound of the fluid \(c_{s}^{2}=w\) must be subluminal. This implies \(w\leq 1\), and so the density of an independent perfect fluid \(\rho\propto a^{-3(1+w)}\) cannot decrease faster than \(a^{-6}\). However, a homogeneous scalar field can be represented as a perfect fluid with \(w=\frac{\rho_{\rm kin}-V}{\rho_{\rm kin}+V}\), where \(\rho_{\rm kin}\) is the kinetic energy density of the scalar field and \(V\) the potential. It seems that \(w>1\) could indeed happen when the field traverses an AdS minimum of \(V\), such that \(V<0\). As a result, the density of such a scalar field could decrease faster than \(a^{-6}\). The scenario of such EDE has been considered in Refs. [80; 81]. In our proposed scenario, the scalar field lies originally frozen at the origin, until it thaws near the time of equal matter-radiation densities, when it becomes EDE. Afterwards it free-falls until it refreezes at a lower potential energy density value, which provides the vacuum density of \(\Lambda\)CDM. We showed that the total excursion of the field in configuration space is sub-Planckian, which implies that our potential is stable under radiative corrections. One explanation of our initial conditions is that the origin is an enhanced symmetry point (ESP). Our scalar field is originally kinetically dominated until it is trapped at the ESP when crossing it.13 As we discuss in Appendix A, the scalar field could even be the inflaton, which after inflation rolls down its runaway potential until it becomes trapped at the ESP. Footnote 13: A thermal correction to the scalar potential can have a similar effect. Our potential in Eq. (1) really serves to demonstrate that a model unifying EDE with \(\Lambda\)CDM can be achieved with a suitably steep runaway potential. With the parameters of our model assuming rather natural values, thereby not introducing fine-tuning additional to that of \(\Lambda\)CDM, we show that this is indeed possible with a simple design. The challenge lies in constructing a concrete theoretical framework for such a potential. **Acknowledgements:** LB is supported by STFC. KD is supported (in part) by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant: ST/T001038/1. SSL is supported by the FST of Lancaster University.

## A Quintessential Inflation

Is it possible that our scalar field can not only be early and late dark energy, but also be the inflaton field, responsible for accelerated expansion in the early Universe? The \(\alpha\)-attractors construction leads to two flat regions in the scalar potential of the canonical field, as the kinetic poles of the non-canonical field are displaced to infinity. This idea has been employed in the construction of quintessential inflation models in Refs. [64; 65; 66], where the low-energy plateau was the quintessential tail, responsible for quintessence, and the high-energy plateau was responsible for inflation. However, if we inspect the potential in Eq.
(1) at the poles \(\varphi=\pm\sqrt{6\alpha}\,m_{\rm P}\), we find that the potential for the positive pole is \(V(\varphi_{+})=V_{\Lambda}\) as expected, while for the negative pole we have \(V(\varphi_{-})=V_{\Lambda}\exp\bigl[2\lambda\sinh\bigl(\kappa\sqrt{6\alpha}\bigr)\bigr]\). For the values of the parameters obtained (\(\kappa\sim 10^{2}\), \(\lambda\sim 10^{-3}\) and \(\alpha\sim 10^{-4}\)) it is easy to check that \(V(\varphi_{-})\) is unsuitable for the inflationary plateau. Thus, our model needs to be modified to lead to quintessential inflation. The first modification is a shift in field space such that our new field is \[\tilde{\varphi}=\varphi+\Phi\,, \tag{104}\] where \(\Phi\) is a constant. The \(\alpha\)-attractors construction applies now on the new field \(\tilde{\varphi}\), for which the Lagrangian density is given by the expression in Eq. (4) with the substitution \(\varphi\to\tilde{\varphi}\). The poles of our new field lie at \(\tilde{\varphi}_{\pm}=\pm\sqrt{6\tilde{\alpha}}\,m_{\rm P}\), where \(\tilde{\alpha}\) is the new \(\alpha\)-attractors parameter. We want all our results to remain unaffected, which means that, for the positive pole, Eq. (104) suggests \[\varphi_{+}=\sqrt{6\alpha}\,m_{\rm P}=\tilde{\varphi}_{+}-\Phi=\sqrt{6\tilde{\alpha}}\,m_{\rm P}-\Phi\;\Rightarrow\;\tilde{\alpha}=\frac{1}{6}\left(\frac{\Phi}{m_{\rm P}}+\sqrt{6\alpha}\right)^{2}\,. \tag{105}\] The above, however, is not enough. It turns out we need to modify the scalar potential as well. This modification must be such that near the positive pole the scalar potential reduces to the one in Eq. (1). A simple proposal is \[V(\tilde{\varphi})=V_{X}\exp\{-2\lambda\sinh[\kappa(\tilde{\varphi}-\Phi)/m_{\rm P}]\}\,, \tag{106}\] which indeed reduces to Eq. (1) when \(\kappa(\tilde{\varphi}-\Phi)=\kappa\varphi>m_{\rm P}\). Note that \(\kappa\sqrt{6\alpha}>1\) is implied from the requirement that near the positive pole we have \(\kappa\sqrt{6\alpha}\,m_{\rm P}=\kappa\varphi_{+}>m_{\rm P}\). The ESP discussed in Sec. 5 is now located at \(\tilde{\varphi}=\Phi\), such that Eq. (10) is now \(\Delta V=\frac{1}{2}g^{2}(\tilde{\varphi}-\Phi)^{2}\chi^{2}\).14 Footnote 14: Near the ESP the potential does not approximate Eq. (1). However, we assume that, after unfreezing, the field rolls away fast from the ESP, such that soon the \(\exp(\exp)\) form of the potential becomes valid and the evolution is the one discussed in the main text of our paper. We are interested in investigating the inflationary plateau. This is generated for the canonical field near the negative pole \(\tilde{\varphi}_{-}=-\sqrt{6\tilde{\alpha}}\,m_{\rm P}\), where the scalar potential of the canonical field "flattens out" [50]. Assuming that \(\Phi>\sqrt{6\alpha}\,m_{\rm P}\), we have that \(\tilde{\varphi}_{-}-\Phi=-2\Phi-\sqrt{6\alpha}\,m_{\rm P}\simeq-2\Phi\), where we used Eq. (105). Hence, for the potential energy density of the inflationary plateau we obtain \[V_{\rm inf}=V(\tilde{\varphi}_{-}) \simeq V_{X}\exp[-2\lambda\sinh(-2\kappa\Phi/m_{\rm P})] \tag{107}\] \[\simeq \exp\Bigl{(}\lambda\,e^{\kappa\sqrt{6\alpha}}\Bigr{)}V_{\Lambda}\exp[\lambda\exp(2\kappa\Phi/m_{\rm P})]\] \[= \exp\Bigl{[}\lambda(e^{\kappa\sqrt{6\alpha}}+e^{2\kappa\Phi/m_{\rm P}})\Bigr{]}V_{\Lambda}\simeq V_{\Lambda}\exp\Bigl{(}\lambda\,e^{2\kappa\Phi/m_{\rm P}}\Bigr{)}\,,\] where we used Eq. (1) and that \(-2\sinh(-x)\simeq e^{x}\) when \(x\gg 1\).
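As a quick sanity check of the last approximate equality in Eq. (107), the term \(\lambda e^{\kappa\sqrt{6\alpha}}\) dropped from the exponent is indeed negligible against \(\lambda e^{2\kappa\Phi/m_{\rm P}}\) once \(\Phi\) exceeds \(\sqrt{6\alpha}\,m_{\rm P}\) by a modest margin. A minimal numerical sketch (the value of \(\Phi\) is an assumed trial value, not a fitted one; units \(m_{\rm P}=1\)):

```python
import numpy as np

# Representative parameter values (kappa ~ 1e2, lambda ~ 1e-3, alpha ~ 1e-4);
# Phi = 0.1 is an assumed trial value with Phi > sqrt(6*alpha).
kappa, lam, alpha, Phi = 100.0, 1.0e-3, 6.0e-4, 0.1

term_pole = lam * np.exp(kappa * np.sqrt(6 * alpha))  # from V_X / V_Lambda
term_inf  = lam * np.exp(2 * kappa * Phi)             # inflationary plateau term
print(f"lambda e^(kappa sqrt(6a)) = {term_pole:.3e}")  # ~ 4.0e-1
print(f"lambda e^(2 kappa Phi)    = {term_inf:.3e}")   # ~ 4.9e+5
print(f"ratio                     = {term_pole/term_inf:.1e}")  # << 1
```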
With \(\alpha\)-attractors, the inflationary predictions are \(n_{s}=1-2/N\) and \(r=12\tilde{\alpha}/N^{2}\) [50], where \(n_{s}\) is the spectral index of the scalar curvature perturbation and \(r\) is the ratio of the spectrum of the tensor curvature perturbation to the spectrum of the scalar curvature perturbation, with \(N\) being the number of inflationary efolds remaining after the cosmological scales exit the horizon. Typically, \(N=60-65\) for quintessential inflation, which means that \(n_{s}=0.967-0.969\), in excellent agreement with the observations [82]. For the tensor-to-scalar ratio the observations provide the bound \(r<0.036\) [83], which suggests \(\tilde{\alpha}<0.003\,N^{2}=10.8-12.7\). The COBE constraint requires \(V_{\rm inf}\sim 10^{-10}\,m_{\rm P}^{4}\). Using that \(V_{\Lambda}\sim 10^{-120}\,m_{\rm P}^{4}\), Eq. (107) suggests that \(\kappa\Phi/m_{\rm P}=\frac{1}{2}\ln(110\ln 10/\lambda)\). Hence, the conditions \(\Phi>\sqrt{6\alpha}\,m_{\rm P}\) and \(\kappa\sqrt{6\alpha}>1\) suggest \[1<\kappa\sqrt{6\alpha}<\kappa\Phi/m_{\rm P}=\frac{1}{2}\ln(110\ln 10/\lambda)\,. \tag{11}\] Our findings in Sec. 4 are marginally in agreement with the above requirements. For example, taking \(\alpha=0.0006\) and \(\kappa=100\) we find \(\kappa\sqrt{6\alpha}=6\) and then Eq. (11) suggests \(\lambda<1.556\times 10^{-3}\). We also find \(\Phi/m_{\rm P}>\sqrt{6\alpha}=0.06\), which is rather reasonable. Then, Eq. (105) implies \(\tilde{\alpha}>12\alpha=7.2\times 10^{-3}\), which comfortably satisfies the observational constraint on \(r\). In fact, taking \(N\simeq 60\), we find \(r=12\tilde{\alpha}/N^{2}>\alpha/25=2.4\times 10^{-5}\). The above should be taken with a pinch of salt because the approximations employed are rather crude. However, they seem to suggest that our augmented model in Eq. (106) may lead to successful quintessential inflation while also resolving the Hubble tension, with no more fine-tuning than that of \(\Lambda\)CDM.15 A full numerical investigation is needed to confirm this. Footnote 15: Unifying inflation, EDE and late DE in \(F(R)\) modified gravity has been investigated in Refs. [84; 85].
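The quoted numbers are simple to reproduce; the short script below does so, taking the bound \(\tilde{\alpha}>12\alpha\) at face value from the text:

```python
import numpy as np

alpha, kappa, N = 0.0006, 100.0, 60

k6a = kappa * np.sqrt(6 * alpha)
print("kappa*sqrt(6*alpha) =", k6a)              # 6.0 > 1, as required

# Eq. (11): lambda < 110*ln(10) * exp(-2*kappa*sqrt(6*alpha))
lam_max = 110 * np.log(10) * np.exp(-2 * k6a)
print(f"lambda < {lam_max:.3e}")                 # 1.556e-3

n_s = 1 - 2 / N
alpha_tilde_min = 12 * alpha                     # bound quoted in the text
r_min = 12 * alpha_tilde_min / N**2
print(f"n_s = {n_s:.4f},  r > {r_min:.1e}")      # 0.9667,  2.4e-5
```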
2308.00570
Enhancing Sample Efficiency and Uncertainty Compensation in Learning-based Model Predictive Control for Aerial Robots
The recent increase in data availability and reliability has led to a surge in the development of learning-based model predictive control (MPC) frameworks for robot systems. Despite attaining substantial performance improvements over their non-learning counterparts, many of these frameworks rely on an offline learning procedure to synthesize a dynamics model. This implies that uncertainties encountered by the robot during deployment are not accounted for in the learning process. On the other hand, learning-based MPC methods that learn dynamics models online are computationally expensive and often require a significant amount of data. To alleviate these shortcomings, we propose a novel learning-enhanced MPC framework that incorporates components from $\mathcal{L}_1$ adaptive control into learning-based MPC. This integration enables the accurate compensation of both matched and unmatched uncertainties in a sample-efficient way, enhancing the control performance during deployment. In our proposed framework, we present two variants and apply them to the control of a quadrotor system. Through simulations and physical experiments, we demonstrate that the proposed framework not only allows the synthesis of an accurate dynamics model on-the-fly, but also significantly improves the closed-loop control performance under a wide range of spatio-temporal uncertainties.
Kong Yao Chee, Thales C. Silva, M. Ani Hsieh, George J. Pappas
2023-08-01T14:20:27Z
http://arxiv.org/abs/2308.00570v1
Enhancing Sample Efficiency and Uncertainty Compensation in Learning-based Model Predictive Control for Aerial Robots ###### Abstract The recent increase in data availability and reliability has led to a surge in the development of learning-based model predictive control (MPC) frameworks for robot systems. Despite attaining substantial performance improvements over their non-learning counterparts, many of these frameworks rely on an offline learning procedure to synthesize a dynamics model. This implies that uncertainties encountered by the robot during deployment are not accounted for in the learning process. On the other hand, learning-based MPC methods that learn dynamics models online are computationally expensive and often require a significant amount of data. To alleviate these shortcomings, we propose a novel learning-enhanced MPC framework that incorporates components from \(\mathcal{L}_{1}\) adaptive control into learning-based MPC. This integration enables the accurate compensation of both matched and unmatched uncertainties in a sample-efficient way, enhancing the control performance during deployment. In our proposed framework, we present two variants and apply them to the control of a quadrotor system. Through simulations and physical experiments, we demonstrate that the proposed framework not only allows the synthesis of an accurate dynamics model on-the-fly, but also significantly improves the closed-loop control performance under a wide range of spatio-temporal uncertainties. ## I Introduction Model predictive control (MPC) is a versatile control framework that generates control actions through the consideration of a possibly nonlinear dynamics model, as well as state and control input constraints. Due to its flexibility, MPC has been applied to a variety of robot systems such as ground vehicles [1], quadruped robots [2] and aerial robots [3, 4]. With an increase in data accessibility, there is a growing trend of integrating machine learning methods into MPC, in an attempt to improve model accuracy and control performance [5]. One prominent direction in this domain of learning-based MPC is to utilize learning tools for the construction of dynamics models. In [6] and [7], the authors use Gaussian processes (GPs) to model the residual dynamics of an autonomous vehicle and a quadcopter, respectively, before applying the learned models within an MPC framework. While there are sample-efficient variants, such as sparse GPs [8], it is often challenging for GPs to handle large amounts of data without any additional data selection strategies. On the other hand, there are a number of works that use neural networks (NNs) to model robot dynamics for MPC. The authors in [10] use a feedforward NN to model vehicle dynamics to account for friction. In [11], a temporal convolutional NN is used to model the dynamics of a quadcopter. Within the context of model-based reinforcement learning (MBRL), NN ensembles are employed to create uncertainty-aware dynamics models [12], before using them for the control of robots within the MuJoCo [13] environment. In [14], an NN is used to learn the dynamics of a ground vehicle before applying it in a sampling-based MBRL framework. An overarching theme in these works is that while these learned models have been shown to be accurate in representing complex dynamics, they often require relatively large architectures, with a high number of hidden layers, neurons or models within the ensemble.
Hence, it is challenging to use these methods within a conventional nonlinear MPC formulation [15], in which a constrained nonlinear optimization problem is solved at every time step during deployment. One possible solution is to utilize the KNODE-MPC framework proposed in [16], where a neural ordinary differential equation (NODE) model is used to characterize the residual dynamics of a quadrotor system. The NODE model is combined with a first principles model to form a knowledge-based NODE (KNODE) model, which is then employed within an MPC framework. Because of its

Fig. 1: Schematic of the proposed \(\mathcal{L}_{1}\)-KNODE-MPC framework, for the control of a quadrotor system. **Top**: The first variant, \(\mathcal{L}_{1}\)-KNODE-MPC-Direct, combines the control signals from KNODE-MPC (highlighted in orange) and the \(\mathcal{L}_{1}\) adaptive module (blue) in a direct way. **Bottom**: The second variant, \(\mathcal{L}_{1}\)-KNODE-MPC-\(Int\), integrates the uncertainties estimated from a modified adaptive module (blue) into the dynamics model within the KNODE-MPC framework (orange). _Image source for quadrotor_: [9].
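The modelling idea behind the KNODE component can be sketched compactly: a first-principles model is augmented with a learned residual, and the hybrid model serves as the MPC prediction model. The snippet below is a minimal illustration of this structure, not the authors' implementation; the residual network is an untrained placeholder with random weights, whereas in KNODE-MPC [16] it is trained as a neural ODE on quadrotor data.

```python
import numpy as np

def f_nominal(x, u):
    """Double-integrator altitude model: x = [z, vz], u = mass-normalized thrust."""
    g = 9.81
    return np.array([x[1], u - g])

# Tiny MLP residual with placeholder (untrained) weights.
rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.standard_normal((16, 3)), np.zeros(16)
W2, b2 = 0.01 * rng.standard_normal((2, 16)), np.zeros(2)

def f_residual(x, u):
    """Learned correction term; here just a random placeholder network."""
    h = np.tanh(W1 @ np.concatenate([x, [u]]) + b1)
    return W2 @ h + b2

def step(x, u, dt=0.02):
    """One explicit-Euler step of the hybrid model (the MPC prediction step)."""
    return x + dt * (f_nominal(x, u) + f_residual(x, u))

x = np.array([0.0, 0.0])
for _ in range(50):              # 1 s rollout under hover thrust
    x = step(x, 9.81)
print("state after 1 s:", x)
```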
2305.01692
Precision CMB constraints on eV-scale bosons coupled to neutrinos
The cosmic microwave background (CMB) has proven to be an invaluable tool for studying the properties and interactions of neutrinos, providing insight not only into the sum of neutrino masses but also the free streaming nature of neutrinos prior to recombination. The CMB is a particularly powerful probe of new eV-scale bosons interacting with neutrinos, as these particles can thermalize with neutrinos via the inverse decay process, $\nu\bar{\nu} \rightarrow X$, and suppress neutrino free streaming near recombination -- even for couplings as small as $\lambda_\nu \sim \mathcal{O}(10^{-13})$. Here, we revisit CMB constraints on such bosons, improving upon a number of approximations previously adopted in the literature and generalizing the constraints to a broader class of models. This includes scenarios in which the boson is either spin-$0$ or spin-$1$, the number of interacting neutrinos is either $N_{\rm int} = 1,2 $ or $3$, and the case in which a primordial abundance of the species is present. We apply these bounds to well-motivated models, such as the singlet majoron model or a light $U(1)_{L_\mu-L_\tau}$ gauge boson, and find that they represent the leading constraints for masses $m_X\sim 1\, {\rm eV}$. Finally, we revisit the extent to which neutrino-philic bosons can ameliorate the Hubble tension, and find that recent improvements in the understanding of how such bosons damp neutrino free streaming reduces the previously found success of this proposal.
Stefan Sandner, Miguel Escudero, Samuel J. Witte
2023-05-02T18:01:10Z
http://arxiv.org/abs/2305.01692v2
# Precision CMB constraints on eV-scale bosons coupled to neutrinos ###### Abstract The cosmic microwave background (CMB) has proven to be an invaluable tool for studying the properties and interactions of neutrinos, providing insight not only into the sum of neutrino masses but also the free streaming nature of neutrinos prior to recombination. The CMB is a particularly powerful probe of new eV-scale bosons interacting with neutrinos, as these particles can thermalize with neutrinos via the inverse decay process, \(\nu\bar{\nu}\to X\), and suppress neutrino free streaming near recombination - even for couplings as small as \(\lambda_{\nu}\sim\mathcal{O}(10^{-13})\). Here, we revisit CMB constraints on such bosons, improving upon a number of approximations previously adopted in the literature and generalizing the constraints to a broader class of models. This includes scenarios in which the boson is either spin-0 or spin-1, the number of interacting neutrinos is either \(N_{\rm int}=1,2\) or \(3\), and the case in which a primordial abundance of the species is present. We apply these bounds to well-motivated models, such as the singlet majoron model or a light \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson, and find that they represent the leading constraints for masses \(m_{X}\sim 1\,{\rm eV}\). Finally, we revisit the extent to which neutrino-philic bosons can ameliorate the Hubble tension, and find that recent improvements in the understanding of how such bosons damp neutrino free streaming reduce the previously found success of this proposal. + Footnote †: preprint: CERN-TH-2023-073, IFIC/23-13, FTUV-23-0413.0599 ## I Introduction Neutrinos always comprise a sizable fraction of the energy density in the Universe. In particular, prior to matter-radiation equality they represent \(\sim 40\%\) of the energy budget. Neutrinos are also the only species with a sizable anisotropic stress - a consequence of their decoupling from the thermal plasma at \(T\sim 2\,{\rm MeV}\). Collectively, these facts imply that neutrino free streaming plays an important role in the evolution of the gravitational potentials responsible for sourcing the CMB anisotropies [1; 2; 3]. Current observations of the CMB by the Planck satellite [4; 5; 6] are compatible with the standard picture in which neutrinos are free streaming at redshifts \(2000\lesssim z\lesssim 10^{5}\) [7] (corresponding to temperatures \(0.5\,{\rm eV}\lesssim T_{\gamma}\lesssim 25\,{\rm eV}\)), implying these observations can be used to stringently constrain the existence of new light particles coupled to the neutrino sector. The impact of exotic neutrino interactions in cosmology, and in particular in the CMB, has been studied in various contexts, including scenarios in which: neutrinos have self-interactions that arise from heavy mediators [8; 9; 10; 11; 12; 13; 14; 15; 16; 17], neutrinos annihilate into massless scalars [3; 18; 19; 20; 21; 22; 23], neutrinos decay into light particles [24; 25; 26; 27; 28; 29; 30; 31], and neutrinos temporarily thermalize with eV\(-\)scale neutrino-philic scalars [2; 32; 33; 34]. The latter scenario is particularly interesting, as particles at the eV mass-scale can arise naturally in theories which explain the origin of neutrino masses (e.g. the majoron model) [35; 36; 37; 38] or in weakly coupled realizations of spontaneously broken gauge flavor symmetries [39; 40; 41; 42].
Furthermore, it has been shown that eV\(-\)scale neutrino-philic scalars like the majoron could play an important role in helping to ameliorate the largest outstanding discrepancy in cosmology, the Hubble tension [32; 33; 34] (see e.g. [43; 44] for recent reviews on the Hubble tension and proposed solutions). However, this scenario is challenging to model, as the light bosons and neutrinos undergo an out-of-equilibrium thermalization followed by an out-of-equilibrium decay, leading to a non-trivial modification of the expansion history of the Universe. The goal of this work is to perform a precision study of the impact of eV\(-\)scale neutrino-philic bosons on the CMB, improving upon previous analyses which relied on numerous simplified approximations [32; 33; 34], and extending the results of these analyses to the more general class of light neutrino-philic bosons. The primary improvements of this work are three-fold. First, we have incorporated the background thermodynamic evolution of neutrinos and the neutrino-philic bosons in the cosmological Boltzmann code CLASS [45; 46]. This allows us to solve for the thermodynamics on the fly, with a precision and speed that allow a full Bayesian analysis of Planck legacy data. Next, we incorporated a refined computation of the collision term [29; 30] which damps the neutrino free streaming less efficiently than assumed in previous studies [32; 33; 34]. Finally, we generalize the analysis to an arbitrary number of interacting neutrino species, include the possibility of both vector and scalar bosons, and allow for the possibility of a primordial abundance of such bosons. In general, we find that the CMB can robustly constrain the existence of eV\(-\)scale neutrino-philic bosons with couplings on the order of \(\lambda_{\nu}\sim\mathcal{O}(10^{-13})\). The value of this coupling roughly corresponds to the new bosonic particles having a lifetime shorter than the age of the Universe at recombination, \(\Gamma_{X}\sim\lambda_{\nu}^{2}m_{X}/(8\pi)\gtrsim H(z_{\rm rec})\). These bounds play an important role in testing a variety of well-motivated high-energy theories, such as the singlet majoron model (where these observations are testing scales of lepton number breaking as high as \(\sim 1\,\mathrm{TeV}\)), and the \(U(1)_{L_{\mu}-L_{\tau}}\) extension of the Standard Model. The main results of our study are highlighted in Figure 1, which displays the \(3\sigma\) constraints on the couplings of the majoron and of the \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson, respectively. In the case of the majoron, we also highlight a region of parameter space that is favoured by Planck legacy data at the \(\sim 1\sigma\) level. The remainder of this work is structured as follows. First, in Section II we briefly introduce and motivate the particle physics models that we consider. In Section III, we present the formalism behind our work. In particular, we describe how we treat the thermodynamic evolution of the Universe in the presence of eV\(-\)scale neutrino-philic bosons, including how the dynamics are implemented at the level of both the background and perturbations. In Section IV we present the constraints we derive on the couplings between neutrinos and eV\(-\)scale bosons.
We also include a quantitative discussion about the ability of these models to solve or ameliorate the Hubble tension, showing that the new collision term strongly suppresses the previous success of this model identified in [32; 33; 34]. Finally, in Section VI we present a summary of our results and outline our conclusions. For completeness, we provide in Appendices I and II further information on the formalism and details on the modified cosmological history.

## II Particle physics models

_Effective Interactions:_ We will consider an effective coupling between neutrinos and a light bosonic mediator \(X\) and we will study two cases, one where the mediator is a pseudoscalar \(X=\phi\) and one where it is a vector \(X=Z^{\prime}\). We will work after electroweak symmetry breaking and in the active neutrino mass basis. The effective Lagrangians describing these interactions are: \[\mathcal{L}_{\rm scalar} = i\frac{a}{2}\,\sum_{\nu}\lambda_{\nu}\,\bar{\nu}\gamma_{5}\nu\,X\,, \tag{1}\] \[\mathcal{L}_{\rm vector} = \frac{\sqrt{3}a}{2}\,\sum_{\nu}\lambda_{\nu}\,\bar{\nu}\gamma^{\mu}P_{L}\nu\,X_{\mu}\,, \tag{2}\] where \(\lambda_{\nu}\) are dimensionless coupling constants and where \(a=1\) for Majorana neutrinos and \(a=\sqrt{2}\) for Dirac neutrinos.

Figure 1: Parameter space for neutrino interactions with a scalar (_left panel_) and vector (_right panel_) boson \(X\) with mass \(m_{X}\). The bounds are interpreted within the singlet majoron model, where \(\lambda_{\nu}=m_{\nu}/v_{L}\), and for a light \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson, for which \(\lambda_{\nu}\simeq g_{\mu-\tau}\), respectively. An analysis of Planck legacy data excludes the blue regions with \(3\sigma\) confidence. Grey regions represent current cosmological, astrophysical and laboratory constraints, see Section V for details. In pink we indicate constraints coming from the out-of-equilibrium decay of the new \(X\) boson, which apply if a primordial abundance was generated before BBN. We also indicate the region of parameter space which will be tested by the Simons Observatory. In particular, the region above the purple dashed-dotted line will be tested because the thermalization of the \(X\) boson leads to an observable excess of \(\Delta N_{\rm eff}\geq 0.1\). Finally, we also highlight in red the best-fit region of parameter space for the scenario of the \(X\) boson being of scalar type and interacting with one neutrino family, \(N_{\rm int}=1\). This region is of particular interest because it indicates that non-trivial neutrino interactions are statistically slightly preferred over \(\Lambda\)CDM.
In particular, in this model the coupling between massive neutrinos and the majoron is diagonal up to small corrections [36]. The vector interactions in Eq. (2) also effectively describe new interactions of neutrinos in many BSM constructions. Typically, in the vector case the interaction arises by the gauging of lepton number family symmetries, and as such, the interaction is non-diagonal in the neutrino mass basis [39; 40]. However, in such cases all massive neutrinos couple to the \(X\) boson, and the couplings in the mass and flavor basis are simply related by a PMNS rotation. As an example, we can consider the case of a light \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson; here, the coupling \(\lambda_{\nu}\) is intimately related to the \(U(1)_{L_{\mu}-L_{\tau}}\) gauge coupling, \(\lambda_{\nu}\simeq g_{\mu-\tau}\) - see Ref. [47] for the precise mapping. _Scenarios Considered:_ We will consider several scenarios that we expect to broadly cover the phenomenology of the most well-motivated BSM models featuring new neutrino interactions below the MeV scale (these scenarios are summarized in Table 1). All scenarios correspond to different combinations of i) the number of interacting neutrino families, \(N_{\rm int}\), ii) the internal degrees of freedom of the \(X\) particle, \(g_{X}\), and iii) if the \(X\) species has a non-zero primordial abundance or not, parametrized by \(\Delta N_{\rm eff}^{\rm BBN}\). To be specific, we consider the following: * Case (a), with \(N_{\rm int}=3\) and \(g_{X}=1\), corresponds to the singlet majoron model in which neutrinos are pseudo-degenerate (note that pseudo-degenerate neutrinos imply a universal coupling \(\lambda_{\nu}\)). * Case (b), with \(N_{\rm int}=3\) and \(g_{X}=3\), corresponds to the commonly studied model of a light \(Z^{\prime}\) boson coupled to a lepton number family symmetry. In this model it is once again a good approximation to consider a flavour universal coupling, since the PMNS matrix does not show a hierarchical structure. * Case (c), with \(N_{\rm int}=1\) and \(g_{X}=1\), corresponds to the case of the singlet majoron model coupled mainly to one neutrino. This can happen with one approximate vanishing neutrino mass eigenstate where the coupling is mostly to the heaviest neutrino state or for \(2m_{\nu}^{\rm lightest}<m_{X}<0.1\,{\rm eV}\simeq 2\sqrt{|\Delta m_{\rm atm}^{2}|}\) since the majoron in that case can only kinematically couple to the lightest neutrino. * Case (d) corresponds to a case where a vector boson couples to a single neutrino mass eigenstate. As in scenario (c), this option is relevant in particular for \(2m_{\nu}^{\rm lightest}<m_{X}<0.1\,{\rm eV}\). However, a concrete model realization for \(m_{X}>0.1\,{\rm eV}\) in which a vector interacts only with one neutrino mass eigenstate is challenging, and generically involves cancellations of different couplings in flavour space. * The cases (e) and (f) correspond to the cases (a) and (b), respectively, but allowing for a non-zero primordial abundance of the \(X\) particle parameterized by \(\Delta N_{\rm eff}^{\rm BBN}\). Such a primordial abundance of \(X\) particles can arise e.g. due to the decay of other, heavy particle species in the early Universe. For instance, majorons can be produced from the decays of GeV\(-\)scale sterile neutrinos [33], and the \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson can be produced via muon-antimuon annihilations in the early Universe [42]. 
## III Cosmological Implications and Formalism

_Cosmological Implications:_ The cosmological implications of these light neutrino-philic bosons are governed by their decay rate into neutrinos. In particular, the ratio between the decay rate of \(X\) into neutrinos and the Hubble parameter at \(T\simeq m_{X}/3\) determines whether or not the \(X\) boson thermalizes in the early Universe. In a radiation dominated Universe, this ratio can be parametrized by: \[K_{\rm eff}\equiv\left(\frac{\lambda_{\nu}}{4\times 10^{-12}}\right)^{2}\,\left(\frac{\rm keV}{m_{X}}\right)\simeq\left.\frac{3\,\left\langle\Gamma(\bar{\nu}\nu\to X)\right\rangle}{H}\right|_{T_{\nu}=m_{X}/3}\,, \tag{5}\] where \(\langle\Gamma(\bar{\nu}\nu\to X)\rangle\) is the thermally averaged inverse decay rate.

\begin{table} \begin{tabular}{c|c} \hline \hline Scenario & Specification \\ \hline \hline (a) & \(N_{\rm int}=3,\ \ g_{X}=1\) \\ \hline (b) & \(N_{\rm int}=3,\ \ g_{X}=3\) \\ \hline (c) & \(N_{\rm int}=1,\ \ g_{X}=1\) \\ \hline (d) & \(N_{\rm int}=1,\ \ g_{X}=3\) \\ \hline (e) & \(N_{\rm int}=3,\ \ g_{X}=1,\ \ \Delta N_{\rm eff}^{\rm BBN}\neq 0\) \\ \hline (f) & \(N_{\rm int}=3,\ \ g_{X}=3,\ \ \Delta N_{\rm eff}^{\rm BBN}\neq 0\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the different scenarios considered as described in the text.

For \(K_{\rm eff}\gtrsim 1\) the \(X\) boson thermalizes with the neutrinos in the early Universe via decays and inverse decays out of neutrinos2. Thermalization has two important cosmological consequences: Footnote 2: Processes such as \(XX\leftrightarrow\bar{\nu}\nu\) are only effective for \(\lambda_{\nu}\gtrsim 10^{-7}\) and as can be seen from Eq. (5) we will be interested in much smaller couplings. 1. _Non-standard expansion at \(T_{\nu}\lesssim m_{X}\)_ - If the \(X\) boson thermalizes with neutrinos it will represent a non-negligible fraction of the energy density of the Universe. In particular, the \(X\) boson will behave as radiation until \(T_{\nu}\sim m_{X}\), but afterwards it will start redshifting like matter and eventually decay. This leads to a non-standard expansion history during this time, and to an enhanced value of \(N_{\rm eff}\) at the time of recombination (provided that the \(X\) boson decays before recombination). 2. _Suppression of neutrino free streaming_ - The new interactions between neutrinos and the \(X\) particle tend to homogenize the neutrino fluid, suppressing neutrino free streaming. This has important consequences for CMB observations as highlighted in the introduction.
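Eq. (5) likewise transcribes directly into code, which makes it easy to translate between couplings and thermalization histories. The couplings below, which reproduce the three \(K_{\rm eff}\) benchmarks used in Figure 2 for \(m_{X}=1\,\)eV, are our own back-of-the-envelope inversion of Eq. (5) rather than values quoted in the text:

```python
def K_eff(lam_nu, m_X_eV):
    """Eq. (5): effective thermalization parameter (m_X in eV)."""
    return (lam_nu / 4e-12)**2 * (1e3 / m_X_eV)   # keV/m_X = 1e3/m_X[eV]

for K in (100.0, 1.0, 1e-2):
    lam = 4e-12 * (K / 1e3)**0.5                  # invert Eq. (5) at m_X = 1 eV
    print(f"K_eff = {K:>6}: lambda_nu = {lam:.2e} (check: {K_eff(lam, 1.0):.3g})")
```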
_Background Thermodynamics:_ The exact description of the thermodynamic evolution of the Universe in the presence of a light boson interacting with neutrinos can be found by solving the Liouville equation for the distribution function of neutrinos and the \(X\) boson. This is numerically very costly, but Ref. [48] explicitly demonstrated that for scenarios where the \(X\) boson interacts efficiently with neutrinos, namely for \(K_{\rm eff}\gtrsim 10^{-3}\), the thermodynamics can be accurately described by simple ordinary differential equations tracking the temperature and chemical potential of the neutrinos and the new light boson. These equations are explicitly outlined in Appendix I. In the left panel of Figure 2 we highlight the thermally averaged inverse decay rate (\(\langle\Gamma_{\bar{\nu}\nu\to X}\rangle=\delta\rho_{X}/\delta t|_{\bar{\nu}\nu\to X}/\rho_{\nu}\)) normalized to the Hubble parameter for an \(m_{X}=1\,\)eV boson with \(g_{X}=1\). We show the evolution for several values of \(K_{\rm eff}=100,\,1,\,10^{-2}\), representing cases where thermal equilibrium is well established, where thermal equilibrium is only marginally reached, and where the \(X\) boson does not thermalize, respectively. The energy density evolution for the \(X\) particle for each of these cases is highlighted in the right panel of Figure 2. From this figure we can clearly see that for \(K_{\rm eff}\gtrsim 1\) the \(X\) boson thermalizes with neutrinos and its thermodynamic evolution is dictated by thermal equilibrium. On the other hand, for \(K_{\rm eff}<1\) thermal equilibrium is not established, which leads to out-of-equilibrium decays. The evolution at \(T_{\nu}\lesssim m_{X}/3\) will lead in all cases to a non-standard expansion history. For \(K_{\rm eff}\gg 1\) and for \(m_{X}\gtrsim 10\,\)eV thermal equilibrium dictates the value of the neutrino energy density after the \(X\) particle has decayed away. By assuming thermal equilibrium and tracking the number and entropy densities of the neutrinos and \(X\) species (see [48]), we can calculate the minimum values of \(\Delta N_{\rm eff}\) at recombination for the scenarios (a)-(d). These results are outlined in Table 2.

\begin{table} \begin{tabular}{c|c} \hline \hline Model & \(\Delta N_{\rm eff}^{\rm CMB}\) \\ \hline \hline Case (a), \(N_{\rm int}=3\), \(g_{X}=1\) & 0.12 \\ \hline Case (b), \(N_{\rm int}=3\), \(g_{X}=3\) & 0.24 \\ \hline Case (c), \(N_{\rm int}=1\), \(g_{X}=1\) & 0.08 \\ \hline Case (d), \(N_{\rm int}=1\), \(g_{X}=3\) & 0.15 \\ \hline \hline \end{tabular} \end{table} Table 2: Minimum contributions to \(\Delta N_{\rm eff}\) at the time of recombination resulting from the thermalization and subsequent decay of the \(X\) neutrino-philic boson. This corresponds to \(K_{\rm eff}\gg 1\) and \(m_{X}\gtrsim 10\,\)eV.

For \(X\) being a scalar mediator one expects \(\Delta N_{\rm eff}^{\rm CMB}=0.08-0.12\) and for the vector mediator case \(\Delta N_{\rm eff}^{\rm CMB}=0.15-0.24\). We note that these values are similar to Planck's \(1\sigma\) sensitivity to \(N_{\rm eff}\), and thus an accurate treatment of this modified expansion history is needed to analyze the latest data. In the event that a primordial population of bosons already exists at the time of BBN, the process of thermalization at late times, i.e. near recombination, can significantly increase \(\Delta N_{\rm eff}\). For this reason, we differentiate the abundance of the new bosonic species at BBN and recombination using \(\Delta N_{\rm eff}^{\rm BBN}\) and \(\Delta N_{\rm eff}^{\rm CMB}\). We illustrate the evolution of \(\Delta N_{\rm eff}^{\rm CMB}\) assuming a primordial abundance of \(\Delta N_{\rm eff}^{\rm BBN}=0.4\) in Figure 3. Two immediate conclusions can be drawn from this figure. Firstly, the shift in \(\Delta N_{\rm eff}\) between BBN and recombination can greatly exceed the values outlined in Table 2. Secondly, \(\Delta N_{\rm eff}\) increases dramatically for \(K_{\rm eff}\lesssim 1\). This is because the \(X\) boson becomes non-relativistic and its delayed decay leads to a significant increase of the relative energy stored in this species. Consequently, scenarios with \(\lambda_{\nu}\to 0\) and \(\Delta N_{\rm eff}^{\rm BBN}\neq 0\) lead to a drastically distinct phenomenology compared to \(\Lambda\)CDM. Although the effect of neutrino free-streaming suppression is negligible in this limit, these scenarios will be tightly constrained by the increase in \(\Delta N_{\rm eff}\).

_Cosmological Perturbations:_ In order to track the cosmological perturbations of the fluids describing neutrinos and the neutrino-philic boson \(X\), we rely on several approximations. First, we treat the two interacting fluids as coupled, as done in past literature [29; 30]. This implies that we can evolve the perturbations jointly.
In the limit that the interactions are sufficiently strong, this approximation is by definition valid. On the other hand, in the weak interaction limit, we also expect the approximation to be valid, because the perturbation equations in this case are equivalent to those of two decoupled fluids. The second approximation adopted here enters the collision term describing the \(1\leftrightarrow 2\) interactions between the neutrinos and the \(X\) boson. Following Ref. [30] we assume: (1) Maxwell-Boltzmann statistics, (2) that the background momentum dependence of the neutrino distribution is not strongly time dependent, and (3) that the perturbation generated by gravity is universal to all the species involved. We expect all these approximations to hold in our scenario. Finally, we treat neutrinos as being massless. This assumption significantly simplifies the evolution of the neutrino perturbations. Since current Planck data is consistent with massless neutrinos, setting an upper limit on the sum of neutrino masses at the level of \(\sum m_{\nu}<0.12\,\mathrm{eV}\) [4], we believe this approximation does not significantly alter our results. Nevertheless, a more thorough treatment including neutrino masses would be of interest, and thus we leave this for future work. Under the approximations listed above, the equations describing the joint neutrino+boson system in synchronous gauge read [49]: \[\dot{\delta} =-(1+w)\left(\theta+\frac{\dot{h}}{2}\right)-\mathcal{H}\left(c_{s}^{2}-w\right)\delta\,, \tag{6a}\] \[\dot{\theta} =-\mathcal{H}(1-3w)\theta-\frac{\dot{w}}{1+w}\theta+\frac{c_{s}^{2}}{1+w}\,k^{2}\delta-k^{2}\sigma\,, \tag{6b}\] \[\dot{F}_{2} =2\dot{\sigma}=\frac{8}{15}\theta-\frac{3}{5}kF_{3}+\frac{4}{15}\dot{h}+\frac{8}{5}\dot{\eta}-2\,a\,\Gamma_{\mathrm{NF}\,2}\,\sigma\,, \tag{6c}\] \[\dot{F}_{\ell} =\frac{k}{2\ell+1}\left[\ell\,F_{\ell-1}-(\ell+1)F_{\ell+1}\right]-a\,\Gamma_{\mathrm{NF}\,\ell}\,F_{\ell}\,,\,\mathrm{for}\,\ell\geq 3\,. \tag{6d}\] Here, derivatives are taken with respect to conformal time, \(\mathcal{H}\) is the conformal Hubble parameter, \(h\) and \(\eta\) represent the metric perturbations, \(a\) is the scale factor, \(w=p/\rho\) is the equation of state of the system, \(c_{s}^{2}=dp/d\rho\) is the sound speed squared, \(k\) defines the given Fourier mode, \(\delta\) and \(\theta\) are the energy and velocity perturbations respectively, \(F_{\ell}\) represents the \(\ell\) moment of the perturbed distribution function, and the neutrino free streaming suppression rate is given by [30]: \[\Gamma_{\mathrm{NF}\,\ell}=\alpha_{\ell}\,\frac{g_{X}}{4\pi^{2}}\frac{m_{X}T_{\nu}^{3}}{\rho_{X}+\rho_{\nu}}\Gamma(X\to\bar{\nu}\nu)\,\left(\frac{m_{X}}{T_{\nu}}\right)^{4}\,\mathscr{F}\left(\frac{m_{X}}{T_{\nu}}\right)\,.
\tag{7}\] In this expression we neglect the chemical potentials, which we explicitly checked to have a negligible impact on observables. The coefficients are given by [30] \[\alpha_{\ell} \equiv (3\ell^{4}+2\ell^{3}-11\ell^{2}+6\ell)/32\,, \tag{8}\] \[\mathscr{F}(x) \equiv \frac{1}{2}\mathrm{e}^{-x}\left(-1+x-\mathrm{e}^{x}(x^{2}-2)\Gamma(0,x)\right)\,, \tag{9}\] where \(\Gamma(0,x)\) is the incomplete gamma function. At high temperatures \(\Gamma_{\mathrm{NF}}\sim(m_{X}/T_{\nu})^{5}\Gamma(X\to\bar{\nu}\nu)\) and at very small temperatures \(\Gamma_{\mathrm{NF}}\sim e^{-m_{X}/T_{\nu}}\Gamma(X\to\bar{\nu}\nu)\). This neutrino free streaming rate is shown as a function of temperature in dashed lines in the left panel of Figure 2. We can clearly see that at high temperatures the scaling of \(\Gamma_{\mathrm{NF}}\) is different from that of the background rate. Moreover, at \(T_{\nu}\sim m_{X}/3\), where the rate is maximal, it is a factor of \(\sim 10\) smaller than the background equivalent. It is actually easy to see that for \(\Gamma_{\mathrm{NF}}/H>1\), \(F_{\ell}\to 0\) exponentially fast, which strongly reduces neutrino free streaming.

Figure 2: _Left:_ Effective interaction rates at the background level (solid lines) as well as at the perturbation level (dashed lines) for different values of \(K_{\mathrm{eff}}\). The scenario considered consists of all 3 neutrinos interacting with the scalar-type boson. _Right:_ Evolution of the normalized \(X\) boson energy density for the same scenarios as before. For reference, we show in dashed lines the photon energy density.

Figure 3: Evolution of \(N_{\mathrm{eff}}\) for the case of a scalar interacting with three neutrinos with a primordial contribution of \(\Delta N_{\mathrm{eff}}^{\mathrm{BBN}}=0.4\). We notice that the value of \(N_{\mathrm{eff}}\) always increases and that for small \(K_{\mathrm{eff}}\) it increases significantly due to the very out-of-equilibrium decays of the \(X\) particle.
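The kernel and coefficients of Eqs. (8)-(9) are straightforward to evaluate with standard special functions, using that \(\Gamma(0,x)=E_{1}(x)\) for \(x>0\); a minimal sketch:

```python
import numpy as np
from scipy.special import exp1   # exp1(x) = E_1(x) = Gamma(0, x) for x > 0

def alpha_ell(ell):
    """Multipole coefficients of Eq. (8)."""
    return (3 * ell**4 + 2 * ell**3 - 11 * ell**2 + 6 * ell) / 32.0

def F_kernel(x):
    """The kernel of Eq. (9), with x = m_X / T_nu."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.exp(-x) * (-1.0 + x - np.exp(x) * (x**2 - 2.0) * exp1(x))

print([alpha_ell(l) for l in (2, 3, 4)])     # [1.0, 6.75, 23.25]
x = np.array([0.3, 1.0, 3.0, 10.0, 30.0])
print(x**4 * F_kernel(x))   # the combination entering Eq. (7); peaks near x ~ 3,
                            # i.e. T_nu ~ m_X/3, consistent with the discussion above
```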
_Numerical Implementation in CLASS:_ We track the impact of the neutrino-\(X\) interactions on the CMB power spectrum by modifying the cosmological Boltzmann code CLASS [45; 46]. The code is available on GitHub. It can also help to study the thermodynamic evolution of different BSM scenarios. In the left panel of Figure 4 we show the evolution of the neutrino anisotropic stress associated with a mode of \(k=0.1\,\mathrm{Mpc}^{-1}\) as a function of redshift. We choose \(k=0.1\,\mathrm{Mpc}^{-1}\) because it is the largest wave number well probed by CMB observations. The evolution for different, smaller wave numbers is shown in Figure S8 of the appendix. From Figure 4 we can clearly see how the decays and inverse decays of \(X\) reduce the neutrino anisotropic stress. In the right panel of the same figure we also show the relative impact on the temperature power spectrum \(C_{\ell}^{TT}\) compared to \(\Lambda\)CDM. The impact on the observable \(C_{\ell}^{TT}\) spectrum can go well above the level of the \(1\sigma\) relative error bars, as indicated by the grey band.

Figure 4: _Left panel:_ Evolution of the neutrino anisotropic stress for a mode of \(k=0.1\,\mathrm{Mpc}^{-1}\) for \(\Lambda\)CDM and a scenario with \(N_{\rm int}=3\) neutrinos interacting with a scalar with different coupling strengths. _Right panel:_ Relative difference of the TT power spectrum in a majoron cosmology with respect to \(\Lambda\)CDM as a function of multipole \(\ell\). We show for reference the size of the Planck error bars. The comparison has been made with fixed standard cosmological parameters. We can clearly appreciate how the strong damping of the neutrino anisotropic stress in the left panel is directly reflected in a large change in the power spectrum.

In Figure 5 we show the CMB temperature power spectrum for different values of \(m_{X}\), taking \(N_{\rm int}=3\), \(g_{X}=1\), and fixing \(K_{\rm eff}=10^{4}\). This corresponds to a scenario where the \(X\) particle interacts very efficiently with neutrinos, and thermal equilibrium is reached at \(T\sim 30\times m_{X}\). From this plot we can appreciate a number of interesting features: firstly, we notice that for \(m_{X}\lesssim 0.1\,\mathrm{eV}\) the impact on the CMB power spectrum is not significant. This is because the non-standard expansion history occurs after recombination, and owing to the high temperature suppression in the collision term, neutrino free streaming is not significantly altered before recombination. We notice that the most significant effect is for bosons with \(1\,\mathrm{eV}\lesssim m_{X}\lesssim 100\,\mathrm{eV}\). This is because the interaction rate of these bosons is maximal during the window of redshift to which the CMB is sensitive, i.e. \(2000\lesssim z\lesssim 10^{5}\). Finally, for the case with a heavy mediator, \(m_{X}=10\,\mathrm{keV}\), the boson cannot alter late-time free streaming, since it will already have decayed at higher redshift. This means that the observed effect purely corresponds to a shift in \(N_{\mathrm{eff}}\) of \(0.12\) (see Table 2).

Figure 5: Fractional difference of the TT power spectrum with respect to \(\Lambda\)CDM for the case of a scalar particle interacting efficiently with neutrinos, \(K_{\mathrm{eff}}=10^{4}\), see Eq. (5). We show the results for different values of \(m_{X}\).

## IV CMB Data Analysis and Results

_Cosmological Data and Analysis:_ We perform MCMC analyses with MontePython [50; 51] on each of the models listed in Table 1. For the likelihood we use Planck 2018 + BAO data [4; 52]. In particular, this includes the temperature and polarization power spectra, as well as the lensing likelihood, from Planck [52], and the 6dF galaxy survey [53], the MGS galaxy sample of SDSS [54], and the CMASS and LOWZ galaxy samples of BOSS DR12 [55; 56; 57; 58]. In order to investigate the extent to which these scenarios could explain or ameliorate the Hubble tension we perform additional MCMC analyses including a Gaussian likelihood on \(H_{0}=73.30\pm 1.04\,\mathrm{km/s/Mpc}\) [59]. These results are used to replicate the three statistical criteria (described in detail below) introduced in the '\(H_{0}\) Olympics' [43]. This comparison allows us to establish the relative success and failure of the models of Table 1 in relation to other proposed solutions. For the standard cosmological parameters and the nuisance parameters of the Planck likelihood we use the same priors as the Planck collaboration. For the mass and coupling of the neutrino-philic bosons we adopt log priors over the range: \[\log_{10}(\lambda_{\nu}) \in \left[-15,-6\right] \tag{10}\] \[\log_{10}(m_{X}/\mathrm{eV}) \in \left[-1.0,3.5\right]. \tag{11}\] The lower bound on \(m_{X}\) corresponds to twice the minimum mass of the heaviest neutrino, \(2\sqrt{|\Delta m^{2}_{\mathrm{atm}}|}\simeq 0.1\,\mathrm{eV}\). For the case of the \(X\) boson interacting with \(N_{\mathrm{int}}<3\) neutrino families, the prior range is extended to \(\log_{10}(m_{X}/\mathrm{eV})\in\left[-4,3.5\right]\) as one of the neutrinos could be much lighter and thus open up parameter space for lighter \(X\) bosons. The lower limit in this case is chosen to be sufficiently small such that the interaction rate is never efficient enough to thermalize the \(X\) boson. We also introduce an upper limit at \(\lambda_{\nu}=10^{-6}\). This is because at larger couplings two-to-two processes (\(XX\leftrightarrow\nu\bar{\nu}\)), which are not captured by our treatment, begin to become relevant. On the other hand, the lower limit in the coupling is chosen to be sufficiently small that the \(X\) boson is effectively fully decoupled from the neutrino sector. In this limit, \(\lambda_{\nu}\to 0\), \(\Lambda\)CDM is recovered.
At sufficiently large masses, the \(X\) boson decays at high redshift, producing a shift in \(\Delta N_{\mathrm{eff}}\) without altering neutrino free streaming; our upper bound on the mass is set by the fact that this effect is the same for \(m_{X}\gtrsim 1\,\mathrm{keV}\) (assuming a sufficiently large coupling such that the bosons thermalize). Finally, in some of the scenarios we also allow for a non-zero initial abundance of the \(X\) particle. We parameterize it by \(\Delta N_{\mathrm{eff}}^{\mathrm{BBN}}\) and adopt a flat, linear prior over the range \[\Delta N_{\mathrm{eff}}^{\mathrm{BBN}}\in\left[0,0.7\right]. \tag{12}\] Performing the MCMC analysis with the likelihoods and priors as described above leads to the result of Figure 1, which combines cases (a)-(d) of Table 1. These runs contain a total of \(N\sim 2\times 10^{6}\) samples. The \(3\sigma\) exclusion region is obtained by binning the points in \(\log_{10}(m_{X}/\mathrm{eV})\), and in each bin determining the coupling \(\lambda_{\nu}\) for which 99.7% of the samples have \(\lambda_{\nu}\leq\lambda_{\mathrm{limit}}\). A particularly interesting result is obtained for scenario (c), i.e. the scalar boson \(X\) which interacts with \(N_{\mathrm{int}}=1\) neutrino family. In this scenario, we find a slight statistical preference for non-zero neutrino interactions; we note, however, that the \(\Lambda\)CDM limit is also favored at the \(1\sigma\) level, implying the statistical preference for this best-fit region is not remarkably significant. This region can be seen more clearly in Figure S10, where the Monte Carlo samples are explicitly shown. This best-fit region of parameter space roughly corresponds to: \[\Gamma_{\mathrm{NF}}/H(z)=1\ \mathrm{at}\,z=1100-3500\,, \tag{13}\] namely, this preferred region of parameter space corresponds to scenarios where the neutrino anisotropic stress starts to be damped right before recombination, \(1100\lesssim z\lesssim 3500\). This is highlighted by the red region labelled 'best fit region' in Figure 1. It has been shown in [32; 33; 34] that models with neutrino \(X\)-boson interactions have the potential to significantly ameliorate the Hubble tension for two main reasons: 1) the \(X\)-neutrino interactions can lead to a non-trivial enhancement of the expansion history near recombination, 2) there exists a level of degeneracy between the impact of the damping of neutrino free streaming and an enhanced value of \(N_{\mathrm{eff}}\) which allows for additional radiation without spoiling the fit to the data from Planck. In particular, the detailed statistical analysis of the '\(H_{0}\) Olympics' [43] awarded the model a silver medal. However, as mentioned above, the original implementation of this model relied on numerous approximations. For this reason, we revisit the three '\(H_{0}\) Olympics' criteria using the improved analysis developed here. These criteria include: 1. The Gaussian Tension, given by \[\frac{\overline{H_{0}}_{\mathcal{C}}-\overline{H_{0}}_{\mathrm{SH}_{0}\mathrm{ES}}}{\sqrt{\sigma_{\mathcal{C}}^{2}+\sigma_{\mathrm{SH}_{0}\mathrm{ES}}^{2}}}\,, \tag{14}\] where \(\overline{H_{0}}_{i}\) and \(\sigma_{i}\) are the central value and uncertainty of the inferred value of \(H_{0}\). The index \(i=\{\mathcal{C},\mathrm{SH}_{0}\mathrm{ES}\}\) refers to the cosmologically inferred value (using Planck and BAO) or the value measured by \(\mathrm{SH}_{0}\mathrm{ES}\), \(H_{0}=73.3\pm 1.04\,\mathrm{km/s/Mpc}\).
2. The \(Q_{\mathrm{DMAP}}\) (difference of the maximum a posteriori), given by \[\sqrt{\chi^{2}_{\mathrm{min},\mathcal{C}+\mathrm{SH}_{0}\mathrm{ES}}-\chi^{2}_{\mathrm{min},\mathcal{C}}}\,, \tag{15}\] where the minimum \(\chi^{2}\) is evaluated using a likelihood that does (\(\mathcal{C}+\text{SH}_{0}\text{ES}\)) and does not (\(\mathcal{C}\)) contain the SH\({}_{0}\)ES likelihood. 3. The Akaike Information Criterion (AIC), given by \[\Delta\text{AIC}=\chi^{2}_{\text{min},\mathcal{M}}-\chi^{2}_{\text{min},\Lambda\text{CDM}}+2(N_{\mathcal{M}}-N_{\Lambda\text{CDM}})\,, \tag{16}\] where \(\mathcal{M}\) refers to the model under consideration and \(N\) corresponds to the number of free parameters of that model. Here, the \(\chi^{2}_{\text{min}}\) values are obtained using a likelihood that includes the Gaussian contribution from SH\({}_{0}\)ES. Each criterion is intended to address a slightly different question; we refer the interested reader to [43] for a broader overview of the benefits and drawbacks of each. The results of each model are summarized in Table 3.

\begin{table} \begin{tabular}{c|c|c|c} \hline \hline Model/Metric & Gaussian Tension & \(Q_{\text{DMAP}}\) & \(\Delta\text{AIC}\) \\ \hline \hline \(N_{\text{int}}=3\), scalar & 3.71 & 3.20 & 0.67 \\ \hline \(N_{\text{int}}=1\), scalar & 3.73 & 4.10 & 2.22 \\ \hline \(N_{\text{int}}=3\), vector & 3.72 & 3.71 & 2.44 \\ \hline Dark Radiation & 3.76 & 3.96 & -1.0 \\ \hline \(\Lambda\)CDM & 4.55 & 4.56 & 0 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of tension metrics for three different models, a simple model with free streaming dark radiation and \(\Lambda\)CDM. Note that the tension is slightly below \(5\sigma\) in \(\Lambda\)CDM because we are considering purely massless neutrinos for simplicity.

There we also show for comparison the \(\Lambda\)CDM result and the simple scenario containing free streaming dark radiation as parameterized by \(\Delta N_{\text{eff}}\). Interestingly, none of the models investigated show a significant reduction in the cosmological tension, with the most successful of them only reducing it to the \(3.2\sigma\) level (in comparison with \(4.5\sigma\) for \(\Lambda\)CDM). The result obtained here represents a degradation compared to what was found in previous works [32; 33; 34]. The main reason for this difference is the refined collision term included here, see Eq. (7), which reduces the damping of neutrino free streaming with respect to the approximation of [32; 33; 34] at \(T\gg m_{X}\). In particular, the full collision term helps to break the partial degeneracy between the damping of the neutrino free streaming at high redshift and the enhancement of \(\Delta N_{\text{eff}}\).
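For concreteness, the three tension metrics above can be evaluated in a few lines; the inputs in the example calls below are placeholders, not the \(\chi^{2}\) values underlying Table 3:

```python
import numpy as np

def gaussian_tension(H0_C, sig_C, H0_S=73.3, sig_S=1.04):
    """Magnitude of Eq. (14), with the SH0ES measurement as the reference."""
    return abs(H0_C - H0_S) / np.sqrt(sig_C**2 + sig_S**2)

def Q_DMAP(chi2_with_SH0ES, chi2_without):
    """Eq. (15): difference of the maximum a posteriori."""
    return np.sqrt(chi2_with_SH0ES - chi2_without)

def delta_AIC(chi2_model, chi2_LCDM, n_model, n_LCDM):
    """Eq. (16): Akaike information criterion difference."""
    return chi2_model - chi2_LCDM + 2 * (n_model - n_LCDM)

print(f"{gaussian_tension(68.0, 0.5):.2f} sigma")          # 4.59
print(f"{Q_DMAP(3820.0, 3800.0):.2f} sigma equivalent")    # 4.47
print(f"{delta_AIC(3797.0, 3800.0, 8, 6):+.1f}")           # +1.0
```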
## V Additional constraints

The models we have discussed in the main text are subject to additional constraints coming from other cosmological probes, emission from astrophysical objects, and laboratory searches. In this section we briefly highlight the origin of each constraint shown in Figure 1. _Laboratory Constraints:_ In the two benchmark particle physics models we consider, see Eqns. (1)-(2), the coupling of the new boson to neutrinos is constrained by a different set of laboratory constraints. In the case of \(X\) being identified as a light scalar, its coupling to neutrinos can give rise to double beta decay along with the emission of a scalar. The latest constraints on \(\lambda_{\nu}\) from the non-observation of such a process from the EXO-200 experiment read: \(\lambda_{\nu}<0.9\times 10^{-5}\) [60]. In the case of \(X\) being a light \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson, we adopt a nominal value of kinetic mixing induced at 1-loop by muons and taus, \(\epsilon\simeq-g_{\mu-\tau}/70\) [61]. The presence of this mixing can in turn change the scattering rate of neutrinos and electrons, which has been precisely measured by Borexino [62]. For \(m_{X}\lesssim\text{MeV}\), the coupling is constrained to be \(g_{\mu-\tau}<4\times 10^{-5}\) [63; 64; 65]. Both the EXO-200 and Borexino bounds are shown in Figure 1. _Supernova Bounds:_ Despite being very weakly coupled, the neutrino-philic bosons considered in this work can be copiously produced in extreme astrophysical environments such as supernovae. If so, these particles can modify the energy and temporal distributions of the neutrino flux arriving on Earth. In particular, in the majoron model the neutrino coalescence \(\bar{\nu}\nu\to\phi\) can produce a delayed high-energy neutrino signal [66; 67; 68; 69]. The non-observation of such a signature in the measured neutrino flux from SN1987A [70; 71; 72] leads to the following constraint [66]: \[5\times 10^{-10}<\lambda_{\nu}\frac{m_{X}}{\text{MeV}}\sqrt{g_{X}}<1.3\times 10^{-7}\,, \tag{17}\] for \(10\,\text{keV}\lesssim m_{X}\lesssim 1\,\text{MeV}\). On the other hand, the high densities present at supernovae induce flavour and helicity dependent effective neutrino masses. Therefore, for masses \(m_{X}\lesssim 10\,\text{keV}\), the process \(\bar{\nu}\to\nu X\) is kinematically allowed [73; 74]. Including these processes one finds constraints at the level of \[5\times 10^{-7}\lesssim\lambda_{\nu}\lesssim 3\times 10^{-5}\,. \tag{18}\] The SN1987A bounds for a \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson were derived in [42; 75]. The emission of gauge bosons of \(m_{Z^{\prime}}<\text{MeV}\) is dominated by semi-Compton processes \(\mu\gamma\to\mu Z^{\prime}\) and the constraint imposed by the observation of the SN1987A signal is at the level of \(g_{\mu-\tau}\lesssim 10^{-9}\) [75]. _Star Cooling:_ A light \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson with the canonical kinetic mixing interacts with charged matter, and thus can be produced in stars. Should these particles be produced, they can free stream out of the star, carrying away a sizeable amount of energy. Consequently, strong constraints can be derived by requiring that the stellar cooling rate is not significantly altered. Recasting the limits derived in [76] (see also [77] and [78]) using the nominal kinetic mixing \(\epsilon=-g_{\mu-\tau}/70\) yields the bound in Figure 1, labelled 'Stars'. _BBN Bounds:_ The production of new relativistic particles prior to BBN will enhance the value of \(\Delta N_{\rm eff}\).
This modifies the expansion rate and in turn the prediction of the primordial element abundances.

\begin{table} \begin{tabular}{c|c|c|c} \hline \hline Model/Metric & Gaussian Tension & \(Q_{\text{DMAP}}\) & \(\Delta\text{AIC}\) \\ \hline \hline \(N_{\text{int}}=3\), scalar & 3.71 & 3.20 & 0.67 \\ \hline \(N_{\text{int}}=1\), scalar & 3.73 & 4.10 & 2.22 \\ \hline \(N_{\text{int}}=3\), vector & 3.72 & 3.71 & 2.44 \\ \hline Dark Radiation & 3.76 & 3.96 & -1.0 \\ \hline \(\Lambda\)CDM & 4.55 & 4.56 & 0 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of tension metrics for three different models, a simple model with free-streaming dark radiation, and \(\Lambda\)CDM. Note that the tension is slightly below \(5\sigma\) in \(\Lambda\)CDM because we are considering purely massless neutrinos for simplicity.

Current observations of the primordial abundances are consistent with \(\Delta N_{\rm eff}\sim 0\). In particular, \(\Delta N_{\rm eff}^{\rm BBN}\leq 0.41\) at \(2\sigma\)[79; 80], and thus large deviations from this can yield strong constraints on the interactions with new particles. Limits were recently derived on the majoron by identifying the couplings for which \(\bar{\nu}\nu\to\phi\) leads to a shift in \(\Delta N_{\rm eff}\) at the level of 0.5 [32]. Comparable constraints were derived on the \(\mu-\tau\) gauge boson from the production of a primordial population via \(\mu^{+}\mu^{-}\to Z^{\prime}\gamma\) processes [42]. These constraints are shown in Figure 1 with the label 'BBN'.

_CMB bounds on out-of-equilibrium decays:_ The thermodynamic treatment of the neutrino-philic bosons used in this study is only capable of accounting for moderate departures from thermal equilibrium, namely for \(K_{\rm eff}\gtrsim 10^{-3}\)[48]. In the absence of a primordial abundance, the region of parameter space with \(K_{\rm eff}\lesssim 10^{-3}\) is irrelevant: \(K_{\rm eff}\) controls the production of \(X\) particles, and for such small \(K_{\rm eff}\) the energy density of \(X\) particles is negligible. However, even a small primordial abundance in the weakly coupled limit can yield strong observable consequences. The reason is that the primordial species can become non-relativistic prior to matter-radiation equality, dramatically increasing the relative energy density stored in this species before it undergoes an out-of-equilibrium decay into neutrinos. The detailed treatment of this scenario is rather intricate (see e.g. [81; 82]), and a full parameter space exploration is still lacking. In order to illustrate where these constraints would lie, we assume a primordial abundance at BBN of \(\Delta N_{\rm eff}|_{\rm BBN}=g_{X}\times 0.027\) (corresponding to the minimal value predicted for a boson that was in thermal equilibrium at temperatures above the electroweak phase transition) and derive an approximate constraint by requiring that \(N_{\rm eff}<4\) at recombination. We did this by tracking the evolution of the \(X\) boson energy density, allowing for out-of-equilibrium decays and neglecting inverse decays (which are highly inefficient in this region of parameter space). In Figure 1 this constraint is indicated by the pink region labelled 'out of equilibrium decay' (and would exclude couplings _below_ this line).
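To illustrate the kind of computation involved, the following toy Python sketch integrates the energy density of a non-relativistic species decaying into neutrinos while neglecting inverse decays. Radiation domination, arbitrary units, and a constant lab-frame decay rate are all simplifying assumptions of ours, not the treatment used for Figure 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(ln_a, y, gamma):
    """d(rho)/d(ln a) for a non-relativistic X decaying into relativistic
    neutrinos; H ~ a^-2 mimics radiation domination (arbitrary units)."""
    rho_x, rho_nu = y
    H = np.exp(-2.0 * ln_a)
    return [-3.0 * rho_x - (gamma / H) * rho_x,     # dilution + decay
            -4.0 * rho_nu + (gamma / H) * rho_x]    # products are radiation

# Toy initial conditions: a small X abundance on top of the neutrino bath
sol = solve_ivp(rhs, [np.log(1e-3), 0.0], [1e-2, 1.0], args=(5.0,), rtol=1e-8)
rho_x, rho_nu = sol.y[:, -1]  # energy injected into neutrinos raises N_eff
```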
## VI Summary, Conclusions and Outlook

In this work, we have presented an improved treatment of the cosmological evolution of weakly coupled neutrino-philic bosons with masses in the \(\mathcal{O}(\rm eV)\) range. This work represents a significant improvement upon previous analyses [32; 33], which focused exclusively on the singlet majoron model and relied on a number of simplifying approximations. Specifically, in this manuscript we present three updates:

1. We have incorporated the thermodynamic evolution tracing the out-of-equilibrium thermalization of the neutrino-philic bosons directly in the Boltzmann solver CLASS. This allows for a more accurate and careful treatment of the neutrino-boson interactions across a wide array of parameter space. The developed code is made public on GitHub.

2. We have incorporated a recently derived collision term [29; 30], which captures the impact of these interactions on the damping of the neutrino anisotropic stress.

3. We have generalized this analysis to include interactions with one, two, or three neutrino species, and both vector and scalar bosons.

Our fiducial limits are recast in terms of the singlet majoron model and the \(U(1)_{L_{\mu}-L_{\tau}}\) gauge boson, but these limits can be easily interpreted in the context of many other neutrino-philic boson models. As shown in Figure 1, the limits derived using a combination of CMB and BAO data provide the strongest constraints to date across a range of masses near the \(\mathcal{O}(\rm eV)\) scale.

We have also revisited the extent to which neutrino-philic bosons can resolve the Hubble tension. We show that the improved collision term, which is strongly suppressed in comparison to the previous approximations at \(T\gg m_{X}\), significantly degrades the extent to which neutrino-philic bosons can ameliorate the tension. In the case of the singlet majoron model, there exists a slight preference in the data for non-zero majoron-neutrino interactions (at the \(\sim 1\sigma\) level). This region of parameter space is expected to be fully probed in the near future by LiteBIRD [83] thanks to a cosmic-variance-limited measurement of the large-scale EE polarization power spectrum. Upcoming observations from the Simons Observatory [84] are expected to measure \(N_{\rm eff}\) with a \(1\sigma\) precision of 0.05. This will be an improvement by a factor of 4 compared with Planck and will significantly improve the sensitivity to bosons with masses \(1\,\rm eV\lesssim m_{X}\lesssim 1\,\rm MeV\) that thermalize in the early Universe with neutrinos. Both of these experiments are fully funded and expected to probe these regions of parameter space within a decade.

###### Acknowledgements.

SJW acknowledges support through the program Ramon y Cajal (RYC2021-030893-I) of the Spanish Ministry of Science and Innovation, and through the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 864035 - Undark) and the Netherlands eScience Center, grant number ETEC.2019.018. The work of SS received the support of a fellowship from "la Caixa" Foundation (ID 100010434) with fellowship code LCF/BQ/DI19/11730034. SS also thanks the CERN theory group, the Lawrence Berkeley National Laboratory and the Berkeley Center for Theoretical Physics for hospitality. We gratefully acknowledge the computer resources at Artemisa, funded by the European Union ERDF and Comunitat Valenciana, as well as the technical support provided by the Instituto de Fisica Corpuscular, IFIC (CSIC-UV).
2302.05453
INSPIRE: INvestigating Stellar Population In RElics III. Second data release (DR2): testing the systematics on the stellar velocity dispersion
This is the second data release (DR2) of the INvestigating Stellar Population In RElics (INSPIRE) project, comprising 21 new systems with observations completed before March 2022. For each system, we release four one-dimensional (1D) spectra to the ESO Science Archive, one spectrum for each arm of the X-Shooter spectrograph. In this paper, we focus on the line-of-sight velocity distribution, measuring integrated stellar velocity dispersions from the spectra, and assessing their robustness and the associated uncertainties. For each of the 21 new systems, we systematically investigated the effect of the parameters and set-ups of the full spectral fitting on the stellar velocity dispersion ($\sigma$) measurements. In particular, we tested how $\sigma$ changes when several parameters of the fit as well as the resolution and spectral coverage of the input spectra are varied. We found that the effect that causes the largest systematic uncertainties on $\sigma$ is the wavelength range used for the fit, especially for spectra with a lower signal-to-noise ratio (S/N $\leq$ 30). When using blue wavelengths (UVB arm) one generally underestimates the velocity dispersion (by $\sim$15 km/s). The values obtained from the near-IR (NIR) arm present a larger scatter because the quality of the spectra is lower. We finally compared our results with those in literature, finding a very good agreement overall. Joining results obtained in DR1 with those presented here, INSPIRE contains 40 ultra-compact massive galaxies, corresponding to 75% of the whole survey. By plotting these systems in a stellar mass-velocity dispersion diagram, we identify at least four highly reliable relic candidates among the new systems. Their velocity dispersion is larger than that of normal-sized galaxies of similar stellar mass.
G. D'Ago, C. Spiniello, L. Coccato, C. Tortora, F. La Barbera, M. Arnaboldi, D. Bevacqua, A. Ferré-Mateu, A. Gallazzi, J. Hartke, L. K. Hunt, I. Martín-Navarro, N. R. Napolitano, C. Pulsoni, M. Radovich, P. Saracco, D. Scognamiglio, S. Zibetti
2023-02-10T19:00:00Z
http://arxiv.org/abs/2302.05453v1
# INSPIRE: INvestigating Stellar Population In RElics III. Second data release (DR2)

###### Abstract

Context: The project called INvestigating Stellar Population In RElics (INSPIRE) is based on VLT/X-Shooter data from the homonymous on-going ESO Large Program. It targets 52 ultra-compact massive galaxies at \(0.1<z<0.5\) with the goal of constraining their kinematics and stellar population properties in great detail and of analysing their relic nature.

Aims: This is the second INSPIRE data release (DR2), comprising 21 new systems with observations completed before March 2022. For each system, we release four one-dimensional (1D) spectra to the ESO Science Archive, one spectrum for each arm of the X-Shooter spectrograph, at their original resolution. We also release a combined and smoothed spectrum with a full width at half maximum resolution of \(2.51\,\AA\). In this paper, we focus on the line-of-sight velocity distribution, measuring integrated stellar velocity dispersions from the spectra, and assessing their robustness and the associated uncertainties.

Methods: For each of the 21 new systems, we systematically investigated the effect of the parameters and set-ups of the full spectral fitting on the stellar velocity dispersion (\(\sigma\)) measurements. In particular, we tested how \(\sigma\) changes when several parameters of the fit as well as the resolution and spectral coverage of the input spectra are varied.

Results: We found that the effect that causes the largest systematic uncertainties on \(\sigma\) is the wavelength range used for the fit, especially for spectra with a lower signal-to-noise ratio (S/N \(\leq 30\)). When using blue wavelengths (UVB arm), one generally underestimates the velocity dispersion (by \(\sim 15\) km/s). The values obtained from the near-IR (NIR) arm present a larger scatter because the quality of the spectra is lower. We finally compared our results with those in the literature, finding a very good agreement overall.

Conclusions: Joining the results obtained in DR1 with those presented here, INSPIRE contains 40 ultra-compact massive galaxies, corresponding to 75% of the whole survey. By plotting these systems in a stellar mass-velocity dispersion diagram, we identify at least four highly reliable relic candidates among the new systems. Their velocity dispersion is larger than that of normal-sized galaxies of similar stellar mass.

Key words: Galaxies: evolution - Galaxies: formation - Galaxies: elliptical and lenticular, cD - Galaxies: kinematics and dynamics - Galaxies: stellar content - Galaxies: star formation

## 1 Introduction

The most massive and oldest galaxies in the Universe (early-type galaxies, ETGs) play a fundamental role in the process of structure formation because they account for more than half of the total mass in the Universe (Blumenthal et al. 1984). However, the details of their formation and evolution are a contentious question in present-day extragalactic astrophysics and cosmology. Recently, a two-phase formation scenario has been proposed to explain their mass assembly (Naab et al. 2009; Oser et al. 2010, 2012; Hilz et al. 2013; Rodriguez-Gomez et al. 2016). A few billion years after the Big Bang, intense and fast star formation episodes (\(\tau\sim 100\) Myr, star formation rate \(\geq 1000\,M_{\odot}\,\mathrm{yr}^{-1}\)) created ultra-compact and massive objects. When these galaxies are observed to be passive, they are usually referred to as red nuggets (Damjanov et al., 2011).
Subsequently, a second, longer phase, dominated by accretion, mergers, and gas inflows, drove structural evolution and size growth, shaping the local giant elliptical galaxies we observe today (Daddi et al., 2005; van Dokkum et al., 2008; Buitrago et al., 2018). The stars that formed during the first phase then become the core of today's ETGs, occupying their innermost regions (Ferre-Mateu et al., 2019; Pulsoni et al., 2021; Barbosa et al., 2021). A fraction of these red nuggets could also end up in the bulges of massive nearby spiral galaxies (de la Rosa et al., 2016; Costantini et al., 2021). However, the exact formation mechanism of red nuggets and the physics that transforms them into today's massive galaxies are far from clear, and these questions have a significance well beyond the context of the size evolution of quiescent galaxies. The dense centres of local massive galaxies host the most massive black holes in the Universe (Kormendy & Ho, 2013; Ferre-Mateu et al., 2015). Their chemical enrichment history is extremely different from that of the Milky Way (Worthey, 1994), and their stars may have formed with a bottom-heavy (i.e. dwarf-rich) stellar initial mass function (IMF; Conroy et al., 2013; Martin-Navarro et al., 2015; Sarzi et al., 2018; Barbosa et al., 2021). In local ETGs, the material accreted in the second phase unfortunately contaminates the in situ (i.e., first-phase) component that encodes the information about high-redshift baryonic processes, affecting its spatial and orbital distributions. At the same time, obtaining spectra of high-\(z\) red nuggets with a signal-to-noise ratio (S/N) that is high enough to perform detailed stellar population analyses would require prohibitive integration times with current instrumentation. Because of the stochastic nature of mergers, a small fraction of red nuggets fortunately survived without experiencing any interactions. They are now massive compact relic galaxies (Trujillo et al., 2009). Relics are thus the only objects that allow us to study the physical processes that shaped the mass assembly of galaxies in the high-\(z\) universe with the same amount of detail achievable in the nearby Universe. So far, only three relics have been found and fully characterised in the local Universe: NGC1277 (Trujillo et al., 2014; Martin-Navarro et al., 2015; Beasley et al., 2018), Mrk 1216, and PGC 032873 (Ferre-Mateu et al., 2017). These three objects all show a very quick (time-scales \(<\)1 Gyr) high-\(z\) star formation and thus are consistent with having stars with a very old mean mass-weighted age (\(\sim 13\) Gyr, almost as old as the Universe). Moreover, their morphology, kinematics, and density profiles perfectly resemble those of \(z>2\) red nuggets (Ferre-Mateu et al., 2017). In an effort to enlarge the number of confirmed relics and also detect them at higher redshifts, we started the project called INvestigating Stellar Population In RElics (INSPIRE), which aims at building the first large catalogue of relics at \(0.1<z<0.5\) (Spiniello et al. 2021a,b). The relatively high-S/N (\(\sim 30\)) and wide-wavelength (from UVB to NIR) spectra from the X-Shooter spectrograph (XSH; Vernet et al., 2011) now allow us to infer the stellar kinematics and population properties (age, metallicity, elemental abundance, and low-mass end of the IMF slope) of a sample of ultra-compact massive galaxies (UCMGs) at \(0.1<z<0.5\) with stellar masses \(M_{\star}>6\times 10^{10}M_{\odot}\) and effective radii R\({}_{\rm e}\)\(<\)2 kpc.
This is the second of the three planned yearly data releases, in which we analyse the spectra of 21 systems whose observations were completed before March 2022\({}^{1}\). Here, we focus on analysing the line-of-sight velocity distribution (LOSVD) of the 21 new systems and, in particular, on measuring the integrated stellar velocity dispersion (\(\sigma\)). The latter is often used as a proxy for the total mass of the galaxy, and it has been used in the literature to select relic candidates (e.g. Saulder et al., 2015). Moreover, in DR1, we found that relics, and especially extreme relics, have a larger integrated \(\sigma\) than non-relics and normal-sized galaxies of similar stellar masses. Hence, we need to ensure that, based on the quality and wavelength range of our spectra, we can robustly measure the integrated stellar velocity dispersion values from them. After demonstrating that \(\sigma\) can be securely inferred from medium-S/N spectra, it can be used as a selection criterion, together with the R\({}_{\rm e}\) and stellar mass, to pre-select good relic candidates from on-going and future wide-sky surveys such as the Galaxy Evolution Survey with the 4-metre Multi-Object Spectrograph Telescope (4MOST; de Jong et al., 2019). This paper therefore presents a systematic and quantitative analysis of all the parameters and code set-ups that might bias and influence the stellar velocity dispersion measurements. In addition, it presents the first effort to show that the stellar velocity dispersion is not higher for all compact objects, as was proposed by Saulder et al. (2015), for example, but only for a sub-sample of them. These probably are the most reliable relic candidates.

Footnote 1: The data are publicly available through the ESO Science Archive, https://archive.eso.org/scienceportal/home?data_collection=INSPIRE.

The paper is organised as follows. The sample and current status of the observations, as well as previous results obtained with INSPIRE, are described in Sect. 2. The data reduction and analysis, including the extraction of the 1D spectra, the telluric correction on the visual (VIS) and NIR arms, and the combination of the three arms, are described in Sect. 3, and the kinematical analysis and results are presented in Sect. 4. In Sect. 5 we compare velocity dispersion values obtained from X-Shooter spectra to those inferred from GAMA\({}^{2}\) spectra for ten objects in common. In Sect. 6 we present the stellar mass-velocity dispersion plot for the DR1 and DR2 INSPIRE objects. Finally, we present our conclusions and outline the future development of INSPIRE in Sect. 7. Throughout the paper, we assume a standard \(\Lambda\)-cold dark matter (\(\Lambda\)CDM) cosmology with \(H_{0}=69.6\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm vac}=0.714\), and \(\Omega_{\rm M}=0.286\) (Bennett et al., 2014).

Footnote 2: The Galaxy And Mass Assembly survey
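For reproducibility, the adopted cosmology can be instantiated in a couple of lines with astropy; the snippet and the example conversion below are ours, merely illustrating the numbers quoted above:

```python
from astropy.cosmology import FlatLambdaCDM

# Flat LambdaCDM with H0 = 69.6 km/s/Mpc and Omega_M = 0.286 (Bennett et
# al. 2014); flatness then implies Omega_vac ~ 0.714, as quoted above.
cosmo = FlatLambdaCDM(H0=69.6, Om0=0.286)

# Angular scale at the sample's typical redshift, z ~ 0.28:
scale = cosmo.kpc_proper_per_arcmin(0.28).value / 60.0  # kpc per arcsec
print(f"{scale:.2f} kpc/arcsec")  # ~4.3: an Re of 0.3'' is ~1.3 kpc
```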
## 2 INSPIRE project status and sample

INSPIRE is based on an on-going ESO Large Program (LP, ID: 1104.B-0370, PI: C. Spiniello) that started in P104 (October 2019) with the aim of spectroscopically following up 52 UCMGs at redshift \(0.1<z<0.5\) with the XSH spectrograph (Vernet et al., 2011). The sample was collected through a dedicated observational effort to find and spectroscopically confirm as many UCMGs in the Kilo Degree Survey (KiDS; Kuijken, 2011) as possible (de Jong et al., 2015; Kuijken et al., 2019). The results of this census for UCMGs were presented in Tortora et al. (2016, 2018) and Scognamiglio et al. (2020), hereafter T16, T18, and S20. These objects are the perfect candidates to host very old stars, as their \(g-i\) broad-band colours are compatible with the colour of a stellar population with an integrated age \(\geq 8\) Gyr (considering a solar, super-solar, and sub-solar metallicity; see Fig. 1 in INSPIRE DR1). They also all have remarkably small sizes (R\({}_{\rm e}<2\) kpc) and high stellar masses (M\({}_{\star}>6\times 10^{10}\)M\({}_{\odot}\)). The redshift window covered by the sample, shown in Fig. 1, is \(0.1<z<0.5\), with a peak at \(z\sim 0.28\). At the time of writing, 148 of the total 154 hours have been delivered by ESO. Forty systems were completely observed by the end of the ESO Period 108 (31 March 2022). Of these, 19 have been publicly released as part of the INSPIRE DR1, whereas the remaining 21 constitute the data release presented here, which is publicly released via the ESO Phase 3 Science Archive\({}^{3}\).

Footnote 3: https://doi.org/10.18727/archive/36

We briefly describe the survey and observation strategy below; a more detailed description can be found in DR1. Depending on the \(r\)-band surface brightness luminosity (\(\langle\mu_{e}\rangle\)) and aperture magnitudes (mag\({}_{r}\)), both taken from the KiDS survey DR4 catalogue (Kuijken et al. 2019), the exposure times on target varied from 2810 to 11240 seconds. This allowed us to obtain an integrated 1D spectrum with an S/N that was good enough (S/N \(\geq\) 15 per Å) to constrain the stellar age, metallicity, and [Mg/Fe] abundance of the stellar populations and hence confirm the relic nature of the UCMGs. For each of the targets, structural parameters were computed from KiDS images in the \(g,r,i\) bands and stellar masses were inferred from spectral energy distribution (SED) fitting in the \(ugri\) bands. These numbers were reported in T18 and S20. The slit widths that we chose for the UVB, VIS, and NIR arms are 1\({}^{\prime\prime}\).6, 1\({}^{\prime\prime}\).5, and 1\({}^{\prime\prime}\).5, respectively. The position angles (P.A.) of the slit were always oriented along the major axis of the galaxies, taken from T18 or S20. The observations were carried out in nodding mode, with a dithering scheme consisting of multiple frames shifted by a small amount from the slit centre to facilitate a proper sky subtraction. Similarly to DR1, the seeing during the observations ranged from 0\({}^{\prime\prime}\).85 to 1\({}^{\prime\prime}\).2, with a median value of \(\sim\) 1\({}^{\prime\prime}\). The feasibility of the method and techniques was extensively tested in Spiniello et al. (2021a, hereafter INSPIRE Pilot), where the first three objects of the survey were presented. Then, in Spiniello et al. (2021b, hereafter INSPIRE DR1), we applied the same routines to 19 systems that were fully observed until March 2020. We confirmed ten new relic galaxies, demonstrating that they had formed more than 75% of their stellar mass at \(z>2\), hence extending the number of known relics by a factor of 3.3 and pushing the redshift up to \(z\sim 0.5\). The remaining nine systems showed a more extended star formation history. An important result initially proposed by Ferre-Mateu et al.
(2017), and confirmed in DR1, is the degree of relicness: some of the relics were already fully assembled in terms of stellar mass soon after the Big Bang (BB) and before the end of the first formation phase (\(z\sim 2\), extreme relics), whereas other relics formed a (high) fraction of their stars through a starburst in a very short time at high \(z\) but then had subsequent lower-\(z\) star formation events (relics). Hence, the star formation history (SFH) of relics can be more or less extreme, and this might correlate with other morphological and stellar characteristics and possibly with the environment in which they live.

As part of this DR2, we release to the ESO Phase 3 Science Archive the 1D NIR spectra of the 19 galaxies that were presented in DR1, together with the 1D UVB-VIS-NIR spectra of the 21 new systems. In particular, we obtain and release three fully reduced and flux-calibrated 1D extracted spectra per galaxy, one for each arm of the detector (UVB, VIS, and NIR), at their original resolutions. In addition, we combine the arms, after smoothing everything to a common resolution with a full width at half maximum (FWHM) of \(\sim 2.51\) Å, and release the final spectrum as an additional data product. The INSPIRE DR2 targets along with their coordinates are listed in Table 1. We list the final exposure time and the P.A. of the slit, as well as photometric properties (\(r\)-band magnitudes and surface brightness), R\({}_{\rm e}\) derived as the median of the quantities obtained from \(g\), \(r\), and \(i\)-band KiDS images, and stellar masses from SED fitting in the \(ugri\) bands. Finally, in the last column of the table, we report the sample from which the object was taken. Ten of the systems analysed in T18 or S20 were also independently observed with the AAOmega spectrograph and are part of the GAMA second data release database (DR2; Liske et al. 2015) or fourth data release (DR4; Driver et al. 2022). For these ten galaxies, we can directly compare the kinematic results obtained from the GAMA spectra with those computed from our higher-resolution and higher-S/N XSH spectra (see Sect. 5).

## 3 Data reduction and analysis

### Data reduction and 1D extraction

As already explained in the INSPIRE DR1, we performed an ad hoc extraction of the 1D spectra to take into account the fact that these galaxies are not spatially resolved and the spectra are dominated by seeing, as the R\({}_{\rm e}\) of all objects in arcseconds (apparent sizes, on average R\({}_{\rm e}\sim\) 0\({}^{\prime\prime}\).3) are much smaller than the median seeing of the observations (\(\sim\) 1\({}^{\prime\prime}\) on average). Hence, we reduced the data using the ESO XSH pipeline (v3.5.3) under the ESO Reflex Workflow (Freudling et al. 2013, version 2.11.3), only up to the creation of the 2D spectral frames (one for each arm). We then used our own Python routines, developed for the INSPIRE Pilot and already used in INSPIRE DR1. We cannot use ESO internal data products because they only comprise the already extracted 1D spectra. Finally, for the VIS and NIR arms, we corrected all the spectra for telluric absorption lines using the code molecfit (Smette et al. 2015, version 4.2), which was run with its interactive ESO Reflex workflow. The telluric correction was performed with the recipe _molecfit_model_ that fits telluric absorption features on the telluric standard observed on the same night and with the same instrument set-up as the galaxies.
After we determined the column densities of the various molecules in the spectrum, we constructed the telluric correction and took the difference in airmass between the observations of the telluric standard and the galaxy into account. For this purpose, we used the recipe _molecfit_calctrans_.

Figure 1: Redshift distributions for the INSPIRE targets in DR1 (blue), DR2 (red), and for the final sample (grey).

In previous papers of the INSPIRE series, we have extracted spectra with two different approaches. On the one hand, we collapsed the whole slit, but weighted the pixels by their flux (following the optimal extraction approach described in Naylor 1998). Alternatively, as a second approach, we also extracted the spectra of each galaxy from an aperture that contained more or less the same fraction of light for the different objects (R50, containing \(\sim 50\%\) of the total light, but a mix from inside and outside the real R\({}_{\rm e}\), given the spatial resolution of the data; see INSPIRE DR1 for more details). The R50 approach is best when the INSPIRE sample is to be compared with other galaxy samples from the literature because this is the most comparable aperture, at least in terms of light fraction, to that extracted at one R\({}_{\rm e}\) for normal-sized galaxies. In INSPIRE DR1, we proved that the extraction method does not change the kinematics and stellar population results, and hence it does not play a role in the relic confirmation. Therefore, in this DR2, we extracted spectra following the R50 approach alone.

### Arm combination and smoothing

When deriving the stellar population parameters, it is important to use a wide wavelength range that allows breaking the age-metallicity degeneracy (Worthey 1994) and thus properly inferring the stellar population parameters. Hence, the three arms in the INSPIRE data set must be combined. In order to do this, they must first be brought to the same final resolution. We point out that the UVB and VIS spectra have the same spatial and spectral sampling (scale = 0\({}^{\prime\prime}\).16/px, \(\delta\lambda\sim 0.156\) Å), while the NIR has a coarser sampling (scale = 0\({}^{\prime\prime}\).25/px, \(\delta\lambda\sim 0.467\) Å). First, we computed the redshift of each galaxy and independently identified the most prominent stellar absorption lines in the different arms. The redshift values are always consistent between the arms, and in 20 out of 21 cases, they are also consistent with the values reported in T18 and S20 (within 0.0005, the nominal uncertainties on the redshifts). Only for J1218+0232 do we find a slightly higher redshift from the higher-S/N XSH spectrum than that used by T18 and S20 (\(\Delta z=z_{\rm{XSH}}-z_{\rm{S20}}=0.0352\)). Then, we brought all the spectra, for all the systems and in all arms, to the same resolution (at fixed FWHM) and the same binning. We chose FWHM\({}_{\rm fin}=2.51\) Å, which is equal to that of the MILES single stellar population (SSP) models (Vazdekis et al. 2015) that we used for the kinematic analysis. For the smoothing, we adopted the same spectral convolution procedure as was successfully employed in the INSPIRE Pilot and in the INSPIRE DR1, based on the use of a Gaussian function with a variable sigma (following the prescription of Cappellari 2017). After smoothing and binning, we finally joined the three arms using data from the UVB up to \(\lambda=5560\) Å (observed wavelength), from the VIS up to \(\lambda=10100\) Å, and from the NIR for redder wavelengths.
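The variable-sigma convolution can be sketched as follows; this is a minimal re-implementation in the spirit of the Cappellari (2017) prescription, not the actual INSPIRE routine, and it assumes a linearly sampled wavelength axis:

```python
import numpy as np

def smooth_to_fwhm(lam, flux, fwhm_in, fwhm_out=2.51):
    """Broaden a linearly sampled spectrum from FWHM fwhm_in (scalar or
    per-pixel array, in Angstrom) to a common fwhm_out, convolving each
    pixel with a Gaussian whose sigma is the quadrature difference."""
    dlam = lam[1] - lam[0]
    fwhm_in = np.broadcast_to(np.asarray(fwhm_in, float), lam.shape)
    sig = np.sqrt(np.maximum(fwhm_out**2 - fwhm_in**2, 0.0)) / 2.355 / dlam
    out = flux.copy()
    for i in np.where(sig > 0.01)[0]:        # pixels that need broadening
        n = int(np.ceil(4.0 * sig[i]))       # truncate kernel at 4 sigma
        j = np.arange(max(0, i - n), min(len(flux), i + n + 1))
        w = np.exp(-0.5 * ((j - i) / sig[i]) ** 2)
        out[i] = np.sum(w * flux[j]) / np.sum(w)
    return out
```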
The arms extend slightly further (UVB 5595A and VIS 10240 A) and overlap by \(\sim 300\)A. Anyway, we point out that the cut we set for each of the three arms was chosen in order to avoid extremely noisy wavelengths at the borders. This choice has no effect on the results presented in the paper \begin{table} \begin{tabular}{r r r r r r r r r r} \hline \hline ID & RA & DEC & Exp.T. & P.A. & mag\({}_{r}\) & \(\langle\mu_{\rm e}\rangle\) & \(\langle\)R\({}_{\rm e}\rangle\) & \(\langle\)R\({}_{\rm c}\rangle\) & M\({}_{\star}\) & SAMPLE \\ KiDS & (deg) & (deg) & (sec) & (deg) & (AB) & (AB) & (\({}^{\prime\prime}\)) & (kpc) & (\(10^{11}\)M\({}_{\odot}\)) & \\ \hline J0844+0148 & 131.055386 & +1.8132204 & 11240 & \(-\)37.4 & 19.78 & 18.53 & 0.26 & 1.14 & 0.71 & S20/GAMA \\ J0904-0018 & 136.0518949 & -0.3054848 & 5620 & \(-\)96.6 & 19.11 & 18.06 & 0.26 & 1.16 & 1.3 & S20/GAMA \\ J0909+0147 & 137.3989150 & +1.7880025 & 5620 & 10.3 & 18.68 & 16.05 & 0.30 & 1.05 & 1.05 & T18/GAMA \\ J0917-0123 & 139.2701850 & -1.3887918 & 11240 & \(-\)62.9 & 19.21 & 17.99 & 0.27 & 1.37 & 2.19 & S20 \\ J0920+0126 & 140.1291393 & +1.4431610 & 11240 & \(-\)115.6 & 19.52 & 18.82 & 0.33 & 1.51 & 0.98 & S20/GAMA \\ J1026+0033 & 156.7231818 & +0.5580979 & 5620 & \(-\)83.1 & 17.39 & 16.98 & 0.34 & 1.02 & 1.48 & SDSS \\ J1040+0056 & 160.2152308 & +0.9407580 & 11240 & 53.4 & 19.52 & 18.85 & 0.31 & 1.29 & 0.93 & S20 \\ J1114+0039 & 168.699435 & +0.6510299 & 8430 & 34.0 & 19.0 & 17.89 & 0.34 & 1.52 & 1.62 & S20 \\ J1128-0153 & 172.0885023 & -1.8890642 & 8430 & \(-\)2.9 & 18.56 & 17.94 & 0.35 & 1.27 & 1.30 & T18 \\ J1142+0012 & 175.7023296 & +0.2043419 & 2810 & \(-\)84.8 & 17.02 & 17.90 & 0.71 & 1.40 & 0.84 & S20/GAMA \\ J1154-0016 & 178.6922828 & -0.2779248 & 8430 & 25.4 & 19.52 & 18.28 & 0.22 & 1.06 & 0.64 & T18/GAMA \\ J1156-0023 & 179.2186145 & -0.3946597 & 5620 & 15.8 & 18.83 & 17.01 & 0.26 & 1.04 & 1.39 & T18/GAMA \\ J1202+0251 & 180.5132277 & +2.8515452 & 8430 & \(-\)70.6 & 19.43 & 18.53 & 0.31 & 1.49 & 0.68 & S20 \\ J1218+0232 & 184.7355807 & +2.5449139 & 5620 & 1.8 & 19.23 & 18.72 & 0.31 & 1.40 & 0.93 & S20 \\ J1228-0153 & 187.0640987 & -1.8989049 & 5620 & \(-\)74.1 & 18.85 & 18.57 & 0.36 & 1.61 & 1.15 & S20 \\ J1411+0233 & 212.8336012 & +2.5618381 & 8430 & 37.3 & 18.86 & 17.44 & 0.21 & 1.07 & 1.55 & S20/GAMA \\ J1436+0007 & 219.0481314 & +0.1217459 & 5620 & \(-\)100.6 & 18.27 & 18.27 & 0.39 & 1.40 & 1.15 & S20/GAMA \\ J2202-3101 & 330.5472803 & -31.0183808 & 8430 & \(-\)87.6 & 19.43 & 18.77 & 0.31 & 1.45 & 1.10 & T18 \\ J2204-3112 & 331.2228147 & -31.2002605 & 8430 & \(-\)86.1 & 19.32 & 18.74 & 0.35 & 1.39 & 0.90 & T18 \\ J2257-3306 & 344.3966471 & -33.1144449 & 8430 & 3.5 & 19.42 & 17.09 & 0.29 & 1.18 & 0.93 & T18/GAMA \\ J2356-3332 & 359.1261248 & -33.5334748 & 11240 & \(-\)46.5 & 19.81 & 18.37 & 0.22 & 1.06 & 0.98 & T18 \\ \hline \hline \end{tabular} \end{table} Table 1: INSPIRE DR2 sample. We list from left to right the galaxy ID and coordinates, the exposure times and the position angles (along the major axis of the galaxy) of the XSH observations, the aperture magnitudes (MAG_AUTO from the KiDS DR3 catalogue, corrected for extinction), the surface brightness luminosity averaged within the R\({}_{\rm e}\), both in \(r\)-band, R\({}_{\rm e}\) in arcseconds and kiloparsec, computed as median of the quantities obtained from the \(g,r,i\) bands, and the stellar masses from the SED fitting. Finally, in the last column, we list the sample from which each object was taken. 
The six objects with a double reference were selected from T18 (or S20), but were then also found in the GAMA DR4.

This choice has no effect because we measured the kinematics both from the single-arm spectra and from the combined ones. The redshifts inferred from the combined and smoothed spectra are reported in the second column of Table 2. They are fully consistent with those computed from the single-arm spectra. Finally, we brought all the spectra to \(z=0\).

### Signal-to-noise calculation

To calculate the S/N of the 1D spectra, we followed the same recipe as was used in DR1. We used the IDL code DER_SNR (Stoehr et al. 2008), which estimates the S/N directly from the flux, assuming that it is Gaussian distributed and uncorrelated in wavelength bins spaced two pixels apart. We obtained three different estimates, one for each arm separately (across the entire wavelength range, which varies slightly from one object to the next because their redshifts are different), and then we also computed the arithmetic mean of the three. These numbers are reported in Table 2, along with the redshifts inferred from the final combined and smoothed spectra. We recall that the three independent estimates of the redshift obtained from the three single spectra are perfectly consistent with the estimate computed from the combined spectrum. For all the objects, the S/N increases from the UVB to the VIS, demonstrating the passive nature of the systems, but then decreases again in the NIR; this is likely due to the noisier nature of the NIR spectra. We did not compute the S/N on the combined and smoothed spectra because the convolution causes a noise correlation across different pixels, and hence the assumption made by the DER_SNR code is no longer valid.
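For reference, the core of the DER_SNR estimator fits in a few lines; the following is a Python port of the published formula (Stoehr et al. 2008), written by us for illustration:

```python
import numpy as np

def der_snr(flux):
    """DER_SNR estimate: the signal is the median flux, and the noise is
    a robust median estimator built from pixels spaced two apart, which
    suppresses both real spectral features and the continuum slope."""
    f = np.asarray(flux, float)
    f = f[np.isfinite(f)]
    signal = np.median(f)
    noise = 0.6052697 * np.median(np.abs(2.0 * f[2:-2] - f[:-4] - f[4:]))
    return signal / noise
```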
\begin{table} \begin{tabular}{c|c|c c c c} \hline \hline ID & z & S/N & S/N & S/N & S/N \\ KiDS & (\(\pm 0.0005\)) & UVB & VIS & NIR & MEAN \\ \hline J1142+0012 & 0.1077 & 57.9 & 124.1 & 69.8 & 83.9 \\ J1026+0033 & 0.1743 & 38.9 & 113.6 & 68.3 & 73.6 \\ J0909+0147 & 0.2151 & 20.7 & 75.3 & 45.1 & 47.0 \\ J1228-0153 & 0.2973 & 23.2 & 70.1 & 40.8 & 44.7 \\ J1128-0153 & 0.2217 & 21.1 & 69.2 & 37.0 & 42.4 \\ J1411+0233 & 0.3598 & 24.1 & 73.2 & 26.9 & 41.4 \\ J1436+0007 & 0.221 & 21.1 & 67.2 & 29.0 & 39.1 \\ J1156-0023 & 0.2552 & 22.6 & 60.9 & 31.8 & 38.4 \\ J0920+0126 & 0.3117 & 17.9 & 55.6 & 29.7 & 34.4 \\ J1114+0039 & 0.3004 & 19.5 & 54.0 & 28.4 & 34.0 \\ J2204-3112 & 0.2581 & 14.4 & 54.1 & 24.5 & 31.0 \\ J0917-0123 & 0.3602 & 12.2 & 50.3 & 27.5 & 30.0 \\ J1040+0056 & 0.2716 & 11.5 & 46.7 & 31.4 & 29.9 \\ J1202+0251 & 0.3298 & 14.7 & 45.9 & 25.3 & 28.6 \\ J0844+0148 & 0.2837 & 12.9 & 45.0 & 28.0 & 28.6 \\ J0904-0018 & 0.2989 & 12.6 & 44.3 & 26.6 & 27.8 \\ J1154-0016 & 0.3356 & 16.6 & 42.8 & 23.5 & 27.6 \\ J2257-3306 & 0.2575 & 17.8 & 40.0 & 24.0 & 27.3 \\ J2202-3101 & 0.3185 & 13.1 & 47.6 & 20.5 & 27.1 \\ J1218+0232 & 0.308 & 14.6 & 42.0 & 22.7 & 26.4 \\ J2356-3332 & 0.3389 & 11.5 & 34.2 & 17.7 & 21.1 \\ \hline \hline \end{tabular} \end{table} Table 2: Spectroscopic properties of the INSPIRE DR2 sample. We list from left to right the ID, the redshift computed from the combined spectra, the three estimates of the S/N (per Å) from the three arms at their original resolution, and the arithmetic mean of these three values.

## 4 Analysis of the line-of-sight velocity distribution

To derive the LOSVD and, in particular, to obtain the integrated stellar velocity dispersion values, we used the Python-based penalised pixel-fitting software (pPXF, v8.1.0; Cappellari & Emsellem 2004; Cappellari 2017). We only used an additive Legendre polynomial (ADEGREE) to correct for the continuum shape during the fit, but did not use a multiplicative one, nor did we use any regularisation (see Cappellari 2017 for more details), because in this paper we focus on the kinematics without computing stellar population parameters and obtaining the relic confirmation, which will be presented in a follow-up paper (Spiniello et al., in prep.). The main purpose of our analysis is to assess the robustness of the inferred velocity dispersion values that will be used in forthcoming INSPIRE papers as a proxy for the total mass in the dynamical modelling of UCMGs in general and relics in particular (Tortora et al., in prep.). Moreover, the velocity dispersion also appears to be a good way to select high-confidence relic candidates because we found in the INSPIRE DR1 that relics have a higher velocity dispersion than non-relics with similar sizes and stellar masses.

In order to assess how solid the inferred \(\sigma\) values are, we performed an extensive number of tests to determine the dependence of the kinematic measurements upon the input spectra and the various assumptions and parameters of the pPXF fit: the ADEGREE, the moments of the LOSVD, the wavelength range, and the masked pixels. We did not need to investigate the effect of changing the stellar templates in the fitting procedure because this was already done in the INSPIRE Pilot, and no detectable difference in the inferred \(\sigma\) was found. Therefore, we only fit with the EMILES models presented in Vazdekis et al. (2015). We limited ourselves to the safe range of parameters, following the prescription of the authors of the models\({}^{4}\). We used the models with a bimodal stellar IMF with a fixed slope of \(\Gamma=1.3\) and PADOVA00 theoretical isochrones (Girardi et al. 2000). In a forthcoming publication of the INSPIRE series, we will directly investigate the non-universality of the IMF (Martin-Navarro et al. 2023). In this analysis, we use the combined UVB+VIS+NIR spectrum but fit only up to 10000 Å (rest frame), where IMF effects are weaker (since dwarf stars emit more strongly at redder wavelengths) and where the majority of the narrow, strong stellar absorption lines are. The spectra of the three separate arms, as well as the combined spectra, are made public for the astronomical community and can be downloaded directly from the ESO Science Archive. They range from 2300 Å (2800 Å) to 18000 Å (23000 Å) for the highest (lowest) redshift in the sample.

Footnote 4: The safe ranges are defined as follows: ages: [0.063–17.8] Gyr; metallicities: [−2.32, 0.22]; [\(\alpha\)/Fe]: [0.0–0.4]. See [http://research.iac.es/proyecto/miles/pages/ssp-models/safe-ranges.php](http://research.iac.es/proyecto/miles/pages/ssp-models/safe-ranges.php) for more details.

### Changing the ADEGREE parameter

According to the pPXF recommendations, when performing a full spectral fitting to obtain the LOSVD, only an additive Legendre polynomial should be used to correct for the continuum shape during the kinematic fit. The degree of the additive polynomial, which is regulated by the ADEGREE keyword in pPXF, might influence the final result on the integrated stellar velocity dispersion, especially for spectra with a low S/N. In previous INSPIRE papers, we have tested a range of values for the ADEGREE parameter, always fixing it to the value that stabilises the results against changes of other parameters while at the same time minimising the reduced \(\chi^{2}\) (the \(\chi^{2}\) divided by the number of good pixels used for the fit).
We decided to use the same ADEGREE value for all the systems (ADEGREE \(\sim\) 20) to speed up the computation, but this risked over-fitting the noise affecting the stellar continuum in some cases. Here, we repeated this test in a more systematic way, using a broader range of values and more spectra. In particular, we tested ADEGREE values from 1 to 30 and found that at low polynomial degrees (\(<\) 5), the inferred velocity dispersion is generally not stable. It then reaches a plateau around a degree of 8–15 and, in some cases, starts to wiggle again at very high degree values (\(>\) 17). However, as expected, this plateau falls in a slightly different range of ADEGREE values for different systems, and this prevents us from choosing the same degree for all of them. In Fig. 2 we indicate the fiducial choice of the ADEGREE value with a large red dot. This adopted choice for each system lies on the median \(\sigma\) retrieved from the test, and it has two purposes: it represents a good central guess for the bootstrap routines we present in Sect. 4.4, because it is far from the region in which \(\sigma\) is strongly unstable, and it allows us to be sensitive to the uncertainties arising from the choice of a specific ADEGREE, hence assessing the error budget associated with this parameter. This is because the velocity dispersion values change slightly for degrees around the fiducial value. Finally, we note that the results do not appear to depend on the S/N of the input spectrum (shown in each panel), and the overall variation in \(\sigma\) is generally about 5%. This is shown in the figure as the shaded blue region.
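Schematically, the test amounts to a loop over the additive-degree keyword of pPXF. The following is a hedged sketch, not the actual INSPIRE script; it assumes log-rebinned arrays `galaxy`, `noise`, and `templates`, the velocity scale `velscale`, and a starting guess `start = [V0, sigma0]` have already been prepared:

```python
import numpy as np
from ppxf.ppxf import ppxf

sigmas = []
for adegree in range(1, 31):
    pp = ppxf(templates, galaxy, noise, velscale, start,
              moments=4, degree=adegree, mdegree=0, quiet=True)
    sigmas.append(pp.sol[1])          # pp.sol = [V, sigma, h3, h4]
sigmas = np.array(sigmas)

# pick the fiducial degree on the plateau, closest to the median sigma
fiducial_degree = 1 + int(np.argmin(np.abs(sigmas - np.median(sigmas))))
```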
Figure 2: ADEGREE test: Variation in the measured \(\sigma\) as a function of the degree of the additive polynomial. The spectra are ordered from top to bottom and from left to right in descending S/N (shown in each panel in parentheses). The larger red dots mark the ADEGREE chosen for the fiducial fit, and the shaded blue region shows a variation of \(\sigma\) from the median value of \(\pm\)2.5%.

### Changing the MOMENTS parameter

The pPXF allows us to constrain from two and up to six moments of the Gauss-Hermite parameterisation (van der Marel & Franx 1993), using the keyword MOMENTS. In an ideal case, the higher moments (\(h_{3}\),...,\(h_{6}\)) should be completely uncorrelated with the velocity (V) and velocity dispersion (\(\sigma\)). However, when the LOSVD is not perfectly sampled by the data, performing the fit with a different number of moments might influence the resulting \(\sigma\) values. Hence, we ran different fits changing the MOMENTS from two to six in order to test the effect of adopting different choices for this parameter. We show the resulting \(\sigma\) values in Fig. 3, where galaxies are again ordered according to the mean S/N. During this test, we used the fiducial value of the ADEGREE parameter for each galaxy (i.e. the red dot in Fig. 2). No correlation with the S/N is found in this case either, and the \(\sigma\) values inferred using different MOMENTS are very close to each other and in many cases consistent within the errors. For the majority of systems, a small increase in \(\sigma\) (\(\sim\) 5\(-\)10 km/s) is observed from MOMENTS = 2 to MOMENTS = 4. Then, in 16 out of 21 cases the result is virtually unchanged when the number of fitted moments is further increased. In two (three) cases, \(\sigma\) decreases (increases) when the number of moments in the fit is increased. Given the results of this test, we consider as fiducial values for the velocity dispersion those computed with MOMENTS = 4 for all the 21 systems. In all but two cases, the variation with respect to the median value is always below 5% (shaded region in the figure).

Figure 3: MOMENTS test: Variation in the measured \(\sigma\) as a function of the moments of the LOSVD that were constrained in the fit. The objects are in the same order as in Fig. 2, from high (top left) to low (bottom right) S/N. In each panel, the galaxy ID and the S/N are given in the bottom right corner, the larger red dot marks the MOMENTS chosen for the fiducial fit (= 4), and the shaded blue region shows a variation of \(\sigma\) from the median value of \(\pm\)2.5%.

### Changing the wavelength range of the fitted region

Another effect we tested is that of the fitted wavelength range on the inferred \(\sigma\) values. For this purpose, we repeated the fit many times with the fiducial ADEGREE and MOMENTS parameters, but each time with a smaller wavelength range, systematically shifting the blue and red limits of the fitted window. We started from the widest range, [3000\(-\)10000] Å, and decreased it in steps of 200 Å on each side of the fitting window, down to the smallest range, [6000\(-\)7000] Å. In this case, we find a dependence on the S/N: a larger variation in the inferred \(\sigma\) values is observed for low-S/N spectra. This is visible from Fig. 4, where we plot the relative difference in velocity dispersion between the value obtained by fitting the widest wavelength range and that obtained fitting the narrowest range, against the S/N. For S/N \(<35\), the scatter is larger than 20% (\(\Delta\sigma/\sigma_{\rm max}>0.2\)), while it stabilises at or below that level for spectra with higher S/N. For the majority of the cases, higher values of \(\sigma\) are found when the 4000 Å break is masked out. This effect is smaller for high-S/N spectra, however, where the results are much more stable against changes in the fitted region.

To evaluate the effect of masking a certain line from the fit, we ran another test in which we repeated the fit on the same spectrum across the fiducial fitting window (3000\(-\)10000 Å) 50 times, each time masking one strong (absorption or emission) stellar feature. We note that frequently, different lines are less than 200 Å apart. We nevertheless ran the masking test once for each line, always centring the masking window on the corresponding line (sometimes partially masking nearby lines as well). The result of this test is shown in Fig. 5, where again we group the spectra according to their mean S/N. Within each group, the spectra are also ordered from the highest (top) to the lowest (bottom) S/N. In this case, a mild dependence on the S/N is found overall, although changes on a single-system level are also visible. In general, two regions appear to produce a non-negligible effect on the \(\sigma\) measurements for more than one galaxy: the region between 3800 and 4300 Å, and that around the Mg\({}_{b}\) and Fe strong absorption lines (\(\sim\) 5100-5400 Å). Overall, however, the differences in \(\sigma\) (\(\Delta\sigma\)) are never larger than 10%.

Figure 4: Fit range test. The plot displays the relative difference in \(\sigma\) between the measurements obtained by fitting the widest and narrowest wavelength ranges. For spectra with a lower S/N, the spread is larger and the variation in the velocity dispersion is in some cases as high as 20%.

Figure 5: Random masking test. The objects are in the same order as in Fig. 2, from high (top left) to low (bottom right) S/N. The figure shows the effect on \(\sigma\) of masking out from the pPXF fit a window of 200 Å placed on each of the spectral lines falling within the fitting limits.
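The window-shrinking test can be expressed as a similar loop; again a sketch under the same assumptions as above, with `lam` the rest-frame wavelength of each pixel of `galaxy`:

```python
import numpy as np
from ppxf.ppxf import ppxf

results = []
for step in range(16):                       # shrink by 200 A per side
    lo, hi = 3000 + 200 * step, 10000 - 200 * step
    good = np.where((lam > lo) & (lam < hi))[0]
    pp = ppxf(templates, galaxy, noise, velscale, start, moments=4,
              degree=fiducial_degree, goodpixels=good, quiet=True)
    results.append((lo, hi, pp.sol[1]))      # sigma versus fitted window
```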
### Bootstrap

After assessing the effects of the main fit parameters individually, we set up different bootstrap routines, always repeating 250 pPXF fits of the same spectrum, to quantitatively infer uncertainties on the velocity dispersion values and to combine all the tests performed so far in a systematic way. Specifically, we started by randomising each time the flux of every single pixel according to a Gaussian distribution of the noise around the observed flux, and computed the uncertainties associated with this (\(\Delta_{\rm noise}\)). Then, keeping the randomisation of the noise, we also added a random selection of the ADEGREE within the range 1-30 in the bootstrap, and evaluated the uncertainties in this case as well (\(\Delta_{\rm adeg}\)). Subsequently, we ran a third bootstrap in which we combined the noise randomisation and the ADEGREE randomisation, and also randomly changed the moments of the LOSVD (\(\Delta_{\rm mom}\)), always from two to six. We finally ran another bootstrap including the three effects described above and a randomisation of the fit limits (\(\Delta_{\rm wave}\)), chosen within 2500 pixels from the blue and red ends of the entire wavelength range, following the same approach as described in Sect. 4.3. All these tests allowed us to control the uncertainties introduced by the choice of the fit parameters. The \(\Delta\sigma\) listed in Table 3 are always \(<20\) km/s. Each \(\Delta\) column in the table was obtained by also including the previous randomisation(s); thus, the last column refers to the case where noise, ADEGREE, MOMENTS, and fitting limits were all randomised. We note that randomising over more parameters at the same time does not necessarily correspond to an increase in the associated uncertainty. The four \(\Delta\sigma\) columns of Table 3 all list comparable values, demonstrating that randomising over all parameters gives a good indication of the total uncertainty associated with all the possible parameters of the fit. Finally, a clear dependence on the S/N of the spectra is found, as shown in Fig. 6. The uncertainties are \(\sim 10\) (4) % for the lowest (highest) S/N spectra.
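The last (fully randomised) bootstrap corresponds, schematically, to the following loop, under the same assumptions as the sketches above; the random ranges mirror the text:

```python
import numpy as np
from ppxf.ppxf import ppxf

rng = np.random.default_rng(0)
boot = []
for _ in range(250):
    flux_i = galaxy + rng.normal(0.0, noise)       # perturb within the noise
    lo = rng.integers(0, 2500)                     # random blue-end cut
    hi = galaxy.size - rng.integers(0, 2500)       # random red-end cut
    pp = ppxf(templates, flux_i, noise, velscale, start,
              moments=int(rng.integers(2, 7)),     # 2..6 moments
              degree=int(rng.integers(1, 31)),     # ADEGREE in 1..30
              goodpixels=np.arange(lo, hi), quiet=True)
    boot.append(pp.sol[1])
sigma_fid, delta_sigma = np.mean(boot), np.std(boot)  # value and its error
```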
### Changing the input spectra and their resolution

After assessing how the parameters and set-ups of the pPXF code influence the velocity dispersion measurements, we performed one more test by changing the input spectra, both in terms of the considered wavelength band and in terms of spectral resolution and binning. In particular, we ran the code independently on four different spectra for each galaxy: the UVB, the VIS, and the NIR at their original resolution (\(R_{\rm UVB}=3200\), \(R_{\rm VIS}=5000\), and \(R_{\rm NIR}=4300\)), and the combined and smoothed spectrum described in Sect. 3.2, up to \(\lambda=10000\) Å. For this final test, we did not run the bootstrap analysis, did not mask single lines, nor did we change the limits of the wavelength range on which the fit is performed, as we already know the contribution of these choices to the uncertainties. We also fixed ADEGREE to the fiducial value of each system and MOMENTS = 4, given the results of the tests performed so far.

Overall, the spectral resolution plays a small role, leading to \(\sigma\) values consistent within the (statistical) uncertainties. This is expected, given the relatively high stellar velocity dispersion values covered by our sample (130-400 km/s), which are well above the instrumental resolution of the XSH data and of the MILES models. The effect of changing the resolution of the input spectra on \(\sigma\) is shown in the first panel of Fig. 7, where we plot the \(\sigma\) values measured from the combined spectrum (between 3000 and 10000 Å) on the x-axis and the values obtained from the single arms on the y-axis. A fair agreement is found between the measurements, especially for the UVB and VIS. The measurements in the NIR show a much larger scatter, as shown in the second panel of this figure. Here we plot the difference between the \(\sigma\) measured from the combined spectrum and that measured from the single arms versus the mean S/N of the corresponding spectrum. Independently of the S/N, the difference in the NIR is larger (\(>50\) km/s in many cases). The third panel of Fig. 7 shows the histograms of the \(\Delta\sigma\) (Table 4), from which a much larger scatter is visible for the NIR spectra than for the UVB and VIS. This is due to the several spikes, bad pixels, and residuals in this arm. A small systematic difference between the values computed from the UVB (\(\sim 14\) km/s) and the VIS (\(\sim-8\) km/s) is found as well. Finally, the rightmost panel demonstrates that the \(\sigma\) shift is not a relative effect (e.g. 5% of the \(\sigma\) value), but rather an absolute shift, reaching at most 20% in the UVB and VIS but up to 60% in the NIR (for systems with a lower velocity dispersion and relatively low-S/N spectra). The values of the stellar velocity dispersion measured from the single arms, those measured from the combined and smoothed spectra, and the differences between them are all listed in Table 4. The errors quoted on the \(\Delta\sigma\) are the sum of the errors of the two terms. All the quantities in the table are rounded to the nearest integer.

The main conclusion that can be drawn from this test is that when different input spectra covering different wavelength ranges are used, the inferred stellar velocity dispersion values can change systematically, by up to 20% between the UVB and VIS and up to 60% when the NIR is considered as well (however, this happens only for the lower-S/N spectra). We speculate that the different (generally higher) values inferred from blue to red might have a physical origin, especially for non-relics, which might include multiple populations, possibly with different kinematic properties within the galaxy. Excluding the region below 4000 Å might exclude the contribution of relatively younger stars, while focussing on \(\lambda>6000\) Å might mean that only the oldest and reddest stars are considered. Finally, in our case, the spectral region at \(\lambda>10000\) Å becomes very noisy and does not always allow us to obtain a very precise constraint on the stellar velocity dispersion. Hence, in conclusion, the \(\sigma\) values computed from the wavelength range [3000-10000] Å probably provide the most robust estimate for XSH spectra with a medium or high S/N. We therefore quote them as our fiducial choice.
Figure 8 shows the fiducial fit for all the systems, grouped and ordered according to the S/N of the corresponding spectrum.

## 5 Comparison with literature measurements

Ten of the 21 systems analysed in this INSPIRE DR2 have also been targeted by the GAMA Survey. Generally, the GAMA spectra have a lower or comparable S/N, but much lower (2x) spectral resolution (R\(\sim 1300\)) and thus a larger instrumental dispersion. We ran pPXF on these ten GAMA spectra and on the ten INSPIRE spectra by adopting the same fit settings (i.e. the same wavelength range, MOMENTS = 4, and the optimal ADEGREE choice for each galaxy), after correcting for the flux shift between the blue and red GAMA arms that is due to a bad splicing. The fitted wavelength range for both data sets is \(\sim 3500-7000\) Å. We note that the two sets of spectra are both fully seeing dominated, and hence the inferred velocity dispersion values should be considered as lower limits (for more details, see Appendix A of the INSPIRE DR1). For the INSPIRE case, the R50 apertures are about \(\sim 0.5-0.6\arcsec\), and the GAMA spectra are extracted from a circular aperture of radius = 1\(\arcsec\).

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline ID & \(\sigma\) (km/s) & \multicolumn{4}{c}{\(\Delta\sigma_{Boot}\) (km/s)} \\ KiDS & COMB & \(\Delta_{\rm noise}\) & \(\Delta_{\rm adeg}\) & \(\Delta_{\rm mom}\) & \(\Delta_{\rm wave}\) \\ \hline J1142+0012 & 129 & \(\pm\)6 & \(\pm\)6 & \(\pm\)6 & \(\pm\)6 \\ J1026+0033 & 225 & \(\pm\)5 & \(\pm\)7 & \(\pm\)8 & \(\pm\)6 \\ J0909+0147 & 401 & \(\pm\)15 & \(\pm\)18 & \(\pm\)17 & \(\pm\)12 \\ J1228-0153 & 191 & \(\pm\)6 & \(\pm\)8 & \(\pm\)7 & \(\pm\)7 \\ J1128-0153 & 192 & \(\pm\)8 & \(\pm\)9 & \(\pm\)8 & \(\pm\)8 \\ J1411+0233 & 217 & \(\pm\)7 & \(\pm\)9 & \(\pm\)8 & \(\pm\)9 \\ J1436+0007 & 193 & \(\pm\)9 & \(\pm\)10 & \(\pm\)10 & \(\pm\)10 \\ J1156-0023 & 177 & \(\pm\)11 & \(\pm\)11 & \(\pm\)10 & \(\pm\)10 \\ J0920+0126 & 190 & \(\pm\)8 & \(\pm\)8 & \(\pm\)8 & \(\pm\)9 \\ J1114+0039 & 181 & \(\pm\)10 & \(\pm\)13 & \(\pm\)12 & \(\pm\)10 \\ J2204-3112 & 227 & \(\pm\)14 & \(\pm\)16 & \(\pm\)13 & \(\pm\)14 \\ J0917-0123 & 239 & \(\pm\)13 & \(\pm\)13 & \(\pm\)13 & \(\pm\)14 \\ J1040+0056 & 240 & \(\pm\)14 & \(\pm\)18 & \(\pm\)16 & \(\pm\)14 \\ J1202+0251 & 165 & \(\pm\)17 & \(\pm\)14 & \(\pm\)13 & \(\pm\)17 \\ J0844+0148 & 224 & \(\pm\)12 & \(\pm\)12 & \(\pm\)15 & \(\pm\)15 \\ J0904-0018 & 205 & \(\pm\)11 & \(\pm\)12 & \(\pm\)11 & \(\pm\)13 \\ J1154-0016 & 163 & \(\pm\)8 & \(\pm\)10 & \(\pm\)10 & \(\pm\)9 \\ J2257-3306 & 185 & \(\pm\)15 & \(\pm\)13 & \(\pm\)12 & \(\pm\)13 \\ J2202-3101 & 221 & \(\pm\)13 & \(\pm\)12 & \(\pm\)12 & \(\pm\)10 \\ J1218+0232 & 171 & \(\pm\)11 & \(\pm\)13 & \(\pm\)11 & \(\pm\)16 \\ J2356-3332 & 162 & \(\pm\)15 & \(\pm\)15 & \(\pm\)15 & \(\pm\)16 \\ \hline \end{tabular} \end{table} Table 3: Results of the kinematics bootstrap analysis for the INSPIRE DR2 sample. We report from left to right the ID, the fiducial \(\sigma\) resulting from the bootstrap analysis on the combined and smoothed spectra (\(\sigma_{\rm COMB}\)), and the uncertainties associated with the four bootstrap routines. Each subsequent column includes the randomisation(s) of the previous test (see text for more details).

Figure 6: Relative variation of \(\sigma\) against the mean S/N of the spectra. The four different symbols show the values relative to the four different bootstrap routines described in Sect. 4.4, and the points are colour-coded by the S/N of the spectra to which they refer.
The velocity dispersion values obtained from the XSH and GAMA spectra are compared in Fig. 9. We plot the quantity \(\Delta\sigma/\sigma_{\rm XSH}\equiv(\sigma_{\rm GAMA}-\sigma_{\rm XSH})/\sigma_{\rm XSH}\), representing the relative disagreement between the two values, which are listed in Table 5, along with the uncertainties given by pPXF (only random errors). The agreement is fairly good. The velocity dispersion of only one system (J1142\(+\)0012) differs by more than the 1\(\sigma\) error. The scatter is non-negligible for this system, which has the lowest velocity dispersion (\(\sigma<150\) km/s) and also the highest S/N. Table 5 is indeed ordered from the highest to the lowest S/N spectra, highlighting that \(\Delta\sigma\) does not depend on the S/N of the INSPIRE spectra. Except for three systems with the lowest velocity dispersion, all the \(\Delta\sigma/\sigma\) are negative. This is in line with the fact that the XSH spectra are integrated over a slightly smaller aperture than the GAMA spectra.

\begin{table}
\begin{tabular}{c|c c c c c c c}
\hline \hline ID & \(\sigma\) (km/s) & \(\sigma\) (km/s) & \(\sigma\) (km/s) & \(\sigma\) (km/s) & \(\Delta\sigma_{COMB-UVB}\) & \(\Delta\sigma_{COMB-VIS}\) & \(\Delta\sigma_{COMB-NIR}\) \\
KIDS & UVB & VIS & NIR & COMB & (km/s) & (km/s) & (km/s) \\
\hline J1142\(+\)0012 & 112\(\pm\)7 & 147\(\pm\)5 & 173\(\pm\)12 & 129\(\pm\)3 & 17\(\pm\)10 & -17\(\pm\)9 & -44\(\pm\)16 \\
J1026\(+\)0033 & 208\(\pm\)4 & 229\(\pm\)6 & 221\(\pm\)13 & 225\(\pm\)3 & 16\(\pm\)7 & -4\(\pm\)9 & 4\(\pm\)16 \\
J0909\(+\)0147 & 365\(\pm\)8 & 429\(\pm\)11 & 449\(\pm\)27 & 401\(\pm\)7 & 36\(\pm\)15 & -27\(\pm\)18 & -47\(\pm\)33 \\
J1228\(-\)0153 & 177\(\pm\)5 & 188\(\pm\)6 & 207\(\pm\)7 & 191\(\pm\)3 & 15\(\pm\)8 & 3\(\pm\)9 & -16\(\pm\)10 \\
J1128\(-\)0153 & 185\(\pm\)5 & 206\(\pm\)6 & 166\(\pm\)9 & 192\(\pm\)4 & 7\(\pm\)10 & -13\(\pm\)11 & 26\(\pm\)14 \\
J1411\(+\)0233 & 197\(\pm\)5 & 211\(\pm\)6 & 200\(\pm\)9 & 217\(\pm\)3 & 20\(\pm\)9 & 6\(\pm\)9 & 17\(\pm\)12 \\
J1436\(+\)0007 & 188\(\pm\)5 & 193\(\pm\)6 & 191\(\pm\)16 & 193\(\pm\)4 & 5\(\pm\)10 & 1\(\pm\)10 & 2\(\pm\)21 \\
J1156\(-\)0023 & 166\(\pm\)6 & 182\(\pm\)5 & 239\(\pm\)17 & 177\(\pm\)4 & 11\(\pm\)10 & -5\(\pm\)9 & -62\(\pm\)21 \\
J0920\(+\)0126 & 168\(\pm\)7 & 166\(\pm\)5 & 209\(\pm\)11 & 190\(\pm\)6 & 22\(\pm\)13 & 24\(\pm\)11 & -19\(\pm\)17 \\
J1114\(+\)0039 & 154\(\pm\)7 & 191\(\pm\)8 & 215\(\pm\)13 & 181\(\pm\)8 & 26\(\pm\)15 & -10\(\pm\)15 & -35\(\pm\)21 \\
J2204\(-\)3112 & 212\(\pm\)8 & 253\(\pm\)9 & 153\(\pm\)12 & 227\(\pm\)7 & 14\(\pm\)16 & -27\(\pm\)17 & 74\(\pm\)20 \\
J0917\(-\)0123 & 225\(\pm\)11 & 237\(\pm\)7 & 192\(\pm\)19 & 239\(\pm\)6 & 14\(\pm\)16 & 2\(\pm\)13 & 47\(\pm\)24 \\
J1040\(+\)0056 & 227\(\pm\)12 & 226\(\pm\)9 & 155\(\pm\)13 & 240\(\pm\)8 & 13\(\pm\)20 & 14\(\pm\)16 & 85\(\pm\)20 \\
J1202\(+\)0251 & 169\(\pm\)9 & 175\(\pm\)9 & 176\(\pm\)10 & 165\(\pm\)6 & -4\(\pm\)15 & -10\(\pm\)15 & -11\(\pm\)16 \\
J0844\(+\)0148 & 195\(\pm\)9 & 237\(\pm\)9 & 194\(\pm\)9 & 224\(\pm\)6 & 29\(\pm\)15 & -13\(\pm\)15 & 30\(\pm\)15 \\
J0904\(-\)0018
& 206\(\pm\)10 & 192\(\pm\)20 & 200\(\pm\)15 & 205\(\pm\)6 & -1\(\pm\)16 & 13\(\pm\)15 & 4\(\pm\)21 \\
J1154\(-\)0016 & 136\(\pm\)7 & 194\(\pm\)8 & 94\(\pm\)9 & 163\(\pm\)4 & 26\(\pm\)12 & -31\(\pm\)12 & 69\(\pm\)14 \\
J2257\(-\)3306 & 170\(\pm\)6 & 186\(\pm\)9 & 200\(\pm\)18 & 185\(\pm\)6 & 16\(\pm\)13 & -1\(\pm\)15 & -15\(\pm\)25 \\
J2202\(-\)3101 & 187\(\pm\)10 & 227\(\pm\)7 & 238\(\pm\)15 & 221\(\pm\)5 & 33\(\pm\)16 & -7\(\pm\)13 & -17\(\pm\)20 \\
J1218\(+\)0232 & 168\(\pm\)8 & 191\(\pm\)9 & 170\(\pm\)9 & 171\(\pm\)6 & 3\(\pm\)14 & -20\(\pm\)15 & 1\(\pm\)15 \\
J2356\(-\)3332 & 148\(\pm\)9 & 181\(\pm\)10 & 151\(\pm\)10 & 162\(\pm\)6 & 14\(\pm\)15 & -19\(\pm\)16 & 12\(\pm\)16 \\
\hline \hline
\end{tabular}
\end{table} Table 4: Results of the tests on the input spectra and their resolution. Systems are ordered from the highest to the lowest S/N. We report from left to right the ID, the velocity dispersion values computed from the single-arm spectra at their original resolution and that measured from the combined and smoothed ones (COMB), and the differences in \(\sigma\) between the combined spectrum and each single arm (\(\Delta\sigma\)). We note that the uncertainties quoted in the table are just the formal errors produced by pPXF.

Figure 7: Effect of changing the resolution on the \(\sigma\) measurement. _First panel (from the left):_ Comparison between the velocity dispersion values obtained from the combined spectrum and those obtained from the three single arms at their original resolution. The solid black line shows the one-to-one identity relation. _Second panel:_ Difference between the velocity dispersion computed from the combined and smoothed spectra and those measured from the single-arm spectra as a function of the mean S/N of each system. The horizontal lines represent the mean offset found for the three arms and correspond to the vertical lines in the following panel (we use a different line style for each arm). The \(y\)-axis is flipped to better compare it with the \(x\)-axis in the third panel. No clear correlation with the S/N is found. _Third panel:_ Distribution of the \(\sigma\) differences between the arms, drawn from the histograms, assuming a Gaussian profile. A small offset for the UVB (VIS) is visible, which systematically underestimates (overestimates) the \(\sigma\) by 14 km/s (\(-\)8 km/s) on average. For the NIR, the distribution peaks at around 3 km/s, but a much larger scatter is found. _Fourth panel:_ Relative shift in \(\sigma\) against the \(\sigma\) measured from the single-arm spectra.

Figure 8: Fiducial best fit (red) over-plotted on the combined and smoothed galaxy spectra for all the systems in DR2, ordered from high to low S/N (left to right and top to bottom). Noisier regions around 4300-4500 Å and around 7500-8000 Å show the wavelength at which the different arms were joined. \({}^{\star}\)For clarity of the stellar continuum best fit, the peaks of the strong emission lines in the spectrum of J1142+0012 are cut in the plot.

## 6 Stellar mass-velocity dispersion relation

In INSPIRE DR1, we found a quantitative difference between relics and non-relics in the stellar mass-stellar velocity dispersion space. In particular, \(\sigma\) at fixed stellar mass is higher for relics, and especially extreme relics, than for non-relics and normal-sized passive galaxies. We can now reproduce the M\({}_{\star}\)-\(\sigma\) plot taking advantage of the increased number statistics that we achieved with DR1+DR2. This plot can potentially have a strong predictive power if we identify that only relics (and not all UCMGs in general) are outliers in this relation, having higher velocity dispersion values.
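If that holds, the predictive power reduces to a simple band test: flag the UCMGs whose \(\sigma\) exceeds the upper percentile track of the normal-galaxy relation at the same stellar mass. The sketch below illustrates such a test under our own toy conventions; the percentile track, the function name, and all numbers are invented for illustration and are not the KiDS/SDSS measurements.

```python
import numpy as np

def relic_candidates(logm_ucmg, sigma_ucmg, logm_grid, sigma_p84):
    """Flag UCMGs lying above the 84th percentile of the normal-galaxy
    M*-sigma relation. Because the measured sigmas are lower limits
    (seeing-dominated apertures), exceeding the band is conservative."""
    band = np.interp(logm_ucmg, logm_grid, sigma_p84)
    return sigma_ucmg > band

# toy inputs: a coarse percentile track and two DR2-like galaxies
logm_grid = np.array([10.8, 11.0, 11.2, 11.4])
sigma_p84 = np.array([190.0, 210.0, 235.0, 260.0])   # km/s, illustrative
print(relic_candidates(np.array([11.0, 11.2]),
                       np.array([240.0, 200.0]), logm_grid, sigma_p84))
# -> [ True False ]
```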
\begin{table}
\begin{tabular}{c|c c c}
\hline \hline ID & \(\sigma\) (km/s) & \(\sigma\) (km/s) & \(\Delta\sigma_{XSH-GAMA}\) \\
KIDS & XSH & GAMA & (km/s) \\
\hline J1142+0012 & 130\(\pm\)4 & 155\(\pm\)12 & 25\(\pm\)16 \\
J0909+0147 & 403\(\pm\)7 & 381\(\pm\)17 & -22\(\pm\)24 \\
J1411+0233 & 219\(\pm\)3 & 207\(\pm\)17 & -13\(\pm\)20 \\
J1436+0007 & 194\(\pm\)4 & 186\(\pm\)11 & -8\(\pm\)15 \\
J1156-0023 & 168\(\pm\)5 & 175\(\pm\)35 & 8\(\pm\)40 \\
J0920+0126 & 186\(\pm\)5 & 175\(\pm\)18 & -12\(\pm\)22 \\
J0844+0148 & 229\(\pm\)9 & 185\(\pm\)49 & -44\(\pm\)58 \\
J0904-0018 & 200\(\pm\)7 & 169\(\pm\)29 & -31\(\pm\)36 \\
J1154-0016 & 166\(\pm\)5 & 195\(\pm\)38 & 29\(\pm\)43 \\
J2257-3306 & 180\(\pm\)6 & 168\(\pm\)23 & -12\(\pm\)29 \\
\hline \hline
\end{tabular}
\end{table} Table 5: Comparison with GAMA. Systems are ordered from highest to lowest S/N. The last column shows the difference between the measurements obtained from the two spectra, using a similar wavelength range and the same ADEGREE and MOMENT parameters. The uncertainty on the \(\Delta\sigma\) is assumed to be the sum of the two single uncertainties on the \(\sigma\).

Figure 9: Relative difference between the velocity dispersion measured from XSH and that measured from GAMA plotted against the \(\sigma\) measured from XSH. The fit was made on the ten objects in common within the same wavelength range and with the same fit settings. The disagreement is about 20% (shaded region) or more for galaxies with \(\sigma\) \(<\)170 km/s, and it is much smaller for galaxies with a higher velocity dispersion.

The M\({}_{\star}\)-\(\sigma\) relation is plotted in Fig. 10 for the 40 INSPIRE objects we analysed so far. We recall that the velocity dispersion values measured from the R50 spectra are underestimated because of the seeing effect, and thus must be interpreted as lower limits to the \(\sigma\), as indicated by the arrows. Uncertainties are not drawn for each single system for clarity of the plot, but a mean error, derived from the bootstrapping procedure, is shown in the top left corner of the plot. Blue points are DR1 galaxies, and red points are the new objects that we added with this paper. The solid black line shows the M\({}_{\star}\)-\(\sigma\) relation for a sample of normal-sized ETGs from the KiDS survey for which SDSS DR16 spectroscopy is available. In this case, the velocity dispersion was corrected to R\({}_{\rm e}\). The shaded region represents the 16th-84th percentile confidence interval. The majority of DR1 confirmed relics (filled squares) generally lie above the median trend for normal galaxies in KiDS, and only the DR1 confirmed relics systematically also lie above the 84th percentile. We note, nevertheless, that two (non-extreme) relics confirmed in DR1 do not have \(\sigma\) that exceeds expectations significantly. At least four red points (DR2 UCMGs) appear to have a higher velocity dispersion than normal-sized galaxies of similar stellar mass. Hence, we identify them as the most reliable relic candidates from the DR2 sample.
We therefore conclude that we currently cannot confirm the (tentative) result found in DR1, and that a careful and detailed stellar population analysis remains the only and best way to confirm the relic nature of ultra-compact massive galaxies. This will be performed on the 21 new objects in a forthcoming paper of the INSPIRE series (Spiniello et al., in prep.). With larger number statistics of confirmed relics, we will be able to reassess the situation and finally conclude whether all relics are outliers in the M\({}_{\star}\)-\(\sigma\) relation.

## 7 Summary and conclusions

This paper accompanies the INSPIRE second data release (DR2), which is also released as an ESO Phase 3 collection (see Sec. 2). We have reduced and analysed the X-Shooter spectra of an additional 21 UCMGs, selected from the KiDS Survey (T18; S20) or from the GAMA Survey, to be good relic candidates. These 21 new spectra are added to those already released in Spiniello et al. (2021b) as part of the INSPIRE DR1, and bring the total number of objects analysed so far to 40. After reducing the data with the standard ESO XSH pipeline, up to the production of a 2D spectrum, we have extracted the 1D integrated spectra in each arm, from an aperture containing \(\sim\) 50% of the total light (R50). We note, however, that this does not exactly correspond to extracting spectra at R\({}_{\rm e}\), because the light contained in the R50 aperture is a mixture from inside and outside the real R\({}_{\rm e}\), since the data are seeing limited. In DR1, we showed that the kinematics does not depend on the extraction method. We then corrected the VIS and NIR for telluric contamination using the molecfit code and finally obtained a combined UVB+VIS+NIR spectrum for each galaxy after smoothing the spectra to the common resolution of 2.51 Å in FWHM. This is the same resolution as for the model spectrum templates. For each object, we release three final 1D spectra, one for each arm, at the original instrumental resolution (R\(=\) 3200, R\(=\) 5000, and R\(=\) 4300 in UVB, VIS, and NIR, respectively). The fluxes are given in units of erg cm\({}^{-2}\) s\({}^{-1}\) Å\({}^{-1}\), and the wavelength is always measured in air. The combined and smoothed spectrum is also released as an additional ancillary product, together with the original spectra. Finally, we release the NIR spectra of the 19 systems presented in DR1, for which UVB and VIS are already publicly available. In this case, we provide both the R50 and the OptExt version for completeness (see DR1 for more information). We add the combined and smoothed spectra as ancillary files for these DR1 objects as well. The main result of this paper is an in-depth kinematical analysis that demonstrates the validity and robustness of the stellar velocity dispersion measurements and quantifies the systematics associated with the derivation of kinematic features with full spectral fitting approaches. We presented the integrated \(\sigma\) values obtained from the spectra of the 21 UCMGs and carried out a very detailed quantitative analysis of the statistical and systematic uncertainties on these values due to the different assumptions, parameters, and set-ups of the pPXF code, which was used for the full spectral fitting. For this purpose, we also set up a bootstrap analysis, which allowed us to take into account both the statistical uncertainties from the observations and the uncertainties introduced by the specific parameter set-up.
The conclusion of this analysis is that because of the fairly high S/N of the spectra (\(20<S/N_{\rm MEAN}<85\)), the stellar velocity dispersion values are robust and generally precise to the 5% level. However, the wavelength range used in the fit plays a non-negligible role in inferring the \(\sigma\) values: it can shift the inferred values by \(\sim\) 30% in the worst cases. This mainly depends on the S/N of the input spectrum. In detail, we found that the degree of the additive Legendre polynomial that we used to correct for the continuum shape affects the inferred values of \(\sigma\) only slightly, unless too low (\(<\) 5) or too high (\(>\) 25) values are used. The number of moments of the Gauss-Hermite parameterisation of the LOSVD used in the fit also plays a negligible role on the stellar velocity dispersion values. We found a mean systematic shift of \(\sim 14\) (\(\sim-8\)) km/s between the stellar velocity dispersion values extracted from the UVB (VIS) spectra and those measured from the combined and smoothed spectra. This shift does not depend strongly on the S/N. The values computed from the NIR are overall consistent, but show a much larger scatter (and hence a larger formal error) because of the noisier nature of the spectra at these redder wavelengths (spikes and sky and telluric residuals) and the absence of strong and narrow absorption lines. The uncertainties computed from the NIR appear to be larger for systems with lower \(\sigma\). On the other hand, modifying the wavelength range on which the fit is performed plays the largest role, changing the inferred \(\sigma\) by more than the statistical uncertainties returned by the fit and by up to 20% in the worst cases. The velocity dispersion values are systematically higher when a redder wavelength range is used, within 3000-10000 Å. This mostly happens for lower S/N spectra, as shown in Fig. 4, where the velocity dispersion measurements are significantly larger when the blue end of the wavelength range is excluded. We finally analysed the GAMA spectra of ten galaxies in common with the INSPIRE DR2 sample. We repeated the pPXF run on the GAMA and XSH spectra using an identical configuration for the fit. The velocity dispersion agreed fairly well with the dispersion measured from the XSH spectra in a similar wavelength region. Only for systems with the lowest velocity dispersion were the values inferred from GAMA spectra (with a much lower spectral resolution) overestimated by \(\sim 25\%\).

Figure 10: M\({}_{\star}\)-\(\sigma\) relation for the INSPIRE DR1+DR2 galaxies compared to normal-sized ETGs. Empty blue squares represent the UCMGs in DR1, and filled blue squares indicate confirmed relics. Empty red circles show the galaxies in DR2. The black line draws the median of the stellar mass-velocity dispersion relation for normal galaxies in KiDS with SDSS DR16 spectroscopy, while the shaded region highlights the 16th-84th percentile interval. Because the INSPIRE spectra are seeing dominated, the estimates of the velocity dispersion values must be considered as lower limits. The black error bar in the legend shows the mean error we retrieve from the bootstrap, and the arrows associated with the single INSPIRE objects show the estimated strength of a 7% systematic correction that takes the seeing effect into account (see Appendix A of the INSPIRE DR1).
In conclusion, our work has shown that assessments of stellar kinematics and measurements of the stellar velocity dispersion are mainly robust to different assumptions on the fitting parameters. They are more sensitive to changes in wavelength coverage than previously thought, however. In the next paper of the series (Spiniello et al., in prep.), we will focus on the stellar population properties of these 21 UCMGs. Constraining age, metallicity, and [Mg/Fe], we will be able to confirm a fraction of them as relics, further increasing the number of spectroscopically confirmed relics in the low-\(z\) Universe. We will therefore test whether the velocity dispersion can be used as a selection criterion to select the most reliable relic candidates among UCMGs. Finally, the third and final INSPIRE data release is foreseen after completion of all the observations. Twelve additional UCMGs will be targeted, whose kinematics and stellar populations will be studied. ## Acknowledgements GD acknowledges support by ANID, BASAL, FB210003. CS is supported by an 'Hintze Fellowship' at the Oxford Centre for Astrophysical Surveys, which is funded through generous support from the Hintze Family Charitable Foundation. CS, CT, FLB, AG, SZ, and PS acknowledge funding from the INAF PRIN-INAF 2019 program 1.05.01.85.11. AFM has received financial support through the Postdoctoral Junior Leader Fellowship Programme from 'La Caixa' Banking Foundation (LCF/BQ/L118/11630007) and from the Severo Ochoa Excellence scheme of the MCIU (CEX2019-000920-S). DS is a member of the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. The Geryon cluster at the Centro de Astro-Ingenieria UC was extensively used for the calculations performed in this paper. BASAL CATA PFB-06 and FB210003, the Anillo ACT-86, FONDECQUIP AIC-57, and QUIMAL 130008 provided funding for several improvements to the Geryon cluster. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is [http://www.gama-survey.org/](http://www.gama-survey.org/). Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 179.A-2004 and ID 177.A-3016. The authors wish to thank the ESO Archive Science Group for the great support with the Data Release. We finally thank the anonymous referee for the helpful comments and constructive remarks on this manuscript.
2302.13231
A Synthetic Texas Power System with Time-Series High-Resolution Weather-Dependent Spatio-Temporally Correlated Grid Profiles
This study introduced a synthetic power system with spatio-temporally correlated profiles of solar power, wind power, dynamic line ratings and loads at one-hour resolution for five continuous years, referred to as the Texas 123-bus backbone transmission (TX-123BT) system. Unlike conventional test cases that offer a static snapshot of system profile, the designed TX-123BT system incorporates weather-dependent profiles for renewable generation and transmission thermal limits, mimicking the actual Electric Reliability Council of Texas (ERCOT) system characteristics. Three weather-dependent models are used for the creation of wind and solar power production, and dynamic line rating (DLR) separately. Security-constrained unit commitment (SCUC) is conducted on TX-123BT daily profiles and numerical results are compared with the actual ERCOT system for validation. The long-term spatio-temporal profiles can greatly capture the renewable production versatility due to the environmental conditions. An example of hydrogen facilities integration studies is presented to illustrate the advantage of utilizing detailed spatio-temporal profiles of TX-123BT.
Jin Lu, Xingpeng Li, Hongyi Li, Taher Chegini, Carlos Gamarra, Y. C. Ethan Yang, Margaret Cook, Gavin Dillingham
2023-02-26T04:09:47Z
http://arxiv.org/abs/2302.13231v2
# A Synthetic Texas Backbone Power System with Climate-Dependent Spatio-Temporal Correlated Profiles ###### Abstract Most power system test cases only have electrical parameters and can be used only for studies based on a snapshot of system profiles. To facilitate more comprehensive and practical studies, a synthetic power system including spatio-temporally correlated profiles for the entire year of 2019 at one-hour resolution has been created in this work. This system, referred to as the synthetic Texas 123-bus backbone transmission (TX-123BT) system, has very similar temporal and spatial characteristics to the actual Electric Reliability Council of Texas (ERCOT) system. It has a backbone network consisting of only high-voltage transmission lines in Texas, which is obtained by the K-medoids clustering method. The climate data extracted from the North American Land Data Assimilation System (NLDAS) are used to create the climate-dependent profiles of renewable generation and transmission thermal limits. Two climate-dependent models are implemented to determine wind and solar power production profiles respectively. In addition, two sets of climate-dependent dynamic line rating (DLR) profiles are created with the actual climate information: (i) daily DLR and (ii) hourly DLR. Simulation results of security-constrained unit commitment (SCUC) conducted on each of the daily system profiles have validated the developed one-year hourly time series dataset. _Index Terms:_ Annual power system profiles, backbone transmission topology, climate-dependent renewable models, dynamic line rating, K-medoids clustering, power system operations, security-constrained unit commitment, test power system, transmission line rating. ## I Introduction Many power system studies are simulation-based and rely on synthetic power system test cases due to the limited availability of real system data. These studies cover a wide range of topics including power system stability, reliability, operation, planning, restoration, and state estimation. For example, security-constrained unit commitment (SCUC), security-constrained economic dispatch (SCED), and transmission expansion planning (TEP) are commonly used in power system operational and planning studies [1]-[3]. Restoration and other operation strategies in grid resilience studies also require test case validation [4]-[5]. Generally, a test case includes all the relevant information for generation, transmission and load. The commonly used test cases are IEEE and CIGRE benchmarks such as the IEEE 118 bus system and CIGRE medium voltage system [6]. Besides these small-scale test power systems, very few large-scale real power system cases are publicly accessible due to the confidentiality of the power industry. To meet the research requirements for large-scale test cases without access to real large systems, some synthetic test cases are created that resemble the actual systems based on their electrical characteristics. The Polish 2746-bus system is created based on the real power system of Poland [7]. Synthetic grids utilizing the footprint of the western, northeastern, and eastern U.S. regions have been created, and each grid contains more than ten thousand buses [8]. Most existing test cases provide the technical details for steady-state analysis, such as power flow, and/or transient-state analysis such as stability simulation. However, these test cases only provide the data for a certain snapshot in time.
The long-term time series system profiles are not provided in these test power system cases. Prior efforts show that power systems will face more climate-related challenges in the 21st century [9]. Despite the new challenges brought by climate change, the performance and mitigation strategy of the climate-impacted power system are not being investigated in depth. To pave the way for these research and industry developments, we have developed a synthetic Texas 123-bus backbone transmission (TX-123BT) system using actual historical climate information to represent the standalone Texas grid within the Electric Reliability Council of Texas (ERCOT) region. The developed TX-123BT system contains the spatio-temporally correlated profiles generated by various climate-dependent models for transmission line rating and renewable generation. Multiple power system studies such as SCUC on large-scale systems are computationally intensive [10]. To facilitate climate-impact studies and other studies involving large geographical areas and time-sequential data, the test system should focus on the critical backbone transmission network while containing all the essential details of the test case. Hence, the proposed synthetic TX-123BT system has a backbone network topology, which is obtained by the K-medoids clustering method. It has one entire year of data at one-hour resolution with detailed nodal information of all 123 buses in the system. The created TX-123BT system, including its network and generator configurations, spatio-temporally correlated profiles and related climate data, has been published in full [11]-[12] and can be freely accessed for research purposes. The main contributions of this paper are as follows: * The TX-123BT system has a backbone network created by K-medoids clustering. It comprises the essential geographical and electrical information of the unreduced system. The backbone system is time efficient for computation-intensive simulations. * The climate data at all the bus locations in TX-123BT are extracted from the North American Land Data Assimilation System (NLDAS) [13]-[14]. They include air temperature, solar radiation, and wind speed for 2019, and are used to create the climate-dependent time series profiles of the TX-123BT. Hence, the created profiles of solar power, wind power, electrical load, and line thermal rating are spatio-temporally correlated with each other. * The TX-123BT includes both the hourly and daily dynamic line rating (DLR) profiles. The performances of these two DLR techniques are examined and compared. * The SCUC simulation is conducted on all the daily profiles in 2019 for validation. The transmission line congestion and locational marginal prices (LMPs) are also analyzed. The rest of this work is structured as follows. Section II explains the procedures implemented in this work to transform a large-area transmission network into a 123-bus backbone network. Section III presents the generator specifications, while Section IV presents the climate-dependent renewable production models used for creating renewable production profiles of the TX-123BT. The proposed method for creating the nodal load profiles is described in Section V. Section VI presents the climate-dependent transmission line rating model; the line capacity data representing daily and hourly DLR are also summarized. The SCUC simulation results are analyzed in Section VII. The conclusions are drawn in Section VIII.
## II Cluster-Based Backbone Network Topology A bulk power system connects the generation resources and loads with a meshed network including substations and transmission lines. Hence, the power system topology highly depends on the distribution of generation and load, which is related to locational conditions. For example, most large loads are located in/near the cities, and the number of transmission lines in these areas is generally larger than in other areas. The synthetic TX-123BT test case is proposed to serve as a benchmark power system for studies on large geographical areas, such as analyzing the impact of climate change on power systems. The transmission network in a wide geographical area contains a large number of transmission lines and substations. The complexity of the transmission topology is due to the locally detailed transmission information, which is not necessary for many research topics of interest. Such complexity will increase the computational burden and become an obstacle for most research studies. Hence, we need to reduce the power system topology to relieve the computational burden while retaining critical information. A typical power system network includes transmission lines at different voltage levels. The transmission lines with higher voltage levels usually have higher capacities and can carry more power flows; in addition, a higher voltage level indicates a longer transmission distance. By extracting the high-voltage transmission network, we can easily obtain a backbone network that includes almost all the transmission lines with large power flows. We first extract the 345 kV backbone transmission network of the synthetic power system based on the footprint of Texas [15]. This creates a 225-bus 345 kV transmission network topology as shown in Fig. 1. For a wide geographical area, the high voltage level transmission network may still include local transmission details, which are unimportant for system-level analysis. Hence, we need to further reduce the data scale while keeping the key information of the test system. Based on different test cases and actual grid data, most short-distance transmission lines are located near the cities. The reactance of these lines is small, and the power flows in these lines are often well below the thermal limits. Therefore, a clustering method should be implemented to aggregate the buses of the 225-bus backbone case. The K-medoids clustering method can find the centroid that is an actual point in the cluster. Also, it is less sensitive to outliers than the typical K-means clustering method. Using the K-medoids clustering method can better select the backbone buses from the unreduced transmission network, which includes buses distributed in remote areas.

Fig. 1: Illustration of the Texas 225-bus transmission network topology.

Fig. 2: Illustration of the 123-bus transmission network topology.

The K-medoids clustering algorithm can be described by the following steps: i) randomly select some buses as the initial centroids of clusters; ii) for each bus which is not a centroid, assign it to the cluster whose centroid is closest to it; iii) for each cluster, identify the bus that has the smallest total distance to all other buses in the same cluster as the new centroid of the cluster; iv) if the centroid solution changes, go to step ii; otherwise, stop and report the clustering results. The distance between two buses is calculated using the haversine formula, as shown below.

\[a=\sin^{2}\Big(\frac{\varphi_{1}-\varphi_{2}}{2}\Big)+\cos\varphi_{1}\cdot\cos\varphi_{2}\cdot\sin^{2}\Big(\frac{\lambda_{1}-\lambda_{2}}{2}\Big) \tag{1}\]
\[c=2\cdot\mathrm{atan2}(\sqrt{a},\sqrt{1-a}) \tag{2}\]
\[d=R\cdot c \tag{3}\]

where \(\varphi\) represents the latitude of a bus, \(\lambda\) represents the longitude of a bus, \(R\) is the radius of the Earth, and \(d\) is the distance between the two buses.
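The clustering procedure and Eqs. (1)-(3) can be sketched in a few lines of code. The implementation below is a minimal illustration under simplifying assumptions (random initial medoids, clusters assumed to remain non-empty); the function names and array layout are ours, not those of the released dataset.

```python
import numpy as np

R_EARTH_KM = 6371.0

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance between two buses, Eqs. (1)-(3)."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return R_EARTH_KM * 2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a))

def k_medoids(coords, k, seed=0, max_iter=100):
    """Steps i)-iv); coords is an (n, 2) array of (lat, lon) per bus."""
    n = len(coords)
    dist = haversine(coords[:, None, 0], coords[:, None, 1],
                     coords[None, :, 0], coords[None, :, 1])  # (n, n) matrix
    rng = np.random.default_rng(seed)
    medoids = rng.choice(n, size=k, replace=False)            # step i)
    for _ in range(max_iter):
        labels = dist[:, medoids].argmin(axis=1)              # step ii)
        # step iii): in each cluster, pick the member minimising the
        # total distance to the other members (clusters assumed non-empty)
        new = np.array([np.flatnonzero(labels == c)[
            dist[np.ix_(labels == c, labels == c)].sum(axis=1).argmin()]
            for c in range(k)])
        if np.array_equal(np.sort(new), np.sort(medoids)):    # step iv)
            break
        medoids = new
    return medoids, labels
```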
We created the backbone transmission network topology using the K-medoids clustering method. The transmission network has well-clustered buses. However, some essential buses in remote areas are not included in the backbone network. Hence, we manually add three buses and related lines into the backbone network after a topology comparison with the original 225-bus system. AC power flow simulation is conducted to verify the 123-bus backbone topology. The reduced 123-bus backbone transmission topology is illustrated in Fig. 2, which is very similar to the 225-bus network topology represented by Fig. 1. The major difference is that the 123-bus network has far fewer buses in the dense city areas. This change can reduce computational complexity in various simulations but does not affect studies such as network congestion analysis.

## III Conventional Generation Profiles

### _Generation Fuel Mix_

Based on the ERCOT's energy production and generation capacity by fuel types [16, 17], we can conclude that the major generation fuel types in the ERCOT system are natural gas, wind, coal, nuclear, and solar. In addition, the generation fuel composition may vary in different regions of the ERCOT system. For example, most wind generators are in the northwest of Texas due to the wind source distribution. The TX-123BT system is created to have a very similar system-wide as well as region-wide generation fuel mix to the actual ERCOT system. Based on the generation characteristics of various fuel types for different weather zones and the whole ERCOT [18, 19, 20], the fuel types of the generators in the TX-123BT system are assigned accordingly, and each generator's power capacity is within the capacity range of the corresponding type of generators. In addition, based on the data provided by the Energy Information Administration (EIA) [18], ten hydro power plants, each of which is over 10 MW, are also added to the TX-123BT system. Although there are fewer hydro power plants compared to other types of power plants in ERCOT, representing a small portion of total generation, they have low operation costs and high ramping rates, which may substantially affect the electricity market and grid reliability. The statistical data of the generation profiles of the developed TX-123BT system are shown in Tables I-II. The system-wide generation fuel mix in the TX-123BT (last row in Table II) is similar to the actual fuel mix provided by ERCOT [21]. Part of the generator power capacity and fuel type data are shown in Table III. It is worth noting that the renewable generation capacity has grown rapidly in ERCOT. We have adjusted the wind and solar capacity to match the actual ERCOT renewable capacity in 2019.

### _Conventional Generator Cost & Operation Parameters_

#### III-B1 Coal & Natural Gas Generator

The quadratic function (4) is used to model the thermal power plant's operation cost (\(c_{g}\)). The coefficients \(\mathcal{C}_{0}\), \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) of typical coal generators and natural gas generators are determined per [22]. The generator startup cost (\(c_{g}^{SU}\)) can be calculated using (5).

\[c_{g}=\mathcal{C}_{0}+\mathcal{C}_{1}\cdot P+\mathcal{C}_{2}\cdot P^{2} \tag{4}\]
\[c_{g}^{SU}=\eta_{g}^{SU}\cdot P_{g}^{Max}\cdot C^{F}+P_{g}^{SU}\cdot C_{g}^{O} \tag{5}\]

where \(P_{g}^{Max}\) is the generator active power capacity, \(\eta_{g}^{SU}\) is the startup fuel per unit capacity, \(C^{F}\) is the fuel price, \(P_{g}^{SU}\) is the startup power and \(C_{g}^{O}\) is another startup cost related to the required startup power.
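Equations (4)-(5) translate directly into code. The sketch below is only illustrative; the startup-fuel and startup-power figures in the example are invented placeholders, not values from the released generator tables.

```python
def operation_cost(p_mw, c0, c1, c2):
    """Quadratic fuel-cost curve of Eq. (4), in $/h at output p_mw."""
    return c0 + c1 * p_mw + c2 * p_mw**2

def startup_cost(p_max, eta_su, fuel_price, p_su, c_other):
    """Startup cost of Eq. (5): startup fuel per unit capacity times
    capacity and fuel price, plus a cost tied to the startup power."""
    return eta_su * p_max * fuel_price + p_su * c_other

# Illustrative numbers only: a 177.3 MW gas unit with the paper's 2019
# Texas gas price of 2.29 $/MMBtu; eta_su, p_su and c_other are assumed.
print(startup_cost(p_max=177.3, eta_su=10.0, fuel_price=2.29,
                   p_su=50.0, c_other=20.0))
```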
The coal price used in creating the TX-123BT system is 1.78 $/MMBtu based on EIA [22]. The annual average natural gas price in Texas is 2.29 $/\(Kft^{3}\). As the natural gas heat content is set to be \(1000\;Btu/ft^{3}\) [23], the natural gas price becomes \(2.29\;\$/MMBtu\). Based on the above information, the total startup cost is calculated for the coal and natural gas generators in the TX-123BT test system. In addition, the shutdown costs are also calculated following [19]. The ramping rate, minimum off time, and maximum on time of the coal and natural gas generators are obtained per [20]. The startup time and shutdown time are obtained from [25]. Tables IV through VII show the parameters for representative coal and natural gas generators respectively.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline Weather Zone & Natural Gas & Wind & Coal & Solar & Nuclear & Hydro \\
\hline Coast & 31 & 0 & 2 & 8 & 1 & 0 \\
East & 15 & 0 & 2 & 0 & 0 & 0 \\
Far West & 4 & 11 & 0 & 17 & 0 & 0 \\
North & 4 & 9 & 1 & 6 & 0 & 1 \\
North Central & 14 & 10 & 4 & 9 & 1 & 1 \\
South & 11 & 18 & 1 & 3 & 0 & 1 \\
South Central & 29 & 0 & 3 & 23 & 0 & 6 \\
West & 5 & 34 & 0 & 6 & 0 & 1 \\
Total & 113 & 82 & 13 & 72 & 2 & 10 \\
\hline
\end{tabular}
\end{table} TABLE II: Total Capacities (MW) of Different Fuel Type Generators

\begin{table}
\begin{tabular}{c|c|c|c|c|c|c}
\hline Generator & Bus & Pmax & Pmin & Qmax & Qmin & Fuel type \\
Number & Number & (MW) & (MW) & (MVar) & (MVar) & \\
\hline 1 & 107 & 2430 & 729 & 894.24 & -199.26 & Nuclear \\
6 & 100 & 842.5 & 252.75 & 392.6 & -102.79 & Coal \\
7 & 100 & 177.3 & 53.19 & 99.29 & -11.35 & Natural Gas \\
9 & 50 & 1.5 & 0.45 & 0 & 0 & Solar \\
19 & 32 & 643.2 & 0 & 332.2 & 0 & Wind \\
283 & 61 & 54.9 & 0 & 23.5 & 0 & Hydro \\
\hline
\end{tabular}
\end{table} TABLE III: Sample of Generator Capacity Profiles in the TX-123BT System

## IV Climate-Dependent Renewable Production Models

### _Climate-Dependent Wind Model and Production Profiles_

The wind speed at the hub height of the wind turbines is extrapolated from the NLDAS 10-m wind speed using the log wind profile, \(V(z_{2})=V(z_{1})\cdot\ln\big(\frac{z_{2}-d}{z_{0}}\big)/\ln\big(\frac{z_{1}-d}{z_{0}}\big)\), where \(z_{0}\) is the roughness of the surface and \(d\) is the zero-plane displacement. Based on the terrain of Texas, \(z_{0}\) is set to 0.3 and \(d\) is set to 6. The wind speed at 80 m is about 2.13 times the wind speed at 10 m. To create more practical climate-dependent wind production profiles, the capacities and geographic locations of the wind power plants in the TX-123BT should be close to the actual ERCOT system. The wind plant capacities in the TX-123BT system are adjusted to match the actual ERCOT wind generation in 2019 using the least square method, as described by (9)-(14).

\[\min\sum_{h\in H}\Big(\sum_{i\in N^{W}}P_{i,h}^{W,Case}-P_{h}^{ERCOT}\Big)^{2} \tag{9}\]
\[P_{i,h}^{W,Case}=k_{i,h}^{W}\cdot C_{i}^{W}\cdot V_{i,h}^{3}\quad\forall\;i\in N^{W},\;h\in H \tag{10}\]
\[k_{i,h}^{W}=k_{i,h+24}^{W}\quad\forall\;i\in N^{W},\;h\in H \tag{11}\]
\[-0.0001\leq k_{i,h}^{W}-k_{i,h-1}^{W}\leq 0.0001\quad\forall\;i\in N^{W},\;h\in H \tag{12}\]
\[-50\leq C_{i}^{W}-C_{i}^{W0}\leq 50\quad\forall\;i\in N^{W} \tag{13}\]
\[C_{i}^{W}\geq 0\quad\forall\;i\in N^{W} \tag{14}\]

The least square method adjusts the wind power plant capacities in the TX-123BT to minimize the square error between the wind production of the TX-123BT and that of ERCOT per (9). \(P_{i,h}^{W,Case}\) is the power output of wind farm \(i\) in hour \(h\) in the TX-123BT. \(P_{h}^{ERCOT}\) is the total wind output power of the actual ERCOT system in hour \(h\). Hour \(h\) is an hour in 2019. The aggregated wind power production in a wind farm is related to the adjusted capacity of wind farm \(C_{i}^{W}\) and the wind speed \(V_{i,h}\) per (10). The wind turbine coefficient \(k\) is a comprehensive coefficient considering various factors including the wind direction and wind turbine efficiency. The wind turbine coefficient \(k\) is assumed to be a constant for a specific wind turbine for each hour of the day per (11). The changing magnitude of \(k\) is limited to 0.0001 over two consecutive hours per (12). Besides, the adjustment of the capacity of each wind farm in TX-123BT is less than 50 MW per (13). The adjusted wind capacity should be non-negative per (14). The least square method can find the most realistic wind turbine coefficients and capacities for wind farms in the TX-123BT system.
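A simplified, runnable version of the capacity-fitting step is sketched below: the hourly turbine coefficients \(k\) are held fixed rather than co-optimized, which reduces (9)-(14) to a bounded linear least-squares problem that scipy solves directly. All names and toy numbers are ours.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_wind_capacities(v, k, c0, p_ercot):
    """Bounded least-squares fit of wind-farm capacities, a simplified
    version of Eqs. (9)-(14) with the coefficients k held fixed.
    v: (H, N) hub-height wind speeds, k: (H,) coefficient per hour,
    c0: (N,) initial capacities in MW, p_ercot: (H,) target production."""
    a = k[:, None] * v**3                      # Eq. (10): P = k * C * V^3
    lo = np.maximum(c0 - 50.0, 0.0)            # Eqs. (13)-(14)
    hi = c0 + 50.0
    return lsq_linear(a, p_ercot, bounds=(lo, hi)).x

# toy data, purely illustrative
rng = np.random.default_rng(1)
v = rng.uniform(3, 12, size=(48, 5))           # two days, five wind farms
k = np.full(48, 5e-4)
c0 = np.array([100.0, 250.0, 80.0, 300.0, 150.0])
p_target = (k[:, None] * v**3 * c0).sum(axis=1)
print(fit_wind_capacities(v, k, c0, p_target).round(1))
```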
The created wind production hourly time series profiles are compared to the corresponding real ERCOT wind production in 2019 in Fig. 3. The mean hourly wind power profile (within a day), averaged over the 365 days of 2019, is compared to the actual ERCOT hourly statistics in Fig. 4. According to the comparison, we can conclude that the created wind production profiles are very similar to the actual situation. The hourly wind power production from seven wind farms at bus 119 on January 3, 2019, is illustrated in Fig. 5. We can observe that each wind plant's production varies according to the wind speed at bus 119.

Fig. 3: Wind power production for all hours in 2019.

Fig. 4: Hourly wind power profiles comparison.

Fig. 5: Output power of seven wind plants (WP) on January 3, 2019.

### _Climate-Dependent Solar Model and Production Profiles_

A five-parameter single diode equivalent circuit is commonly used and suitable for PV cells, modules and arrays [34]. In [34], the operation condition variables (temperature, radiation, and air mass) are used in a five-parameter equation. Since we are mainly interested in the maximum available solar power output at different radiation and temperature levels, we can calculate the maximum power point using (15) [35]-[36].

\[P_{mp}=\frac{E_{e}}{E_{0}}\cdot P_{mp0}\cdot[1+\gamma\cdot(T_{c}-T_{0})] \tag{15}\]

where \(P_{mp}\) is the maximum power output for a certain operation condition. \(E_{e}\) and \(T_{c}\) are the effective radiation and temperature on the solar cells respectively. \(P_{mp0}\) is the maximum power output at the standard testing condition (STC). \(E_{0}\) and \(T_{0}\) are the radiation and temperature at STC respectively. \(\gamma\) is the temperature coefficient that indicates the influence of the temperature on the solar power transfer efficiency. The NLDAS-2 provides the historical data for the shortwave and longwave solar radiation flux downwards. Based on the spectral response range of widely used solar panels, we use the radiation flux downwards to estimate the effective solar radiation on the solar panels. We also estimate the solar cell temperature using the ambient air temperature at the corresponding solar panel. Based on the processed climate data and the solar power production model, the solar power production for all the solar farms in the TX-123BT system is calculated. The system-wide hourly solar production of the TX-123BT is compared with the ERCOT solar production in 2022 (the historical data of 2019 is not accessible). The hourly solar productions averaged over all the days in Quarter 1 for the synthetic TX-123BT system and the actual ERCOT system are shown in Fig. 6. We can observe that the deviation is within a reasonable range. The hourly solar power production for four solar farms in TX-123BT is shown in Fig. 7.
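Equation (15) is a one-liner in code. In the sketch below, the temperature coefficient \(\gamma\) is set to a typical crystalline-silicon value of \(-0.004\) per degree Celsius as an assumption (the paper does not state its value here), and the function name is ours.

```python
def solar_max_power(e_eff, t_cell, p_mp0, gamma=-0.004,
                    e0=1000.0, t0=25.0):
    """Maximum power point of Eq. (15): output scales linearly with the
    effective irradiance and is derated by the cell temperature.
    e_eff in W/m^2, temperatures in deg C, p_mp0 = rating at STC."""
    return (e_eff / e0) * p_mp0 * (1.0 + gamma * (t_cell - t0))

# e.g. a 1.5 MW plant at 800 W/m^2 and a 40 C cell temperature:
print(solar_max_power(800.0, 40.0, p_mp0=1.5))   # ~1.13 MW
```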
## VI Climate-dependent Transmission Line Rating

The transmission line thermal capacity in the three-phase system can be calculated given the line ampacity and line voltage. The IEEE Std 738-2012 [39] is used to calculate the ampacity of lines at different temperature, solar radiation and wind speed conditions. The detailed calculation is described by (16)-(23).

\[q_{c}+q_{r}=q_{s}+I^{2}\cdot R(T_{avg}) \tag{16}\]
\[I=\sqrt{\frac{q_{c}+q_{r}-q_{s}}{R(T_{avg})}} \tag{17}\]
\[q_{c1}=K_{angle}\cdot[1.01+1.35\cdot N_{Re}^{0.52}]\cdot k_{f}\cdot(T_{s}-T_{a}) \tag{18}\]
\[q_{c2}=K_{angle}\cdot 0.754\cdot N_{Re}^{0.6}\cdot k_{f}\cdot(T_{s}-T_{a}) \tag{19}\]
\[K_{angle}=1.194-\cos(\varphi)+0.194\cdot\cos(2\varphi)+0.368\cdot\sin(2\varphi) \tag{20}\]
\[q_{r}=17.8\cdot D_{0}\cdot\varepsilon\cdot\Big[\Big(\frac{T_{s}+273}{100}\Big)^{4}-\Big(\frac{T_{a}+273}{100}\Big)^{4}\Big] \tag{21}\]
\[q_{s}=\alpha\cdot Q_{se}\cdot\sin(\theta)\cdot A^{\prime} \tag{22}\]
\[\theta=\cos^{-1}[\cos(H_{c})\cdot\cos(Z_{c}-Z_{l})] \tag{23}\]

Equation (16) is the heat balance equation of the conductor. \(q_{c}\) is the convective heat loss. \(q_{r}\) is the radiated heat loss rate. \(q_{s}\) is the rate of solar heat gain. \(I\) is the current in the conductor and \(R(T_{avg})\) is the conductor resistance at temperature \(T_{avg}\), which is the average temperature in the conductor. (16) can be transformed into (17), which can be used to calculate the current in the conductor at the conductor maximum temperature. In (18)-(19), \(q_{c1}\) and \(q_{c2}\) are the forced convection heat losses, and the higher value of \(q_{c1}\) and \(q_{c2}\) is used as the value of \(q_{c}\). \(N_{Re}\) is the dimensionless Reynolds number. \(k_{f}\) is the thermal conductivity of air. \(T_{s}\) is the conductor surface temperature, and \(T_{a}\) is the ambient temperature. \(K_{angle}\) in (20) is the wind direction factor, and \(\varphi\) is the angle between the wind direction and the conductor axis. In (21), the radiated heat loss is related to the diameter of the conductor \(D_{0}\), the emissivity \(\varepsilon\), the conductor surface temperature \(T_{s}\), and the ambient temperature \(T_{a}\). The rate of solar heat gain can be calculated by (22). \(\alpha\) is the solar absorptivity. \(Q_{se}\) is the total solar and sky radiated heat intensity corrected for the elevation. \(A^{\prime}\) is the projected area of the conductor. \(\theta\) is the effective angle of incidence of the sun's rays. In (23), \(\theta\) is determined by the altitude of the sun \(H_{c}\), the azimuth of the sun \(Z_{c}\), and the azimuth of the line \(Z_{l}\). There are three types of aluminium conductor steel reinforced (ACSR) conductors used for the transmission lines in the TX-123BT system: Kiwi, Bobolink and Finch. Different types of ACSR conductors have different conductor diameters and resistance-versus-temperature characteristics. We use the linear approximation shown in (24) for determining the conductor resistance at a certain temperature. \(R(T_{high})\) and \(R(T_{low})\) are the conductor resistances at temperature \(T_{high}\) and \(T_{low}\) respectively.

\[R(T_{avg})=\Big[\frac{R(T_{high})-R(T_{low})}{T_{high}-T_{low}}\Big]\cdot(T_{avg}-T_{low})+R(T_{low}) \tag{24}\]
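The heat-balance equations translate into a short routine. The following is a simplified sketch, not the paper's implementation: it works in SI units per metre of conductor, collapses the projected area to the diameter with \(\sin(\theta)=1\), and the kinematic viscosity and the example conductor numbers are assumed values.

```python
import numpy as np

def ampacity(t_s, t_a, wind_perp, q_se, d0, r_cond,
             k_f=0.0295, eps=0.8, alpha=0.8, k_angle=1.0, nu=1.6e-5):
    """Steady-state ampacity from the heat balance of Eqs. (16)-(23),
    per metre of conductor; nu is an assumed kinematic viscosity of air."""
    n_re = wind_perp * d0 / nu                               # Reynolds number
    qc1 = k_angle * (1.01 + 1.35 * n_re**0.52) * k_f * (t_s - t_a)  # Eq. (18)
    qc2 = k_angle * 0.754 * n_re**0.6 * k_f * (t_s - t_a)           # Eq. (19)
    q_c = max(qc1, qc2)                                      # forced convection
    q_r = 17.8 * d0 * eps * (((t_s + 273) / 100)**4
                             - ((t_a + 273) / 100)**4)       # Eq. (21)
    q_s = alpha * q_se * d0                                  # Eq. (22), sin(theta)=1
    return np.sqrt(max(q_c + q_r - q_s, 0.0) / r_cond)       # Eq. (17)

# e.g. a Finch-like conductor (assumed d0 ~ 0.033 m, R ~ 7e-5 ohm/m at 90 C)
print(ampacity(t_s=90.0, t_a=35.0, wind_perp=0.6, q_se=1000.0,
               d0=0.033, r_cond=7e-5))                       # ~1200 A
```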
\begin{table}
\begin{tabular}{c c c c c c c}
\hline Bus Number & Hour 1 & Hour 5 & Hour 9 & Hour 13 & Hour 17 & Hour 21 \\
\hline Bus 1 & 96.51 & 102.39 & 118.19 & 91.76 & 84.25 & 96.28 \\
Bus 2 & 97.86 & 103.56 & 116.47 & 95.4 & 87.29 & 97.41 \\
Bus 3 & 171.3 & 172.01 & 173.67 & 161.79 & 158.4 & 165.91 \\
Bus 4 & 198.59 & 202.18 & 231.79 & 213.28 & 208.4 & 212.48 \\
Bus 5 & 25.58 & 24.5 & 28.54 & 28.53 & 28.73 & 28.21 \\
Bus 6 & 606.36 & 641.1 & 754.35 & 608.1 & 555.7 & 613.43 \\
Bus 7 & 42.61 & 45.05 & 53.01 & 42.73 & 39.05 & 43.11 \\
\hline
\end{tabular}
\end{table} TABLE IX: Sample Load Profiles for Different Hours on December 31, 2019

Fig. 6: Averaged hourly solar power production in Quarter 1.

Fig. 7: Hourly power output of four solar plants (SP) on January 1, 2019.

Although the extracted historical climate data have detailed nodal information at one-hour resolution, they do not perfectly meet the needs of the transmission line calculation. Several assumptions are made as follows. First, the line ambient temperature is assumed to be the same as the temperature 2 meters above ground. Second, since most long-distance transmission lines are overhead lines and the transmission towers are generally 55-150 feet (16.8 m-45.72 m) tall, the wind speed at the transmission line's height is estimated using the aforementioned log wind profile method. Third, the angle between the wind direction and the transmission line is assumed to be 45 degrees. The wind speed perpendicular to the conductor \(V_{w}\) can be calculated using (25)-(26). In (25), \(V_{z}\) and \(V_{m}\) are the zonal and meridional wind speeds extracted from NLDAS. \(V_{wind}\) is the composite speed.

\[V_{wind}=\sqrt{V_{z}^{2}+V_{m}^{2}} \tag{25}\]
\[V_{w}=V_{wind}\cdot\sin(45^{\circ}) \tag{26}\]

The total heat intensity corrected for elevation \(Q_{se}\) is calculated using (27)-(29) per the IEEE Std 738-2012.

\[Q_{s}=A+B\cdot H_{c}+C\cdot H_{c}^{2}+D\cdot H_{c}^{3}+E\cdot H_{c}^{4}+F\cdot H_{c}^{5}+G\cdot H_{c}^{6} \tag{27}\]
\[Q_{se}=K_{solar}\cdot Q_{s} \tag{28}\]
\[K_{solar}=A+B\cdot H_{e}+C\cdot H_{e}^{2} \tag{29}\]

In (27)-(29), \(Q_{s}\) is the total heat flux density received by a surface at sea level. \(K_{solar}\) is the elevation corrective factor. \(A\), \(B\), \(C\), \(D\), \(E\), \(F\), and \(G\) are polynomial coefficients. The total heat flux received by the Earth's surface \(Q_{s}\) is assumed to be the summation of the downward shortwave radiation \(Q_{short}\) and longwave radiation \(Q_{long}\), which are the data extracted from NLDAS. Radiation from the Earth's surface is omitted. Thus, \(Q_{s}\) can be calculated using (30).

\[Q_{s}=Q_{short}+Q_{long} \tag{30}\]

In the calculation, the environmental parameters are determined based on the actual Texas conditions. The altitude and azimuth of the sun at noon are used in the calculation. The altitude of the sun \(H_{c}\) is calculated based on the average latitude of Texas, which is 30.5\({}^{\circ}\) N. The elevation of the conductor above sea level \(H_{e}\) is set to the Texas average elevation. For ACSR transmission lines, the common continuous operational maximum temperature is 90\({}^{\circ}\)C. The parameters for the Texas line ampacity calculation are listed in Table X.
Dynamic line rating is an effective strategy in power system operations to fully utilize the available transmission capacity of the lines under various environmental conditions. In this paper, we have created two sets of line rating profiles: one using daily DLR and one using hourly DLR. The daily DLR profile keeps the same fixed line rating for the entire day, an approach used by many power system operators. We use the highest hourly temperature, the highest solar radiation, and the lowest hourly wind speed as the environmental values in the daily line rating calculation. The daily line rating profiles are calculated for all the days in 2019. The daily thermal ratings of line 15 during the year 2019 are shown as an example in Fig. 8. The hourly line rating method can capture the suitable line rating for each hour in the daily operation. Each hour's temperature, solar radiation, and wind speed are used in the line rating calculation for the corresponding hour. Hourly line ratings are higher than the daily line rating in most instances. Hence, using hourly line ratings can reduce operational costs and improve system operational efficiency. The hourly line ratings are calculated for all the hours in 2019. The hourly thermal ratings of line 15 in four typical days for different quarters are shown in Fig. 9.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline \(H_{c}\) & The altitude of the sun & 30.5 & deg \\
\hline \(Z_{c}\) & The azimuth of the sun & 180 & deg \\
\hline \(H_{e}\) & The elevation of conductor above sea level & 520 & m \\
\hline \(\varepsilon\) & The emissivity & 0.8 & - \\
\hline \(\alpha\) & The solar absorptivity & 0.8 & - \\
\hline \(\mu_{f}\) & The air viscosity & 2.04e-5 & - \\
\hline \(T_{film}\) & Average temperature of the boundary layer & 70 & °C \\
\hline \(k_{f}\) & The thermal conductivity of air & 0.0295 & W/m-°C \\
\hline \(T_{c}\) & The conductor maximum temperature & 90 & °C \\
\hline
\end{tabular}
\end{table} TABLE X: Some Input Data for Texas Line Ampacity Calculation

Fig. 8: The daily thermal ratings of line 15 during the year 2019.

Fig. 9: Thermal ratings of line 15 in four typical days for different quarters.
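A sketch of how the two rating products could be derived from the same weather series is given below; it reuses the hypothetical `ampacity` helper from the previous sketch, takes numpy arrays of hourly weather, and applies the worst-hour rule stated above (highest temperature and radiation, lowest wind) for the conservative daily value.

```python
import numpy as np

def hourly_and_daily_ratings(t_a, wind_perp, q_se, line, mva_factor):
    """24 hourly DLR values plus one conservative daily rating built
    from the day's worst-case weather; `line` holds assumed conductor
    data, e.g. {"d0": 0.033, "r": 7e-5}."""
    hourly = np.array([ampacity(90.0, t, w, q, line["d0"], line["r"])
                       for t, w, q in zip(t_a, wind_perp, q_se)])
    daily = ampacity(90.0, t_a.max(), wind_perp.min(), q_se.max(),
                     line["d0"], line["r"])
    return hourly * mva_factor, daily * mva_factor  # amps -> MVA

# For a three-phase 345 kV line, mva_factor = sqrt(3) * 345 / 1000,
# i.e. roughly 0.5976 MVA per ampere.
```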
#### Iv-B1 Actual ERCOT Electricity Market Electricity prices are affected by many factors and one year's price data may not well reflect the actual electricity market. Hence, we collect and analyze the day ahead market (DAM) price data in a 5-year period (2015-2019). After observing DAM prices for different hours and load zones under different scenarios, some characteristics of the actual ERCOT electricity prices are observed and summarized as follows, * The electricity prices on weekends are usually lower than the prices on weekdays. * Quarter 3 has the highest electricity price while Quarter 1 has the lowest electricity price. * The electricity prices at different load zones are slightly different during off-peak hours, but the electricity prices usually have larger locational variety during peak hours, especially in Quarter 3. * For the peak hours around 15:00-18:00, the electricity prices are much higher than the off-peak hour prices in Quarter 3. #### Iv-B2 Synthetic TX-123BT Electricity Market The LMPs of the TX-123BT are obtained using the dual variables of the nodal power balance constraints in SCUC simulation. After the analysis of the TX-123BT LMPs and the ERCOT DAM prices, we conclude that the two systems have very similar nodal electricity prices range under different scenarios, which is shown in Table XI. The system-wide electricity prices for different typical seasonal days are shown in Figs. 10a-10d. We can observe that, in Quarters 3, the electricity prices are higher than in Quarters 1. This disparity can be explained by the larger demands in Quarters 3. The high demands require generators which are more expensive for electricity production to come online, resulting in higher electricity prices in these quarters. The day with the highest load among all the days in 2019 is selected as the peak load day. Two scatter plots of nodal LMPs for the normal load day and peak load day are shown in Fig. 11 - Fig. 12. From the simulation results, we can conclude that the electricity prices at different load zones are slightly different during low load demand scenarios (for most buses). However, the electricity prices locational variety is large during peak hours in Quarter 3. The characteristics of the LMPs are in line with the actual ERCOT electricity price characteristics that we summarized in the above subsection. ### _Peak Load Scenarios and Line Congestion Analysis_ Congested lines are classified based on the power flow results of the SCUC simulations. Two types of congested lines are classified: (i) 100% loaded lines and (ii) 90%+ loaded lines. The 100% loaded lines are the transmission lines on which the active power flow is 100% of the line capacity. The 90%+ loaded lines are the transmission lines on which the active power flow is over 90% but less than 100% of the line capacity. The numbers of congested lines at different hours during the peak load day are shown in Fig. 13. We can observe that more transmission lines are congested in the peak hours. ### _DLR Performance Analysis_ The SCUC simulation is also conducted on the TX-123BT with hourly DLR profiles. The line thermal limit constraint (32) is replaced by the following constraint since the line limits \(P_{kt}^{max}\) are now different for different hours and need an extra index of time interval \(t\). 
### _DLR Performance Analysis_

The SCUC simulation is also conducted on the TX-123BT with hourly DLR profiles. The line thermal limit constraint (31) is replaced by the following constraint (32), since the line limits \(P_{kt}^{max}\) are now different for different hours and need an extra index of time interval \(t\).

\[-P_{kt}^{max}\leq P_{kt}\leq P_{kt}^{max}\quad\forall k,t \tag{32}\]

The SCUC simulation results, including total operational cost, renewable generation, LMPs, and transmission congestion, are analyzed and compared with the SCUC using daily DLR profiles. The overall numerical results are shown in Table XII. The total operational cost of the hourly DLR case is lower than that of the daily line rating case, and the cost saving is about 1.7% with hourly DLR. One reason is that the increased transmission capacity can relieve network congestion and reduce the curtailment of renewable energy, which has a much lower (zero) cost than the conventional generation. The average LMP of the hourly DLR case is also lower than that of the case using the conservative daily DLR. The system-wide average LMPs for a normal day in Quarter 2 are shown in Fig. 14. We can observe that the average LMPs of the hourly DLR case are lower than the LMPs of the daily DLR case for the majority of the hours.

## VIII Conclusion

In this paper, we present the methods and implementation details used to create the synthetic TX-123BT test system, which covers the wide geographical area of Texas. The created test case has a reduced system size while retaining geographical characteristics. Hence, the test case is suitable for power system studies that require geographical information and a smaller computational burden. The hourly climate data in NLDAS-2, including solar radiation, air temperature and wind speed for all the 123 bus locations, are extracted and utilized. Using the climate-dependent models for solar/wind production and transmission line rating, the associated spatio-temporally correlated profiles of the TX-123BT are created. The time series nodal load profiles are also created in a way that matches the actual zonal load for each of the eight weather zones in ERCOT for each hour in the entire year of 2019. The created TX-123BT system with both daily DLR and hourly DLR profiles is validated through SCUC simulations. The SCUC results in peak load scenarios and the comparison between the conservative daily DLR and the hourly DLR are also discussed in this paper, demonstrating the effectiveness and practicality of the created synthetic TX-123BT system for facilitating research and studies in various power system areas. Since it covers a period of one entire year at one-hour resolution with strong practical spatio-temporal correlations embedded in the dataset, it would also facilitate power system studies involving machine learning such as reinforcement learning and graph neural networks [42]-[43].
2303.13118
Ground state and fission properties of even-$A$ uranium isotopes from multidimensionally-constrained relativistic mean field model
The multidimensionally-constrained covariant density functional theories (MDC-CDFTs) have been developed to study the influence of octupole and triaxial deformations on the ground state and fission properties. In this paper, we present a brief review of the applications of MDC-CDFTs and discuss the results of a systematical study of even-$A$ uranium isotopes with the MDC-RMF model which is one of MDC-CDFTs with pairing correlations treated by using the BCS approach. We examine in detail the two-dimensional potential energy surfaces $E(\beta_{20},\beta_{30})$ of these U isotopes and discuss the ground state and fission properties as well as third and fourth minima on the potential energy surfaces. The emphasis is put on the effects of octupole and triaxial deformations.
Xiang-Quan Deng, Shan-Gui Zhou
2023-03-23T09:09:58Z
http://arxiv.org/abs/2303.13118v1
Ground state and fission properties of even-\(A\) uranium isotopes from multidimensionally-constrained relativistic mean field model ###### Abstract The multidimensionally-constrained covariant density functional theories (MDC-CDFTs) have been developed to study the influence of octupole and triaxial deformations on the ground state and fission properties. In this paper, we present a brief review of the applications of MDC-CDFTs and discuss the results of a systematical study of even-\(A\) uranium isotopes with the MDC-RMF model which is one of MDC-CDFTs with pairing correlations treated by using the BCS approach. We examine in detail the two-dimensional potential energy surfaces \(E(\beta_{20},\beta_{30})\) of these U isotopes and discuss the ground state and fission properties as well as third and fourth minima on the potential energy surfaces. The emphasis is put on the effects of octupole and triaxial deformations. ## 1 Introduction Atomic nuclei are quantum many-body systems consisting of nucleons, i.e., protons and neutrons. The ground states of nuclei are characterized by structure properties such as the spin and parity \(I^{\pi}\), mass excess or binding energy, size and shape, which are governed by both the nucleon-nucleon interaction and many-body features from which various nuclear phenomena emerge [1, 2, 3]. In the intrinsic frame, many nuclear shapes may appear and manifest themselves in the low-lying collective spectra: The rotational spectrum \(E(I)\sim I(I+1)\) corresponds to an axial quadrupole deformation [4, 5]; a static octupole deformation results in the parity doublet bands [2, 6, 7, 8, 9]; the triaxial quadrupole deformation is characterized by the chiral doublet bands [10, 11] or the wobbling motion [2] in certain nuclei. More exotic intrinsic nuclear shapes have been explored extensively, e.g., the hyperdeformed shapes, see Ref. [12] and references therein, the rod shapes [13], the tetrahedral or octahedral shapes [14, 15, 16], the triangle shape [17] and the toroidal or ring shapes [18, 19]. Nuclear fission--the large amplitude collective motion--can be described as the evolution of nuclear shape in a multidimensional deformation space [20, 21, 22, 23, 24]. The potential energy surface (PES) in such a deformation space is crucial for the study of nuclear fission [12, 24, 25, 26, 27, 28, 29, 30, 31]. In particular, the characteristics of the fission barrier, e.g., the height and width, are important inputs for theoretical models of nuclear fission [32, 33, 34, 35, 36, 37, 38, 39]. For instance, in the statistical models for calculating the survival probability of hot compound superheavy nuclei (SHN), the competition between neutron evaporation and fission is mainly determined by the neutron separation energy and the fission barrier height [40, 41, 42, 43, 44]. It has been revealed that, besides the axial quadrupole deformation which is the most important shape degree of freedom in describing nuclear fission, the nonaxial and reflection asymmetric deformations are crucial as well [23, 24]. Many theoretical models have been developed and used to study the PESs and barrier heights, including the macroscopic-microscopic (MM) models [45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55], the extended Thomas-Fermi plus Strutinsky integral (ETFSI) method [56] and the density functional theories (DFTs) [57, 58, 59, 60, 61, 62, 63, 64].
The multidimensionally-constrained (MDC) covariant density functional theories (CDFTs) have been developed for the study of ground state and fission properties as well as the PESs [16, 28, 61]. In MDC-CDFTs, both nonaxial and reflection asymmetric deformations are considered self-consistently. The MDC-CDFTs include the MDC relativistic mean field (MDC-RMF) model, in which the pairing correlations are treated by using the BCS approach, and the MDC relativistic Hartree-Bogoliubov (MDC-RHB) model. In the present work, by using the MDC-RMF model, we study systematically the PESs of even-\(A\) U isotopes from the two-proton drip line up to the two-neutron drip line. We study the ground state properties and the primary barrier heights and examine the effects of octupole and triaxial deformations. The third and fourth minima appearing on the PESs of some U isotopes are also discussed. The paper is organized as follows. The applications of the MDC-CDFTs are briefly reviewed in Sec. 2. In Sec. 3, we present numerical results of the PESs and discuss in detail the ground state and fission properties of even-\(A\) U isotopes. A summary of this work is given in Sec. 4. ## 2 Brief Review of MDC-CDFTs The CDFT has been very successful in self-consistent descriptions of atomic nuclei throughout almost the whole chart of nuclides [61, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76]. In the CDFT, one usually derives the equations of motion of nucleons from the Lagrangian incorporating the nucleon fields and the interaction between nucleons which is realized either via meson exchanges (ME) or through point couplings (PC). In the MDC-CDFTs, the equations of motion for nucleons are solved with the basis expansion method. For axially deformed nuclei with reflection asymmetry, a two-center basis would be more appropriate; e.g., the reflection asymmetric RMF (RAS-RMF) model [76] has been developed in a two-center harmonic-oscillator basis and used to study octupole deformations in \({}^{226}\)Ra [76] and in Sm [77], Ba [78], Th [79] and Dy [80] isotopes. However, in the MDC-CDFTs, the reflection symmetric, axially deformed harmonic oscillator (ADHO) basis [81] was adopted for convenience. In Ref. [81], the nuclei in question were assumed to be spherical or axially deformed with reflection symmetry, i.e., in the following multipole expansion of the nuclear surface, only even \(\lambda\) is considered and \(\mu=0\), \[R(\theta,\varphi)=R_{0}\left[1+\sum_{\lambda=2}^{\infty}\sum_{\mu=-\lambda}^{ \lambda}\beta_{\lambda\mu}^{*}Y_{\lambda\mu}(\theta,\varphi)\right]. \tag{1}\] In the MDC-CDFTs, the \(V_{4}\) symmetry was imposed when solving the equations of motion for nucleons. Thus all deformations \(\beta_{\lambda\mu}\) with even \(\mu\), including nonaxial and reflection asymmetric deformations, are included self-consistently. The pairing correlations are treated by using the BCS method in the MDC-RMF model [28] and by implementing the Bogoliubov transformation in the MDC-RHB model [16]. For the details of the MDC-CDFTs, the readers are referred to Refs. [16, 28, 61]. The MDC-CDFTs have been extensively used to study normal and hypernuclei, with emphasis on various deformation effects on nuclear properties, including the ground state properties, PESs, superdeformed and hyperdeformed states and fission properties. Next we present a brief review of the applications and further developments of the MDC-CDFTs.
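Before turning to the applications, a minimal numerical illustration of the surface parameterization in Eq. (1) may be helpful. The following Python sketch (not part of the MDC-CDFT codes; the function name, \(R_{0}\) and the sample deformation values are illustrative only) evaluates an axially symmetric surface \(R(\theta)\) for a given set of \(\beta_{\lambda 0}\), using \(Y_{\lambda 0}(\theta)=\sqrt{(2\lambda+1)/4\pi}\,P_{\lambda}(\cos\theta)\):

```python
# A minimal sketch (not the MDC-CDFT implementation): evaluating the axially
# symmetric (mu = 0) nuclear surface of Eq. (1). R0 and the beta values are
# illustrative, not fitted quantities.
import numpy as np
from scipy.special import eval_legendre

def nuclear_radius(theta, betas, R0=1.0):
    """R(theta) = R0 * [1 + sum_lambda beta_{lambda 0} Y_{lambda 0}(theta)].

    theta : polar angle(s) in radians
    betas : dict mapping lambda -> beta_{lambda 0}
    """
    r = np.ones_like(theta, dtype=float)
    for lam, beta in betas.items():
        # axial spherical harmonic via the Legendre polynomial P_lambda
        y_l0 = np.sqrt((2 * lam + 1) / (4 * np.pi)) * eval_legendre(lam, np.cos(theta))
        r += beta * y_l0
    return R0 * r

theta = np.linspace(0.0, np.pi, 181)
# a pear-like shape combining quadrupole and octupole deformations
r = nuclear_radius(theta, {2: 0.23, 3: 0.17})
```

A nonzero \(\beta_{30}\) breaks the reflection symmetry of the prolate \(\beta_{20}\) shape, which is exactly the kind of pear-shaped configuration discussed for some U isotopes below.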
**Fission barriers of actinide nuclei**: The MDC-CDFTs have been applied to the study of fission barriers of actinide nuclei [24, 28]. The typical double-humped fission barriers in actinides were well reproduced. Besides the lowering effects of the triaxial distortion on the first barrier and the large influence of the octupole deformation on the second barrier, it was found that the triaxial deformation also lowers the second barrier considerably. When both triaxial and octupole deformations are included, the calculated fission barrier heights agree satisfactorily with the available empirical values. **Third minima in PESs of light actinides**: The PESs of light actinides were carefully examined and the third minima were investigated [12]. The origin of these minima, corresponding to hyperdeformed states, has been attributed to the \(Z=90\) proton shell gap at very large deformations. **Shapes and PESs of superheavy nuclei**: One- and two-dimensional PESs have been obtained for \({}^{270}\)Hs--a deformed doubly magic SHN [82]. The influences of the nonaxial and reflection asymmetric distortions on the fission barrier and fission pathway have been investigated: When the axial symmetry is imposed, the reflection symmetric and asymmetric fission paths both show a double-humped structure and the latter is energetically favored; when nonaxial shapes are allowed, the reflection symmetric fission pathway becomes favorable. The higher-order deformation effects on the ground state properties of SHN at and near \({}^{270}\)Hs have been studied [83]. It was found that among the higher-order deformations \(\beta_{\lambda}\) (\(\lambda=4,6,8\) and \(10\)), \(\beta_{6}\) has the greatest impact. **Non-axial octupole \(Y_{32}\) correlations**: The nonaxial reflection-asymmetric \(\beta_{32}\) shape in transfermium nuclei with \(N=150\) has been studied [84]. It was found that in these nuclei, the origin of the \(Y_{32}\) correlations is mainly from a pair of neutron orbitals, \([734]9/2(\nu j_{15/2})\) and \([622]5/2(\nu g_{9/2})\), and a pair of proton orbitals, \([521]3/2(\pi f_{7/2})\) and \([633]7/2(\pi i_{13/2})\). The tetrahedral shapes in neutron-rich even-even Zr isotopes have been investigated [16]. The tetrahedral ground states are mainly caused by large shell gaps around \(Z=40\) and \(N=70\). **Axial octupole \(Y_{30}\) correlations**: The MDC-CDFTs have been used to investigate the coexistence of chirality and octupole correlations in \({}^{76}\)Br [85] and \({}^{78}\)Br [86, 87]; the latter is the first example of chiral geometry in octupole soft nuclei. The octupole correlations in \({}^{123,125}\)Ba [88], \({}^{73}\)Br [89] and \({}^{71}\)Ge [90] have been revealed with the MDC-CDFTs. **Structure of hypernuclei**: The structure of hypernuclei has been studied by using the MDC-CDFTs [91, 92, 93, 94, 95, 96]. For brevity, we mention only two works concerning the deformation effects. In the study of the shapes of light normal and \(\Lambda\) hypernuclei, it was found that the shape polarization effect of \(\Lambda\) is so strong that the shapes of some \(\Lambda\) hypernuclei, e.g., \({}^{13}_{\Lambda}\)C, \({}^{23}_{\Lambda}\)C, and \({}^{31}_{\Lambda}\)Si, are drastically different from the corresponding cores [91]. The superdeformed (SD) states and corresponding SD \(\Lambda\) hypernuclei of Ar isotopes were also examined.
A strong localization effect with a ring structure was predicted in the density distributions of the SD states, resulting in a larger \(\Lambda\) separation energy in SD states than in the ground states [92]. **Fission dynamics and \(\alpha\) decay**: Based on the PESs from the MDC-CDFTs, the dynamics of spontaneous and induced fissions in actinide nuclei has been studied extensively [97, 98, 99, 100, 101, 102, 103, 104, 105, 106]. For example, in Ref. [98], the spontaneous fission dynamics of \({}^{264}\)Fm and \({}^{250}\)Fm was explored and it was concluded that the inclusion of pairing correlations in the space of collective coordinates favors axially symmetric shapes along the dynamic pathway of the fissioning system. Such dynamic studies tell us more about the role played by the various deformations in fission than static studies focusing only on PESs. Recently, microscopic investigations of half-lives for \(\alpha\) decays in \({}^{108}\)Xe and \({}^{104}\)Te [107] and \(\alpha\) and \(2\alpha\) decays in \({}^{212}\)Po and \({}^{224}\)Ra [108] have also been performed based on the PESs calculated from the MDC-CDFTs. **Angular momentum and parity projections**: The angular momentum projection (AMP) and parity projection (PP) have been implemented in the MDC-RHB model to restore the rotational and parity symmetries which are both broken at the mean-field level. Such a projected-MDCRHB (p-MDCRHB) model was presented in Ref. [17]. With the p-MDCRHB model, one may study nuclear spectra corresponding to exotic intrinsic shapes, e.g., a triangle or a tetrahedron. In Ref. [109], an anatomy of octupole correlations in \({}^{96}\)Zr has been performed with the p-MDCRHB model. It was found that the PESs of this nucleus are strongly dependent on the angular momentum and parity and that both triaxial and octupole deformations should be included in order to give a decent description of the structure of \({}^{96}\)Zr. ## 3 Results and Discussions In this section, we systematically study the properties of even-\(A\) U isotopes with the MDC-RMF model. Firstly, we present and discuss two-dimensional (2D) PESs of several selected U isotopes. Secondly, we focus on the ground state properties of the isotopes and show the density distribution profiles of several typical nuclei with deformed ground state shapes. Thirdly, for each isotope, we perform calculations in the vicinity of each saddle point with the reflection symmetry and the axial symmetry broken. The primary barrier heights with octupole and triaxial deformations are compared with the available empirical values and/or calculation results of other models. Finally, we discuss hyperdeformed third and even fourth minima on the PESs of several U isotopes. In our calculations, the effective interaction PC-PK1 [110] is used. The truncation in the expansion of the large component of the Dirac spinor is \(N_{\rm f}=20\) and that for the small component is \(N_{\rm g}=21\). The quadrupole deformation parameter of the ADHO basis is chosen to be half of the desired \(\beta_{20}\) value. The pairing correlations are treated with the BCS approach and the finite-range separable pairing force [111, 112, 113] is used. The pairing strength and effective range are taken as \(G=1.1G_{0}\) with \(G_{0}=728\) MeV fm\({}^{3}\) and \(a=0.644\) fm. ### Two-dimensional PESs We calculate 2D PESs \(E(\beta_{20},\beta_{30})\) of the U isotopes, from the two-proton drip line \({}^{214}\)U, the lightest known uranium isotope [114], to the two-neutron drip line \({}^{350}\)U.
In these calculations the nuclei are assumed to be axially symmetric. For each isotope, \(\beta_{20}\) runs from \(-0.20\) to \(2.00\) and \(\beta_{30}\) from \(0.00\) to \(1.00\), both with a step size of \(0.05\). The PESs with negative \(\beta_{30}\) values are obtained through the relation \(E(\beta_{20},\beta_{30})=E(\beta_{20},|\beta_{30}|)\). The PESs of several selected isotopes are shown in Fig. 1, where the ground states and the highest saddle points are shown by red dots and red triangles. The 2D PESs of U isotopes are mostly smooth and continuous with obvious minima and saddle points and the static fission pathways can be easily identified. First let us take \({}^{214}\)U as an example. There is a reflection-symmetric (RS) fission pathway and a reflection-asymmetric (RA) one on the PES, both starting from a spherical ground state. There are three saddle points on the RS pathway and two on the RA pathway. Two shallow third minima can be seen, meaning that \({}^{214}\)U may fission through both RS and RA pathways. Figure 1: (Color online) 2D PESs of selected U isotopes obtained in the MDC-RMF calculations with the effective interaction PC-PK1 [110]. The ground states are indicated by red dots. The saddle points corresponding to primary barriers are shown by red triangles. The contour interval is 1.0 MeV. By examining the PESs of \({}^{214-250}\)U, it is found that the RA fission pathways are energetically favorable, as can be clearly seen in Fig. 1. For instance, the PES of \({}^{238}\)U is featured by a RS inner barrier, a low RA outer barrier and a high RS outer barrier which strongly hinders the nucleus from fissioning through the RS pathway. Specifically, for \({}^{242-248}\)U there are two RA fission pathways starting from the second minimum; an example, \({}^{246}\)U, is shown in Fig. 1. Similar results were discussed for \({}^{248}\)Cm in Ref. [24, Fig. 5]. For \({}^{252-260}\)U, we can find two saddle points on the PESs, corresponding to the typical double-humped barriers in actinide nuclei. In these U isotopes, besides the lowest static RS fission path, there is also a RA one, as seen in the PES of \({}^{252}\)U in Fig. 1. For \({}^{268-282}\)U, the fission barriers are very high. There are three or four RS barriers on the PESs and the potential wells between the barriers may correspond to hyperdeformed fission isomers. The hyperdeformed third or fourth minima will be discussed in Sec. 3.4. The RS and RA pathways also appear on the PESs of the isotopes \({}^{286-334}\)U. From the PESs of \({}^{336-350}\)U, it can be expected that these isotopes fission mainly through the RS pathway. We can readily locate the ground states of the U isotopes from the 2D PESs. For most of the isotopes, the ground states are reflection symmetric. But for \({}^{226-232}\)U and \({}^{288-294}\)U, the \(\beta_{3}\) values are nonzero and are around 0.2 for \({}^{228,290}\)U as seen in Fig. 1. Detailed ground state properties will be discussed in the next subsection. ### Ground state properties The two-neutron separation energies \(S_{\rm 2n}\) and root-mean-square (rms) radii of neutron, proton, matter and charge distributions for U isotopes are shown in Fig. 2. In Fig. 2(a), the two-neutron separation energies are compared with available experimental or evaluated values from the AME2020 [115, 116] and the results calculated from the deformed relativistic Hartree-Bogoliubov theory in continuum (DRHBc) [117]. Figure 2: (Color online) Ground state properties of U isotopes obtained in the MDC-RMF calculations with the effective interaction PC-PK1 [110]. (a) Two-neutron separation energies \(S_{\rm 2n}\) compared with available values from the AME2020 [115, 116] and the DRHBc mass table [117]; (b) root-mean-square (rms) radii of neutron, proton, matter and charge distributions as functions of the mass number \(A\). Experimental data for charge radii [118] are shown by the black crosses in (b). Since the reflection symmetry is imposed in the DRHBc theory, the octupole deformations in the ground states of some U isotopes would certainly result in differences in the binding and separation energies between the MDC-RMF model and the DRHBc theory. The pairing approach and pairing interaction used in these two models are different, which would also lead to some differences in the ground state properties. For \({}^{218}\)U, both the MDC-RMF model and the DRHBc theory overestimate the experimental \(S_{\rm 2n}\) value and for \({}^{220-226}\)U, they underestimate the experimental values. In other cases, the MDC-RMF and the DRHBc \(S_{\rm 2n}\) values are in agreement with the AME2020 whenever available. As \(A\) increases, the \(S_{\rm 2n}\)'s from both models share the same trend. For \(342\leq A\leq 350\), the MDC-RMF model predicts that \(S_{\rm 2n}\) decreases monotonically with \(A\) increasing and that \({}^{350}\)U is the last bound even-\(A\) U isotope. However, in the DRHBc mass table, \(S_{\rm 2n}=1.4\) MeV for these five U isotopes, though \({}^{350}\)U is also predicted to be the last bound even-\(A\) U isotope. Such differences concerning \(S_{\rm 2n}\) between these two models and the different predictions concerning the lightest bound U isotope, i.e., \({}^{212}\)U by the DRHBc theory [117] and \({}^{214}\)U by the MDC-RMF model, may stem from the fact that the DRHBc theory can provide a proper description of exotic nuclei by considering the deformation and continuum effects [119, 120, 121]. The root-mean-square (rms) radii of neutron, proton, matter and charge distributions of the ground states are shown in Fig. 2(b). The available experimental values for the charge radii [118] are also included for comparison. The neutron radius grows faster than the proton radius as the nucleus gets heavier, leading to thicker neutron skins. The charge radius \(R_{\rm ch}\) is a significant observable characterizing the size of a nucleus. For \({}^{234}\)U, \({}^{236}\)U and \({}^{238}\)U, the experimental values for charge radii are 5.829 fm, 5.843 fm, and 5.857 fm, respectively. The charge radii calculated with the MDC-RMF model for the three isotopes are 5.864 fm, 5.882 fm and 5.900 fm and agree with the data within 1%. However, it is clear that the MDC-RMF model overestimates the data systematically. Note that such systematic overestimation of the charge radii for \({}^{234,236,238}\)U (\(R_{\rm ch}=5.863\) fm, 5.882 fm and 5.897 fm) also exists in the DRHBc calculations [117]. Although an accuracy of 1% is acceptable for microscopic models like MDC-RMF and DRHBc, such a systematic discrepancy may hint at possible improvements of these models. As we have mentioned in the previous subsection, quite a large number of U isotopes have non-spherical ground state shapes. We plot the quadrupole and octupole deformation parameters (\(\beta_{20}\) and \(\beta_{30}\)) of the ground states in Fig. 3. It can be seen that spherical, prolate RA and prolate RS ground state shapes appear alternately as the mass number increases.
For \({}^{214-224}\)U, \({}^{264-286}\)U and \({}^{334-350}\)U, the ground state shapes are spherical. For \({}^{226-262}\)U, \(\beta_{20}\) increases to a maximum then decreases gradually, corresponding to various prolate shapes. In this mass region, \(\beta_{20}\) peaks at \({}^{242}\)U with the maximal value 0.30. Similarly, in the region of \({}^{288-324}\)U, \({}^{306}\)U and \({}^{308}\)U both have prolate ground state shapes with \(\beta_{20}=0.29\). From \({}^{326}\)U to \({}^{332}\)U, the ground state evolves from a largely deformed oblate shape to a nearly spherical one, with \(\beta_{20}=-0.19\) and \(-0.02\) for \({}^{326}\)U and \({}^{332}\)U, respectively. From Fig. 3, we find that for eight isotopes, i.e., \({}^{226-232}\)U and \({}^{288-294}\)U, both the quadrupole and octupole deformations appear in their ground states. These isotopes are featured by pear-shaped ground states. Figure 3: (Color online) Quadrupole and octupole deformation parameters of even-\(A\) U isotopes as functions of the mass number \(A\) obtained in the MDC-RMF calculations with the effective interaction PC-PK1 [110]. In Fig. 4, we display density distributions of four typical nuclei with different kinds of ground state shapes. The ground states of \({}^{224}\)U, \({}^{242}\)U, and \({}^{326}\)U have a spherical shape, an axially symmetric prolate shape and an axially symmetric oblate shape, respectively. The ground state of \({}^{230}\)U has a pear shape with \(\beta_{20}=0.23\) and \(\beta_{30}=0.17\). ### Primary barrier heights calculated with octupole and triaxial deformations considered Next we focus on the primary barrier, the highest one, in each even-\(A\) U isotope. The height of the primary barrier is defined as the energy difference between the highest saddle point and the ground state. With the calculation results of the 2D PESs we can find all saddle points for each isotope and determine the primary barrier height. As seen in the PESs, for many isotopes, the primary barrier in the RA pathway is obviously lower than the primary barrier in the RS pathway, which means that the primary barrier height of these nuclei is lowered when the octupole distortion is allowed. As we have mentioned, the triaxiality may further lower the height of a barrier [23, 24]. So in addition to the octupole deformation, we must take the triaxial deformation into consideration to give a more accurate description of the primary barrier heights. Since the MDC-RMF calculations are very time-consuming in the whole (\(\beta_{20}\),\(\beta_{22}\),\(\beta_{30}\)) deformation space, we break the axial symmetry only in the vicinity of each saddle point to take the lowering effect of triaxiality on the barriers into account. With the octupole deformation considered, the primary barrier heights of U isotopes with and without triaxiality are shown in Fig. 5(a). For most of the U isotopes, the triaxiality lowers the height of the primary barrier. The primary barrier may "shift" from one barrier to another when the triaxiality is considered because the lowering effects of the triaxial distortion for different barriers may be different. But the general trend of the primary barrier heights versus \(A\) is unchanged. No matter whether we break the axial symmetry or not, the barrier height peaks at the three isotopes with \(N=126\), \(150\) and \(184\), corresponding to neutron magic numbers and neutron subshells.
For \(N=184\), i.e., \({}^{276}\)U, the primary barrier height is as large as \(17.48\) MeV, leading to a relatively high stability against fission. We compare the primary barrier heights with available empirical values taken from RIPL-3 [122] in Fig. 5(b). The calculated primary barrier heights agree with the RIPL-3 data for \({}^{232}\)U, \({}^{236}\)U and \({}^{238}\)U; in both our calculation and RIPL-3, for \({}^{232}\)U and \({}^{236}\)U the primary barrier is the outer one and for \({}^{238}\)U the inner one. The barrier height for \({}^{234}\)U is reproduced by our calculation, but we predict the inner barrier as the primary one for this isotope. Figure 4: (Color online) Density distribution profiles of the ground states of (a) \({}^{224}\)U, (b) \({}^{230}\)U, (c) \({}^{242}\)U and (d) \({}^{326}\)U obtained in the MDC-RMF calculations with the effective interaction PC-PK1 [110]. The \(z\)-axis is the symmetry axis. The quadrupole and octupole deformation parameters of the nuclear shapes are shown. Based on the FUNF program [123, 124], the fission properties of some actinides have been investigated in detail [125]. For each isotope, the empirical heights of the inner, middle and outer barriers are given. The empirical barrier heights of \({}^{232-238}\)U and our calculation results are shown together in Fig. 5(c). The heights of the outer barriers for \({}^{232,236}\)U and the inner barrier for \({}^{238}\)U given by the MDC-RMF model are in agreement with the FUNF results. The inner barrier for \({}^{234}\)U from our calculation is about 0.5 MeV lower than that from the FUNF program. However, in most cases the primary barriers predicted by the MDC-RMF model are different from the FUNF results. Only for \({}^{238}\)U, both the MDC-RMF model and the FUNF program show that the inner barrier is the primary one. The FUNF calculations predict that the middle barrier is the primary one for \({}^{232}\)U and \({}^{234}\)U. For \({}^{236}\)U, the FUNF program shows that the inner barrier is the highest among the investigated barriers. It is worth noting that the FUNF program predicts the appearance of a middle barrier with a considerable height for \({}^{232-236}\)U. Such results for the middle barriers are different from those given by the MDC-RMF calculations. Figure 5: (Color online) (a) Primary barrier heights with and without triaxiality obtained in the MDC-RMF calculations with the effective interaction PC-PK1 [110] and comparisons of the barrier heights (b) with empirical values from RIPL-3 [122], (c) with the results given by the FUNF program [123, 124, 125] and (d) with HFB-14 calculations [58]. For the MDC-RMF calculation results, we symbolize the barrier height with a red circle, blue square or green triangle if the primary barrier of the isotope is the inner barrier, one of the middle barriers or the outer barrier, respectively. The barrier heights without and with triaxiality are shown by full symbols and hollow symbols in (a). The empirical values of the barrier heights and the FUNF and the HFB-14 results are shown by full symbols in (b), (c) and (d), respectively. Based on other microscopic models, the barrier heights of the U isotopic chain have also been investigated. In Fig. 5(d), the comparison of the MDC-RMF results with the fission barrier heights for \({}^{232-296}\)U given by HFB-14 calculations [58] is shown. For \({}^{232-286}\)U, the barrier heights from the MDC-RMF model and the HFB-14 method show the same trend as the mass number increases.
For \({}^{232-262}\)U and \({}^{288-296}\)U, the HFB-14 method gives higher barriers in comparison with the MDC-RMF model, while for \({}^{264-286}\)U, our calculations predict higher barriers. A drastic difference between the two models can be seen for the mass region \(286\leq A\leq 296\): The HFB-14 barrier heights increase with \(A\) while the MDC-RMF values decrease. The reason behind such a different behavior is not yet clear to us and should be studied in the future. ### Hyperdeformed third and fourth minima on the PESs By investigating the potential energy curves or surfaces of some actinides, the appearance of third minima corresponding to hyperdeformed nuclear shapes has been reported and many experimental efforts have also been made [12, 126, 57, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137]. From our MDC-RMF calculation results for the even-\(A\) U isotopes, we can find obvious RS third and even fourth minima in the PESs of \({}^{268-282}\)U. As examples, we show the potential energy curves \(E(\beta_{20})\) at \(\beta_{30}=0\) for \({}^{276}\)U and \({}^{280}\)U in Fig. 6. \({}^{276}\)U, the isotope with neutron magic number \(N=184\), has the highest fission barriers among the U isotopes. The height of the inner barrier is 18.8 MeV and the value is lowered by 1.64 MeV when the triaxial deformations are considered. The depths (the energy difference between the minimum and the lower one of the two saddle points around it) of the second, third and fourth potential wells for \({}^{276}\)U are 2.29 MeV, 1.87 MeV and 0.80 MeV. The positions and depths of these wells are indicated in Fig. 6. The depths of the third well of \({}^{226,228,230,232}\)Th and \({}^{232,234,238}\)U have been studied by using the MDC-RMF model with the functionals DD-ME2 and PC-PK1 [12]. With PC-PK1, the third well only appears in \({}^{226,228,230}\)Th and the depths are 1.29 MeV, 0.78 MeV and 0.44 MeV, respectively. By investigating the deformed single-particle levels of proton and neutron, the appearance of the third minimum has been attributed to a proton shell gap at large deformations which stems from several pairs of single-proton states in the vicinity of the Fermi surface [12]. In comparison with the results given in Ref. [12], the third well of \({}^{276}\)U with a depth of 1.87 MeV is deeper than those of the previously investigated Th isotopes. The fourth well of \({}^{276}\)U is even deeper than the third wells of \({}^{228}\)Th and \({}^{230}\)Th. On the potential energy curve of \({}^{280}\)U in Fig. 6, the second and third minima can be identified, but the fourth well no longer exists due to the absence of a fourth saddle point. The density distribution profiles of \({}^{276}\)U at the ground state and the second, third and fourth minima are shown in Fig. 7. The ground state of \({}^{276}\)U is spherical. At the second minimum, the nucleus is in a prolate shape. The nucleus further elongates when it is in the third well. At the fourth minimum, a neck appears and the nucleus shows a tendency to split up. With \(\beta_{20}=0.95\) and \(\beta_{20}=1.40\) at the third and fourth minima, \({}^{276}\)U is highly deformed, corresponding to hyperdeformed fission isomers. ## 4 Summary In summary, we have systematically studied the properties of even-\(A\) U isotopes by using the MDC-RMF model. The PESs of \({}^{214}\)U to \({}^{350}\)U in the (\(\beta_{20}\),\(\beta_{30}\)) plane are obtained and examined.
Most of the PESs are characterized by a double-humped barrier structure. But for some isotopes, three or four saddle points appear. Detailed results concerning the ground state properties, including two-neutron separation energies, root-mean-square radii for the neutron, proton, matter and charge distributions and quadrupole and octupole deformations, are presented. It can be seen from the calculation results that many of these U isotopes have non-spherical ground states and several of them, e.g., \({}^{230}\)U, have a pear-shaped ground state with large \(\beta_{20}\) and \(\beta_{30}\). By considering both the triaxiality and the reflection asymmetry in the vicinities of the saddle points, the primary barriers of these U isotopes have been investigated in detail. For many of them, the octupole deformations lower the primary barrier. The triaxiality may further lower the barriers and in some cases the primary barrier may shift from one barrier to another because the lowering effects of the triaxial distortion for the inner, middle and outer barriers may be different. Figure 6: (Color online) One-dimensional potential energy curves of \({}^{276}\)U and \({}^{280}\)U obtained in the MDC-RMF calculations with the effective interaction PC-PK1 [110]. For \({}^{276}\)U, the positions and depths (the energy difference between the minimum and the lower one of the two saddle points around it) of the second, third and fourth potential wells are indicated. The curves represent fission pathways with the axial symmetry and the reflection symmetry imposed. The primary barrier heights with octupole and triaxial deformations can reproduce available empirical values. The hyperdeformed third and fourth minima on the PESs are discussed. Taking \({}^{276}\)U as an example, the depths of the third and the fourth potential wells are 1.87 MeV and 0.80 MeV, respectively. Figure 7: (Color online) Density distribution profiles of \({}^{276}\)U at (a) the ground state and (b) the second, (c) third and (d) fourth minima obtained in the MDC-RMF calculations with the effective interaction PC-PK1 [110]. The nucleus is in axially symmetric shapes at these minima and the quadrupole deformation parameters are shown. ## Acknowledgements Helpful discussions with Bao-Ge Deng, Bing-Nan Lu, Yu-Ting Rong, Xiang-Xiang Sun, Zhong-Hao Tu, Kun Wang, Xiao-Qian Wang and Zhen-Hua Zhang are gratefully acknowledged. We thank Xiao-Jun Sun for providing us the FUNF results prior to publication. This work has been partly supported by the Strategic Priority Research Program of Chinese Academy of Sciences (Grants No. XDB34010000 and No. XDPB15), the National Key R&D Program of China (Grant No. 2018YFA0404402), the National Natural Science Foundation of China (Grants No. 11525524, No. 12070131001, No. 12047503, and No. 11961141004), the Inter-Governmental S&T Cooperation Project between China and Croatia and the IAEA Coordinated Research Project "F41033". The results described in this paper are obtained on the High-performance Computing Cluster of ITP-CAS and the ScGrid of the Supercomputing Center, Computer Network Information Center of Chinese Academy of Sciences.
2307.06784
Robotic surface exploration with vision and tactile sensing for cracks detection and characterisation
This paper presents a novel algorithm for crack localisation and detection based on visual and tactile analysis via fibre-optics. A finger-shaped sensor based on fibre-optics is employed for the data acquisition to collect data for the analysis and the experiments. To detect the possible locations of cracks a camera is used to scan an environment while running an object detection algorithm. Once the crack is detected, a fully-connected graph is created from a skeletonised version of the crack. A minimum spanning tree is then employed for calculating the shortest path to explore the crack which is then used to develop the motion planner for the robotic manipulator. The motion planner divides the crack into multiple nodes which are then explored individually. Then, the manipulator starts the exploration and performs the tactile data classification to confirm if there is indeed a crack in that location or just a false positive from the vision algorithm. If a crack is detected, also the length, width, orientation and number of branches are calculated. This is repeated until all the nodes of the crack are explored. In order to validate the complete algorithm, various experiments are performed: comparison of exploration of cracks through full scan and motion planning algorithm, implementation of frequency-based features for crack classification and geometry analysis using a combination of vision and tactile data. From the results of the experiments, it is shown that the proposed algorithm is able to detect cracks and improve the results obtained from vision to correctly classify cracks and their geometry with minimal cost thanks to the motion planning algorithm.
Francesca Palermo, Bukeikhan Omarali, Changae Oh, Kaspar Althoefer, Ildar Farkhatdinov
2023-07-13T14:50:38Z
http://arxiv.org/abs/2307.06784v1
Robotic surface exploration with vision and tactile sensing for cracks detection and characterisation ###### Abstract This paper presents a novel algorithm for crack localisation and detection based on visual and tactile analysis via fibre-optics. A finger-shaped sensor based on fibre-optics is employed for the data acquisition to collect data for the analysis and the experiments. To detect the possible locations of cracks a camera is used to scan an environment while running an object detection algorithm. Once the crack is detected, a fully-connected graph is created from a skeletonised version of the crack. A minimum spanning tree is then employed for calculating the shortest path to explore the crack which is then used to develop the motion planner for the robotic manipulator. The motion planner divides the crack into multiple nodes which are then explored individually. Then, the manipulator starts the exploration and performs the tactile data classification to confirm if there is indeed a crack in that location or just a false positive from the vision algorithm. If a crack is detected, also the length, width, orientation and number of branches are calculated. This is repeated until all the nodes of the crack are explored. In order to validate the complete algorithm, various experiments are performed: comparison of exploration of cracks through full scan and motion planning algorithm, implementation of frequency-based features for crack classification and geometry analysis using a combination of vision and tactile data. From the results of the experiments, it is shown that the proposed algorithm is able to detect cracks and improve the results obtained from vision to correctly classify cracks and their geometry with minimal cost thanks to the motion planning algorithm. Crack Recognition, Sensing, Extreme Environment, Optical Sensing, Fibre-optics. ## I Introduction Detecting mechanical fractures on objects, such as containers and pipes, is an important task often performed in remote hazardous environments. In this situation, crack detection is particularly important since it can avoid spillage of hazardous material from a container or reveal damage on the surface of the concrete. The majority of existing crack detection approaches rely on computer vision techniques applied to the examined section [1], eddy-current techniques in metallic structures [2], or ultrasonic techniques [3]. In supervised environments in which the cracks have clear continuity and, when acquired with a camera, can produce high-contrast images, edge detection and image segmentation methods can be implemented. However, cracks are more commonly encountered in noisy backgrounds, resulting in poor continuity, low contrast, and a detrimental impact on the acquired picture quality. Deep learning-based approaches for locating and classifying cracks have recently been developed [4]. Chen et al. [5] proposed a fusion deep learning framework called NB-CNN (Naive Bayes - Convolutional Neural Network), which detects crack patches by analysing individual video frames. DeepCrack, a deep convolutional neural network for automatic crack identification, was proposed by Zou et al. [6]. It uses multi-scale deep convolutional features learned at hierarchical convolutional stages to recognise line formations. The VGG-16 DCNN, a pre-trained deep convolutional neural network, was used by Gopalakrishnan et al. [7] to automatically detect fractures in two pavement surface datasets.
Images captured from a camera can be further analysed in the frequency domain by applying wavelets. Subirats et al. [8] presented a method for crack detection based on the 2D continuous wavelet transform to create a binary image which indicates the presence or absence of cracks on the pavement surface image. Zhou et al. [9] proposed an algorithm to separate cracks on roads from noise and background using statistical criteria developed through wavelet coefficients. In [10], Jiang et al. introduced a method for detecting cracks in beams based on the complex Continuous Wavelet Transform, which is more robust than the simple Continuous Wavelet Transform when the signal is contaminated by noise. Considering the anisotropic characteristics of wavelet transformations, these techniques may be at a disadvantage when analysing cracks with high curvature or reduced continuity. The above-described crack detection methods are based on computer vision techniques and can fail in remote environments with limited luminosity or noise due to radiation. Furthermore, vision-based methods are not capable of acquiring material properties such as texture and hardness. In contrast to the visual modality, tactile and proximity sensing can provide important information on material properties such as shape, texture and hardness [11, 12]. The stiffness of objects has been investigated [13] by implementing a hybrid force and proximity finger-shaped sensor, achieving 87% classification accuracy on a set of household objects with different stiffness values. In [14, 15] it was demonstrated how to use fibre optics to recognise and classify fractures on surfaces using time-domain features. Jiang et al. [16] proposed a vision-tactile algorithm for detecting cracks using RGB-D images segmented with fine-tuned Deep Convolutional Neural Networks and a set of contact points generated to guide the collection of tactile images by a camera-based optical tactile sensor. During contact between the sensor and the crack, a pixel-wise mask of the crack was obtained from tactile images to refine the estimated shape of the crack. In addition to machine learning algorithms, which need engineered features extracted from the data, deep learning models (such as CNNs) can be applied to tactile analysis by converting tactile data into images. The authors of [17] suggested an algorithm for recognising the object touched on an electronic skin via human interactions. The skin's 3D tactile data was transformed into 2D images and fed into a CNN that outperformed traditional tactile data classification methods. In [18], the effects of combining touch and sliding movement for tactile texture categorisation via CNN were investigated. It was shown that touch data can be used to make an initial estimate, which can then be revised via sliding. The authors of [19] demonstrated that a multi-modal technique based on CNNs with both visual and physical contact signals resulted in more accurate findings and more robust classification compared to methods based on hand-designed features. **Proposed Approach.** In this paper, as shown in Figure 1, we propose a multi-modal algorithm based on computer vision and tactile analysis to detect and classify cracks. First, object detection is applied to detect possible cracks in a scene. The extracted images are then further analysed via a graph theory algorithm to create a motion planner for a robot manipulator with a tactile sensor attached as an end-effector.
Each of the detected cracks is then explored via the proposed motion algorithm and a machine learning classifier confirms if there is indeed a crack or just a false positive from vision. For each detected crack, additional geometry information is calculated. ## II Proposed System For real-time applications, exploring whole surfaces using only a tactile approach would be too time-consuming and may produce errors. Because of this, we propose a multi-modal algorithm based on a combination of vision and tactile modalities, shown in Figure 1. First, the camera scans the environment and Faster R-CNN is performed to detect the possible location of cracks, as described in [20]. Once the crack is detected, a graph theory algorithm is performed to extract the motion planning algorithm for the robotic manipulator, as described in Section III-A. The motion planning divides the crack into multiple nodes which are then explored individually. Then, the manipulator starts the exploration and performs the tactile data classification to confirm if there is indeed a crack in that location or just a false positive from the vision algorithm, as described in Section IV-A. This is repeated until all the nodes of the crack are explored. The finger-shaped sensor based on fibre-optics described in [13] was employed for the data acquisition to collect data for the analysis and the experiments. The sensor is made of two 3D-printed rigid parts acting as the distal and proximal phalanges of a finger and one 3D-printed soft part, the intermediate phalanx, positioned between the two rigid phalanges. Three pairs of fibre optics (D1, D2, D3) are used to measure the sensor's soft part deformation via changes in the reflected light intensity. A fourth pair of optical fibre cables (P) is positioned at the tip of the finger and it is used to sense the proximity to external objects. The sensor was attached as an end-effector to a Franka Emika Panda robot1 which was used to explore and acquire the data from surfaces of interest. The experimental setup with the video camera and the tactile sensor mounted at the end-effector of a robotic manipulator is shown in Figure 2 a). Different 3D-printed surfaces were provided for exploration. Footnote 1: [https://www.franka.de/](https://www.franka.de/) ### _3D Printed Samples_ A set of 13 (9 different crack geometries, 3 bumpy surfaces and 1 flat surface) 3D printed surfaces was employed for the experiments. Each of the surfaces was 3D printed with an Ultimaker III, 0.2 mm layer height, 0.4 mm nozzle diameter. The crack surfaces were extracted by analysing real crack images. Using Inkscape, it was possible to extract the bitmap of the images and to create a model in Blender, which was then converted into an .stl file and 3D printed using the Ultimaker Cura software. Each model is 125x125x5 mm in size but has a different shape, length, and width. Furthermore, a flat surface was printed with the same size. Three different bumpy surfaces were also printed to create more types of possible non-cracked surfaces. A sample of the surfaces used for the acquisitions and the experiments is shown in Figure 2 b). ## III Motion Planning with Vision for Tactile Exploration ### _Experimental Methods for Motion Planning_ Previously, a multi-modal robotic visual-tactile algorithm was developed to detect and localise possible fractures [15, 20]. The method employed Faster-RCNN for fracture detection.
Once the region of interest containing fractures was extracted, the images could be further explored by extracting the geometry information to plan an optimally controlled tactile exploration. To analyse the obtained localised crack images, image processing and computer vision techniques were implemented to create a skeletonised version of the image of the fracture, which was then transformed into a graph and explored via graph theory. The key steps are demonstrated in Figure 3. First, to avoid obtaining open contours and incomplete masks, a uniformly coloured padding was introduced in the acquired image (size 100\(\times\)200 pixels) of the fracture (Figure 3a) to close potentially open locations. The colour of the padding was chosen as the average of the total RGB colours of the image. The original image was then converted to greyscale and blurred with a Gaussian filter (3x3 kernel). The resulting image was converted to a binary image using Otsu and binary thresholds (Figure 3b). To connect the disconnected cracks in the estimated binary image, the dilation operation in morphological transformations was applied. Then, Canny edge detection was implemented (Figure 3c). The average of the intensities of the pixels was used to automatically estimate the lower and upper thresholds for edge detection. The resulting edges were improved with an additional dilation operation. Using the obtained edges, the contours of the fractures were calculated (Figure 3d). The averaged area of the contours was used to eliminate any outliers. The object mask was then created (Figure 3e), which was used to skeletonise the fracture (Figure 3f). For this purpose, we used the open-source PlantCV library [21]2 which provides a useful method to create a skeleton from the mask and to prune it. The _sknw_ library3 was applied to the resulting skeleton of the crack to convert it into a graph in which each end point and branching point (a point in which various branches of the crack are created) was a vertex and the lines connecting these points were the edges (Figure 3g). This graph was further reduced by calculating the middle point of each of the branches (edges) and using those points as vertices of the new graph created with the _Networkx_ library4 (Figure 3h). In addition, for each middle point, shifted left and right points were created which were used to develop the motion plan of the manipulator with the fibre optic sensor, described in [15], attached as the end-effector. These coordinates and weights were used to define the tactile exploration path. Since the robotic manipulator moves above the surface, it does not have to comply with the curves of the crack to move from one point to the other. Thus, each edge can be converted to a straight line connecting its two vertices. The Euclidean distance between two vertices corresponds to the weight of moving from one node to the other. To find the least costly path to explore the whole graph, a revised version of the Minimum Spanning Tree was implemented. In the proposed scenario, the explorable graph was bidirectional for each node and there was no specific starting point. The main goal was to explore all the vertices only once with the minimum weight, which corresponds to the minimum total Euclidean distance. Each node was then analysed as a starting point and all the possible paths were explored.
The node which produces the least expensive path based on the sum of the Euclidean distances of all the branches was then chosen as the starting point and the path was sent to the manipulator (Figure 3i). Footnote 2: [https://github.com/danforthcenter/plantcv](https://github.com/danforthcenter/plantcv) Footnote 3: [https://github.com/Image-Py/sknw](https://github.com/Image-Py/sknw) Footnote 4: [https://github.com/networkx/networkx](https://github.com/networkx/networkx) Figure 1: Complete algorithm for the crack detection and exploration. First, Faster R-CNN is performed to detect the possible location of cracks. Once a crack is detected, a graph theory algorithm is performed to extract the motion planning algorithm for the robotic manipulator. The motion planning divides the crack into multiple nodes which are then explored one by one. At this point, the manipulator starts the exploration and performs the tactile data classification to confirm if there is indeed a crack in that location or just a false positive from the vision algorithm. This is repeated until all the nodes of the crack are explored. Figure 2: a) The experimental setup for the surface exploration multi-modal approach with vision and the tactile sensor consisting of 3 pairs of fibre optics to calculate the deformation of the soft middle part of the sensor (D1, D2 and D3) and a pair of fibre optics positioned at the tip to calculate the proximity value. b) 3D printed samples of real cracks and bumpy and flat surfaces: 3 simple cracks, 3 y-shaped cracks and 3 difficult cracks, 2 possible combinations of bumpy surfaces and a flat surface. As a result, a robotic manipulator with a tactile sensor attached at the end-effector can directly explore only the main elements of complex cracks branch by branch, following the paths identified with the help of the geometrical analysis of the image of the cracks. ### _Comparison Between Motion Planning and Complete Explorations_ To validate the algorithm proposed in Section III-A, the cracked surfaces shown in Figure 2(b) were scanned in their entirety using the Franka Panda manipulator. In total 9 cracked surfaces were employed: 3 simple-shaped cracks, 3 y-shaped cracks and 3 cracks of more complicated shape. The surfaces were positioned in front of the robot and 10 points (from left to right) were sent to the robot controller for the exploration. This was repeated 10 times. Figure 4 a) shows an example of scanning the whole surface on the left and of the implemented motion planning algorithm for crack exploration on the right. When scanning the entire surface, the scanning and classification took an average of \(\sim\)5 minutes, regardless of the crack's shape or number of branches. When using graph theory to scan, the time required for exploration was determined by the number of branches and middle points found by the vision method. The more branches there were, the longer it took to investigate the crack; simple ones with one or three branches required on average \(\sim\)30 seconds to two minutes to explore. Cracks with a higher number of detected middle points required more time to be explored. On average, it took \(\sim\)30 seconds for each branch of the fracture to be detected, analysed, and classified using the motion planning algorithm. Figure 4(b) shows the comparison of the time required when exploring cracked surfaces via full scans and the motion planner on the 9 cracks introduced in Figure 2.
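The start-node search described in Section III-A admits a very direct implementation. The following minimal Python sketch (not the authors' code; the function name and the sample coordinates are illustrative) enumerates every visiting order of the middle points and keeps the one with the smallest total Euclidean distance. Brute force is assumed to be acceptable here because a detected crack has only a handful of middle points:

```python
# A minimal sketch of the path-selection step: try every node as a starting
# point, enumerate all visiting orders, keep the cheapest open path.
from itertools import permutations
import math

def cheapest_exploration_path(points):
    """points: list of (x, y) middle-point coordinates. Returns (order, cost)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best_order, best_cost = None, float("inf")
    for order in permutations(range(len(points))):
        # total Euclidean length of visiting the nodes in this order
        cost = sum(dist(points[order[i]], points[order[i + 1]])
                   for i in range(len(order) - 1))
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

# e.g. three branch middle points of a y-shaped crack (pixel coordinates)
order, cost = cheapest_exploration_path([(10, 40), (55, 42), (80, 90)])
print(order, cost)
```

Since the number of orders grows factorially, this exhaustive search only stays cheap for the small middle-point graphs produced by the skeletonisation step; larger graphs would call for a heuristic.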
With the complete scanning, to avoid missing possible minor cracks, it is necessary to accurately specify the vertical step between scanning passes, although this results in a longer exploration duration. Using the proposed graph theory algorithm, it is possible to also detect the orientation of the crack and to perform the scanning perpendicularly to the crack. The complete scanning, on the other hand, has the risk of sliding parallel to or over the crack's surface, missing the start and end of the branch of the crack and, consequently, information on the crack's width. However, using the object detection algorithm and graph theory, there is the possibility of not exploring cracks which are not detected by the vision algorithm. In conclusion, depending on the information required, scanning the entire surface of the crack may not be necessary; instead, only a few points extracted using graph theory can be used to obtain accurate information and speed up explorations. Fig. 4: a) On the left, the scanning movement of the manipulator is shown. On the right, the motion planning obtained through the proposed graph theory algorithm is shown. For both images, yellow represents the movement of the robot to reach a specific point, blue the movements corresponding to the node exploration and green the start and end points for each exploration movement. Symbol S shows the starting position of the robot. For each blue movement, the robot moves from one end to the other and then backwards to the starting green point before moving to the next one. b) Comparison of the time required to explore cracked surfaces of various types (basic, y-shaped and complex-shaped) via full scans and the motion planner during 10 runs. Fig. 3: Example of image processing for the crack's geometry analysis: (a) original image; (b) binary image; (c) threshold Canny edge detection with morphological transformations; (d) extraction of contours; (e) binary mask; (f) pruned skeleton; (g) graph and branches identified, in red the middle points of each edge are shown; (h) the fully-connected middle-point graph is created; (i) the optimal exploration path is defined (only the node 1 left and right explorations are shown for brevity). ## IV Tactile Detection of Cracks with Frequency Domain Features In the following section, the analysis of the tactile and proximity data acquired via the sensor described in Section II is introduced. Machine learning techniques are implemented to classify the signals and features are investigated in the frequency domain to detect the presence of cracks on the explored surfaces. ### _Tactile Data Acquisition_ Following the algorithm proposed in Figure 1, once a crack was detected by the object detection algorithm introduced in [20] and further investigated via the graph algorithm proposed in Section III-A, the motion planning for exploring the crack was sent to the Franka Panda controller via ROS Bridge. The experimental setup is shown in Figure 2(a). Two PCs communicated via ROS bridge, with a standard implementation on the ROS side and a ROS-sharp implementation with the Unity game engine, which was used as a middleware for integration with a Virtual Reality based user interface. After the crack was detected and the crack nodes were extracted, the desired crack scan start and end positions in image pixel space were sent to the robot controller. Those pixel coordinates were then projected into the robot's 3D space, resulting in a list of crack scan start and end positions.
These 3D scan start and end positions were then added to the list of waypoints used to plan the robot's trajectory. For each pair of scan start and end positions, the robot would begin a linear scanning motion at the start position until the end position was reached, then reverse back to the start position. The robot would always approach and retract from the scan vertically. The finger was always oriented perpendicular to the scanned surface, and the end-effector's y-axis was always parallel to the scan motion. The tactile data recording started when the robot finished the positioning and the approach to the scan start position, and stopped when the robot reached the end position of the exploration and retracted from the scan. The tactile data was then used to classify the crack node, and the robot would proceed to scan the remaining cracks. During the exploration of the crack, deformation and proximity data were recorded at 400 Hz via an Arduino Mega ADK micro-controller connected via serial port (USB) to the computer and further analysed via feature extraction. To avoid any unwanted displacement during the movement, the samples were fixed to the laboratory desk during the acquisitions. For each class (no crack, crack), \(\sim\)150 acquisitions were performed, and each of the samples was differently positioned and oriented (Figure 2(b)). To extract frequency domain features, the data was first over-sampled from 400 Hz to 800 Hz and filtered with a Butterworth filter with a cutoff frequency of 30 Hz. The derivative was taken from these data and then filtered once more using a Butterworth filter. Feature extraction was then applied to the resulting data: Fourier transforms, spectrograms, continuous and discrete wavelet transformations, and other combinations of characteristics were investigated.

### _Experimental Methods for Frequency Domain Features_

In previous works [15, 14], time domain features were investigated and analysed. The main concern with time domain features in this setup is that the implemented sensor highly relies on the position of the fibre optic cables and the colour of the explored surface. In a previous version of the sensor, the cables were glued together with the 3D-printed parts of the sensor. As a result, when one of the fibre optic cables broke, the whole sensor needed to be replaced. To avoid having to replace the sensor and create additional waste, in the current version the cables were left free. Because of this, one of the cables might slightly move and yield different magnitudes in the data when the sensor is relocated. In the previous experiments, it was noticed that the models implemented with the time domain features were not robust to the movements of the cables when the sensor was moved from one physical location to another, even after applying standardisation and further optimisation techniques. To overcome this problem, alternatives to the time domain features were investigated. It was noticed that the shape of the data when exploring cracks was similar across different acquisitions performed in various experiments. Frequency domain features were then explored to investigate the spectrum of the signal. To extract these features, the data was first over-sampled from 400 Hz to 800 Hz and filtered with a Butterworth filter with a cutoff frequency of 30 Hz. From this data, the derivative was extracted and filtered again with the Butterworth filter.
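A minimal SciPy sketch of this preprocessing chain is shown below. The 30 Hz cutoff and the 400 Hz to 800 Hz over-sampling follow the text; the filter order and the zero-phase `filtfilt` call are our assumptions, since the paper does not specify them.

```python
import numpy as np
from scipy import signal

def preprocess_channel(raw, fs_in=400, fs_out=800, cutoff_hz=30, order=4):
    """Over-sample one fibre-optic channel, low-pass it, then low-pass
    its time derivative, as described in Section IV-B."""
    resampled = signal.resample(raw, int(len(raw) * fs_out / fs_in))  # 400 -> 800 Hz
    b, a = signal.butter(order, cutoff_hz, btype="low", fs=fs_out)    # 30 Hz Butterworth
    smoothed = signal.filtfilt(b, a, resampled)
    derivative = np.gradient(smoothed) * fs_out                       # time derivative
    return signal.filtfilt(b, a, derivative)
```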
The resulting data was then used for feature extraction. Multiple combinations of features were investigated: continuous and discrete wavelet transformations, Fourier transforms and spectrograms.

#### IV-B1 Spectrograms

From the above-mentioned data, to have a visual representation of the shape of the signal and investigate the signal strength over time, the spectrograms of the derivative of the signal were extracted. The spectrograms were normalised and the greyscale result was used. The right side of Figure 5 shows an example of the obtained spectrogram for a flat surface, a bumpy surface and a surface with a crack, for each of the fibre optic pairs of the implemented sensor.

#### IV-B2 Fast Fourier Transform

The Fast Fourier Transform [22] (FFT) is a mathematical operation which converts a signal into its individual spectral components and provides frequency information about the signal. FFTs have been used in multiple applications for quality control and condition monitoring of machines or systems [23], and also for crack detection [24]. FFTs were calculated on the derivative of the signal data. The left side of Figure 5 shows an example of the FFT applied to the proximity P sensor for a flat surface, a bumpy surface and a surface with a crack.

#### IV-B3 Discrete and Continuous Wavelets

Although Fourier transforms have a high resolution in the frequency domain, they have zero resolution in the time domain. To address this loss of time-domain components, the use of the Wavelet transform [25] has been proposed [26]. The Wavelet transform alters the shape of the Fourier transform's simple sine and cosine functions: in contrast to Fourier, where the sine and cosine run over (\(-\infty\),\(+\infty\)), the mother function of a Wavelet is finite in time. A wavelet decomposition, unlike a Fourier decomposition, uses a time-localised oscillatory function as the analysing or mother wavelet. There are two possible types of Wavelet Transforms: Discrete and Continuous. The difference between the two is that the Continuous Wavelet Transform (CWT) uses an infinite number of scales and locations, whereas the Discrete Wavelet Transform (DWT) makes use of a finite set of wavelets, defined at certain scales and locations. The Continuous Wavelet Transform is given by

\[T(a,b)=\frac{1}{\sqrt{a}}\int_{-\infty}^{\infty}x(t)\,\psi^{*}\!\left(\frac{t-b}{a}\right)dt \tag{1}\]

and the Discrete Wavelet Transform by

\[T_{m,n}=\int_{-\infty}^{\infty}x(t)\,\psi_{m,n}(t)\,dt \tag{2}\]

where \(x\) is the original signal, \(\psi\) is an arbitrary mother wavelet, \(a\) is the scale factor and \(b\) is the translation factor applied to the mother wavelet. In addition to the two types of Wavelets, for each type there are multiple possible families to choose from to best extract the informative data from the shape of the signal. Discrete and Continuous Wavelets have been used in audio analysis [27], image processing [28] and crack detection [29]. In this work, the library PyWavelets [30] was used to investigate both types of Wavelet Transforms and all the corresponding families, and to find the most accurate Wavelet Transform method for the implemented data. On the left of Figure 5, an example of the discrete wavelet transformation for flat, bumpy and cracked surfaces is shown, and on the right an example of the continuous wavelet transformation.
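The snippet below sketches how the four feature families can be produced for one preprocessed channel with NumPy, SciPy and PyWavelets. The wavelet names `db11` and `cgau1` anticipate the best-performing families reported later (Section IV-D); the CWT scale range is an illustrative assumption.

```python
import numpy as np
import pywt
from scipy import signal

def frequency_features(derivative, fs=800):
    """FFT, spectrogram, DWT and CWT feature maps for one channel."""
    fft_mag = np.abs(np.fft.rfft(derivative))              # Fast Fourier Transform
    _, _, spec = signal.spectrogram(derivative, fs=fs)     # spectrogram (normalise downstream)
    approx, detail = pywt.dwt(derivative, "db11")          # discrete wavelet (Daubechies-11)
    scales = np.arange(1, 31)
    cwt_coeffs, _ = pywt.cwt(derivative, scales, "cgau1")  # complex Gaussian CWT
    return fft_mag, spec, (approx, detail), np.abs(cwt_coeffs)
```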
### _Model for Crack Recognition with Tactile Sensing_

For the analysis of the data, three models were implemented: a Random Forest using the fast Fourier transform and discrete wavelets, a Convolutional Neural Network (CNN) for the spectrogram images and the continuous wavelets, and a Multi-CNN for all the above-mentioned features.

#### IV-C1 Random Forest

Random Forest [31] is a machine learning algorithm which is used for both classification and regression tasks. It is an ensemble method which makes use of multiple learning trees. It can handle missing data, and it is more robust to over-fitting than other classifiers while producing reasonable predictions with little or no hyper-parameter optimisation. Random Forest classifiers have been used for remote sensing and crack detection in multiple applications [32, 15, 14]. In the following experiments, a Random Forest classifier with 100 trees has been implemented.

#### IV-C2 Convolutional Neural Network

A Convolutional Neural Network (CNN) is a type of Deep Learning algorithm made up of one or more convolutional layers. It was first introduced in the 1980s in the "neocognitron" form for visual pattern recognition [33] and later evolved into the commonly known CNN, presented for handwritten character recognition [34]. CNN networks have been implemented in multiple applications for the detection of cracks [1]. In this work, a CNN model is implemented for classifying cracks using tactile and proximity data. Figure 6 shows the complete architecture of the implemented CNN. The input of the model was the greyscaled image of the wavelet plus the greyscaled image of the spectrogram, creating a 150x150x2 input shape. The following sequence is then repeated three times, with the filter size of the convolutional layer going from 16 to 32 and 64: a convolutional layer with a 3x3 filter and Rectified Linear Unit (ReLU) activation, followed by batch normalisation and a dropout layer, set to a 0.5 rate, which helps prevent overfitting, and finally a max pooling layer with a 2x2 filter. A _softmax_ layer with two neurons (the two possible values, True or False, for crack detection) was implemented as the output layer. Categorical cross-entropy was used as the loss and Adam [35] was used as the optimiser to minimise the cost function during training. A simpler architecture with two convolutional layers was also investigated, but it resulted in higher overfitting. The model is trained for 50 epochs to avoid overfitting.

#### IV-C3 Multi-Modal Convolutional Neural Network

An alternative model is investigated to improve the results obtained with the best combination of features for both the Random Forest and the CNN, using mixed data and the TensorFlow Keras API. The Fourier transform and the best discrete wavelet are used as numeric values, while the spectrogram and the continuous wavelet are included as image data. The Multi-modal fusion Convolutional Neural Network (M-CNN) uses the same architecture as the CNN shown in Figure 6 to analyse the image features. Two hidden layers are used to analyse the discrete wavelets and Fourier transforms. The two models are then concatenated. Adam was used as the optimiser and categorical cross-entropy as the loss. The model was trained for 50 epochs.
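A compact Keras sketch of the CNN branch of Figure 6 follows. The layer sequence, filter sizes, dropout rate, loss and optimiser are taken from the text; the padding mode and the flatten-then-dense head are assumptions where the text is silent.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(150, 150, 2), n_classes=2):
    """Three conv blocks (16/32/64 filters) on the stacked greyscale
    wavelet + spectrogram image, ending in a two-neuron softmax."""
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64):
        x = layers.Conv2D(filters, (3, 3), activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.5)(x)          # 0.5 rate, as stated, against overfitting
        x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

For the M-CNN variant, this image branch would be concatenated with a small dense branch over the FFT and discrete-wavelet features before the final softmax, as described above.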
### _Experiments and Results_

To validate the developed models introduced in Section IV-C, two experiments were performed. The first experiment used the dataset acquired with the Franka Panda robot as the training and validation dataset to classify the data acquired in [15], which was used as the testing set. The second experiment was performed to detect the crack online during the exploration with the Franka Panda manipulator. In the experiments, the data was split 60% for training, 20% for validation and 20% for testing. When using the Random Forest as the classifier, the discrete wavelet transform was used in combination with the fast Fourier transformation. It was investigated which of the two was better for classifying the crack, and whether a combination of the two could improve the results. For each feature, the number of peaks, the maximum value of the peaks and the minimum value of the peaks were extracted.
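These three scalar summaries can be computed, for instance, with `scipy.signal.find_peaks`; the sketch below is a plausible reading of the text, since the exact peak-detection parameters are not given in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_summary(feature_vector):
    """Number of peaks plus the largest and smallest peak values of one
    feature vector (e.g. an FFT magnitude or a DWT coefficient array)."""
    values = np.asarray(feature_vector)
    peaks, _ = find_peaks(values)
    if peaks.size == 0:
        return 0, 0.0, 0.0
    return int(peaks.size), float(values[peaks].max()), float(values[peaks].min())
```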
When using the CNN and M-CNN as classifiers, the continuous wavelet transformation was implemented as a feature in combination with the spectrograms of the data. Following the same format as with the Random Forest, it was investigated whether each feature would perform better than the other or whether a combination of the two would be more accurate in detecting a cracked surface. For brevity, in the following sections only the results with the wavelets achieving the best scores are shown. Furthermore, the models trained with Fourier and discrete wavelets or with spectrogram and continuous wavelets are shown. In the experiment, a passive tactile exploration is performed by exploring the points extracted from the vision side of the algorithm. Thanks to this, it is possible to create a model of the crack and speed up its exploration. Active exploration could also be performed by sliding over the whole surface, but it would incur a higher exploration time.

#### IV-D1 Training and Testing with a Different Database

To investigate the ability of the models to generalise, an experiment was performed. The data acquired in Section IV-A were used as the training set and the data acquired in [15, 14] were used as the testing set. Table I shows the results of the classification. The Random Forest is the most balanced model for almost all the various sensor combinations using the proximity P and the deformation sensors; it achieves the highest F1 score of \(88.27\%\), recall of \(99.2\%\) and precision of \(83.39\%\) when using a combination of the discrete wavelet transform DB-11 and the Fast Fourier Transform acquired from D2, D3 and P. The CNN model achieves the highest score when considering a combination of the proximity P and the deformation sensors D2 and D3, with an F1 score of \(77.20\%\), recall of \(83.33\%\) and precision of \(78.40\%\) for the CGAU-1 continuous wavelet and the spectrogram.

Fig. 5: **On the left.** Sample of the features implemented in the frequency data analysis for the proximity P sensor for crack, bumpy and flat surfaces. For each surface, at the top the raw data acquired from the tactile sensor are shown, followed by the time-derivative data extracted from the raw data. In the middle, the discrete wavelet transformations used in the Random Forest and Multi-modal fusion Convolutional Neural Network (M-CNN) models are shown. At the bottom, the Fourier transformation implemented in the Random Forest and M-CNN models is shown. **On the right.** Sample of the features implemented in the frequency data analysis for crack, bumpy and flat surfaces. For each surface, the spectrograms and continuous wavelets implemented in the CNN and M-CNN models are shown.

For the M-CNN model, the best result was achieved when considering a combination of Fourier, spectrograms, continuous and discrete wavelets, with an F1 score of \(80.25\%\), recall of \(80.90\%\) and precision of \(83.54\%\) for the D1, D2, D3 and P sensors, CGAU-1 as the continuous wavelet and DB-11 as the discrete wavelet. Using the M-CNN model, the sensitivity can be increased when considering only the D2, D3 and P sensors, at the expense of specificity. Among the three models, the Random Forest with the combination of the implemented frequency-based features of the discrete wavelet and the Fourier transformation is the most balanced and reliable model, with fewer fluctuations among the various tactile data.

#### IV-D2 Online Experiment with Different Modalities

In addition to the offline experiment, one experiment was conducted online with three different modalities (single and multiple cracks, occluded cracks and painted cracks) by integrating the computer vision algorithm introduced in Section III-A and the tactile analysis, to further evaluate the abilities of the generated classification models.

**Online Experiment Modality 1 - Single and Multiple Cracks Detection.** First, each of the 3D printed samples introduced in Section III-A was positioned in different orientations and explored 10 times, for a total of 90 explorations and a total of 347 explored nodes. Furthermore, multiple cracks were introduced into the environment, detected by the computer vision algorithm and then classified with the sensor. 10 explorations were performed. For each exploration, each crack was divided into nodes, which were classified one by one, for a total of 64 nodes. In total, 100 explorations were performed and 411 nodes were explored. The models introduced in Section IV-C were used to classify the various samples. In our setup, the cost of not detecting cracks present on the surface was higher than that of wrongly classifying a flat surface as a crack. Undetected cracks may continue to grow and impact the functionality of the surface on which they are present. Because of this, lowering the number of false negatives is crucial, but we also want to keep the number of false positives to a minimum. Recall is thus the most crucial parameter in these studies, but F1, which takes both precision and recall into consideration, should also be taken into account. Figure 7(a) shows the results compared when implementing the Random Forest classifier with the DB-11 discrete wavelet transformation and Fourier transformation, the CNN with the CGAU-1 continuous wavelet and spectrogram plot, and the M-CNN with the CGAU-1 continuous wavelet and DB-11 discrete wavelet transformations, Fourier transformation and spectrogram plots.

Fig. 6: **CNN model architecture.** It uses a combination of the spectrogram and wavelet figures, which form a 150x150x2 input shape. This is followed by a sequence which is repeated 3 times for 16, 32 and 64 filters and consists of a convolutional layer with a 3x3 filter and Rectified Linear Unit (ReLU) activation, followed by batch normalisation and a dropout layer, set to a 0.5 rate, which helps prevent overfitting, and finally a max pooling layer with a 2x2 filter. The final layer consists of a \(softmax\)-activated layer with two neurons. **M-CNN model architecture.** The data input, consisting of Fourier- and wavelet-extracted features, is combined with the final layer of the CNN network.
The model which achieves the highest F1 score is the Random Forest classifier, with \(93.50\%\), a recall of \(89.56\%\) and a precision of \(97.90\%\) using D1, D2 and P as sensors. On the other hand, the model which achieves the highest recall of \(99.30\%\), at the expense of a precision of \(86.51\%\), is the M-CNN with D1, D2, D3 and P as sensors, for a total F1 score of \(92.48\%\).

**Online Experiment Modality 2 - Detection of Occluded Crack.** Occlusions were added to the cracks. The occlusions were created with screws and textile fabric. Figure 8(a) shows an example of the produced occlusions. 10 explorations were performed, for a total of 54 explored nodes. Figure 7(b) shows the results compared when implementing the Random Forest classifier with the DB-11 discrete wavelet transformation and Fourier transformation, the CNN with the CGAU-1 continuous wavelet and spectrogram plot, and the M-CNN with the CGAU-1 continuous wavelet and DB-11 discrete wavelet transformations, Fourier transformation and spectrogram plots. The model which achieves the highest result is the Random Forest, with an F1 score of \(100\%\), a recall of \(100\%\) and a precision of \(100\%\) using D1, D2, D3 and P as sensors. When using the sensors D1, D2 and P, the model which achieves the highest result is still the Random Forest, with \(100\%\) recall, a precision of \(97.62\%\) and an F1 score of \(98.80\%\).

**Online Experiment Modality 3 - Painted Crack.** To test the robustness of the complete algorithm, fake cracks were created with a marker. Figure 8(b) shows an example of the crack painted with the marker. 10 explorations were performed, for a total of 33 explored nodes. Figure 7(c) shows the results compared when implementing the Random Forest classifier with the DB-11 discrete wavelet transformation and Fourier transformation, the CNN with the CGAU-1 continuous wavelet and spectrogram plot, and the M-CNN with the CGAU-1 continuous wavelet and DB-11 discrete wavelet transformations, Fourier transformation and spectrogram plots. Since this experiment was made up of negative labels only, the accuracy metric was used instead. Three possible combinations of models achieved an accuracy score of \(100\%\): the CNN or the Random Forest with D1, D2 and D3 (only force sensors), and the CNN with D2, D3 and P. Introducing the proximity sensor P into the models resulted in more false positive labels when classifying the data acquired from painted cracks. This may be due to the fact that, when acquiring the data used for training the models, no similar data was added. Thus, when the model noticed spikes in the proximity P, it was most prone to classify the data as a cracked surface. On the other hand, when using only the force sensors, the data was classified as no crack because it was similar to a flat surface, since no deformation of the sensors was detected.

Fig. 7: Results of the online experiment. For each modality, the results are compared when implementing the Random Forest classifier with the DB discrete wavelet transformation and Fourier transformation, the CNN with the CGAU continuous wavelet and spectrogram plot, and the M-CNN with the CGAU continuous wavelet and DB discrete wavelet transformations, Fourier transformation and spectrogram plots. NC = No Crack, C = Crack. (a) Modality 1 shows the results for online detection of single and multiple cracks. (b) Modality 2 shows the results for cracks occluded with screws, textile and other materials. (c) Modality 3 shows the results for fake cracks painted with a marker on surfaces.

## V Crack characterisation via geometrical inspection

In this section, a characterisation of the explored cracks is performed.
Using the data acquired in the previous sections, when a crack is detected, geometrical data are calculated for its length, width, orientation and number of branches.

### _Crack Characterisation_

During the crack exploration, in addition to detecting the presence of the crack, further information characterising the crack can be extracted. If a crack was detected during the exploration, the width, length, number of branches and orientation were calculated. The set of explored cracked surfaces is shown in Figure 2(b). The width of the crack was determined by using the derivative of the fibre optic proximity data (P in Figure 2(a)). From the derivative, the indices of the highest slopes were extracted. These represent the start and end points of the width of the crack. The time necessary to explore this segment was then multiplied by the speed of the Franka Panda robot, and the width was obtained. The same was applied to both movements on the same section (from the start node to the end point and backwards), and then the mean was taken and used as the final width:

\[\begin{split} T&=\frac{1}{f_{s}}\\ t&=NT\\ \overline{w}&=\frac{v_{s}t_{s}+v_{e}t_{e}}{2}\end{split} \tag{3}\]

where \(T\) is the period, \(f_{s}\) the sampling rate, and \(t\) the time necessary for exploring the section, which consists of \(N\) samples; \(v_{s}\) and \(t_{s}\) indicate the velocity and time for the movement from the start to the end point, and \(v_{e}\) and \(t_{e}\) the velocity and time for the opposite movement. The length of the branches was calculated using the graph theory information. The x and y coordinates of the points of the branch were stored and used to calculate the length \(l\) as the summed Euclidean distance:

\[l=\sum_{i=0}^{n-1}\sqrt{\left(x_{i+1}-x_{i}\right)^{2}+\left(y_{i+1}-y_{i}\right)^{2}} \tag{4}\]

The orientation of the branches was calculated from the arctangent between the start point and the end point of the branch:

\[\theta=\tan^{-1}(y_{e}-y_{s},\,x_{e}-x_{s}) \tag{5}\]

where \((x_{e},y_{e})\) are the coordinates of the end point of the branch and \((x_{s},y_{s})\) are the coordinates of the starting point of the branch. The total number of branches of the crack was also calculated: when the classifier identified an explored segment as a crack, the number of branches was increased by 1.
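The three per-branch quantities of equations (3)-(5) translate directly into a few lines of NumPy. The sketch below assumes one proximity-derivative array per scanning pass; the helper names are hypothetical.

```python
import numpy as np

def pass_width(proximity_deriv, fs, speed):
    """One pass of eq. (3): the two steepest slopes of the proximity
    derivative mark the crack edges; their time gap times the robot
    speed gives the width measured on this pass."""
    i, j = np.sort(np.argsort(np.abs(proximity_deriv))[-2:])
    return speed * (j - i) / fs

def branch_length(xs, ys):
    """Eq. (4): summed Euclidean distance along the branch points."""
    return float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))

def branch_orientation(start, end):
    """Eq. (5): arctangent between the start and end points."""
    return float(np.arctan2(end[1] - start[1], end[0] - start[0]))

# Final width, averaging the forward and backward passes as in eq. (3):
# width = (pass_width(d_fwd, fs, v_s) + pass_width(d_bwd, fs, v_e)) / 2
```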
### _Experiments and Results_

Using the data acquired during Experiment 1 in Section IV-D2, the geometry analysis of the fractures was performed. Each of the explored node results was compared with the ground truths, which were automatically created using an image of the fracture in plain visibility and with no additional external condition (e.g. occlusion, rotation, etc.). The Mean Relative Error (MRE) was calculated for the length, orientation, number of branches and width. Figure 9 shows the results for the geometry experiments and the MRE for the calculated measurements. Using only the deformation data, the Random Forest achieved the worst results, since the majority of the cracks were mislabelled; this model is therefore not taken into consideration in the following discussion. The length measurements were calculated in millimetres. The maximum value for the length was 157.13 mm and the minimum value was 10.99 mm. The model showing the lowest MRE of \(\sim\)18% is the CNN with the deformation data. The width measurements were calculated in millimetres. The largest value was equal to 7 mm and the smallest value was equal to 1.5 mm.

The main problem with the width was that, as the 3D printed samples were reconstructed from real cracks, the width changed from point to point; moreover, as the cracks occurred in different orientations and positions, the middle point of the crack did not always correspond to the middle point of the ground truth, so variations of millimetres occurred. Nevertheless, the models achieved an MRE of \(\sim\)30%, and the best model was the CNN using only the deformation data, achieving a \(\sim\)20% MRE. The orientation measurements were calculated in radians. In this case, the main concern was the different orientations of the captured pictures with respect to the ground labels of the original pictures. The majority of the acquisitions for the experiments was a series of rotations of the 3D printed surfaces. Thus, by knowing the performed rotation, it was possible to remap the detected nodes to the ground-truth nodes and calculate the angles appropriately. The best model for this measurement was the CNN with the deformation values, which achieved an MRE of \(\sim\)15%. The number-of-branches measurement identifies the number of calculated sections of cracks. The maximum number of detected branches is 6 and the minimum is 1. For this experiment, the best model was the Random Forest, with an MRE of \(\sim\)9% when using the deformation data together with the proximity data.

Fig. 8: (a) Example of the occlusion effect produced on the 3D printed cracked surfaces. (b) Example of the cracks drawn with a marker.

## VI Conclusions

In this paper, an algorithm to detect and classify cracks via visual and tactile data is presented. The method uses the camera to scan an environment, and Faster R-CNN is performed to detect the possible location of cracks. Once a crack is detected, a graph theory algorithm is performed to calculate the least expensive motion planning sequence for the robotic manipulator. The motion planning divides the crack into multiple nodes, which are then explored individually. Then, the manipulator starts the exploration and performs the tactile data classification to confirm if there is indeed a crack in that location or just a false positive from the vision algorithm. If a crack is detected, the length, width, orientation and number of branches are also calculated. This is repeated until all the nodes of the crack are explored. To validate the complete algorithm, various experiments were performed. The results of the experiments show that the proposed algorithm is able to detect cracks and improve the results obtained from vision, correctly classifying cracks and their geometry with minimal time cost thanks to the motion planning algorithm. This approach may also be implemented in extreme environments, since gamma radiation does not interfere with the sensing mechanism of fibre optic-based sensors. The paper has contributed to advances in crack detection by introducing a multi-modal algorithm that detects cracks in the environment via computer vision and then confirms the presence of a crack via tactile exploration and machine learning classification of the data acquired from a fibre-optic-based sensor. Few methods currently use tactile sensing for crack characterisation and detection, and this is the first study which shows the reliability of tactile-based methodologies for crack detection via machine learning analysis. Furthermore, this is the first method which combines both tactile and vision for crack analysis.
**Future Work.** The proposed algorithm was developed on flat surfaces, with the camera always perpendicular to the surface. Given the small region of interest corresponding to the detected crack, the model may still perform under less constrained viewpoints, albeit with reduced accuracy; the proximity channel alone may be used in this case to overcome displacement errors. In the future, this may be improved by creating a depth mask of the explored surface through an RGB-D camera.
2309.01547
Discrepancies and their means
It is shown that the discrepancy function for point distributions on a torus is expressed by an explicit formula in terms of its mean values on sub-tori. As an application of this formula, a simple proof of a theorem of Lev on the equivalence of $L_{\infty}$- and shifted $L_q$-discrepancies is given.
M. M. Skriganov
2023-09-04T11:59:55Z
http://arxiv.org/abs/2309.01547v1
# Discrepancies and their means ###### Abstract. It is shown that the discrepancy function for point distributions on a torus is expressed by an explicit formula in terms of its mean values on sub-tori. As an application of this formula, a simple proof of a theorem of Lev [2] on the equivalence of \(L_{\infty}\)- and shifted \(L_{q}\)-discrepancies is given. Key words and phrases: Point distributions, discrepancy theory 2010 Mathematics Subject Classification: 11K38

The point distribution problem on the \(d\)-dimensional torus \(\mathbb{T}^{d}=\mathbb{R}^{d}/\mathbb{Z}^{d}\) is conveniently considered as a periodic problem on the covering space \(\mathbb{R}^{d}\). For \(X=(x_{1},\dots,x_{d})\in\mathbb{R}^{d}\) and \(Y=(y_{1},\dots,y_{d})\in\mathbb{K}^{d}=[0,1]^{d}\), we define the periodic discrepancy function by \[L(X,Y)=\chi(X,Y)-v(Y), \tag{1}\] where \(\chi(X,Y)=\prod_{j=1}^{d}\chi(x_{j},y_{j}),\ v(Y)=\prod_{j=1}^{d}v(y_{j}),\ v(y)=y\), and \[\chi(x,y)=\begin{cases}1,&\text{if $\{x\}<y$,}\\ 0,&\text{otherwise,}\end{cases} \tag{2}\] where \(\{x\}\) is the fractional part of \(x\in\mathbb{R}\), and \(y\in[0,1]\). It is clear that \(\chi(X,Y)\) is the indicator function of the periodic collection of rectangular boxes \[\mathcal{B}(Y)=\bigcup\nolimits_{(m_{1},\dots,m_{d})\in\mathbb{Z}^{d}}\prod\nolimits_{j=1}^{d}[m_{j},y_{j}+m_{j}).\] The mean value of the discrepancy function (1) has the form \[\lambda(X)=\int_{\mathbb{K}^{d}}L(X,Y)\,\mathrm{d}Y=\prod\nolimits_{j=1}^{d}(1-\{x_{j}\})-2^{-d}. \tag{3}\] Let \([d]=(1,\dots,d)\) denote the set of coordinate indices. For subsets \(J\subseteq[d]\) we introduce the partial discrepancy functions by \[L_{J}(X,Y)=\chi_{J}(X,Y)-v_{J}(Y), \tag{4}\] where \(\chi_{J}(X,Y)=\prod_{j\in J}\chi(x_{j},y_{j}),\ v_{J}(Y)=\prod_{j\in J}v(y_{j})\). The corresponding mean values have the form \[\lambda_{J}(X)=\int_{\mathbb{T}^{|J|}}L_{J}(X,Y_{J})\,\mathrm{d}Y_{J}=\prod\nolimits_{j\in J}(1-\{x_{j}\})-2^{-|J|}, \tag{5}\] where \(\mathbb{T}^{|J|}\subseteq\mathbb{T}^{d}\) denotes the sub-torus corresponding to the subset \(J\subseteq[d]\), and \(|J|\) denotes the number of elements of \(J\). The quantities (4), (5) depend on the projections \(X_{J}=(x_{j})_{j\in J}\in\mathbb{R}^{|J|},Y_{J}=(y_{j})_{j\in J}\in\mathbb{K}^{|J|}\), but not on the additional variables \(X_{J^{\prime}}=(x_{j})_{j\in J^{\prime}}\in\mathbb{R}^{|J^{\prime}|},Y_{J^{\prime}}=(y_{j})_{j\in J^{\prime}}\in\mathbb{K}^{|J^{\prime}|}\), where \(J^{\prime}=[d]\setminus J\) denotes the complement of \(J\). For \(J=[d]\), we write \(L_{[d]}(X,Y)=L(X,Y)\), \(\lambda_{[d]}(X)=\lambda(X)\), and for the empty set, we put \(L_{\emptyset}(X,Y)=0,\,\lambda_{\emptyset}(X)=0\).

**Definition.** For a periodic function \(f(X)=f(X_{J},X_{J^{\prime}}),X\in\mathbb{R}^{d}\), and a vector \(Y_{J}=(y_{j})_{j\in J}\in\mathbb{K}^{|J|}\), the _alternant_ is defined by \[f^{(alt)}(X\,|\,Y_{J})=f^{(alt)}(X_{J},X_{J^{\prime}}\,|\,Y_{J})=\sum\nolimits_{\Theta_{J}}(-1)^{|\Theta_{J}|}f(X_{J}-\Theta_{J}\cdot Y_{J},X_{J^{\prime}}), \tag{6}\] where \(\Theta_{J}=(\theta_{j})_{j\in J}\in\{0,1\}^{|J|}\) are the vertices of the cube \(\mathbb{K}^{|J|}\), summation in (6) is taken over all such vertices, \(\Theta_{J}\cdot Y_{J}=(\theta_{j}y_{j})_{j\in J}\) and \(|\Theta_{J}|=\sum_{j\in J}\theta_{j}\).
**Remark.** If \(f(X)=\prod_{j\in J}f_{j}(x_{j})\), then \[f^{(alt)}(X\,|\,Y_{J})=\prod\nolimits_{j\in J}f_{j}^{(alt)}(x_{j}\,|\,y_{j})=\prod\nolimits_{j\in J}(f_{j}(x_{j})-f_{j}(x_{j}-y_{j})), \tag{7}\] and if at least one of the functions \(f_{j}\) is constant, then \(f^{(alt)}=0\).

**Theorem (Main Identity).**_The discrepancy function \(L(X,Y)\) satisfies the identity_ \[L(X,Y)=\sum\nolimits_{J\subseteq[d]}\,v_{J^{\prime}}(Y)\,\lambda_{J}^{(alt)}(X\,|\,Y_{J}). \tag{8}\]

Proof.: Let \(\omega(x)=\frac{1}{2}-\{x\}\), then \(\int_{0}^{1}\omega(x)\mathrm{d}x=0\). We put \[\omega_{J}(X)=\prod\nolimits_{j\in J}\omega(x_{j})\qquad\text{and}\qquad\omega_{\emptyset}(X)=0. \tag{9}\] The indicator function (2) can be written in the form \[\begin{split}\chi(x,y)&=y-\{x\}+\{x-y\}\\ &=y+\omega(x)-\omega(x-y)=y+\omega^{(alt)}(x\,|\,y).\end{split} \tag{10}\] This formula can be proved by considering the graph of the function \(\{x\}-\{x-y\},x\in\mathbb{R}\). Substituting (10) into (1) and using (9) and (7), we obtain \[L(X,Y)=\sum\nolimits_{J\subseteq[d]}\,v_{J^{\prime}}(Y)\,\omega_{J}^{(alt)}(X\,|\,Y_{J}).\] For the mean value (5), we find \[\lambda_{J}(X)=\prod\nolimits_{j\in J}(2^{-1}+\omega(x_{j}))-2^{-|J|}=\sum\nolimits_{I\subseteq J}\,2^{-|J\setminus I|}\,\omega_{I}(X). \tag{11}\] Let us calculate the alternant of the mean value (11). We have \[\lambda_{J}^{(alt)}(X\,|\,Y_{J})=\omega_{J}^{(alt)}(X\,|\,Y_{J})+\sum\nolimits_{I\subset J}\,2^{-|J\setminus I|}\,\omega_{I}^{(alt)}(X\,|\,Y_{J}).\] By the above Remark, \(\omega_{I}^{(alt)}(X\,|\,Y_{J})=0\) for proper subsets \(I\subset J\), since \(\omega_{I}(X)=\prod\nolimits_{j\in I}\omega(x_{j})\prod\nolimits_{j\in J\setminus I}1\). Therefore, \(\lambda_{J}^{(alt)}(X\,|\,Y_{J})=\omega_{J}^{(alt)}(X\,|\,Y_{J})\), and the Theorem follows.
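For orientation, consider the simplest case \(d=1\): the identity (8) then contains only the terms \(J=\emptyset\) and \(J=[1]\), and since \(\lambda_{\emptyset}=0\) and \(v_{\emptyset}(Y)=1\), it reduces to \(L(x,y)=\lambda^{(alt)}(x\,|\,y)\). Indeed, by (3) and (10), \[\lambda^{(alt)}(x\,|\,y)=\lambda(x)-\lambda(x-y)=\{x-y\}-\{x\}=\chi(x,y)-y=L(x,y).\]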
We consider periodic point distributions \(\mathcal{D}\) on \(\mathbb{R}^{d}\), \(\mathcal{D}+M=\mathcal{D}\), \(M\in\mathbb{Z}^{d}\), with a finite set of residues \(\mathcal{D}/\mathbb{Z}^{d}\). Notice that instead of point distributions, arbitrary periodic complex Borel measures on \(\mathbb{R}^{d}\) finite on \(\mathbb{T}^{d}\) could be considered, but we do not consider such a generalization in order not to complicate the notation. We define the local discrepancy \[L[\mathcal{D},Y]=\sum\nolimits_{X\in\mathcal{D}/\mathbb{Z}^{d}}L(X,Y), \tag{12}\] and the \(L_{q}\)-discrepancies \[L_{q}[\mathcal{D}]=\left(\int\nolimits_{\mathbb{K}^{d}}|L[\mathcal{D},Y]|^{q}\,\mathrm{d}Y\right)^{1/q},\,0<q<\infty,\qquad L_{\infty}[\mathcal{D}]=\sup\nolimits_{Y\in\mathbb{K}^{d}}|L[\mathcal{D},Y]|.\] We also introduce the shifted discrepancies \[L_{q}^{*}[\mathcal{D}]=\sup\nolimits_{Z\in\mathbb{T}^{d}}L_{q}[\mathcal{D}+Z],\qquad L_{\infty}^{*}[\mathcal{D}]=\sup\nolimits_{Z\in\mathbb{T}^{d}}L_{\infty}[\mathcal{D}+Z],\] and their mean values \[\lambda_{J}[\mathcal{D}]=\sum\nolimits_{X\in\mathcal{D}/\mathbb{Z}^{d}}\lambda_{J}(X),\qquad\lambda_{J}^{*}[\mathcal{D}]=\sup\nolimits_{Z\in\mathbb{R}^{d}}\lambda_{J}[\mathcal{D}+Z].\] The above Theorem immediately implies the following.

**Lemma 1.**_The local discrepancy \(L[\mathcal{D},Y]\) satisfies the identity_ \[L[\mathcal{D},Y]=\sum\nolimits_{J\subseteq[d]}\,v_{J^{\prime}}(Y)\,\lambda_{J}^{(alt)}\,[\mathcal{D}\,|\,Y_{J}], \tag{13}\] _where \(\lambda_{J}^{(alt)}[\mathcal{D}\,|\,Y_{J}]=\sum\nolimits_{X\in\mathcal{D}/\mathbb{Z}^{d}}\,\lambda_{J}^{(alt)}(X\,|\,Y_{J})\)._

_The discrepancy \(L_{\infty}[\mathcal{D}]\) satisfies the inequality_ \[L_{\infty}[\mathcal{D}]\leq\sum\nolimits_{J\subseteq[d]}\,2^{|J|}\,\lambda_{J}^{*}[\mathcal{D}]. \tag{14}\]

Proof.: The identity (13) follows from (8). The definition (6) implies \(\lambda_{J}^{(alt)}[\mathcal{D}\,|\,Y_{J}]\leq 2^{|J|}\,\lambda_{J}^{*}[\mathcal{D}]\), and the inequality (14) follows, since \(0\leq v_{J^{\prime}}(Y)\leq 1\).

The \(L_{q}^{*}[\mathcal{D}]\)-discrepancies can be easily estimated from below by the mean values.

**Lemma 2.**_For \(1\leq q\leq\infty\) and any subset \(J\subseteq[d]\), the discrepancy \(L_{q}^{*}[\mathcal{D}]\) satisfies the inequality_ \[L_{q}^{*}[\mathcal{D}]\geq 2^{d-|J|}\,\lambda_{J}^{*}[\mathcal{D}]. \tag{15}\]

Proof.: We have \[L_{q}^{*}[\mathcal{D}]\geq L_{1}^{*}[\mathcal{D}] =\sup\nolimits_{Z\in\mathbb{T}^{d}}\int\nolimits_{\mathbb{K}^{d}}\left|L[\mathcal{D}+Z,Y]\right|\mathrm{d}Y\geq\sup\nolimits_{Z\in\mathbb{T}^{d}}\left|\int\nolimits_{\mathbb{K}^{d}}L[\mathcal{D}+Z,Y]\,\mathrm{d}Y\right|\] \[\geq\sup\nolimits_{Z\in\mathbb{T}^{d}}\left|\lambda[\mathcal{D}+Z]\right|=\lambda^{*}[\mathcal{D}].\] For simplicity, we put \(Z=(Z_{J},Z_{J^{\prime}})\in\mathbb{T}^{d},Z_{J}\in\mathbb{T}^{|J|},Z_{J^{\prime}}\in\mathbb{T}^{|J^{\prime}|}\), and continue \[\lambda^{*}[\mathcal{D}] =\sup\nolimits_{Z_{J}}\sup\nolimits_{Z_{J^{\prime}}}\left|\lambda[\mathcal{D}+Z]\right|\geq\sup\nolimits_{Z_{J}}\int\nolimits_{\mathbb{T}^{|J^{\prime}|}}\left|\lambda[\mathcal{D}+Z]\right|\mathrm{d}Z_{J^{\prime}}\] \[\geq\sup\nolimits_{Z_{J}}\left|\int\nolimits_{\mathbb{T}^{|J^{\prime}|}}\lambda[\mathcal{D}+Z]\,\mathrm{d}Z_{J^{\prime}}\right|=2^{-|J^{\prime}|}\sup\nolimits_{Z_{J}}\left|\lambda_{J}[\mathcal{D}+Z_{J}]\right|=2^{d-|J|}\,\lambda_{J}^{*}[\mathcal{D}],\] which completes the proof.

The next simple fact is well-known, see, for example, [2, p. 4]. For completeness, we give a short proof.

**Lemma 3.**_The \(L_{\infty}\)- and \(L_{\infty}^{*}\)-discrepancies are equivalent:_ \[L_{\infty}[\mathcal{D}]\,\leq\,L_{\infty}^{*}[\mathcal{D}]\,\leq\,3^{d}L_{\infty}[\mathcal{D}]. \tag{16}\]

Proof.: The left inequality in (16) is obvious. Let us prove the right one. For \(y\in[0,1],\,z\in[0,1)\), we introduce the notation \[\delta_{y,z}=\begin{cases}1,&\text{if $y+z\geq 1$,}\\ 0,&\text{otherwise.}\end{cases}\] The shifted indicator function, see (2) and (10), can be written in the form \[\chi(x+z,y)=\chi(x,y+z)[1-\delta_{y,z}]+\chi(x,y+z-1)\,\delta_{y,z}-\chi(x,z)+\chi(x,1)\,\delta_{y,z},\] and similarly \(v(y)=v(y+z)[1-\delta_{y,z}]+v(y+z-1)\,\delta_{y,z}-v(z)+v(1)\,\delta_{y,z}.\) Moreover, each of these formulas contains at most three non-zero terms. Substituting these formulas into the definitions (1) and (12), we find that the shifted discrepancy can be written as the sum \(L[\mathcal{D}+Z,Y]=\sum\nolimits_{k}c_{k}\,L[\mathcal{D},V_{k}],\) with some vectors \(V_{k}=V_{k}(Y,Z)\in\mathbb{K}^{d}\) and coefficients \(c_{k}=c_{k}(Y,Z)\) equal to either \(\pm 1\) or \(0\). Moreover, the sum contains at most \(3^{d}\) non-zero terms. This implies the right inequality in (16).
Lev [2] established the equivalence of \(L_{\infty}\)- and \(L_{q}^{*}\)-discrepancies: \(L_{\infty}[\mathcal{D}]\;\cong\;L_{q}^{*}[\mathcal{D}]\), for all \(q\geq 1\), with the implicit constants depending only on the dimension \(d\). Another proof of the equivalence was given later by Kolountzakis; we refer to the survey article [1] by Chen for a detailed discussion of these issues. The equivalence of the \(L_{\infty}\)- and \(L_{q}^{*}\)-discrepancies can be easily derived from the foregoing statements. We will formulate and prove the corresponding result in the following somewhat more general form.

**Corollary (Lev's Equivalence).**_For \(0<q<\infty\), the \(L_{\infty}\)- and \(L_{q}^{*}\)-discrepancies are equivalent_: \[3^{-d}\,L_{q}^{*}[\mathcal{D}]\;\leq\;L_{\infty}[\mathcal{D}]\;\leq\;C_{d,q}\,L_{q}^{*}[\mathcal{D}], \tag{17}\] _where the constant_ \[C_{d,q}=\left\{\begin{array}{ll}(5/2)^{d},&\mbox{if $1\leq q<\infty$,}\\ (5/2)^{d/q}\,3^{d/q-d},&\mbox{if $0<q<1$.}\end{array}\right. \tag{18}\]

We did not seek to obtain the best constant \(C_{d,q}\) in (17); however, we note that the formula (18) correctly reflects the order of the constant for large and small \(q\).

Proof of Corollary.: The proof consists of three steps.

_(i)_ For \(0<q<\infty\), the lower bound in (17) follows from Lemma 3: \(L_{q}^{*}[\mathcal{D}]\;\leq\;L_{\infty}^{*}[\mathcal{D}]\;\leq\;3^{d}L_{\infty}[\mathcal{D}],\) since \(L_{q}^{*}[\mathcal{D}]\) is a non-decreasing function of \(q>0\).

_(ii)_ For \(1\leq q<\infty\), the upper bound in (17) follows from Lemma 1 and Lemma 2. Substituting (15) into (14), we obtain \[L_{\infty}[\mathcal{D}]\leq 2^{-d}\,\left(\sum\nolimits_{J\subseteq[d]}\;2^{2|J|}\right)\;\,L_{q}^{*}[\mathcal{D}]=(5/2)^{d}\;L_{q}^{*}[\mathcal{D}].\]

_(iii)_ Finally, for \(0<q<1\), the upper bound in (17) follows from the logarithmic convexity of \(L_{q}^{*}[\mathcal{D}]\) as a function of \(q>0\). By the standard interpolation at points \(q<1<p\), we obtain \[L_{1}^{*}[\mathcal{D}]\leq(L_{q}^{*}[\mathcal{D}])^{q\frac{p-1}{p-q}}\,(L_{p}^{*}[\mathcal{D}])^{p\frac{1-q}{p-q}}.\] This inequality takes the form \[L_{1}^{*}[\mathcal{D}]\leq(L_{q}^{*}[\mathcal{D}])^{q}\,(L_{\infty}^{*}[\mathcal{D}])^{1-q}, \tag{19}\] as \(p\to\infty\). The bounds \((5/2)^{-d}\,L_{\infty}[\mathcal{D}]\leq L_{1}^{*}[\mathcal{D}]\) and \(L_{\infty}^{*}[\mathcal{D}]\leq 3^{d}\,L_{\infty}[\mathcal{D}]\) are already established. Substituting these bounds into (19), we obtain the upper bound in (17) with the constant (18). The proof of the Corollary is complete.
2308.16312
On Euler's Solution of the simple Difference Equation
In this note we will discuss Euler's solution of the simple difference equation that he gave in his paper _"De serierum determinatione seu nova methodus inveniendi terminos generales serierum"_ (E189: "On the determination of series or a new method of finding the general terms of series") and also present a derivation for the values of the Riemann $\zeta$-function at positive integer numbers based on Euler's ideas.
Alexander Aycock
2023-08-30T20:39:55Z
http://arxiv.org/abs/2308.16312v1
# On Euler's Solution of the simple Difference Equation ###### Abstract In this note we will discuss Euler's solution of the simple difference equation that he gave in his paper "_De serierum determinatione seu nova methodus inveniendi terminos generales serierum_" [6] (E189: "On the determination of series or a new method of finding the general terms of series") and also present a derivation for the values of the Riemann \(\zeta\)-function at positive integer numbers based on Euler's ideas.

## 1 Introduction

In his paper "_De serierum determinatione seu nova methodus inveniendi terminos generales serierum_" [6] (E189: "On the determination of series or a new method of finding the general terms of series"), Euler, amongst other difference equations, gave a general solution of the simple difference equation: \[f(x+1)-f(x)=g(x). \tag{1}\] He had found a solution to (1) in the form of the Euler-Maclaurin summation formula before, e.g., in his paper "_Inventio summae cuiusque seriei ex dato termino generali_" [2] (E47: "Finding of a sum of a series from the given general term"). But whereas the Euler-Maclaurin summation formula is a particular solution and leads to an asymptotic series for most choices of \(g(x)\), his solution offered in [6] is the complete solution to (1) and contains the Euler-Maclaurin summation formula as a special case. Therefore, in this note we will present Euler's solution of (1) (see section 2), address a conceptual error in Euler's approach (see section 3) and show how to correct it (see section 3.2). Furthermore, we argue that Euler could have corrected his formula himself applying results that he discovered after he wrote [6] (see section 3.3). Finally, we will present a derivation of the formula for the values of the Riemann \(\zeta\)-function at positive integer numbers based on the solution to the simple difference equation (see section 4).

## 2 Euler's Solution of the Simple Difference Equation

Euler's general idea was to transform (1) into a differential equation of infinite order with constant coefficients and apply the procedure he had formulated for the finite order case earlier in his paper "_Methodus aequationes differentiales altiorum graduum integrandi ulterius promota_" [5] (E188: "The method to integrate differential equations of higher degrees expanded further"). In that paper he outlined the following procedure: Given the differential equation: \[\left(a_{0}+a_{1}\frac{d}{dx}+a_{2}\frac{d^{2}}{dx^{2}}+\cdots+a_{n}\frac{d^{n}}{dx^{n}}\right)f(x)=g(x), \tag{2}\] with complex coefficients \(a_{0},a_{1},\cdots,a_{n}\), Euler told us to first find the zeros, with their multiplicities, of the following expression: \[P(z)=a_{0}+a_{1}z+a_{2}z^{2}+\cdots+a_{n}z^{n}.\] Next, assume \(z=k\) is a solution of \(P(z)=0\). Then, if \(k\) is a simple zero of \(P(z)\), the solution of (2) is given by the sum of all functions of the form:

Footnote a: In this note, we will only need the case of simple zeros and hence will only state the corresponding formula. In [5], Euler stated all cases from order 1 to 4 explicitly.

\[f_{k}(x)=\frac{e^{kx}}{P^{\prime}(k)}\int e^{-kx}g(x)dx. \tag{3}\] Note that the indefinite integral introduces a constant of integration. Let's apply this to (1) by transforming it into a differential equation first.
By Taylor's theorem we have: \[f(x+1)=f(x)+\frac{d}{dx}f(x)+\frac{1}{2}\frac{d^{2}}{dx^{2}}f(x)+\frac{1}{3!}\frac{d^{3}}{dx^{3}}f(x)+\cdots\] such that (1) can be rewritten as \[\left(\frac{d}{dx}+\frac{1}{2}\frac{d^{2}}{dx^{2}}+\frac{1}{3!}\frac{d^{3}}{dx^{3}}+\cdots\right)f(x)=g(x). \tag{4}\] Thus, according to Euler's approach we need to find all zeros and their multiplicities of the expression: \[P(z)=\frac{z}{1!}+\frac{z^{2}}{2!}+\frac{z^{3}}{3!}+\frac{z^{4}}{4!}+\cdots=e^{z}-1. \tag{5}\] The general zero of this equation is \(z=\log(1)\). But having established that the complex logarithm is a multivalued function in his work "_De la controverse entre Mrs. Leibnitz et Bernoulli sur les logarithmes des nombres negatifs et imaginaires_" [4] (E168: "On the controversy between Leibnitz and Bernoulli on logarithms of negative and imaginary numbers"), Euler knew that (5) has infinitely many solutions, namely - aside from the trivial \(z=0\) - the solutions \[\pm 2\pi i,\pm 4\pi i,\pm 6\pi i,\pm 8\pi i,\cdots.\] Therefore, the formula (3) applied to (4), and hence the solution to (1), gives: \[f(x)=\int g(x)dx+e^{-2\pi ix}\int g(x)e^{2\pi ix}dx+e^{2\pi ix}\int g(x)e^{-2\pi ix}dx \tag{6}\] \[+e^{-4\pi ix}\int g(x)e^{4\pi ix}dx+e^{4\pi ix}\int g(x)e^{-4\pi ix}dx+\cdots\] This is the solution Euler gave in [6]. Unfortunately, it is not quite correct. We will discuss this in the following section.

## 3 Discussion of Euler's Solution

### Example of linearly increasing Differences

Applying Euler's formula (6) to certain examples, we quickly discover that it does not give the correct results. For the purpose of illustration, let us take \(g(x)=x\) such that we want to solve: \[f(x+1)-f(x)=x. \tag{7}\] The general solution to this equation is easily seen to be given as \[f(x)=\frac{1}{2}x(x-1)+h(x), \tag{8}\] where \(h(x)\) satisfies \(h(x+1)=h(x)\). Now let us apply (6). For this, we need to evaluate: \[e^{-2k\pi ix}\int xe^{2k\pi ix}dx=\frac{1-2k\pi ix}{4\pi^{2}k^{2}}+C_{k}e^{-2k\pi ix},\] where \(C_{k}\) is a constant of integration. For \(k=0\), we have \(\int xdx=\frac{x^{2}}{2}+C_{0}\), where \(C_{0}\) is the constant of integration. Inserting all this into (6), we find: \[f(x)=\frac{x^{2}}{2}+C_{0}+\sum_{k\in\mathbb{Z}\setminus\{0\}}\left(\frac{1}{4\pi^{2}k^{2}}+\frac{x}{2k\pi i}+C_{k}e^{-2k\pi ix}\right).\] Calling \(C_{0}+\sum_{k\in\mathbb{Z}\setminus\{0\}}C_{k}e^{-2k\pi ix}=h(x)\), we see that \(h(x)=h(x+1)\). Furthermore, \[\sum_{k\in\mathbb{Z}\setminus\{0\}}\frac{1}{2k\pi i}=0,\] because all terms cancel. Finally, \[\sum_{k\in\mathbb{Z}\setminus\{0\}}\frac{1}{4\pi^{2}k^{2}}=\frac{2}{4\pi^{2}}\sum_{k=1}^{\infty}\frac{1}{k^{2}}=\frac{1}{2\pi^{2}}\cdot\frac{\pi^{2}}{6}=\frac{1}{12},\] where we used the result \(\sum_{k=1}^{\infty}\frac{1}{k^{2}}=\frac{\pi^{2}}{6}\) that Euler had discovered in his paper "_De summis serierum reciprocarum_" [1] (E41: "On the sums of series of reciprocals") in the last step. Thus, Euler's formula (6) gives the following solution to (7): \[f(x)=\frac{x^{2}}{2}+h(x),\] where we absorbed the value \(\frac{1}{12}\) into the periodic function \(h(x)\). Comparing this result to (8), we see that the solution from Euler's formula is off by the term \(-\frac{x}{2}\). In the next sections we will elaborate on why (6) is wrong and how to correct it.

### Correction of Euler's Formula

Euler's formula (6) is actually almost correct.
Indeed, the correct formula reads: \[f(x)=-\frac{1}{2}g(x)+\int g(x)dx+e^{-2\pi ix}\int g(x)e^{2\pi ix}dx+e^{2\pi ix}\int g(x)e^{-2\pi ix}dx \tag{9}\] \[+e^{-4\pi ix}\int g(x)e^{4\pi ix}dx+e^{4\pi ix}\int g(x)e^{-4\pi ix}dx+\cdots\] such that Euler's formula is off by just the term \(-\frac{1}{2}g(x)\). As we mentioned in the introduction, Euler missed this term since the method of constructing the solution to a differential equation from the zeros of the characteristic polynomial does not carry over smoothly from the finite to the infinite order case. Indeed, we have to construct the solution from the reciprocal of the characteristic polynomial, if we want the method to be applicable in the infinite order case. For, setting \(z=\frac{d}{dx}\), we can rewrite (4) as: \[f(x)=\frac{1}{P(z)}g(x) \tag{10}\] with \(P(z)=e^{z}-1\). In order to apply the operator \(\frac{1}{P(z)}\) to \(g(x)\), we need to rewrite it in integer powers of \(z\). There are many ways to achieve this task. The one we will need to prove (9) is the following partial fraction decomposition, which can be proved, e.g., by using complex analysis:

Footnote b: In section 3.3 we will present a proof that uses only methods that were available to Euler.

\[\frac{1}{e^{z}-1}=-\frac{1}{2}+\sum_{k\in\mathbb{Z}}\frac{1}{z-2k\pi i}. \tag{11}\] Thus, next we have to evaluate: \[\frac{1}{z-2k\pi i}g(x). \tag{12}\] Writing \(2k\pi i=\alpha\) we have: \[\frac{1}{z-\alpha}g(x)=\frac{1}{z\left(1-\frac{\alpha}{z}\right)}g(x)=\sum_{n=0}^{\infty}\frac{\alpha^{n}}{z^{n+1}}g(x).\] Since \(z=\frac{d}{dx}\), we can interpret \(\frac{1}{z}\) as an integral and hence \(\frac{1}{z^{n}}\) as an \(n\)-times iterated integral. Writing \(\int^{n}\) for an \(n\)-times iterated integral, the following formula holds: \[\int^{n}g(x)dx=\int^{x}\frac{(x-t)^{n-1}}{(n-1)!}g(t)dt. \tag{13}\] Inserting this into (12), we have: \[\frac{1}{z-\alpha}g(x)=\sum_{n=0}^{\infty}\alpha^{n}\int^{x}\frac{(x-t)^{n}}{n!}g(t)dt=\int^{x}e^{\alpha(x-t)}g(t)dt.\] Therefore, by (11) our equation (10) reads: \[f(x)=-\frac{1}{2}g(x)+\sum_{k\in\mathbb{Z}}\int\limits^{x}e^{2k\pi i(x-t)}g(t)dt=-\frac{1}{2}g(x)+\sum_{k\in\mathbb{Z}}e^{2k\pi ix}\int\limits^{x}e^{-2k\pi it}g(t)dt,\] which is (9). It is the same solution as in [8], which derived (9) using complex analysis.

### Discussion

Although we operated on a purely formal basis in our derivation of (9), the procedure can be justified by applying the Fourier transform, which allows one to consider (4) as an algebraic equation in a new variable, say \(p\). To find (12) we then need the inverse Fourier transform, which we can either calculate using complex analysis or look up in a table. But Fourier analysis was not available to Euler, of course. Nevertheless, we argue that Euler could have given the proof we presented himself. The proof hinges essentially on the proof of (11). Later in his career, in his paper "_De resolutione fractionum transcendentium in infinitas fractiones simplices_" [7] (E592: "On the resolution of transcendental fractions into infinitely many simple fractions"), Euler indeed considered partial fraction decompositions of transcendental functions. The method outlined there would have given him the formula: \[\frac{1}{e^{z}-1}=R(z)+\sum_{k\in\mathbb{Z}}\frac{1}{z-2k\pi i}, \tag{14}\] where \(R(z)\) is a function to be determined. Next, one could expand the sum into a Laurent series around \(z=0\) by expanding each geometric series and compare it to the Laurent series obtained by direct expansion.
The direct expansion reads: \[\frac{1}{e^{z}-1}=-\frac{1}{2}+\frac{1}{z}+\sum_{n=2}^{\infty}\frac{B_{n}}{n!}\,z^{n-1}, \tag{15}\] where \(B_{n}\) are the Bernoulli numbers. Since Euler considered a similar function, namely \(\frac{z}{1-e^{-z}}\), and its series expansion around \(z=0\) in his work "_De seriebus quibusdam considerationes_" [3] (E130: "Considerations about certain series"), the previous formula could definitely also have been found by him. Finally, comparing the Laurent series obtained from (14) to (15), we can infer that \(R(z)=-\frac{1}{2}\).

## 4 An Application of the Solution to the simple Difference Equation

In this section, we want to consider the choice \(g(x)=x^{n}\) for \(n\in\mathbb{N}\) in (1), since it is one of the few cases in which (9) can be evaluated explicitly. As it will turn out, we will be led to the values \(\zeta(2n)\), i.e., the sums \[\zeta(2n):=\sum_{k=1}^{\infty}\frac{1}{k^{2n}} \tag{16}\] in the process. Euler evaluated these sums on many occasions using a large number of different methods. We mention his papers [1] and [3] as examples, but the way we will arrive at those values seems to be different from all methods used by Euler.

### Preparation

Considering (9), we need to evaluate the expression \(e^{ax}\int e^{-ax}x^{n}dx\). This can be done as follows: First, we note that \[\int e^{-ax}dx=-\frac{e^{-ax}}{a}=-e^{-ax}\cdot a^{-1}, \tag{17}\] where we omitted the constant of integration, since it will not be necessary in the following. Next, we differentiate (17) with respect to \(a\) exactly \(n\) times. The left-hand side gives: \[\frac{d^{n}}{da^{n}}\int e^{-ax}dx=\int\frac{d^{n}}{da^{n}}e^{-ax}dx=(-1)^{n}\int e^{-ax}x^{n}dx,\] whereas the right-hand side gives: \[-\frac{d^{n}}{da^{n}}e^{-ax}\cdot a^{-1}=-\sum_{k=0}^{n}\binom{n}{k}\frac{d^{k}}{da^{k}}e^{-ax}\cdot\frac{d^{n-k}}{da^{n-k}}a^{-1}=-e^{-ax}\cdot\frac{(-1)^{n}}{a^{n+1}}\sum_{k=0}^{n}\frac{n!}{k!}(ax)^{k},\] where we used Leibniz's rule for the differentiation of products in the first step. Thus, combining both results we arrive at: \[e^{ax}\int e^{-ax}x^{n}dx=-\frac{1}{a^{n+1}}\sum_{k=0}^{n}\frac{n!}{k!}a^{k}x^{k}.\] Inserting this into (9) for the special case \(g(x)=x^{n}\) we get: \[f(x)=\frac{x^{n+1}}{n+1}-\frac{x^{n}}{2}-\sum_{k\in\mathbb{Z}\backslash\{0\}}\frac{1}{(2k\pi i)^{n+1}}\cdot\sum_{j=0}^{n}\frac{n!}{j!}(2k\pi i)^{j}x^{j}+h(x), \tag{18}\] where \(h(x)\) satisfies \(h(x+1)=h(x)\).

### The Application

(18) is the general solution to (1) for the particular choice \(g(x)=x^{n}\). But we can also easily find a particular solution to (1) by noting that for integer \(x\): \[f(x)=\sum_{k=1}^{x-1}g(k)=\sum_{k=1}^{x}g(k)-g(x)\] satisfies the equation. Therefore, for the particular choice \(g(x)=x^{n}\) we also have the solution: \[f(x)=\sum_{k=1}^{x-1}k^{n}=\sum_{k=1}^{x}k^{n}-x^{n}=\frac{x^{n+1}}{n+1}+\frac{x^{n}}{2}+\frac{1}{n+1}\sum_{j=2}^{n}\binom{n+1}{j}B_{j}x^{n+1-j}-x^{n} \tag{19}\] \[=\frac{x^{n+1}}{n+1}-\frac{x^{n}}{2}+\frac{1}{n+1}\sum_{j=2}^{n}\binom{n+1}{j}B_{j}x^{n+1-j},\] where we used Faulhaber's formula for the sums of integer powers and \(B_{n}\) is the \(n\)-th Bernoulli number as above. (19) is a polynomial in \(x\) and hence \(x\) is not restricted to integer values in this form. Let us transform (18) into a similar form. Ignoring the periodic function we have: \[f(x)=\frac{x^{n+1}}{n+1}-\frac{x^{n}}{2}-\sum_{j=0}^{n}\frac{n!}{j!}\sum_{k\in\mathbb{Z}\backslash\{0\}}(2k\pi i)^{j-(n+1)}x^{j}.
\tag{20}\] Since (19) and (20) differ only by a periodic function, we can compare the coefficients of the respective powers of \(x\). Let us call \[B(n,j)=\frac{1}{n+1}\binom{n+1}{j}B_{j}\quad\text{for}\quad j\geq 2, \tag{21}\] and for all other values of \(j\) we set \(B(n,j)=0\); furthermore, we set \[A(n,j)=-\frac{n!}{j!}\sum_{k\in\mathbb{Z}\setminus\{0\}}(2k\pi i)^{j-(n+1)}. \tag{22}\] Then, comparing coefficients from (19) and (20) gives: \[A(n,n+1-j)=B(n,j).\] Thus, substituting the values from (22) and (21), respectively: \[-\frac{n!}{(n+1-j)!}\sum_{k\in\mathbb{Z}\setminus\{0\}}(2k\pi i)^{-j}=\frac{1}{n+1}\binom{n+1}{j}B_{j}.\] Finally, solving for the sum: \[\sum_{k\in\mathbb{Z}\setminus\{0\}}(2k\pi i)^{-j}=-\frac{(n+1-j)!}{n!}\cdot\frac{1}{n+1}\cdot\frac{(n+1)!}{j!(n+1-j)!}B_{j}=-\frac{B_{j}}{j!}.\] Thus, \[\sum_{k\in\mathbb{Z}\setminus\{0\}}k^{-j}=-(2\pi i)^{j}\frac{B_{j}}{j!}.\] But, due to canceling terms, the sum vanishes for odd \(j\), such that we arrive at: \[\zeta(2j)=\sum_{k=1}^{\infty}\frac{1}{k^{2j}}=\frac{(-1)^{j-1}(2\pi)^{2j}B_{2j}}{2(2j)!}. \tag{23}\] This is Euler's famous formula for the even values of the \(\zeta\)-function that he gave, e.g., in [3].

## 5 Conclusion

In this note we considered Euler's general solution to the simple difference equation (1) that he gave in [6]. His final formula (6) is slightly incorrect due to the unjustified application of his solution (3) to differential equations of infinite order. Nevertheless, we discussed how to fix Euler's derivation (see section 3.2) and also argued that Euler could have done so himself, had he reconsidered the same subject later in his career (see section 3.3). Furthermore, we used the correct solution (9) to (1) to give a proof of Euler's famous formula for the values of the Riemann \(\zeta\)-function at even positive integers (see section 4). The method of derivation of (23) that we presented seems not to have been used by Euler in any of his other papers. Finally, we mention that our approach allowed us to derive the exact values of (16) from the corresponding _finite_ sums of natural powers (19). Although this is clear, since (19) and (23) are connected via the Bernoulli numbers - as Euler also pointed out, e.g., in [3] -, the deeper explanation for this connection is provided by (9). Despite the minor mistake, [6] is an interesting paper and contains subjects and approaches that are not found in any other of Euler's papers.
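As a closing cross-check (ours, not Euler's), formula (23) can be verified symbolically, for instance with SymPy:

```python
from sympy import bernoulli, factorial, pi, simplify, zeta

for j in range(1, 6):
    rhs = (-1)**(j - 1) * (2*pi)**(2*j) * bernoulli(2*j) / (2 * factorial(2*j))
    assert simplify(rhs - zeta(2*j)) == 0   # j = 1 reproduces zeta(2) = pi**2/6
```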
2306.00962
Ultracold field-linked tetratomic molecules
Ultracold polyatomic molecules offer intriguing new opportunities in cold chemistry, precision measurements, and quantum information processing, thanks to their rich internal structure. However, their increased complexity compared to diatomic molecules presents a formidable challenge to employ conventional cooling techniques. Here, we demonstrate a new approach to create ultracold polyatomic molecules by electroassociation in a degenerate Fermi gas of microwave-dressed polar molecules through a field-linked resonance. Starting from ground state NaK molecules, we create around $1.1\times 10^3$ tetratomic (NaK)$_2$ molecules, with a phase space density of $0.040(3)$ at a temperature of $134(3)\,\text{nK}$, more than $3000$ times colder than previously realized tetratomic molecules. We observe a maximum tetramer lifetime of $8(2)\,\text{ms}$ in free space without a notable change in the presence of an optical dipole trap, indicating these tetramers are collisionally stable. The measured binding energy and lifetime agree well with parameter-free calculations, which outlines pathways to further increase the lifetime of the tetramers. Moreover, we directly image the dissociated tetramers through microwave-field modulation to probe the anisotropy of their wave function in momentum space. Our result demonstrates a universal tool for assembling ultracold polyatomic molecules from smaller polar molecules, which is a crucial step towards Bose--Einstein condensation (BEC) of polyatomic molecules and towards a new crossover from a dipolar Bardeen-Cooper-Schrieffer (BCS) superfluid to a BEC of tetramers. Additionally, the long-lived FL state provides an ideal starting point for deterministic optical transfer to deeply bound tetramer states.
Xing-Yan Chen, Shrestha Biswas, Sebastian Eppelt, Andreas Schindewolf, Fulin Deng, Tao Shi, Su Yi, Timon A. Hilker, Immanuel Bloch, Xin-Yu Luo
2023-06-01T17:55:17Z
http://arxiv.org/abs/2306.00962v1
# Ultracold field-linked tetratomic molecules ###### Abstract Ultracold polyatomic molecules offer intriguing new opportunities [1] in cold chemistry [2; 3], precision measurements [4], and quantum information processing [5; 6], thanks to their rich internal structure. However, their increased complexity compared to diatomic molecules presents a formidable challenge to employ conventional cooling techniques. Here, we demonstrate a new approach to create ultracold polyatomic molecules by electroassociation [7; 8] in a degenerate Fermi gas of microwave-dressed polar molecules through a field-linked resonance [9; 10; 11]. Starting from ground state NaK molecules, we create around \(1.1\times 10^{3}\) tetratomic (NaK)\({}_{2}\) molecules, with a phase space density of 0.040(3) at a temperature of 134(3) nK, more than 3,000 times colder than previously realized tetratomic molecules [12]. We observe a maximum tetramer lifetime of 8(2) ms in free space without a notable change in the presence of an optical dipole trap, indicating these tetramers are collisionally stable. The measured binding energy and lifetime agree well with parameter-free calculations, which outlines pathways to further increase the lifetime of the tetramers. Moreover, we directly image the dissociated tetramers through microwave-field modulation to probe the anisotropy of their wave function in momentum space. Our result demonstrates a universal tool for assembling ultracold polyatomic molecules from smaller polar molecules, which is a crucial step towards Bose-Einstein condensation (BEC) of polyatomic molecules and towards a new crossover from a dipolar Bardeen-Cooper-Schrieffer (BCS) superfluid [13; 14; 15] to a BEC of tetramers. Additionally, the long-lived FL state provides an ideal starting point for deterministic optical transfer to deeply bound tetramer states [16; 17; 18]. ## I Introduction Molecules exhibit a rich set of internal and external degrees of freedom, which can only be fully controlled under ultracold temperatures (\(<1\) mK) [19; 20]. For example, ultracold molecules prepared in well-defined quantum states allow studying quantum dynamics [21], chemical reactions with state-to-state control [20], and quantum scattering [3; 11; 22] at an unprecedented level. The highly tunable long-range interactions in dipolar molecules also give rise to novel many-body phenomena [23] such as exotic dipolar supersolids [10] and \(p\)-wave superfluids [13; 14; 15]. Furthermore, ultracold polyatomic molecules have emerged as a powerful platform for various applications including tests of beyond-Standard-Model physics [4], non-equilibrium dynamics [24], and quantum information processing [5; 6; 25], thanks to their additional degrees of freedom compared to diatomic molecules. Significant progress has recently been made in the field of molecular cooling, enabling quantum degeneracy in ultracold gases of diatomic dipolar molecules [26; 27; 28]. However, for larger molecules, reaching the ultracold regime remains challenging due to their increased complexity and adverse collisional properties. Direct cooling techniques such as buffer gas cooling [29], supersonic expansion [30], beam deceleration [31], cryofuges [32], and optoelectrical Sisyphus cooling [12] have only marginally reached ultracold temperatures. Direct laser cooling has been applied to certain classes of polyatomic molecules [33], reaching tens of microkelvin [34]. 
However, laser cooling of larger polyatomic molecules faces a rapid increase in the number of vibrational states, limiting efficient photon scattering. Recently, magnetoassociation of ultracold molecules via Feshbach resonances has been extended to triatomic NaK\({}_{2}\) molecules in the 100 nK regime [35], where the molecules inherit the low temperature from the atom-diatomic molecule mixture. However, this technique requires resolvable Feshbach resonances between the collisional partners. For larger, polyatomic molecules, the high number of intermediate collisional states and their fast loss mechanisms at short range result in a nearly universal collisional loss rate [36], preventing the occurrence of such Feshbach resonances. Here we demonstrate a novel and general approach to form ultracold polyatomic molecules by electroassociation of smaller polar molecules [7; 8]. We create ultracold tetratomic (NaK)\({}_{2}\) molecules from pairs of microwave dressed fermionic NaK molecules by ramping the microwave field across a field-linked (FL) scattering resonance [9; 10; 11]. This approach benefits from the universality of FL resonances and can be applied to any molecule with a sufficiently large dipole moment. We measure a lifetime of up to \(8(2)\,\mathrm{ms}\) of our FL tetramers near the dissociation threshold and achieve a phase space density of \(0.040(3)\). With microwave-field modulation dissociation after time-of-flight, we directly image the tetramers and reveal the expected anisotropic angular distribution. ## II Field-linked tetramers A microwave FL molecule consists of two microwave dressed polar molecules bound by long-range dipole-dipole interactions. Each constituent molecule is dressed by a near circularly polarized microwave field, which mixes different rotational states and induces a rotating dipole moment of up to \(d_{0}/\sqrt{6}\) in the laboratory frame, where \(d_{0}\approx 2.7\,\mathrm{Debye}\) is the dipole moment of NaK in its body-fixed frame. The strong induced dipole-dipole interaction potential can host stable tetratomic bound states which give rise to scattering resonances [11]. By ramping the microwave field across these resonances, a pair of scattering NaK dimers can be adiabatically associated into a (NaK)\({}_{2}\) tetramer, as depicted in Fig. 1a. We refer to this process as electroassociation [7], analogous to magnetoassociation using a magnetic Feshbach resonance [37]. The concept behind electroassociation involves a smooth transition from low-lying scattering states of a dimer pair to the bound tetramer state by gradually ramping the microwave field over time [7; 8]. The increase of the microwave field ellipticity, as depicted in Fig. 1b,c, enhances the depth of the interaction potential, leading to the emergence of the tetramer state from the collisional threshold and an increase in its binding energy (see Fig. 1d). Moreover, microwave shielding of the dimers leads to an enhanced collisional stability of the FL tetramers [38; 39; 8], which can therefore be efficiently associated from a low entropy gas of dimers. ## III Binding energy and lifetime Our experiments begin with an ultracold gas of optically trapped (\(1064\,\mathrm{nm}\)) ground-state \({}^{23}\mathrm{Na}^{40}\mathrm{K}\) molecules with nuclear spin projections \((m_{i,\mathrm{Na}},m_{i,\mathrm{K}})=(3/2,-4)\), which are formed from an ultracold atomic mixture by means of magnetoassociation and stimulated Raman adiabatic passage (STIRAP) [27].
We subsequently dress the molecules with a circularly polarized microwave field, blue-detuned from the transition between the ground state and the first rotationally excited state, in order to shield the molecules from two-body collisions and perform evaporative cooling [40]. Depending on the trap depth at the end of the evaporation, we prepare various initial conditions of the molecular gas. The minimum temperature is \(T=50(1)\,\mathrm{nK}\) at a dimer molecule number \(N_{\mathrm{D}}\) of \(5.7(3)\times 10^{3}\), corresponding to \(T/T_{\mathrm{F}}=0.44(1)\), where \(T_{\mathrm{F}}\) is the Fermi temperature of the trapped gas. The trapping frequencies are \((\omega_{x},\omega_{y},\omega_{z})=2\pi\times(42,61,138)\,\mathrm{Hz}\), where \(z\) is the vertical direction. We probe the binding energy of the tetramers via microwave-field modulation association spectroscopy. We start the experiment with a circularly polarized microwave field at a Rabi frequency \(\Omega=2\pi\times 29(1)\,\mathrm{MHz}\) and detuning \(\Delta=2\pi\times 9.5\,\mathrm{MHz}\) [40]. We then quickly ramp the microwave in \(100\,\mu\mathrm{s}\) to a target ellipticity \(\xi\) above the FL resonance and modulate the ellipticity at various frequencies for up to \(400\,\mathrm{ms}\). The ellipticity \(\xi\) is defined such that \(\tan\xi\) gives the ratio of the left- and right-handed circularly polarized field components. When the modulation frequency \(\nu\) is slightly above the frequency \(E_{\mathrm{b}}/h\) corresponding to the binding energy, tetramers are formed and subsequently decay into lower dressed states accompanied by a large release energy. This leads to a significant reduction of the remaining dimer number, which we detect in the experiment. As shown in Fig. 2a, we observe clear asymmetric line shapes in the spectra, where the onset frequency of the tetramer association corresponds to the binding energy of the tetramer (Methods). We can thereby determine the binding energy of the tetramers for different target ellipticities (see Fig. 2b) and find excellent agreement between the experimental data and coupled channel calculations without free parameters (Methods). Figure 1: **Electroassociation of field-linked (FL) tetramers.****a**, Microwave dressed NaK dimers are associated into (NaK)\({}_{2}\) tetramers as the microwave polarization is ramped from circular to elliptical. **b,c**, Interaction potentials between two dimers approaching along the long axis of the microwave field at \(\xi=0^{\circ}\) (blue) and \(\xi=14^{\circ}\) (orange). The potential depth increases and a tetramer bound state emerges from the collisional threshold. The light orange line shows the radial wave function of the tetramer, and the black solid line indicates its binding energy. **d**, Calculated binding energy of the tetramers. The FL resonance (dashed line) marks the onset of the tetramer state. The stars and the arrow mark the electroassociation trajectory in the experiment. Within the range of experimental parameters, there exists only a single FL tetramer state. Next, we probe the lifetime of the tetramers by measuring their loss dynamics. The dominant loss process for tetramers is spontaneous dissociation into lower microwave dressed states [8; 38] accompanied by a large gain in kinetic energy, which, effectively, leads to a one-body decay of the tetramer number.
In order to investigate this process, we first create tetramers by ramping the ellipticity to \(\xi=8(1)^{\circ}\) in \(0.67\,\mathrm{ms}\), and then quickly ramp to a target ellipticity in \(20\,\mu\mathrm{s}\). The quick ramp ensures that the measurements at different ellipticities start with the same tetramer and dimer number. We then hold for a variable time, and subsequently reverse the ellipticity ramps to dissociate the tetramers back to dimer pairs, mapping the loss of tetramers during the hold time onto the total dimer number. We turn off the trap after the association to minimize collisional loss. We observe that when the binding energy is high, the observed dimer number undergoes a fast initial decay and afterwards remains constant during the hold time. Near the collisional threshold, the decay is much slower (see inset of Fig. 2c). These initial decays are much faster than the expected dimer-dimer collisional loss rates, and are absent if we jump from \(\xi=0^{\circ}\) to the target ellipticity, so that no tetramers are expected to form. We therefore attribute this initial decay process to the one-body loss of the tetramers, in good agreement with theory predictions (Methods). The corresponding \(1/e\) lifetime is longer than \(6(1)\,\mathrm{ms}\) when the binding energy is below \(h\times 8.2(2)\,\mathrm{kHz}\), and a maximum lifetime of \(8(2)\,\mathrm{ms}\) is observed near the dissociation threshold. With higher Rabi frequencies and at circular polarization, theory predicts lifetimes in excess of \(100\,\mathrm{ms}\) at \(E_{\mathrm{b}}<h\times 4\,\mathrm{kHz}\), where \(h\) denotes the Planck constant (Methods). To investigate the collisional stability of tetramers, we also assess their lifetimes while the dipole trap remains active. Our observations indicate a combined one-body and two-body loss, and we confirm that the two-body loss arises from dimer-dimer collisions (Methods). Apart from data near the collisional threshold \(\xi=5(1)^{\circ}\), where in-trap measurements are influenced by thermal dissociation, we do not detect notable additional loss of tetramers in in-trap measurements compared to those in time-of-flight experiments. This suggests that tetramers are collisionally stable against collisions with dimers or other tetramers. ## IV Association and Dissociation Processes We probe the association and dissociation process by ramping the ellipticity starting from \(\xi=0\) with a constant ramp speed of \(14^{\circ}\,\mathrm{ms}^{-1}\) (\(27^{\circ}\,\mathrm{ms}^{-1}\) for the dissociation) to a target ellipticity, as illustrated in Fig. 3b,c. To distinguish the tetramers from the unpaired dimers, we selectively remove the tetramers from the dimer-tetramer mixture by quickly ramping the ellipticity to \(\xi=14(1)^{\circ}\) in \(20\,\mu\mathrm{s}\) and holding for \(0.4\,\mathrm{ms}\). At this point the tetramers are deeply bound and rapidly decay, which removes them from the sample. Figure 3 shows the evolution of the detected dimer number during the association and the dissociation processes. The number of unpaired dimers (light blue in Fig. 3a) reduces as we ramp the ellipticity across the FL resonance, indicating tetramer formation. Figure 2: **Tetramer binding energy and lifetime.****a**, Tetramer association spectra at different ellipticities obtained by modulating the ellipticity of the microwave field. The solid lines show the fitted line shape and the dashed lines mark the extracted binding energies.
The line shapes are shifted and broadened by the line width of the tetramer states and other technical broadening effects (Methods). The Rabi frequency of the microwave field is \(\Omega=2\pi\times 29(1)\,\mathrm{MHz}\) and the detuning \(\Delta=2\pi\times 9.5\,\mathrm{MHz}\). The peak-to-peak modulation amplitude is \(1^{\circ}\) and the modulation time is \(100\,\mathrm{ms}\), except for the lowest ellipticity where we use an amplitude of \(0.5^{\circ}\) and a modulation time of \(400\,\mathrm{ms}\). The error bars represent the standard error of the mean of four repetitions. **b**, Binding energy \(E_{\mathrm{b}}\) obtained from the association spectra (circles) compared with the theory prediction (line). The statistical error bars are smaller than the symbol size. The black error bar marks the systematic uncertainty of the ellipticity. The shaded area shows theory calculations including the systematic uncertainty of the Rabi frequency \(\Omega\). The inset illustrates the RF association from free to bound states. **c**, Decay rate \(\Gamma\) of the tetramers in time-of-flight (circles) and in trap (triangles), compared to theory calculations (line). The error bars show the fitting errors. The inset shows example decay curves at \(\xi=7(1)^{\circ}\) and \(\xi=11(1)^{\circ}\) in time-of-flight. The error bars represent the standard error of the mean of eight data sets. Remarkably, as shown in Fig. 3d, the number of detected dimers revives as we ramp back to circular polarization, indicating that the formed tetramers can be reversibly dissociated back into dimer pairs. In addition, we characterize the association process without removing the tetramers but followed by a dissociation ramp back to \(\xi=0^{\circ}\). The detected dimer number (dark blue in Fig. 3a) partially revives until \(\xi\gtrsim 12^{\circ}\), where the tetramers decay during the ramps before they can be dissociated back into dimers. ## V Conditions for efficient electroassociation We move on to identify the optimum conditions for electroassociation. We obtain the tetramer number from the difference between images with and without the tetramer removal process outlined previously. First, we probe the timescale of the tetramer formation. We ramp the ellipticity from \(\xi=0(1)^{\circ}\) to \(8(1)^{\circ}\) and vary the ramp speed. As shown in Fig. 4a, we observe the formation of tetramers within \(0.3(1)\,\mathrm{ms}\) and their subsequent decay due to the finite lifetime. Assuming the elastic dimer-tetramer scattering rate is on the same order of magnitude as for dimer-dimer collisions, we estimate that the tetramers scatter on average once during the association, bringing them close to thermal equilibrium with the remaining dimers. Next, we investigate the role of quantum degeneracy for efficient electroassociation. For magnetoassociation of Feshbach molecules, it has been shown that a low entropy sample is crucial to achieve high conversion efficiency, due to the improved phase-space overlap between the atoms [41]. Here we vary the degeneracy of our initial dimer samples by changing the final trap depth of the evaporation [40]. We observe an increase of the conversion efficiency \(\eta\), that is, the fraction of dimers converted into tetramers, with the quantum degeneracy of the dimer gas. We achieve a maximum conversion efficiency of \(\eta=25(2)\%\) at \(T=0.44(1)T_{\mathrm{F}}\). As for magnetoassociation [41], a maximum conversion efficiency of unity is expected at zero temperature.
## VI Imaging of the dissociated tetramers We use two methods to obtain absorption images of the tetramers. Firstly, we image the adiabatically dissociated tetramers in time-of-flight to directly probe their temperature. Specifically, we turn off the trap after the electroassociation and image the cloud after \(4.5\,\mathrm{ms}\) of expansion time. To image the molecules, we ramp the ellipticity back to circular to rapidly dissociate the tetramers in \(0.3\,\mathrm{ms}\), then turn off the microwave and reverse the STIRAP to transfer the dimers to the Feshbach-molecule state. Finally, we separate the bound atoms by reversing the magnetoassociation, directly followed by absorption imaging of the atoms to minimize additional cloud expansion from residual release energy of the tetramer and Feshbach molecule dissociation. The images of the tetramer momentum distribution are obtained by subtracting images without from images with removal of tetramers at high ellipticity. Examples of such tetramer images are shown in Fig. 5a. From a fit to such time-of-flight images and considering the mass of the particles, we determine the temperature of the tetramers to be \(134(3)\,\mathrm{nK}\), which is slightly higher than the dimer temperature of \(97(6)\,\mathrm{nK}\). The fact that the tetramer cloud is smaller than the dimer background suggests partial thermalization and therefore elastic scattering during the electroassociation. Beyond that, heating might occur during the association and dissociation process. From the number and trapping frequencies, we obtain a peak density of \(5.0(2)\times 10^{11}\,\mathrm{cm^{-3}}\) and a phase space density of \(0.040(3)\) in the trap. We only consider the statistical error in this analysis. Figure 3: **Association and dissociation processes.****a**, Remaining dimer number \(N_{\mathrm{D}}\) after the association ramp. The dark blue circles show the total number of dimers including dissociated tetramers, while the light circles show the dimer number after removal of the tetramers. The solid blue line is a fit to an error function. The vertical dashed line marks the theoretical resonance position. **b,c**, Waveform of the association (**b**) and dissociation (**c**) ramps. In (**b**) the blue solid (dashed) line shows the waveform with (without) removal of the tetramers. The horizontal dashed lines indicate the theoretically predicted resonance position. The circles show the target ellipticity of the association or dissociation ramp, which is plotted in (**a**) and (**d**), respectively. **d**, Increase of the detected dimer number during the dissociation ramp. The solid orange line is a fit to an error function. The vertical dashed line shows the predicted resonance position. The error bars represent the standard error of the mean of ten experiment repetitions. Figure 4: **Conditions for efficient electroassociation.****a**, Tetramer number \(N_{\mathrm{T}}\) as a function of the association time. The solid blue line is a fit to a double exponential function, which captures the formation and decay of the tetramers (Methods). The error bars represent the standard error of the mean of eight repetitions. **b**, Conversion efficiency \(\eta\) as a function of the initial \(T/T_{\mathrm{F}}\) of the dimer gas. We use a ramp speed of \(7^{\circ}\,\mathrm{ms}^{-1}\) for the electroassociation. The initial \(T/T_{\mathrm{F}}\) are extracted separately, without performing electroassociation. The error bars represent the standard error of the mean of four repetitions.
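For orientation, the quoted in-trap numbers follow from the standard thermal-gas relations \(n_{0}=N\,\bar{\omega}^{3}\left(m/(2\pi k_{\mathrm{B}}T)\right)^{3/2}\) and \(\mathrm{PSD}=n_{0}\lambda_{\mathrm{dB}}^{3}\) with \(\lambda_{\mathrm{dB}}=h/\sqrt{2\pi mk_{\mathrm{B}}T}\). A rough Python check, assuming for illustration that the dimer trap frequencies also apply to the tetramers (the actual analysis is described in the Methods), reproduces the quoted values to within a factor of order unity:

```python
# Back-of-the-envelope check of the quoted peak density and phase space density,
# modeling the tetramer cloud as a thermal gas in a harmonic trap. Using the
# dimer trap frequencies for the tetramers is an assumption made for illustration.
import numpy as np

h, kB, u = 6.62607e-34, 1.380649e-23, 1.66054e-27

N = 1.1e3                              # tetramer number
T = 134e-9                             # tetramer temperature (K)
m = 2 * (22.990 + 39.964) * u          # (NaK)2 mass (kg)
wbar = 2 * np.pi * (42 * 61 * 138) ** (1 / 3)  # geometric-mean trap frequency (rad/s)

n0 = N * (m * wbar**2 / (2 * np.pi * kB * T)) ** 1.5   # peak density (m^-3)
lam = h / np.sqrt(2 * np.pi * m * kB * T)              # thermal de Broglie wavelength (m)
print(f"n0 ~ {n0 * 1e-6:.1e} cm^-3, PSD ~ {n0 * lam**3:.3f}")
# -> n0 ~ 2e11 cm^-3 and PSD ~ 0.02, the same order of magnitude as the quoted
#    5.0(2)e11 cm^-3 and 0.040(3); the residual factor of ~2 reflects details
#    of the in-trap analysis (Methods) not modeled here.
```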
Secondly, we image modulation-dissociated tetramers to probe the angular distribution of their single-particle wave function. A similar protocol has been demonstrated in the photodissociation of diatomic molecules [42]. Here we modulate the ellipticity at a modulation frequency \(\nu>E_{\mathrm{b}}/h\), which couples the tetramer states to the scattering continuum. The coupled scattering state possesses a large wave function overlap with the tetramer state, and thus exhibits a similar momentum distribution, which is then probed by time-of-flight imaging. We note that the dissociation pattern is not a one-to-one mapping of the tetramer wave function, but only preserves its angular distribution (Methods). We begin by measuring the dissociation spectrum of the tetramers. We create tetramers at \(\xi=8(1)^{\circ}\) via electroassociation, then modulate the ellipticity for \(2\,\mathrm{ms}\) to dissociate tetramers. Meanwhile, we turn off the trap to suppress further association of dimers. Afterwards we remove the remaining tetramers and let the dissociated dimers expand for another \(6\,\mathrm{ms}\) before absorption imaging. The dissociation spectrum, depicted in Figure 5b, demonstrates an increase in the observed dimer number \(N_{\mathrm{D}}\) caused by the presence of dissociated tetramers when the modulation frequency \(\nu\) exceeds the frequency associated with the binding energy of the tetramer, \(E_{\mathrm{b}}/h=17.8(3)\,\mathrm{kHz}\). However, at higher frequencies, \(N_{\mathrm{D}}\) declines due to a decrease in dissociation efficiency resulting from the diminished Franck-Condon factor. We take the difference between images with and without modulation to obtain images of the dissociated tetramers. We verify that modulation at a higher frequency results in a larger pattern due to the higher dissociation energy. We choose a modulation frequency of \(\nu=30\,\mathrm{kHz}\) to optimize the contrast of the images. As shown in Fig. 5c,d, the dissociation pattern has two lobes, which are oriented along the long axis \(x\) of the microwave polarization and match qualitatively with the theoretical wave function in Fig. 5e,f. Radial integration of the image reveals the angular distribution of the wave function, which follows \(p\)-wave symmetry [43] in the \(p_{x}\) channel, \(\cos^{2}\phi\), where \(\phi\) is the angle from the \(x\) axis (Methods). Figure 5: **Momentum distributions of the dissociated tetramers.****a**, Azimuthally averaged optical density (OD) of the samples after ramp dissociation and \(4.5\,\mathrm{ms}\) time-of-flight. The difference (blue) between images with (orange) and without (green) removal of tetramers shows the momentum distribution of the tetramer cloud. The error bars represent the standard error of the mean of \(60\) repetitions. The inset shows the difference image. **b**, Tetramer dissociation spectrum. We create the tetramers at \(\xi=8(1)^{\circ}\) and modulate the ellipticity with an amplitude of \(1.4^{\circ}\) for \(2\,\mathrm{ms}\). The solid line is a fit to the dissociation line shape (Methods). The error bars represent the standard error of the mean of ten repetitions. **c,d**, Time-of-flight images of modulation-dissociated tetramers. We use a modulation frequency of \(30\,\mathrm{kHz}\), with an amplitude of \(3.6^{\circ}\) for \(2\,\mathrm{ms}\). While the microwave ellipticity is about the same in (**c**) and (**d**), the field orientation differs by about \(90^{\circ}\). The dashed lines mark the extracted long axes of the patterns (Methods). The images are averaged over \(84\) and \(40\) measurements for (**c**) and (**d**), respectively.
Each pixel is a binning of \(5\times 5\) pixels from the raw images. **e**, Theoretical tetramer wave function in momentum space. The microwave field propagates along the \(z\) axis, and its long axis is oriented along the \(x\) axis. The cut-open surfaces correspond to probability densities of \(1.5\times 10^{8}\,\mathrm{a}_{0}^{3}\) (orange), \(3.5\times 10^{8}\,\mathrm{a}_{0}^{3}\) (blue), and \(6\times 10^{8}\,\mathrm{a}_{0}^{3}\) (green), respectively. **f**, The theoretical wave function integrated along the propagation axis of the microwave field. The imaging plane (**a,c**, and **d**) is roughly perpendicular to the \(z\) axis. The broken rotational symmetry along the quantization axis is a result of the elliptical microwave polarization. When we rotate the microwave field by roughly 90\({}^{\circ}\), by flipping the sign of the relative phase between the two feeds of the antenna (Methods), the dissociation pattern is similar but rotated by about 90\({}^{\circ}\), which demonstrates the tunable control of the tetramer wave function through the microwave field. ## Discussion By efficient electroassociation in a degenerate Fermi gas of diatomic molecules, we have created a gas of field-linked tetramers at an unprecedentedly cold temperature. The associated (NaK)\({}_{2}\) molecules are more than 3,000 times colder than any other tetratomic molecules produced so far [12]. The created tetramers possess a phase space density 11 orders of magnitude higher than the previous record, and are only two orders of magnitude below quantum degeneracy. Remarkably, the lifetime of the long-range FL tetramers is much longer than those observed in polyatomic Feshbach molecules, which are either short-lived (\(<1\,\mu\mathrm{s}\)) [22] or unstable in the presence of an optical trap [35]. These features make them a promising candidate for realizing a BEC of polyatomic molecules. There are two possible ways to create a BEC of FL tetramers. Firstly, we can make use of the increasing conversion efficiency with lower temperatures. Starting below the critical temperature of \(0.14T_{\mathrm{F}}\), we expect a tetramer BEC to emerge from a degenerate Fermi gas of dimers [44], realizing a BCS-BEC crossover [45] which features anisotropic pairing due to the dipolar interactions [15]. The other possibility is to extend the tetramer lifetime using the resonance at circular polarization, where the improved shielding increases the tetramer lifetime to hundreds of milliseconds. As our experiments suggest that they are collisionally stable against dimer-tetramer collisions, it is promising to evaporatively cool tetramers to lower temperatures [46]. Another interesting direction is to study the excited states of the FL tetramers. As the potential depth increases at higher Rabi frequencies, the interaction potential supports higher-order FL states, which correspond to excitations of the radial or angular motion of the constituent dimers [47]. Such excited FL states have more complex structures, which can be probed similarly with microwave-field modulation. The creation of FL tetramers opens up a pathway to explore the rich landscape of the four-body potential energy surfaces (PESs). Similar to diatomic molecules, the long-lived weakly bound FL state provides an ideal starting point for deterministic optical transfer to deeply bound states within the PES [16; 18].
For the PES of (NaK)\({}_{2}\) molecules, there are seven energy minima which feature distinct geometries including D\({}_{2h}\), \(C_{s}\), and \(C_{2v}\) symmetries [17]. These states possess electric dipole and/or quadrupole moments, which, together with their rich rovibrational structures, open up new possibilities for studying eight-body collisions and quantum many-body phenomena with both strong dipolar and quadrupolar interactions. The demonstrated electroassociation via FL resonances is applicable to any polar molecules with a sufficiently large dipole moment [7; 8; 10; 39; 48; 49]. For example, it can be applied to laser cooled polyatomic molecules to form hexatomic molecules and beyond. Electroassociation can be generalized to d.c. electric fields, where interspecies FL resonances could allow association of two molecules from distinct molecular species. One can even imagine a scalable assembling process, where we sequentially associate pairs of tightly bound molecules into weakly bound FL molecules, convert them into deeply bound states via optical transfer [16; 18], and then associate these molecules into even larger FL molecules. ## Conclusion We have created and characterized field-linked tetratomic (NaK)\({}_{2}\) molecules, which are the first tetratomic molecules attained in the 100\(\,\mathrm{nK}\) regime so far. The properties of these tetramers are highly tunable with the microwave field, and they can be sufficiently long-lived and collisionally stable. Thanks to the universality of field-linked resonances, our approach can be generalized to a wide range of polar molecules, including more complex polyatomic molecules. Our results provide a general approach to assemble ultracold polyatomic molecules and open up new possibilities to investigate novel quantum many-body phenomena. _Note_: During completion of this work, we became aware of a related theoretical proposal on electroassociation of field-linked tetramers from bosonic dimers [7]. ## Acknowledgements We thank G. Quemener for stimulating discussions. We gratefully acknowledge support from the Max Planck Society, and the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC-2111 - 390814868 and under Grant No. FOR 2247. F.D., T.S., and S.Y. acknowledge support from the National Key Research and Development Program of China (Grant No. 2021YFA0718304), and the National Natural Science Foundation of China (Grants No. 11974363 and No. 12274331). ## Author Contributions All authors contributed substantially to the work presented in this manuscript. X.-Y.C. and S.B. carried out the experiments and, together with S.E. and A.S., improved the experimental setup. X.-Y.C., S.E., and S.B. analyzed the data. F.D., T.S., and S.Y. performed the theoretical calculations. T.H., I.B., and X.-Y.L. supervised the study. All authors worked on the interpretation of the data and contributed to the final manuscript.
2302.03744
3D Neural Embedding Likelihood: Probabilistic Inverse Graphics for Robust 6D Pose Estimation
The ability to perceive and understand 3D scenes is crucial for many applications in computer vision and robotics. Inverse graphics is an appealing approach to 3D scene understanding that aims to infer the 3D scene structure from 2D images. In this paper, we introduce probabilistic modeling to the inverse graphics framework to quantify uncertainty and achieve robustness in 6D pose estimation tasks. Specifically, we propose 3D Neural Embedding Likelihood (3DNEL) as a unified probabilistic model over RGB-D images, and develop efficient inference procedures on 3D scene descriptions. 3DNEL effectively combines learned neural embeddings from RGB with depth information to improve robustness in sim-to-real 6D object pose estimation from RGB-D images. Performance on the YCB-Video dataset is on par with state-of-the-art yet is much more robust in challenging regimes. In contrast to discriminative approaches, 3DNEL's probabilistic generative formulation jointly models multiple objects in a scene, quantifies uncertainty in a principled way, and handles object pose tracking under heavy occlusion. Finally, 3DNEL provides a principled framework for incorporating prior knowledge about the scene and objects, which allows natural extension to additional tasks like camera pose tracking from video.
Guangyao Zhou, Nishad Gothoskar, Lirui Wang, Joshua B. Tenenbaum, Dan Gutfreund, Miguel Lázaro-Gredilla, Dileep George, Vikash K. Mansinghka
2023-02-07T20:48:35Z
http://arxiv.org/abs/2302.03744v3
# 3D Neural Embedding Likelihood for Robust Probabilistic Inverse Graphics ###### Abstract The ability to perceive and understand 3D scenes is crucial for many applications in computer vision and robotics. Inverse graphics is an appealing approach to 3D scene understanding that aims to infer the 3D scene structure from 2D images. In this paper, we introduce probabilistic modeling to the inverse graphics framework to quantify uncertainty and achieve robustness in 6D pose estimation tasks. Specifically, we propose 3D Neural Embedding Likelihood (3DNEL) as a unified probabilistic model over RGB-D images, and develop efficient inference procedures on 3D scene descriptions. 3DNEL effectively combines learned neural embeddings from RGB with depth information to improve robustness in sim-to-real 6D object pose estimation from RGB-D images. Performance on the YCB-Video dataset is on par with state-of-the-art yet is much more robust in challenging regimes. In contrast to discriminative approaches, 3DNEL's probabilistic generative formulation jointly models multi-object scenes, quantifies uncertainty in a principled way, and handles object pose tracking under heavy occlusion. Finally, 3DNEL provides a principled framework for incorporating prior knowledge about the scene and objects, which allows natural extension to additional tasks like camera pose tracking from video. ## 1 Introduction 3D scene understanding is a fundamental problem in computer vision and robotics with numerous applications, including object recognition [47], robotic manipulation[48], and navigation[42]. Inverse graphics is an "analysis-by-synthesis" approach to 3D scene understanding that has found successful applications in a wide variety of tasks [15, 21, 9, 11, 24]. By synthesizing images from possible 3D descriptions of the scene and selecting the 3D scene description that best agrees with the observed image, inverse graphics offers an intuitive and appealing way to reason about the 3D structure of a scene from 2D images. However, challenges such as modeling the gap between rendered images and real-world observations and efficient inference have limited the widespread usage of 3D inverse graphics. In this paper, we focus on 6D pose estimation, an important task in 3D scene understanding using inverse graphics that aims to infer the rigid \(\mathbb{SE}(3)\) transformations (position and orientation) of objects in the camera frame given an image observation. We emphasize principled probabilistic modeling as a way to address the central challenges in 3D inverse graphics, and propose 3D Neural Embedding Likelihood (3DNEL). Instead of naively rendering RGB images, 3DNEL uses learned neural embeddings to predict 2D-3D correspondences from RGB and combines this with depth to robustly evaluate the agreement of scene descriptions and real-world observations. This results in a unified probabilistic model over RGB-D images that jointly models multi-object scenes. We additionally develop efficient inference procedures using 3DNEL, both with stochastic search for 6D object pose estimation from static RGB-D images, and with particle filtering for object pose tracking from video. We conduct extensive experiments on the popular YCB-Video (YCB-V) dataset [48]. 
Our results demonstrate that 3DNEL's probabilistic formulation addresses 3D inverse graphics' central challenges of bridging the gap between rendered images and real-world observations, significantly improving robustness in sim-to-real 6D pose estimation on challenging scenes with principled pose uncertainty quantification, while achieving accuracy on par with state-of-the-art (SOTA) approaches that require extensive tuning. Additionally, 3DNEL's joint modeling of multi-object scenes and natural support for uncertainty quantification enables robust object pose tracking under occlusion. Furthermore, 3DNEL's probabilistic formulation provides a principled framework for incorporating prior knowledge about the scene and objects, enabling easy extension to additional tasks like camera pose tracking from video, using principled inference in the same probabilistic model without task-specific retraining. While the field of 6D pose estimation is currently dominated by discriminative approaches based on deep learning, our probabilistic inverse graphics approach provides a complementary alternative that offers unique advantages in terms of robustness, uncertainty quantification and support for multiple tasks due to its probabilistic generative formulation. Our main contributions are three-fold: * We propose a probabilistic inverse graphics approach to 6D pose estimation that can naturally support uncertainty quantification, track object poses with particle filtering, and incorporate additional knowledge about the scene and objects to handle camera pose tracking without task-specific retraining. * We conduct extensive experiments on YCB-V and perform on par with SOTA while improving robustness with significantly fewer large-error predictions. * We show 3DNEL can handle challenging cases such as identifying pose uncertainties for symmetric objects and object pose tracking under heavy occlusion. ## 2 Related Work **3D inverse graphics** Our method follows a long line of work in the _analysis-by-synthesis_ paradigm that treats perception as the inverse problem to computer graphics [23, 49, 30, 22, 37, 26, 27]. While conceptually appealing, robustly modeling the gap between the rendered images and real-world observations, especially using appearance information, remains challenging in 3D inverse graphics. Moreover, without uncertainty estimates, even small errors in 3D scene description estimations can be catastrophic for downstream tasks. In recent years, there has been growing interest in leveraging probabilistic formulations in an inverse graphics approach for shape and scene modeling with principled uncertainty quantification [11, 19]. Our work builds on this trend, and additionally integrates appearance modeling through learned dense 2D-3D correspondences [16, 32, 14, 8, 12] with depth information in a unified probabilistic framework to allow superior sim-to-real transfer. **Discriminative 6D object pose estimation** Discriminative approaches based on deep learning have recently yielded strong performance on 6D object pose estimation. Existing methods either directly regress poses [48, 45, 13, 46], or first establish 2D-3D correspondences [16, 32, 40, 14, 12] followed by a variant of Perspective-n-point (PnP) and random sample consensus (RANSAC) [7].
While such approaches achieve impressive results on a wide variety of datasets, their discriminative nature means there is no natural way to quantify uncertainty, and they cannot be easily extended beyond object pose estimation to additional tasks like object or camera pose tracking from video. **Neural embeddings for dense correspondence** Many pose estimation methods directly regress 3D object coordinates at each pixel [16, 32, 40, 41] to predict dense 2D-3D correspondences. Recent works [8, 38] show that we can instead learn neural embeddings for 2D pixel and 3D surface locations, and use the embedding similarities to establish dense correspondence. Several recent pose estimation methods [12, 50] demonstrate the benefits of this approach for symmetry handling and category-level generalization. We observe that we can combine such embedding similarities with a noise model on the depth information into a unified probabilistic model on RGB-D images. Specifically, we build on SurfEMB [12], and show how the additional probabilistic modeling improves both robustness and accuracy while additionally allowing principled uncertainty quantification and easy extension to additional tasks. **Render-and-compare for pose refinement** Several recent works [28, 29, 31, 36, 33] adopt a render-and-compare approach for pose refinement, which resembles the idea of "analysis-by-synthesis" in an inverse graphics approach. However, these methods are all discriminative in nature, and train neural networks that take the rendered and real images as inputs, and either directly predict the pose transformations [28, 29, 31, 36] or predict a flow field [33]. In contrast, 3DNEL adopts a probabilistic generative formulation which allows natural support for uncertainty quantification and multiple tasks using principled inference within the same probabilistic model. Moreover, existing render-and-compare methods all consider different objects separately, while 3DNEL jointly models multi-object scenes. **Sim-to-real transfer** Recent advances in photorealistic rendering and physics-based simulations [18, 5] and domain randomization [44] have yielded impressive results [12, 46, 34] in sim-to-real 6D object pose estimation. 3DNEL builds on such advances, and demonstrates that principled probabilistic modeling of the noise distribution between rendered and real-world data can further improve robustness and accuracy in sim-to-real transfer. **Uncertainty Quantification and Pose Tracking** Several works [4, 39, 35, 43] propose to quantify pose uncertainties, especially for rotations, to achieve robust performance in ambiguous settings such as symmetric objects and heavily occluded scenes. In our work, we demonstrate how 3DNEL naturally supports such uncertainty quantification, and additionally show how this helps enable the challenging task of object pose tracking under occlusion. ## 3 Methods ### Preliminaries **Probabilistic inverse graphics** 3D inverse graphics formulates the perception problem as searching for the 3D scene description that can be rendered by a graphics engine to best reconstruct the input image. We propose a likelihood \(\mathbb{P}(\)_Observed RGB-D Image\(|\)3D scene description_\()\) that can assess how well an observed RGB-D image is explained by a 3D scene description.
We define a 3D scene description in terms of the number \(N\) of objects in the scene, their classes \(t_{1},\cdots,t_{N}\in\{1,\cdots,M\}\), and their corresponding poses \(\mathcal{D}=(\mathbf{P}_{1},\cdots,\mathbf{P}_{N})\) where \(\mathbf{P}_{1},\cdots,\mathbf{P}_{N}\in\mathbb{SE}(3)\). Each object is associated with a textured mesh, which captures the 3D shape and appearance information of the object. We assume uniform prior distributions over object poses (uniform over a bounded volume for position and uniform on \(\mathbb{SO}(3)\) for orientation). Note that our probabilistic formulation jointly models all objects in the scene, as opposed to many existing probabilistic models where different objects are considered separately. **Noise model on depth information** We use the probabilistic model \(\mathbb{P}_{\text{depth}}(\mathbf{c}|\tilde{\mathbf{c}};r)=\frac{3}{4\pi r^{3}}\mathbf{1}[\|\mathbf{c}-\tilde{\mathbf{c}}\|_{2}\leq r]\) from 3DP3 [11] as our noise model on depth information. \(\mathbb{P}_{\text{depth}}\) is a uniform distribution in a radius-\(r\) ball centered at a rendered point \(\tilde{\mathbf{c}}\in\mathbb{R}^{3}\), and models the small spatial displacements in the observed point \(\mathbf{c}\in\mathbb{R}^{3}\). \(r\) is a hyperparameter that controls the variance of the noise model. **Noise model on RGB information** Instead of directly operating on RGB images, we leverage similarity measurements of learned neural embeddings for 2D pixel and 3D surface locations [8, 38, 12, 50] to specify the noise model on RGB information. Concretely, we reuse components from SurfEMB [12] to highlight how we can bring principled probabilistic modeling to any such similarity measurements with added benefits on robustness and uncertainty quantification. For each object class \(t\in\{1,\cdots,M\}\), SurfEMB learns two neural embedding models: (1) a _query embedding model_ which maps an RGB image \(\mathbf{I}\) to a set of query embeddings \(\mathbf{Q}^{t}\), one for each 2D pixel location, and (2) a _key embedding model_ \(g_{t}:\mathbb{R}^{3}\mapsto\mathbb{R}^{E}\) which maps each 3D location \(\mathbf{x}\in\mathbb{R}^{3}\) (object frame coordinate) on the object surface to a key embedding \(g_{t}(\mathbf{x})\in\mathbb{R}^{E}\). Given a pixel with query embedding \(\mathbf{q}\in\mathbb{R}^{E}\), SurfEMB measures the similarity between the query and the key embeddings using a surface distribution \(\mathbb{P}_{\text{RGB}}(g_{t}(\mathbf{x})|\mathbf{q},t)\propto\exp(\mathbf{q} ^{T}g_{t}(\mathbf{x}))\) that describes which point \(\mathbf{x}\) on the object surface the given pixel corresponds to. Importantly, these models can be trained entirely from synthetic data (with photorealistic rendering, physics-based simulations and domain randomization). See Appendix A for a more detailed review. ### 3D Neural Embedding Likelihood (3DNEL) **Processing 3D scene description for 3DNEL evaluation** For a given 3D scene description, we use a 3D graphics engine to render it into: (1) A rendered point cloud image \(\tilde{\mathbf{C}}\), where \(\tilde{\mathbf{C}}_{i,j}\in\mathbb{R}^{3}\) represents the camera frame coordinate at pixel \((i,j)\). (2) A semantic segmentation map \(\tilde{\mathbf{S}}\) where \(\tilde{\mathbf{S}}_{i,j}\in\{0,1,\cdots,M\}\) represents the class to which the pixel \((i,j)\) belongs. Here \(0\) represents background.
(3) An object coordinate image \(\tilde{\mathbf{X}}\) where \(\tilde{\mathbf{X}}_{i,j}\in\mathbb{R}^{3}\) represents the object frame coordinate at pixel \((i,j)\) of the object of class \(\tilde{\mathbf{S}}_{i,j}\). **Processing RGB-D image for 3DNEL evaluation** For an observed RGB image \(\mathbf{I}\) and depth image, we use the learned query embedding models to obtain \(M\) sets of query embeddings \(\mathbf{Q}^{t},t\in\{1,\cdots,M\}\), one for each object class, where \(\mathbf{Q}^{t}_{i,j}\in\mathbb{R}^{E}\) represents the query embedding at pixel \((i,j)\), and use camera intrinsics to unproject the depth image into an observed point cloud image \(\mathbf{C}\), where \(\mathbf{C}_{i,j}\in\mathbb{R}^{3}\) represents the camera frame coordinate at pixel \((i,j)\). **3DNEL evaluation** Figure 1 visualizes 3DNEL evaluation using processed 3D scene descriptions and observed RGB-D images. 3DNEL combines the noise model \(\mathbb{P}_{\text{depth}}\) on depth information and the dense 2D-3D correspondence distribution \(\mathbb{P}_{\text{RGB}}\), and jointly models multiple objects in a scene through a mixture model formulation. This results in a unified probabilistic model on real RGB-D images. Intuitively, we assess how well each pixel \((i,j)\) in the observed point cloud image \(\mathbf{C}\) is explained by a pixel \((\tilde{i},\tilde{j})\) in the rendered point cloud \(\tilde{\mathbf{C}}\), by combining the noise model \(\mathbb{P}_{\text{depth}}\) on depth and the noise model \(\mathbb{P}_{\text{RGB}}\) on RGB. To jointly model multiple objects in a scene, we assume each pixel \((i,j)\) in \(\mathbf{C}\) can be explained by multiple pixels in \(\tilde{\mathbf{C}}\). We formalize this with a mixture model formulation, where the mixture component associated with the rendered pixel \((\tilde{i},\tilde{j})\) combines \(\mathbb{P}_{\text{depth}}\) and \(\mathbb{P}_{\text{RGB}}\) to assess how well the observed pixel \((i,j)\) is explained by the rendered pixel \((\tilde{i},\tilde{j})\). To model background pixels in \(\mathbf{C}\), we assume the observed point cloud image \(\mathbf{C}\) resides in a bounded region of volume \(B\), and introduce a uniform distribution \(\mathbb{P}_{\text{BG}}(\mathbf{c};B)=1/B\) on the bounded region with mixture probability \(\epsilon\) as an additional mixture component for background modeling. Representing the total number of non-background pixels in the rendered images as \(\tilde{K}=\sum_{\tilde{i},\tilde{j}}\mathbf{1}[\tilde{\mathbf{S}}_{\tilde{i}, \tilde{j}}>0]\), the mixture probability for the mixture component associated with rendered pixel \((\tilde{i},\tilde{j})\) is given by \((1-\epsilon)/\tilde{K}\). Since the query embedding at a pixel depends on the entire image \(\mathbf{I}\), the mixture components are not properly normalized. This leads to the following energy-based formulation: \(\mathbb{P}_{\text{3DNEL}}(\mathbf{I},\mathbf{C}|\mathcal{D})\) is proportional to \[\prod_{\mathbf{c}}\left(\epsilon\,\mathbb{P}_{\text{BG}}(\mathbf{c};B)+\frac{1 -\epsilon}{\tilde{K}}\sum_{(\tilde{i},\tilde{j}):\tilde{s}>0}\mathbb{P}_{\text{depth}}(\mathbf{c}| \tilde{\mathbf{c}};r)\mathbb{P}_{\text{RGB}}(g_{\tilde{s}}(\tilde{\mathbf{x}} )|\mathbf{q}^{\tilde{s}},\tilde{s})\right) \tag{1}\] where we denote \(\mathbf{C}_{i,j}\) by \(\mathbf{c}\), \(\mathbf{\tilde{C}}_{\tilde{i},\tilde{j}}\) by \(\tilde{\mathbf{c}}\), \(\mathbf{\tilde{S}}_{\tilde{i},\tilde{j}}\) by \(\tilde{s}\), \(\mathbf{\tilde{X}}_{\tilde{i},\tilde{j}}\) by \(\tilde{\mathbf{x}}\), and \(\mathbf{Q}_{i,j}^{t}\) by \(\mathbf{q}^{t}\).
The product is over all observed pixels, and the sum is over all non-background rendered pixels. \(\epsilon,B\) and \(r\) are hyper-parameters that we pick in the experiments. See Appendix B for more details. ### Inferring the 3D scene description **Stochastic search with 3DNEL** Given an observed RGB-D image (represented as \(\mathbf{I}\) for the RGB image and \(\mathbf{C}\) for the observed point cloud) and a 3D scene description (with object poses \(\mathcal{D}=(\mathbf{P}_{1},\cdots,\mathbf{P}_{N})\)), 3DNEL evaluates the likelihood \(\mathbb{P}(\mathbf{I},\mathbf{C}|\mathcal{D})\) using Equation 1 as described in Section 3.2. We develop an OpenGL-based parallel renderer, and a JAX [3]-based likelihood evaluation using the rendered outputs. This allows efficient parallel evaluation of the likelihood of an observed RGB-D image for hundreds of 3D scene descriptions on modern GPUs. We design a stochastic search procedure with 3DNEL to infer the 3D scene description from an observed RGB-D image. Given the current 3D scene description \(\mathcal{D}\), the stochastic search procedure is an iterative process where at each iteration, we propose \(K\) candidate poses \(\mathbf{\tilde{P}}_{1},\cdots,\mathbf{\tilde{P}}_{K}\) for a randomly picked object \(i\in\{1,\cdots,N\}\). We evaluate in parallel the likelihood of \(K\) 3D scene descriptions obtained by replacing the pose \(\mathbf{P}_{i}\) of object \(i\) in \(\mathcal{D}\) with each of the \(K\) candidate poses, and identify the candidate pose with the highest likelihood. We update \(\mathbf{P}_{i}\) to this candidate pose if this increases the likelihood. Figure 1: **Evaluating 3DNEL** 3DNEL defines the probability of an observed RGB-D image conditioned on a 3D scene description. We first render the 3D scene description into: (1) a depth image, which is transformed to a rendered point cloud image, (2) a semantic segmentation map, and (3) the object coordinate image (each pixel contains the object frame coordinate of the object surface point from which the pixel originates). The object coordinate image is transformed, via the key models, into key embeddings. The observed RGB image is transformed, via the query models, into query embeddings. The observed depth is transformed into an observed point cloud image. The 3DNEL Energy Function (Equation 1) is evaluated using the rendered point cloud image, semantic segmentation, key embeddings, the observed point cloud image, and query embeddings. Figure 2: **Using 3DNEL for 3D Scene Parsing** The 3DNEL MSIGP pipeline starts by computing the query embeddings for each object and the observed point cloud image from RGB-D observations. Then, a fast enumerative procedure produces the pose hypotheses for the objects, and constructs an initial 3D scene description. We further perform stochastic search with 3DNEL using three types of MH proposals: (1) pose hypotheses proposals, (2) ICP proposals to align an object to point cloud data, and (3) random walk proposals that refine poses with local perturbations. The result is a 3D scene description that explains the observed RGB-D image. 3DNEL’s joint modeling of multiple objects through the mixture model formulation enables robust estimation on this challenging scene with two similar-looking clamps.
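To make the parallel scoring concrete, here is a minimal single-class numpy sketch of evaluating Equation 1; all array names are illustrative placeholders, and normalizing \(\mathbb{P}_{\text{RGB}}\) over the rendered points (rather than over the full object surface) is a simplification relative to the actual JAX implementation:

```python
# A minimal single-class sketch of the 3DNEL energy (Equation 1); inputs are
# flattened pixel arrays, and all names are illustrative placeholders.
import numpy as np

def log_3dnel(obs_xyz, obs_query, rend_xyz, rend_key, r=0.005, eps=0.05, B=1.0):
    """obs_xyz: (P, 3) observed point cloud; obs_query: (P, E) query embeddings;
    rend_xyz: (R, 3) rendered non-background points; rend_key: (R, E) key embeddings."""
    vol_ball = 4.0 / 3.0 * np.pi * r**3
    # P_depth: uniform in a radius-r ball around each rendered point.
    dists = np.linalg.norm(obs_xyz[:, None, :] - rend_xyz[None, :, :], axis=-1)  # (P, R)
    p_depth = (dists <= r) / vol_ball
    # P_RGB: SurfEMB-style softmax similarity, normalized here over rendered points.
    logits = obs_query @ rend_key.T                                              # (P, R)
    p_rgb = np.exp(logits - logits.max(axis=1, keepdims=True))
    p_rgb /= p_rgb.sum(axis=1, keepdims=True)
    # Mixture over rendered pixels plus the uniform background component.
    mix = eps / B + (1 - eps) / len(rend_xyz) * (p_depth * p_rgb).sum(axis=1)    # (P,)
    return np.log(mix).sum()
```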
We consider 3 types of pose proposals: (1) the pose hypotheses proposal proposes a pre-specified set of pose hypotheses (obtained either from the coarse enumerative procedure in Section 3.4 or from a different pose estimation method); (2) the ICP proposal uses ICP to align the object to the observed point cloud, and proposes a set of candidate poses sampled from a Gaussian-von Mises-Fisher (Gaussian-VMF) distribution centered around the aligned pose; (3) the random walk proposal proposes a set of candidate poses sampled from a Gaussian-VMF distribution centered around the object's current pose. By a Gaussian-VMF distribution we mean the product of a multivariate Gaussian centered at the current position and a VMF distribution centered at the current orientation. Given a set of coarse pose estimations as pose hypotheses, the stochastic search procedure with 3DNEL can be used for pose refinement. As we demonstrate in Section 4.1, 3DNEL's joint modeling of multi-object scenes and principled combination of RGB and depth information allows such a pose refinement process to further improve the robustness and accuracy of previous SOTA. **Particle filtering for object pose tracking from video** We formulate the problem of object pose tracking from video as probabilistic inference in a state-space model. At each timestep \(t=1,\dots,T\), we have the 3D scene description \(\mathcal{D}_{t}=(\mathbf{P}_{1}^{(t)},\cdots,\mathbf{P}_{N}^{(t)})\) as the latent state and the RGB-D image \(\mathbf{I}_{t},\mathbf{C}_{t}\) as the observed variable. We use a simple dynamics model \(\mathbb{P}_{\text{dynamics}}(\mathcal{D}_{t+1}|\mathcal{D}_{t})\) that independently samples the poses of each object at time \(t+1\) from Gaussian-VMF distributions centered at the poses of the objects at time \(t\), and use 3DNEL as the likelihood. We have the following state space model \(\mathbb{P}(\mathcal{D}_{1:T},\mathbf{I}_{1:T},\mathbf{C}_{1:T})\): \[\mathbb{P}(\mathcal{D}_{1})\prod_{t=1}^{T-1}\mathbb{P}_{\text{dynamics}}(\mathcal{D}_{t+1}|\mathcal{D}_{t})\prod_{t=1}^{T}\mathbb{P}_{\text{3DNEL}}( \mathbf{I}_{t},\mathbf{C}_{t}|\mathcal{D}_{t}) \tag{2}\] Given a sequence of RGB-D frames from a video, we use the Sampling Importance Resampling (SIR) particle filter [10, 1] to infer the posterior distribution \(\mathbb{P}(\mathcal{D}_{1:T}|\mathbf{I}_{1:T},\mathbf{C}_{1:T})\), and use \(\operatorname*{arg\,max}_{\mathcal{D}_{t}}\mathbb{P}(\mathcal{D}_{t}|\mathbf{ I}_{1:t},\mathbf{C}_{1:t})\) as our tracking estimate at time \(t\); a schematic SIR update is sketched after Figure 3. Figure 3: **3DNEL MSIGP improves robustness over SurfEMB** (a) Comparison of prediction error (measured by VSD) between SurfEMB and 3DNEL MSIGP across 4123 object instances in YCB-V. Each point on the scatter plot represents an instance. Points above the dashed line represent instances for which 3DNEL MSIGP has lower prediction error. (b) Number of instances with prediction error above a certain error threshold, across multiple thresholds. 3DNEL MSIGP makes significantly fewer high-error predictions than SurfEMB (over 50% fewer above 0.5). (c) Scatter plots for 6 representative object classes. (d)(e) 3DNEL MSIGP is more robust than SurfEMB on challenging scenes.
However, PnP does not take depth information into account, requires the use of the time-consuming RANSAC to deal with noisy 2D-3D correspondences, and needs separate 2D detections to localize and mask out the object. Motivated by the above, we develop novel spherical voting and heuristic scoring procedures, and use a coarse enumerative procedure to efficiently generate pose hypotheses. Given a set of keypoints sampled from the object surface using farthest point sampling, spherical voting leverages dense 2D-3D correspondences to estimate the 3D distance between an observed point and possibly present keypoints around it, and casts votes towards all points on spheres with the predicted distances as radii, aggregating information from the entire RGB-D image into a 3D accumulator space. We coarsely discretize the object pose space, and heuristically score the discretized poses using the aggregated information to output top-scoring pose hypotheses. Refer to Appendix C for a detailed description of the process; a simplified sketch of the voting step is given at the end of this subsection.

Our coarse enumerative pose hypotheses generation combines depth information with dense 2D-3D correspondences, and can be implemented efficiently on the GPU (we use Taichi [20]). As we show in Section 4.1, it performs competitively even without separate 2D detections, and can additionally leverage available 2D detections to filter out noisy query embeddings and restrict voting to only the relevant image regions to further improve performance.

**3DNEL multi-stage inverse graphics pipeline (MSIGP)** We design an MSIGP based on 3DNEL for sim-to-real 6D object pose estimation. We generate a set of pose hypotheses for each object class using the above coarse enumerative procedure, and initialize the 3D scene description with the top-scoring pose hypothesis for each object class. Starting from the initial 3D scene description, we use the stochastic search procedure as described in Section 3.3 to infer the 3D scene description

\[\mathbf{\tilde{P}}_{1},\cdots,\mathbf{\tilde{P}}_{N}=\operatorname*{arg\,max}_{\mathbf{P}_{1},\cdots,\mathbf{P}_{N}}\mathbb{P}(\mathbf{I},\mathbf{C}|\mathbf{P}_{1},\cdots,\mathbf{P}_{N})\]

We start with the pose hypotheses proposal, followed by the ICP proposal, before finally applying the random walk proposal. For each type of proposal, we go through all the objects once. See Figure 2 for an illustration of the 3DNEL MSIGP. We follow [13] and use [25] to fill in missing depth. We pick hyperparameters by visually inspecting inference results on a small number of real training images outside the test set. See Appendix D for details.
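As referenced above, here is a simplified sketch of the spherical voting step. It is our own illustration rather than the paper's Taichi implementation: the function names, voxel-grid parameters, and the Fibonacci-lattice sphere sampling are illustrative choices, and the heuristic scoring of discretized poses on top of the accumulator is omitted.

```python
import numpy as np

def spherical_vote(points, radii, grid_min, voxel_size, grid_shape, n_dirs=256):
    # Each observed 3D point casts votes on the surface of a sphere whose
    # radius is its predicted distance to a keypoint; votes from the whole
    # image accumulate in a shared 3D voxel grid.
    acc = np.zeros(grid_shape)
    k = np.arange(n_dirs)
    z = 1.0 - 2.0 * (k + 0.5) / n_dirs               # Fibonacci sphere lattice
    phi = np.pi * (1.0 + 5.0 ** 0.5) * k
    dirs = np.stack([np.sqrt(1 - z**2) * np.cos(phi),
                     np.sqrt(1 - z**2) * np.sin(phi), z], axis=1)
    for p, r in zip(points, radii):
        votes = (p + r * dirs - grid_min) / voxel_size
        idx = np.round(votes).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
        np.add.at(acc, tuple(idx[ok].T), 1.0)        # scatter-add the votes
    return acc

# Toy usage: 100 observed points, each predicting a 5 cm keypoint distance.
rng = np.random.default_rng(0)
accumulator = spherical_vote(points=rng.normal(scale=0.1, size=(100, 3)),
                             radii=np.full(100, 0.05),
                             grid_min=np.array([-0.5, -0.5, -0.5]),
                             voxel_size=0.01, grid_shape=(100, 100, 100))
peak = np.unravel_index(np.argmax(accumulator), accumulator.shape)
```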
**Training** To highlight that performance improvements are coming entirely from 3DNEL's probabilistic formulation, we use publicly released pretrained SurfEMB models, and also use SurfEMB as our primary baseline. These include query models, key models, mask predictors for different object classes, and an additional 2D detector from CosyPose [28], all trained entirely from synthetic data.

\begin{table} \begin{tabular}{l|l|c} \hline **Category** & **Method** & **Average Recall** \\ \hline **Core comparison** & 3DNEL MSIGP (Ours) & 84.85\% \\ \cline{2-3} & SurfEMB [12] & 80.00\% \\ \hline \multirow{5}{*}{**Baselines**} & CosyPose [28] & 71.42\% \\ & Coupled Iterative Refinement [33] & 76.58\% \\ & PFBD [13] & 75.80\% \\ & MegaPose [29] (also supports novel objects) & 63.3\% \\ & GDRNPP [34] (concurrent work) & 90.66\% \\ \hline \multirow{7}{*}{**Ablations**} & No RGB in Likelihood & 61.57\% \\ & No Depth in Likelihood & 50.85\% \\ & SurfEMB initialization + stochastic search & 82.73\% \\ & No 2D Detection & 72.08\% \\ & No pose hypotheses proposal & 80.57\% \\ & No ICP proposal & 81.86\% \\ & No random walk proposal & 78.28\% \\ \hline \end{tabular} \end{table}

Table 1: **3DNEL MSIGP achieves accuracy on par with SOTA, and outperforms ablations.** We report Average Recall on the YCB-V dataset in the sim-to-real setup using RGB-D inputs. Results for 3DNEL MSIGP are averaged over 5 runs. Standard deviation is below \(0.2\%\) for all setups. 3DNEL MSIGP significantly outperforms SurfEMB despite using the same underlying models, highlighting the benefits of 3DNEL's principled probabilistic modeling. In addition, 3DNEL MSIGP achieves results that are on par with SOTA, and outperforms all the included baselines in the sim-to-real setup by a large margin, except the concurrent work GDRNPP [34], which achieves a new SOTA but requires extensive tuning in terms of 2D detection, backbone architectures, data augmentation, training hyper-parameters and pose refinement [33].

Figure 4: **3DNEL naturally quantifies pose uncertainty in the scene.** 3DNEL identifies pose uncertainty for the red bowl due to its inherent symmetry, and accurately captures the range of equally likely poses for the red mug when its handle is not visible.

## 4 Experiments

In this section, we aim to answer the following questions: (1) Can 3DNEL achieve improved robustness in challenging sim-to-real setups compared to more discriminative baselines? (2) Does 3DNEL perform on par with SOTA? (3) Can 3DNEL be additionally used to quantify uncertainty, as well as for object and camera pose tracking?

### Sim-to-real object pose estimation on YCB-V

**Evaluation** We follow the evaluation protocol of the Benchmark for 6D Object Pose Estimation (BOP) challenge [18]. The task is to estimate the 6D poses of objects in a scene from a single RGB-D image, assuming knowledge of the number of instances of each object class in the scene. For a predicted pose, we calculate three error metrics: Visible Surface Discrepancy (VSD) [17, 18], Maximum Symmetry-Aware Surface Distance (MSSD) [6], and Maximum Symmetry-Aware Projection Distance (MSPD) [2]. Average recalls AR\({}_{VSD}\), AR\({}_{MSSD}\), AR\({}_{MSPD}\) are computed for each error metric across a range of error thresholds. The aggregate Average Recall (as reported in Table 1) is the average of AR\({}_{VSD}\), AR\({}_{MSSD}\), and AR\({}_{MSPD}\).
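As a concrete reading of this protocol, the sketch below computes per-metric average recalls and the aggregate score. The error arrays and threshold range are placeholders (the BOP protocol's actual threshold ranges are metric-specific), so only the aggregation logic should be taken from it.

```python
import numpy as np

def average_recall(errors: np.ndarray, thresholds: np.ndarray) -> float:
    # Recall = fraction of instances whose error falls below a threshold,
    # averaged over a range of thresholds.
    return float(np.mean([(errors < t).mean() for t in thresholds]))

# Placeholder per-instance errors for each metric (real values come from
# comparing predicted and ground-truth poses under the BOP protocol).
rng = np.random.default_rng(0)
errors = {m: rng.uniform(0, 1, size=4123) for m in ("vsd", "mssd", "mspd")}
thresholds = np.linspace(0.05, 0.5, 10)  # illustrative threshold range

per_metric = {m: average_recall(e, thresholds) for m, e in errors.items()}
aggregate_ar = np.mean(list(per_metric.values()))
```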
**Baselines** We use SurfEMB as our main baseline for a series of detailed analyses. We additionally include the sim-to-real performance of several recent SOTA 6D object pose estimation methods [28, 33, 13, 29, 34] as context. For [28, 33], we use the publicly available codebases to retrain on only synthetic data, and re-evaluate their performance.

**Robustness** Figure 3 illustrates how 3DNEL's probabilistic formulation significantly reduces large-error pose estimations when compared with SurfEMB and improves robustness. Across all YCB-V test images, there are 4123 object instances. Each point on the scatter plots corresponds to an object instance, and the point's \(x\) and \(y\) coordinates are the pose prediction errors of 3DNEL MSIGP and SurfEMB, respectively. Points above the dashed line correspond to object instances for which 3DNEL MSIGP had a lower prediction error than SurfEMB. Figure 3(a) shows the scatter plot for all 4123 predictions and Figure 3(c) shows the scatter plots for 6 representative object classes. Figure 3(b) shows, across a range of error thresholds, the number of pose predictions with error above that threshold. Qualitatively, we observe 3DNEL's probabilistic formulation is especially helpful in challenging situations: (1) Scenes (e.g. Figure 2) with similar-looking objects. SurfEMB makes per-object predictions and incorrectly predicts both clamps to be in the back, while 3DNEL jointly models all objects in the scene and makes the correct prediction. (2) Scenes (e.g. Figure 3(d)) with objects like the red bowl where RGB alone is not informative enough. With a principled combination of RGB and depth information, 3DNEL MSIGP can reliably correct a large number of related errors. (3) Scenes (e.g. Figure 3(e)) with missing 2D detections. SurfEMB cannot recover from missing 2D detections, while 3DNEL MSIGP can robustly aggregate information from the entire image and reliably avoid such errors.

**Identifying pose uncertainties** Pose uncertainty may arise from partial observability, viewpoint ambiguity, and inherent symmetries of the object. 3DNEL can naturally quantify such pose uncertainties due to its probabilistic formulation. Figure 4(a) illustrates how 3DNEL identifies the pose uncertainty of the red bowl due to its inherent symmetry. Figures 4(b)(c) consider the red mug in YCB objects. 3DNEL can accurately capture that while there is no pose uncertainty when the mug handle is visible, there is a range of equally likely poses when the mug handle is not visible.

**Comparison with baselines** In Table 1, we report the Average Recall for 3DNEL MSIGP and representative recent baselines. 3DNEL MSIGP significantly outperforms SurfEMB despite using the same underlying models, highlighting the benefits of 3DNEL's principled probabilistic modeling. In addition, 3DNEL MSIGP achieves results that are on par with SOTA, and outperforms all the included baselines in the sim-to-real setup by a large margin, except the concurrent work GDRNPP [34], which requires extensive tuning.

**Ablations** Table 1 also reports ablations: removing the RGB or depth terms from the likelihood substantially degrades performance, as does dropping the 2D detections or any one of the pose hypotheses, ICP, or random walk proposals, illustrating the contributions from all three proposals.

**Inference speed** Inverse graphics approaches are traditionally computationally expensive. We fully leverage recent hardware advances to develop efficient GPU implementations. We use a single NVIDIA A100 GPU for our experiments. When tested in the same setup, SurfEMB reported taking 1.2s for pose hypotheses generation using PnP+RANSAC, while our coarse enumerative pose hypotheses generation takes **0.2s** with better results.
Our stochastic search with 3DNEL runs at a similar speed to SurfEMB's pose refinement, and averages around 1s per object. Most of our implementation (except parallel rendering with OpenGL) is written in Python, using a mix of JAX, PyTorch and Taichi. We empirically observe that inter-communication between the different packages creates additional overhead. We expect a compiled implementation using a single framework to further speed up inference.

### Object pose tracking under occlusion

We apply 3DNEL under the particle filtering framework as described in Section 3.3 for object pose tracking under occlusion. Figure 5(a) visualizes tracking with 3DNEL with 200 particles on a representative YCB-V video, where the tomato can gets fully occluded before reappearing. Existing per-object likelihoods [4] cannot handle such cases without ad hoc occlusion modeling. In contrast, 3DNEL's joint modeling of multi-object scenes naturally handles occlusion through rendering and can reliably track through occlusion. In Figure 5(a), the tomato can is briefly occluded by a narrow occluder, and the estimated posterior from particle filtering indicates there is little uncertainty about where the tomato can is even when it is fully occluded. However, accurate uncertainty quantification is important for tracking objects through extended occlusion. To illustrate this point, we generate a synthetic video in which a sugar box moves from left to right and becomes fully occluded by a cracker box. Figure 5(b) visualizes tracking with 3DNEL with 400 particles in this challenging video. We observe that 3DNEL can accurately quantify uncertainty with particle filtering: the estimated posterior concentrates on the actual pose when the sugar box is visible, yet spreads to cover a range of possible poses when the sugar box becomes occluded. Such modeling of the full posterior helps 3DNEL to regain track when the sugar box reappears, after which the posterior again concentrates on the actual pose. We observe that if we instead used a smaller number of particles (e.g. 50), we would not be able to accurately represent the uncertainty introduced by the occlusion and would lose track.

### Extension to Camera Pose Tracking from Video

We demonstrate that 3DNEL's probabilistic formulation provides a principled framework for incorporating prior knowledge about the scene and objects, and enables easy extension to camera pose tracking from video using probabilistic inference in the same model without task-specific retraining. We extend our single-frame 3DNEL MSIGP to the multi-timestep setup by introducing a dynamics prior that samples the object pose at time \(t+1\) from a Gaussian-VMF distribution restricted to poses with position at most 3cm away from the object pose at time \(t\). We initialize object poses at the first frame to ground truth annotations to avoid introducing systematic errors. We again apply stochastic search with 3DNEL, using just ICP and random walk proposals.
However, we further assume we know the scene is static and only the camera moves, which translates into jointly updating all object poses by the same amount in a scene. Table 2 shows that the same inference procedure can readily handle such extensions, taking into account the dynamics prior and the knowledge of a static scene within the same probabilistic model. We observe comprehensive improvements over single-frame predictions.

\begin{table} \begin{tabular}{l|c c c c c c c c c c c c} & \multicolumn{12}{c}{**Scene ID**} \\ & 48 & 49 & 50 & 51 & 52 & 53 & 54 & 55 & 56 & 57 & 58 & 59 \\ \hline SurfEMB Single Frame & 77.6\% & 67.0\% & 83.7\% & 91.3\% & 80.0\% & 59.8\% & 88.4\% & 76.7\% & 70.5\% & 77.3\% & 92.4\% & 84.1\% \\ 3DNEL MSIGP Single Frame & 71.9\% & 77.5\% & 83.1\% & 87.7\% & 87.5\% & 84.1\% & 88.4\% & 80.4\% & 82.8\% & 85.3\% & 94.3\% & 86.4\% \\ 3DNEL Camera Pose Tracking & **81.5\%** & **94.7\%** & **97.5\%** & **97.0\%** & **97.0\%** & **97.2\%** & **97.5\%** & **96.8\%** & **92.2\%** & **98.0\%** & **97.0\%** \\ \end{tabular} \end{table}

Table 2: **Extending 3DNEL to camera pose tracking improves performance compared to single-frame setups.** We apply 3DNEL to camera pose tracking from video. We demonstrate that 3DNEL's probabilistic formulation allows us to incorporate the additional knowledge that the scene is static and leverage temporal information to significantly improve pose estimation accuracy over the single-frame setting.

Figure 5: **3DNEL's probabilistic formulation enables robust object pose tracking under occlusion with particle filtering.** Green dots visualize the particles used to estimate posterior distributions in particle filtering. (a) 3DNEL can track objects through heavy occlusions. (b) 3DNEL can accurately quantify uncertainty (shown by the spread-out particles), which enables tracking through extended occlusions.

## 5 Conclusion

In conclusion, we propose a probabilistic inverse graphics approach to pose estimation. We leverage learned neural embeddings and depth information to model the likelihood of observed RGB-D images given 3D scene descriptions, and build efficient inference procedures for both pose estimation and tracking. Our approach achieves performance on par with SOTA in sim-to-real setups on YCB-V, and can more robustly handle challenging scenes. Finally, thanks to our probabilistic formulation, we can jointly model all object poses in the scene and easily extend to additional tasks such as uncertainty quantification and camera tracking.
2308.01108
Hamiltonian formulation of gravity as a spontaneously-broken gauge theory of the Lorentz group
A number of approaches to gravitation have much in common with the gauge theories of the standard model of particle physics. In this paper, we develop the Hamiltonian formulation of a class of gravitational theories that may be regarded as spontaneously-broken gauge theories of the complexified Lorentz group $SO(1,3)_C$ with the gravitational field described entirely by a gauge field valued in the Lie algebra of $SO(1,3)_C$ and a `Higgs field' valued in the group's fundamental representation. The theories have one free parameter $\beta$ which appears in a similar role to the inverse of the Barbero-Immirzi parameter of Einstein-Cartan theory. However, contrary to that parameter, it is shown that the number of degrees of freedom crucially depends on the value of $\beta$. For non-zero values of $\beta$, it is shown that three complex degrees of freedom propagate on general backgrounds, and for the specific values $\beta=\pm i$ an extension to General Relativity is recovered in a symmetry-broken regime. For the value $\beta=0$, the theory propagates no local degrees of freedom. A non-zero value of $\beta$ corresponds to the self-dual and anti-self-dual gauge fields appearing asymmetrically in the action, therefore in these models, the existence of gravitational degrees of freedom is tied to chiral asymmetry in the gravitational sector.
Mehraveh Nikjoo, Tom Zlosnik
2023-08-02T12:36:28Z
http://arxiv.org/abs/2308.01108v2
# Hamiltonian formulation of gravity as a spontaneously-broken gauge theory of the Lorentz group ###### Abstract A number of approaches to gravitation have much in common with the gauge theories of the standard model of particle physics. In this paper, we develop the Hamiltonian formulation of a class of gravitational theories that may be regarded as spontaneously-broken gauge theories of the complexified Lorentz group \(SO(1,3)_{C}\) with the gravitational field described entirely by a gauge field valued in the Lie algebra of \(SO(1,3)_{C}\) and a 'Higgs field' valued in the group's fundamental representation. The theories have one free parameter \(\beta\) which appears in a similar role to the inverse of the Barbero-Immirzi parameter of Einstein-Cartan theory. However, contrary to that parameter, it is shown that the number of degrees of freedom crucially depends on the value of \(\beta\). For non-zero values of \(\beta\), it is shown that three complex degrees of freedom propagate on general backgrounds, and for the specific values \(\beta=\pm i\) an extension to General Relativity is recovered in a symmetry-broken regime. For the value \(\beta=0\), the theory propagates no local degrees of freedom. A non-zero value of \(\beta\) corresponds to the self-dual and anti-self dual gauge fields appearing asymmetrically in the action, therefore in these models, the existence of gravitational degrees of freedom is tied to chiral asymmetry in the gravitational sector. ## 1 Introduction A great achievement of General Relativity has been the introduction of the notion of spacetime diffeomorphism symmetry as a cornerstone of gravitational physics. Less well known are formulations of non-gravitational physics which nonetheless possess the same symmetry - these theories are named _parameterized_ field theories. As an example, consider the action for degrees of freedom \(q^{i}(\tau)\) in Newtonian mechanics: \[S[q]=\int d\tau\bigg{(}\sum_{i}m_{i}\bigg{(}\frac{d}{d\tau}q^{i}\bigg{)}^{2}- V(q)\bigg{)} \tag{1}\] Alternatively, one can consider an action where the Newtonian time \(\tau\) is itself promoted to a dynamical field: \[\tau \rightarrow\tau(\lambda) \tag{2}\] \[S[q,\tau] =\int d\lambda\frac{d\tau}{d\lambda}\bigg{(}\sum_{i}m_{i}\bigg{(} \frac{d\tau}{d\lambda}\bigg{)}^{-2}\bigg{(}\frac{d}{d\lambda}q^{i}\bigg{)}^{2 }-V(q)\bigg{)} \tag{3}\] Under a transformation generated by the infinitesimal vector \(\zeta=\epsilon\partial_{\lambda}\) \[\tau\rightarrow\tau+\epsilon\mathcal{L}_{\zeta}\tau,\quad q^{i}\to q ^{i}+\epsilon\mathcal{L}_{\zeta}q^{i} \tag{4}\] - where \({\cal L}\) is the Lie derivative - the action (3) changes by a boundary term and hence the transformations (4), which represent diffeomorphisms on the manifold coordinatized by \(\lambda\), are a symmetry of the theory. This is a symmetry which is not present for the action (1), however the equations of motion following from (3) admit the same solutions as those following from (1) if the gauge \(\tau\stackrel{{*}}{{=}}\lambda\) is accessible, with the \(\tau\) equation of motion expressing conservation of energy. The extension to parameterized field theory in higher dimensional special-relativistic actions is via the replacement of the Minkowski metric tensor \(\eta_{\mu\nu}\) with \[\eta_{\mu\nu}\rightarrow\eta_{IJ}\partial_{\mu}\phi^{I}(x)\partial_{\nu}\phi^ {J}(x) \tag{5}\] where \(\eta_{IJ}={\rm diag}(-1,1,1,1)\) and \(x^{\mu}\) are coordinates in spacetime. 
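Returning momentarily to the mechanical model (3), the energy-conservation claim can be spelled out with a short check (ours, using only (3) and the conventions of (1), where the kinetic term carries no factor of \(\frac{1}{2}\)): since \(\tau\) enters (3) only through \(\dot{\tau}\equiv d\tau/d\lambda\), its Euler-Lagrange equation is a conservation law,

\[\frac{d}{d\lambda}\frac{\partial\tilde{L}}{\partial\dot{\tau}}=-\frac{d}{d\lambda}\bigg{[}\sum_{i}m_{i}\bigg{(}\frac{dq^{i}}{d\tau}\bigg{)}^{2}+V(q)\bigg{]}=0\]

so the combination \(\sum_{i}m_{i}(dq^{i}/d\tau)^{2}+V(q)\), the energy in these conventions, is constant along solutions, and in the gauge \(\tau\stackrel{{*}}{{=}}\lambda\) this is the usual statement of energy conservation.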
Analogously to the model (3), actions built with the replacement (5), with \(\phi^{I}\) promoted to dynamical fields, then possess a four-dimensional spacetime diffeomorphism symmetry despite not including the gravitational interaction. If the gauge \(\phi^{I}\stackrel{{*}}{{=}}x^{I}(x^{\mu})\) is accessible, where \(x^{I}\) are fields playing the role of Minkowski coordinates in spacetime, then special-relativistic physics is recovered, with the \(\phi^{I}\) equations of motion expressing conservation of stress-energy. It is possible then to recover a description of special-relativistic physics which nonetheless possesses the symmetries associated with gravitational theory. Can the gravitational interaction be recovered from this starting point and, if so, is the resulting theory General Relativity? To take steps towards this, we note that actions built using (5) have an additional symmetry which corresponds to: \[\phi^{I}\rightarrow\Lambda^{I}{}_{J}\phi^{J}+P^{I} \tag{6}\] where \(\Lambda^{I}{}_{J}\in SO(1,3)\) and \(P^{I}\) are independent of coordinates \(x^{\mu}\) and hence (6) can be interpreted as a global Poincare transformation acting on the fields \(\phi^{I}\). If some of \(\{\Lambda^{I}{}_{J},P^{I}\}\) do depend on position then the ordinary derivative \(\partial_{\mu}\phi^{I}\) in (5) no longer transforms homogeneously under the local generalization of (6) and so actions containing (5) will not be invariant under such transformations. This can be remedied by the introduction of fields \(\{\omega^{I}{}_{J\mu},\theta^{I}_{\mu}\}\) such that an operator \({\cal D}_{\mu}\) can be constructed, acting on \(\phi^{I}\) as: \[{\cal D}_{\mu}\phi^{I}\equiv\partial_{\mu}\phi^{I}+\omega^{I}{}_{J\mu}\phi^{J}+\theta^{I}_{\mu} \tag{7}\] It can be shown that (7) transforms homogeneously under the local generalization of (6) if \[\omega^{I}{}_{J\mu} \rightarrow\Lambda^{I}{}_{K}\omega^{K}{}_{L\mu}(\Lambda^{-1})^{L}{}_{J}-\partial_{\mu}\Lambda^{I}{}_{K}(\Lambda^{-1})^{K}{}_{J} \tag{8}\] \[\theta^{I}_{\mu} \rightarrow\Lambda^{I}{}_{J}\theta^{J}_{\mu}-\partial_{\mu}P^{I} \tag{9}\] It follows then that the tensor \[g_{\mu\nu}\equiv\eta_{IJ}{\cal D}_{\mu}\phi^{I}{\cal D}_{\nu}\phi^{J} \tag{10}\] is invariant under the local Poincare transformations and it is this composite object that will play the role of the metric tensor. Equation (10) can be seen as a definition of the metric tensor: a composite object built from \(\{\phi^{I},\omega^{I}{}_{J\mu},\theta^{I}_{\mu}\}\), which may be regarded as the fields describing gravity. Indeed, it is straightforward to build polynomial actions in these variables that correspond to the Einstein-Cartan formulation of gravity [1]. However, remarkably, other theories of gravity may emerge if only a subgroup of the global Poincare symmetry (6) is promoted to a local one. If just the translational part is localized (hence gravity is described entirely by \(\{\phi^{I},\theta^{I}_{\mu}\}\)), then the resulting gravitational theory is teleparallel gravity [2]. On the other hand, one can consider the case where only the global Lorentz symmetry is promoted to a local one, hence gravity is to be described entirely by \(\{\phi^{I},\omega^{I}{}_{J\mu}\}\).
Remarkably, extensions of General Relativity can be recovered from the following family of actions: \[S[\phi^{I},\omega^{IJ}_{\mu}]=\frac{1}{2}\int d^{4}x\,\tilde{e}^{\mu\nu\alpha\beta}\big{(}\epsilon_{IJKL}+2\beta\eta_{K[I}\eta_{J]L}\big{)}D_{\mu}\phi^{I}D_{\nu}\phi^{J}R^{KL}{}_{\alpha\beta}(\omega) \tag{11}\] when \(\beta=\pm i\)[3, 4, 5, 6], where \(D_{\mu}\phi^{I}=\partial_{\mu}\phi^{I}+\omega^{I}{}_{J\mu}\phi^{J}\) is the \(SO(1,3)\)-covariant derivative of \(\phi^{I}\) and \(R^{IJ}{}_{\mu\nu}(\omega)\) is the curvature of \(\omega^{IJ}_{\mu}\), given explicitly below in (48).

### The Hamiltonian formalism

A classical field theory will be described by an action \(S\) which is a functional of fields \(\chi^{\mathcal{A}}\) (which we use to denote a set of any tensor fields such as \(V,\sigma,g\) as defined in Section 2.1). The action can be written as an integral of a spacetime density \(\tilde{\mathcal{L}}(\chi^{\mathcal{A}})\) called the Lagrangian density i.e. \[S[\chi^{\mathcal{A}}]=\int\tilde{\mathcal{L}}dtd^{3}x \tag{15}\] Given the 3+1 decomposition of tensorial fields, the Lagrangian density \(\tilde{\mathcal{L}}\) can typically be written in the following form: \[\tilde{\mathcal{L}}=\sum_{\mathcal{B}}a_{\mathcal{B}}(\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}})(\dot{\alpha}^{\mathcal{B}})^{2}+\sum_{\mathcal{C}}b_{\mathcal{C}}(\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}})\dot{\beta}^{\mathcal{C}}-\tilde{\mathcal{U}}(\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}}) \tag{16}\] i.e. the collection of fields \(\chi^{\mathcal{A}}\) can be divided into those which appear quadratically in time derivatives (the set \(\{\alpha^{\mathcal{B}}\}\)), linearly in time derivatives (the set \(\{\beta^{\mathcal{C}}\}\)), and without time derivatives (the set which we will call \(\{\gamma^{\mathcal{D}}\}\)). By introducing auxiliary 'velocity' fields \(\mathcal{V}\) and Lagrange multiplier fields \(\mathcal{P}\), the following _extended_ Lagrangian density can be constructed which yields identical equations of motion to (16): \[\tilde{\mathcal{L}} =\sum_{\mathcal{B}}\tilde{\mathcal{P}}_{\mathcal{B}}(\dot{\alpha}^{\mathcal{B}}-\mathcal{V}^{\mathcal{B}})+\sum_{\mathcal{C}}\tilde{\mathcal{P}}_{\mathcal{C}}(\dot{\beta}^{\mathcal{C}}-\mathcal{V}^{\mathcal{C}})+\sum_{\mathcal{D}}\tilde{\mathcal{P}}_{\mathcal{D}}(\dot{\gamma}^{\mathcal{D}}-\mathcal{V}^{\mathcal{D}})\] \[+\sum_{\mathcal{B}}a_{\mathcal{B}}(\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}})(\mathcal{V}^{\mathcal{B}})^{2}+\sum_{\mathcal{C}}b_{\mathcal{C}}(\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}})\mathcal{V}^{\mathcal{C}}-\tilde{\mathcal{U}}(\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}}) \tag{17}\] The equation of motion for each \(\mathcal{V}^{\mathcal{B}}\) allows that field to be solved for in terms of the fields \((\chi^{\mathcal{A}},\tilde{\mathcal{P}}_{\mathcal{B}})\), allowing it to be eliminated from the variational principle. For the fields \(\mathcal{V}^{\mathcal{C}}\) and \(\mathcal{V}^{\mathcal{D}}\), their equations of motion do not allow for the fields to be solved for and eliminated from the variational principle.
The Lagrangian density then can be reduced to the following form: \[\tilde{\mathcal{L}} =\sum_{\mathcal{A}}\tilde{\mathcal{P}}_{\mathcal{A}}\dot{\chi}^{\mathcal{A}}-\tilde{\mathcal{H}}(\tilde{\mathcal{P}}_{\mathcal{A}},\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}},\mathcal{V}^{\mathcal{C}},\mathcal{V}^{\mathcal{D}}) \tag{18}\] \[\tilde{\mathcal{H}} =\mathcal{H}_{0}(\tilde{\mathcal{P}}_{\mathcal{A}},\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}})+\sum_{\mathcal{C}}\mathcal{V}^{\mathcal{C}}\mathcal{C}^{\mathcal{C}}(\tilde{\mathcal{P}}_{\mathcal{A}},\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}})+\sum_{\mathcal{D}}\mathcal{V}^{\mathcal{D}}\mathcal{C}^{\mathcal{D}}(\tilde{\mathcal{P}}_{\mathcal{A}},\chi^{\mathcal{A}},\partial_{a}\chi^{\mathcal{A}}) \tag{19}\] where the \((\mathcal{V}^{\mathcal{C}},\mathcal{V}^{\mathcal{D}})\) are Lagrange multipliers (which enforce via their equations of motion \((\mathcal{C}^{\mathcal{C}}=0,\mathcal{C}^{\mathcal{D}}=0)\)) and \(\tilde{\mathcal{P}}_{\mathcal{A}}\) consists of the collected fields \((\tilde{\mathcal{P}}_{\mathcal{B}},\tilde{\mathcal{P}}_{\mathcal{C}},\tilde{\mathcal{P}}_{\mathcal{D}})\). Equation (18) represents the Hamiltonian form of a theory, with stationarity of the action with respect to small variations of \((\chi^{\mathcal{A}},\tilde{\mathcal{P}}_{\mathcal{A}})\) yielding Hamilton's equations: \[\dot{\chi}^{\mathcal{A}} =\{\chi^{\mathcal{A}},\int d^{3}x\tilde{\mathcal{H}}\} \tag{20}\] \[\dot{\tilde{\mathcal{P}}}_{\mathcal{A}} =\{\tilde{\mathcal{P}}_{\mathcal{A}},\int d^{3}x\tilde{\mathcal{H}}\} \tag{21}\] where the Poisson bracket \(\{\mathcal{F},\mathcal{G}\}\) between two functions \(\mathcal{F}(\chi^{\mathcal{A}},\tilde{\mathcal{P}}_{\mathcal{A}})\) and \(\mathcal{G}(\chi^{\mathcal{A}},\tilde{\mathcal{P}}_{\mathcal{A}})\) is defined to be: \[\{\mathcal{F},\mathcal{G}\}\equiv\int d^{3}x\sum_{\mathcal{A}}\left(\frac{\delta\mathcal{F}}{\delta\chi^{\mathcal{A}}}\frac{\delta\mathcal{G}}{\delta\tilde{\mathcal{P}}_{\mathcal{A}}}-\frac{\delta\mathcal{G}}{\delta\chi^{\mathcal{A}}}\frac{\delta\mathcal{F}}{\delta\tilde{\mathcal{P}}_{\mathcal{A}}}\right) \tag{22}\] Furthermore, it follows from (20) and (21) that for any function \(\mathcal{F}(\chi^{\mathcal{A}},\tilde{\mathcal{P}}_{\mathcal{A}})\), \[\dot{\cal F}=\{{\cal F},\int d^{3}x\tilde{\cal H}\} \tag{23}\] The equations of motion that follow from the variation of the fields \((\mathcal{V}^{\mathcal{C}},\mathcal{V}^{\mathcal{D}})\) are \[{\cal C}^{\cal C}(\tilde{\cal P}_{\cal A},\chi^{\cal A},\partial_{a}\chi^{\cal A})=0,\quad{\cal C}^{\cal D}(\tilde{\cal P}_{\cal A},\chi^{\cal A},\partial_{a}\chi^{\cal A})=0 \tag{24}\] These equations represent constraints that the fields \((\chi^{\cal A},\tilde{\cal P}_{\cal A})\) must obey amongst themselves. If at some initial moment \(t=t_{0}\), the constraints are satisfied, then it must further be required that the time derivative of these functions - defined via (23) - is zero. This may imply additional constraints and, if so, their own time derivatives must be ensured to be zero. The process continues until no further constraints are generated.

### Local Lorentz symmetry in gravitation and its complexification

A slight modification to the variables describing gravity is necessary to couple gravity to fermionic fields. This requires the introduction of the co-tetrad field \(e^{I}_{\mu}\) from which the metric \(g_{\mu\nu}\) is constructed as \[g_{\mu\nu}=\eta_{IJ}e^{I}_{\mu}e^{J}_{\nu} \tag{25}\] where \(\eta_{IJ}={\rm diag}(-1,1,1,1)\).
Due to the appearance of the matrix \(\eta_{IJ}\), the expression (25) is invariant under transformations \[e^{I}_{\mu}\rightarrow\Lambda^{I}_{\ J}e^{J}_{\mu} \tag{26}\] where \(\Lambda^{I}_{\ J}\in SO(1,3)\) i.e. \(\Lambda^{I}_{\ J}\) are elements of the Lorentz group. The Weyl spinors of the standard model transform in the fundamental representations of the group \(SL(2,C)\) and invariance under global \(SL(2,C)\) transformations necessitates coupling to \(e^{I}_{\mu}\) in spinor Lagrangians and the identification of \(\Lambda^{I}_{\ J}\) as the \(SO(1,3)\) element corresponding to that \(SL(2,C)\) transformation. Note that (25) is invariant under transformations associated with \(\Lambda^{I}_{\ J}\) which can depend on spacetime position. For spinorial actions then to be invariant under the associated _local_ \(SL(2,C)\) transformation, it is necessary to introduce a field \(\bar{\omega}^{I}_{\ J\mu}\) (where \(\bar{\omega}^{IJ}_{\mu}=-\bar{\omega}^{JI}_{\mu}\) when an index has been raised with \(\eta^{IJ}\), the matrix inverse of \(\eta_{IJ}\)) which transforms as a connection under local \(SO(1,3)\) transformations (indeed, it should transform precisely as (8) does). In General Relativity, this field is defined as the solution to the equation \[\partial_{[\mu}e^{I}_{\nu]}+\bar{\omega}^{I}_{\ J[\mu}e^{J}_{\nu]}=0 \tag{27}\] Therefore in General Relativity \(\bar{\omega}^{I}_{\ J\mu}\) is determined by \(e^{I}_{\mu}\) and its derivatives. A variation on General Relativity is provided by instead introducing a field \(\omega^{I}_{\ J\mu}\) - called the spin connection - in place of \(\bar{\omega}^{I}_{\ J\mu}(e,\partial e)\) which is to be regarded as an independent field with its own equations of motion. This is the Einstein-Cartan formulation of gravity. In its simplest form, the equation of motion for \(\omega^{I}_{\ J\mu}\) yields a solution \(\omega^{I}_{\ J\mu}=\bar{\omega}^{I}_{\ J\mu}(e,\partial e)+\dots\) where the dots denote terms linear in spinorial currents. A further generalization of the Einstein-Cartan model is provided by the Ashtekar chiral theory of gravity [15]. To motivate this, we note that it has been up to now assumed that \(\Lambda^{I}_{\ J}\) are elements of the real Lorentz group. However, the expression (25) is also invariant under \(\Lambda^{I}_{\ J}\) belonging to the _complexified_ Lorentz group \(SO(1,3)_{C}\)1. Can classical General Relativity also arise if the theory possesses a complex Lorentz symmetry? To understand the answer to this, it is first helpful to introduce self- and anti-self-duality concepts for representations of \(SO(1,3)_{C}\). Footnote 1: Which in terms of properties of matrices \(\Lambda^{I}_{\ J}\in SO(1,3)_{C}\) is defined to be the set of complex-valued matrices that satisfy \(\eta_{IJ}=\eta_{KL}\Lambda^{K}_{\ I}\Lambda^{L}_{\ J}\) and \(\det(\Lambda)=1\) We have seen that it is helpful to introduce a field \(e^{I}_{\mu}\) which transforms as \(e^{I}_{\mu}\rightarrow\Lambda^{I}_{\ J}e^{J}_{\mu}\) under an \(SO(1,3)\) transformation. One can consider more general 'Lorentz tensors' with a more complicated index structure. Particularly useful will be antisymmetric Lorentz tensors \(F^{IJ}=-F^{JI}\) which transform as follows under Lorentz transformations \[F^{IJ}\rightarrow\Lambda^{I}{}_{K}\Lambda^{J}{}_{L}F^{KL} \tag{28}\] When the transformations are complexified Lorentz transformations, further decomposition of this (now complex-valued) object is possible.
We can consider the following decomposition of \(F^{IJ}\): \[F^{IJ}=F^{+IJ}+F^{-IJ} \tag{29}\] where \[F^{\pm IJ} = \frac{1}{2}(F^{IJ}\mp\frac{i}{2}\epsilon^{IJ}{}_{KL}F^{KL}) \tag{30}\] \[\frac{1}{2}\epsilon^{IJ}{}_{KL}F^{\pm KL} = \pm iF^{\pm IJ} \tag{31}\] where recall that \(\epsilon_{IJKL}\) is the four-dimensional Levi-Civita symbol and indices are lowered or raised with \(\eta_{IJ}\) and its matrix inverse \(\eta^{IJ}\) respectively. It follows, for example, that for some matrix \(Y_{IJ}\) that \(Y_{IJ}F^{IJ\pm}=Y_{IJ}^{\pm}F^{\pm IJ}\). When the fields and Lorentz transformations are real then \(F^{+IJ}\) and \(F^{-IJ}\) are simply complex conjugates of one another. When the fields are complexified, they become genuinely independent objects. Equation (31) defines the property of self-dualness (here \(F^{+IJ}\)) or anti-self-dualness (here \(F^{-IJ}\)). It is possible to parameterize a self-dual or anti-self-dual Lorentz tensor in terms of a field \(E^{I}\) as follows: \[F^{\pm IJ} = \frac{1}{2}(n^{[I}E^{J]}\mp\frac{i}{2}\epsilon^{IJKL}n_{K}E_{L}) \tag{32}\] \[= (n^{[I}E^{J]})^{\pm} \tag{33}\] where \(n^{I}\) is an arbitrary Lorentz vector of non-vanishing norm i.e. \(\eta_{IJ}n^{I}n^{J}=\xi\) and \(E_{I}n^{I}=0\), where \(\xi<0\) for timelike \(n^{I}\) and \(\xi>0\) for spacelike \(n^{I}\), and furthermore, for example, for a Lorentz tensor \(W_{...[IJ]^{+}}\) which is self-dual in a pair of indices, we have: \[W_{...[IJ]^{+}}F^{IJ^{+}}=W_{...[IJ]^{+}}n^{I}E^{J} \tag{34}\] Finally, it is useful to define the following objects: \[\mathcal{K}^{\pm}_{IJKL}=\frac{1}{2}(\epsilon_{IJKL}\pm 2i\eta_{I[K}\eta_{L]J}) \tag{35}\] where it can be shown that \[\mathcal{K}^{\pm}_{IJKL}F^{KL}=\epsilon_{IJKL}F^{\pm KL} \tag{36}\] i.e. the objects \(\mathcal{K}^{\pm}_{IJKL}\) act to project out self- or anti-self-dual parts of an antisymmetric Lorentz tensor. The spin connection \(\omega^{IJ}_{\mu}=-\omega^{JI}_{\mu}\) present in Einstein-Cartan gravity is an antisymmetric Lorentz tensor and so can be decomposed into self-dual and anti-self-dual parts: \[\omega^{I}{}_{J\mu}=\omega^{+I}{}_{J\mu}+\omega^{-I}{}_{J\mu} \tag{37}\] Upon complexification of the fields (which results from complexification of the \(SO(1,3)\) gauge symmetry) then \((\omega^{+I}{}_{J\mu},\omega^{-I}{}_{J\mu})\) become truly independent fields and this independence will be shown in Section 3 to be crucially important in the structure of gravitational fields based on this complexified Lorentz symmetry.

### Spacetime structure

It will be very useful to relate some fields appearing in the canonical formalism to quantities appearing in the 3+1 metric formalism of gravity. To this end, we can use the following general parameterization of \(e^{I}_{\mu}\) [16]: \[e^{I}=e^{I}_{t}dt+e^{I}_{a}dx^{a} \tag{38}\] where \[e^{I}_{t}=NN^{I}+N^{a}e^{I}_{a} \tag{39}\] \[q_{ab}=\eta_{IJ}e^{I}_{a}e^{J}_{b} \tag{40}\] where \(N^{I}e_{Ia}=0\) and \(N^{I}N_{I}=-1\). Computing the metric \(g_{\mu\nu}=\eta_{IJ}e^{I}_{\mu}e^{J}_{\nu}\) confirms that \((N,N^{a},q_{ab})\) should be identified with corresponding quantities appearing in (14). Furthermore, it follows that \[\eta^{IJ}=-N^{I}N^{J}+e^{aI}e^{J}_{a} \tag{41}\] where \(e^{a}_{I}=q^{ab}e_{bI}\), with \(q^{ab}\) the matrix inverse of \(q_{ab}\).
In the present work, the basic variables describing the gravitational field will be \((\phi^{I},\omega^{IJ}_{\mu})\) with the identification \[D_{\mu}\phi^{I}=\partial_{\mu}\phi^{I}+\omega^{I}{}_{J\mu}\phi^{J}=e^{I}_{\mu} \tag{42}\] and hence we will look to identify the spacetime metric with \(g_{\mu\nu}=\eta_{IJ}D_{\mu}\phi^{I}D_{\nu}\phi^{J}\). We may also usefully decompose \(\phi^{I}\) into parts parallel with and orthogonal to \(N^{I}\): \[\phi^{I}=\phi_{(N)}N^{I}+\varphi^{I} \tag{43}\] where \(\varphi_{I}N^{I}=0\). It follows then that \(\phi_{I}e^{I}_{a}=\frac{1}{2}\partial_{a}\phi^{2}=\varphi_{I}e^{I}_{a}\) where we've used the fact that \(N_{I}e^{I}_{a}=0\). Therefore, \[\varphi_{I}=\frac{1}{2}q^{ab}e_{Ia}\partial_{b}\phi^{2} \tag{44}\] and hence \[\phi_{(N)}=-\xi\sqrt{-\phi^{2}+\frac{1}{4}q^{ab}\partial_{a}\phi^{2}\partial_{b}\phi^{2}} \tag{45}\] where \(\xi=\mp 1\). There are therefore two distinct options for the sign of \(\phi_{(N)}\).

## 3 Gravitational actions

We now briefly survey several theories of gravitation and their symmetries. The action for Einstein's General Relativity can be written as: \[S_{GR}[g_{\mu\nu}]=\frac{1}{16\pi G}\int_{M}d^{4}x\sqrt{-g}R+\int_{\partial M}d^{3}y\,\tilde{\ell}_{GHY} \tag{46}\] where the second term - the Gibbons-Hawking-York term - is a boundary action necessary to provide a well-defined variational principle. A spacetime diffeomorphism generated by a vector field \(\xi^{\mu}\) transforms the spacetime metric as \(g_{\mu\nu}\to g_{\mu\nu}+{\cal L}_{\xi}g_{\mu\nu}\) and it can readily be shown that this changes the action by a boundary term and hence such diffeomorphisms are symmetries of the theory. As discussed in Section 2.3, the necessity to couple gravitation to fermions motivates the introduction of the fields \((e^{I}_{\mu},\omega^{IJ}_{\mu})\) as the descriptors of gravity. One of the simplest actions that can be constructed that has a General-Relativistic limit is the Einstein-Cartan Palatini action: \[S_{EC}[e^{I}_{\mu},\omega^{IJ}_{\mu}]=\frac{1}{64\pi G}\int d^{4}x\,\tilde{\epsilon}^{\mu\nu\alpha\beta}\epsilon_{IJKL}e^{I}_{\mu}e^{J}_{\nu}R^{KL}_{\phantom{KL}\alpha\beta}(\omega) \tag{47}\] where \(\tilde{\epsilon}^{\mu\nu\alpha\beta}\) is the Levi-Civita density and \[R^{IJ}_{\phantom{IJ}\alpha\beta}(\omega)=2\partial_{[\mu}\omega^{IJ}_{\phantom{IJ}\nu]}+2\omega^{I}_{\phantom{IJ}K[\mu}\omega^{KJ}_{\phantom{IJ}\nu]} \tag{48}\] are the components of the curvature tensor associated with \(\omega^{I}{}_{J\mu}\). The Einstein-Cartan Palatini action possesses the same spacetime diffeomorphism symmetry as the action for General Relativity and is additionally invariant under local Lorentz transformations parameterized by matrices \(\Lambda^{I}_{\phantom{IJ}J}(x)\). As the spin connection is an antisymmetric tensor in its Lorentz indices, it can be decomposed into self- and anti-self-dual parts.
Upon complexification of the local Lorentz symmetry, these two fields are in principle independent of one another and remarkably the equations of motion from the following actions \[S_{EC\pm} = \frac{1}{32\pi G}\int d^{4}x\,\tilde{\epsilon}^{\mu\nu\alpha\beta}{\cal K}^{\pm}_{IJKL}e^{I}_{\mu}e^{J}_{\nu}R^{KL}_{\phantom{KL}\alpha\beta}(\omega)=\frac{1}{32\pi G}\int d^{4}x\,\tilde{\epsilon}^{\mu\nu\alpha\beta}\epsilon_{IJKL}e^{I}_{\mu}e^{J}_{\nu}R^{KL\pm}_{\phantom{KL}\alpha\beta}(\omega) \tag{49}\] \[= \frac{1}{32\pi G}\int d^{4}x\,\tilde{\epsilon}^{\mu\nu\alpha\beta}\epsilon_{IJKL}e^{I}_{\mu}e^{J}_{\nu}R^{KL\pm}_{\phantom{KL}\alpha\beta}(\omega^{\pm})\] yield the (complexified) Einstein's equations, with the solutions of real General Relativity recovered after the imposition of appropriate reality conditions on the fields. The actions (49) form Ashtekar's chiral formulation of gravity in which only one of \(\omega^{+IJ}_{\phantom{IJ}\mu}\) or \(\omega^{-IJ}_{\phantom{IJ}\mu}\) appears in the action. The models that we will look at are models where \(g_{\mu\nu}\) is recovered from the combination (10) with \(\theta^{I}_{\mu}=0\) and the dynamical variables of the theory will be \(\{\phi^{I},\omega^{IJ}_{\phantom{IJ}\mu}\}\). This suggests that ultimately we should identify \(e^{I}_{\mu}\) as being recovered from the object \(D_{\mu}\phi^{I}=\partial_{\mu}\phi^{I}+\omega^{I}_{\phantom{IJ}J\mu}\phi^{J}\) and so, as in the case of Einstein-Cartan theory and Ashtekar's chiral theory we can look to construct Lagrangian densities which are quadratic in this field and linear in the curvature of \(\omega^{IJ}_{\phantom{IJ}\mu}\), anticipating that this may be the simplest action giving non-trivial gravitational dynamics [5]: \[S=\frac{1}{2}\int d^{4}x\,\tilde{\epsilon}^{\mu\nu\alpha\beta}\big{(}\epsilon_{IJKL}+\frac{2}{\gamma}\eta_{K[I}\eta_{J]L}\big{)}D_{\mu}\phi^{I}D_{\nu}\phi^{J}R^{KL}_{\phantom{KL}\alpha\beta}(\omega) \tag{50}\] where \(\gamma=1/\beta\) and where for notational compactness we have omitted an overall multiplicative factor of \(1/(32\pi G)\). With the aid of the symbols (35) we can write (50) as: \[S[\phi^{I},\omega^{+IJ}_{\mu},\omega^{-IJ}_{\mu}]=\int d^{4}x\,\tilde{\epsilon}^{\mu\nu\alpha\beta}\epsilon_{IJKL}D_{\mu}\phi^{I}D_{\nu}\phi^{J}\bigg{(}g_{+}R^{KL}_{\phantom{KL}\alpha\beta}(\omega^{+})+g_{-}R^{KL}_{\phantom{KL}\alpha\beta}(\omega^{-})\bigg{)} \tag{51}\] where \[g_{\pm}=\frac{1}{2}\bigg{(}\frac{\gamma\mp i}{\gamma}\bigg{)} \tag{52}\] Note that \(g_{+}+g_{-}=1\). In particular, \(\beta=\pm i\) (i.e. \(\gamma=\mp i\)) gives \((g_{+},g_{-})=(1,0)\) or \((0,1)\), so that (51) reduces to one of the chiral actions (49), whilst \(\beta=0\) (i.e. \(\gamma\to\infty\)) gives the parity-symmetric case \(g_{+}=g_{-}=1/2\). The aim of this paper will be to develop the canonical formulation of the action (51).

## 4 3+1 decomposition of Lagrangian density and canonical formulation

We now proceed to perform the 3+1 decomposition of the Lagrangian density for the action (51).
Motivated by the 3+1 decomposition of a spacetime one-form introduced in (13), we introduce the following fields: \[\omega^{\pm IJ}{}_{\mu}dx^{\mu}=\Omega^{\pm IJ}dt+\beta^{\pm IJ}{}_{a}dx^{a} \tag{53}\] Furthermore, for notational compactness, we introduce the following quantities: \[R^{\pm IJ}{}_{ab}\equiv 2\big{(}\partial_{[a}\beta^{\pm IJ}{}_{b]}+\beta^{\pm I}{}_{K[a}\beta^{\pm KJ}{}_{b]}\big{)} \tag{54}\] \[e^{I}_{a}\equiv D_{a}\phi^{I}=\partial_{a}\phi^{I}+\beta^{I}{}_{Ja}\phi^{J},\qquad\beta^{IJ}_{a}=\beta^{+IJ}_{a}+\beta^{-IJ}_{a} \tag{55}\] Introducing momenta \((\tilde{P}_{I},\tilde{P}^{\pm a}_{IJ})\) conjugate to \((\phi^{I},\beta^{\pm IJ}_{a})\), the Lagrangian density can be brought to the canonical form \[\tilde{\mathcal{L}}=\tilde{P}_{I}\dot{\phi}^{I}+\tilde{P}^{+a}_{IJ}\dot{\beta}^{+IJ}_{a}+\tilde{P}^{-a}_{IJ}\dot{\beta}^{-IJ}_{a}-\tilde{\mathcal{H}} \tag{56}\] where \[\tilde{\mathcal{H}}=-\tilde{\mathcal{G}}^{+}_{IJ}\Omega^{+IJ}-\tilde{\mathcal{G}}^{-}_{IJ}\Omega^{-IJ}+\tilde{\mathcal{C}}_{I}V^{I}+\tilde{\mathcal{C}}^{+d}_{IJ}V^{+IJ}_{d}+\tilde{\mathcal{C}}^{-d}_{IJ}V^{-IJ}_{d} \tag{57}\] and \((\Omega^{\pm IJ},V^{I},V^{\pm IJ}_{a})\) appear as Lagrange multipliers enforcing the primary constraints \((\tilde{\mathcal{G}}^{\pm}_{IJ},\tilde{\mathcal{C}}_{I},\tilde{\mathcal{C}}^{\pm a}_{IJ})\).

### The evolution of constraints

For functions \(F\) and \(G\) depending on phase space variables we can define the Poisson bracket as follows: \[\{F,G\}\equiv\int d^{3}x\bigg{[}\frac{\delta F}{\delta\beta_{d}^{+IJ}}\frac{\delta G}{\delta\tilde{P}_{IJ}^{+d}}+\frac{\delta F}{\delta\beta_{d}^{-IJ}}\frac{\delta G}{\delta\tilde{P}_{IJ}^{-d}}+\frac{\delta F}{\delta\phi^{I}}\frac{\delta G}{\delta\tilde{P}_{I}}\bigg{]}-(F\leftrightarrow G) \tag{64}\] Explicit forms for functional derivatives with respect to the phase space fields are given in Appendix A.
Time evolution of the fields \((\phi^{I},\beta_{a}^{\pm IJ},\tilde{P}_{I},\tilde{P}_{IJ}^{\pm a})\) is obtained from the Euler-Lagrange equations following from variation of (56). Therefore, for a function \(F\) of these fields, \[\dot{F}=\{F,\int d^{3}x\tilde{\mathcal{H}}\} \tag{65}\] Finally, it will be useful to introduce the notion of smearing of phase space functions. For a phase space function \(F\), its smearing with a test function \(\alpha(x)\) (i.e. a function which does not depend on the phase space fields) is defined as: \[F[\alpha]\equiv\int d^{3}x\alpha F \tag{66}\] We require that time evolution according to (65) preserves the set of constraints \((\tilde{\mathcal{G}}_{IJ}^{\pm},\tilde{\mathcal{C}}_{I},\tilde{\mathcal{C}}_{IJ}^{\pm a})\). For illustrative purposes, a detailed example of the evaluation of the Poisson bracket of two constraints is presented in Appendix B. It turns out that preservation of \(\tilde{\mathcal{G}}_{IJ}^{\pm}\) under time evolution is ensured if the primary constraints are satisfied - indeed these constraints generate self-dual and anti-self-dual Lorentz transformations, and hence the Poisson bracket of these constraints with any other constraint (when both constraints are smeared) will be proportional to constraints up to boundary terms that we do not consider in this analysis. For the remaining constraints \((\tilde{\mathcal{C}}_{I},\tilde{\mathcal{C}}_{IJ}^{\pm a})\) we recover the following equations: \[\partial_{t}\tilde{\mathcal{C}}_{I}\approx-V_{a}^{+KL}W_{KLI}^{a+}-V_{a}^{-KL}W_{KLI}^{a-} \tag{67}\] \[\partial_{t}\tilde{\mathcal{C}}_{IJ}^{+e}\approx\bigg{[}g_{+}Y_{IJKL}^{de}+g_{+}Y_{KLIJ}^{de}\bigg{]}^{+}V_{d}^{+KL}+\bigg{[}g_{+}Y_{IJKL}^{de}+g_{-}Y_{KLIJ}^{de}\bigg{]}^{+}V_{d}^{-KL}+W_{IJM}^{e+}V^{M} \tag{68}\] \[\partial_{t}\tilde{\mathcal{C}}_{IJ}^{-e}\approx\bigg{[}g_{-}Y_{IJKL}^{de}+g_{-}Y_{KLIJ}^{de}\bigg{]}^{-}V_{d}^{-KL}+\bigg{[}g_{-}Y_{IJKL}^{de}+g_{+}Y_{KLIJ}^{de}\bigg{]}^{-}V_{d}^{+KL}+W_{IJM}^{e-}V^{M} \tag{69}\] where \(\approx\) denotes weak equality (i.e. equality up to the addition of constraints) and where we've defined the following: \[Y_{IJKL}^{de}=Y_{[IJ][KL]}^{de}\equiv 4\tilde{\varepsilon}^{dbe}\epsilon_{MIJ[K}\phi_{L]}e_{b}^{M} \tag{70}\] \[W^{e\pm}_{IJK}\equiv\bigg{[}2(g_{\mp}-g_{\pm})\tilde{\varepsilon}^{ebc}\epsilon_{KMN[I}\phi_{J]}R_{bc}^{\mp MN}+g_{\pm}\tilde{\varepsilon}^{ebc}\phi_{K}\epsilon_{IJMN}R_{bc}^{\pm MN}\bigg{]}^{\pm} \tag{71}\] This object is a tensor in the space coordinatized by antisymmetric Lorentz tensors. Now, adding the projection of (68) along \(V_{e}^{+IJ}\) to the projection of (69) along \(V_{e}^{-IJ}\) we obtain: \[V_{e}^{+IJ}\partial_{t}\tilde{\mathcal{C}}_{IJ}^{+e}+V_{e}^{-IJ}\partial_{t}\tilde{\mathcal{C}}_{IJ}^{-e}=-V^{I}\partial_{t}\tilde{\mathcal{C}}_{I} \tag{72}\] It is shown in Appendix C that \(V^{I}\) contains information about the lapse (\(N\)) and shift (\(N^{a}\)) functions from the \(3+1\) decomposition of the spacetime metric (14) and whose functional forms are arbitrary insofar as they reflect the freedom to foliate spacetime in different ways. As such, (72) can be taken to imply that generally the preservation of the constraints \(\tilde{\mathcal{C}}^{\pm a}_{IJ}\) under time evolution implies the preservation of \(\tilde{\mathcal{C}}_{I}\). To proceed, it will be useful to explicitly work out the self-dual and anti-self-dual projections of indices of the object (70); explicit expressions are given in equation (142) in Appendix D.
Using the decomposition of self-dual and anti-self-dual Lorentz tensors defined in (33) and making use of the vector \(N^{I}\) introduced in (39) we can define the objects \(\mathcal{V}^{\pm I}_{a}\) as follows \[V^{\pm IJ}_{a}=\left[N^{[I}\mathcal{V}^{\pm J]}_{a}\right]^{\pm} \tag{73}\] where \(\mathcal{V}^{\pm I}_{a}N_{I}=0\). We would like to find out whether the constraint propagation equations (68) and (69), for vanishing left-hand side, amount to equations which uniquely determine \(\mathcal{V}^{\pm I}_{a}\). Projecting these equations along \(N^{I}\) we have: \[0\approx ig_{+}\tilde{\varepsilon}^{dbe}\frac{1}{2}\partial_{b}\phi^{2}\eta_{IJ}\mathcal{V}^{+J}_{d}+(g_{+}-g_{-})N^{L}N^{J}Y^{de}_{[LI]^{+}[JK]_{-}}\mathcal{V}^{-K}_{d}+W^{e+}_{KIJ}N^{K}V^{J} \tag{74}\] \[0\approx -ig_{-}\tilde{\varepsilon}^{dbe}\frac{1}{2}\partial_{b}\phi^{2}\eta_{IJ}\mathcal{V}^{-J}_{d}+(g_{-}-g_{+})N^{L}N^{J}Y^{de}_{[LI]^{-}[JK]_{+}}\mathcal{V}^{+K}_{d}+W^{e-}_{KIJ}N^{K}V^{J} \tag{75}\] where we've used the fact that \(\phi_{K}e^{K}_{b}=\frac{1}{2}\partial_{b}\phi^{2}\).

#### 4.1.1 The special case \(g_{+}=g_{-}=1/2\)

A number of terms in (74) and (75) vanish when \(g_{+}=g_{-}\), substantially simplifying the equations. If we further define \[R^{\pm IJ}_{bc}=\left[N^{[I}\mathcal{R}^{\pm J]}_{bc}\right]^{\pm} \tag{76}\] where \(N_{I}\mathcal{R}^{\pm I}_{bc}=0\) then (74) and (75) take the form: \[0\approx-\tilde{\varepsilon}^{dbe}\partial_{b}\phi^{2}\delta_{IJ}\mathcal{V}^{+J}_{d}+\tilde{\varepsilon}^{ebc}\mathcal{R}^{+}_{Ibc}\phi_{J}V^{J} \tag{77}\] \[0\approx-\tilde{\varepsilon}^{dbe}\partial_{b}\phi^{2}\delta_{IJ}\mathcal{V}^{-J}_{d}+\tilde{\varepsilon}^{ebc}\mathcal{R}^{-}_{Ibc}\phi_{J}V^{J} \tag{78}\] where \(\delta_{IJ}=\eta_{IJ}+N_{I}N_{J}\) and recall that \(\phi^{2}\equiv\phi_{I}\phi^{I}\). Equations (77) and (78) can be regarded as a pair of linear inhomogeneous equations, involving either \(\mathcal{V}^{+I}_{a}\) or \(\mathcal{V}^{-I}_{a}\), each of which can be regarded as a 9-dimensional vector. The quantity \(M^{de}{}_{IJ}=\tilde{\varepsilon}^{dbe}\partial_{b}\phi^{2}\delta_{IJ}\) that multiplies these vectors in each equation can be thought of as a \(9\times 9\) matrix. If this matrix is invertible then equations (77) and (78) uniquely determine \(\mathcal{V}^{\pm I}_{a}\). However, the matrix has the following three null eigenvectors \[\partial_{a}\phi^{2}S^{I(i)} \tag{79}\] where \(i=1,2,3\) and \(S^{I(i)}N_{I}=0\). This suggests that the matrix \(M^{de}{}_{IJ}\) is not invertible and not all \(\mathcal{V}^{\pm I}_{a}\) can be determined from these equations. Acting on (77) and (78) with these null eigenvectors we obtain \[0=\tilde{\varepsilon}^{ebc}R^{\pm IJ}_{bc}\partial_{e}\phi^{2} \tag{80}\] However, these are not new constraints on the phase space. This is due to the following identity: \[D^{(\beta^{\pm})}_{c}\tilde{\mathcal{C}}^{\pm c}_{IJ}\approx\tilde{\mathcal{G}}^{\pm}_{IJ}+ig_{\pm}\tilde{\varepsilon}^{bca}R^{\pm}_{IJca}\partial_{b}\phi^{2} \tag{81}\] i.e. the primary constraints imply (80). The existence of null eigenvectors of \(M^{de}{}_{IJ}\) shows that not all components of \(\mathcal{V}^{\pm I}_{a}\) are determined by the constraint propagation equations; however, some can be solved for.
Acting on the propagation equations with \(\underline{\epsilon}_{fca}\partial^{a}\phi^{2}\eta^{KI}\) (where \(\partial^{a}\phi^{2}\equiv q^{ab}\partial_{b}\phi^{2}\)) and introducing the projector \(\mathcal{P}^{a}{}_{b}=\delta^{a}_{b}-\frac{\partial_{b}\phi^{2}\,\partial^{a}\phi^{2}}{\partial_{c}\phi^{2}\,\partial^{c}\phi^{2}}\), the equations can be solved to yield: \[\bar{\mathcal{V}}^{+I}_{a} \equiv\mathcal{P}^{b}_{\ a}\mathcal{V}^{+I}_{b}\approx\frac{2}{(\partial_{b}\phi^{2}\partial^{b}\phi^{2})}\partial^{c}\phi^{2}\phi_{J}\mathcal{R}^{+I}_{ca}V^{J}\] \[\bar{\mathcal{V}}^{-I}_{a} \equiv\mathcal{P}^{b}_{\ a}\mathcal{V}^{-I}_{b}\approx\frac{2}{(\partial_{b}\phi^{2}\partial^{b}\phi^{2})}\partial^{c}\phi^{2}\phi_{J}\mathcal{R}^{-I}_{ca}V^{J} \tag{82}\] where we have introduced the convention that barred Lagrange multipliers denote multipliers that have been solved for in terms of fields in phase space. It will further be useful to introduce the symbols \[\bar{V}^{\pm IJ}_{a}=\left[N^{[I}\bar{\mathcal{V}}^{\pm J]}_{a}\right]^{\pm} \tag{83}\] Weak equalities have been used to allow for the fact that the constraint propagation analysis fixes some or all of \(V_{a}^{\pm IJ}\) (depending on whether \(g_{+}=g_{-}\) or not) to explicitly depend on phase space fields. Hence, additional terms involving derivatives of \(V_{a}^{\pm IJ}\) with respect to these fields will appear in Hamilton's equations but they will all be proportional to the constraints \(\tilde{\mathcal{C}}_{IJ}^{\pm a}\) and so vanish on the constraint surface. We finally point out an exotic solution that has not been covered by the prior analysis: that in which \(\phi^{I}=0\) throughout spacetime. From (86) we see that if \(\phi^{I}=0\) initially then it will remain so only if \(V^{I}=0\). From the results of Appendix C we see that this implies that the function \(N=0\) and furthermore if \(\phi^{I}=0\) then \(q_{ab}=0\) and hence from equation (14) the spacetime metric \(g_{\mu\nu}=0\). Furthermore, if \(\phi^{I}=0\) then \(Y_{IJKL}^{de}\) and \(W_{IJK}^{e\pm}\) are zero and hence \(V^{\pm KL}_{d}\) are completely undetermined by the constraint propagation equations, thus implying from (87) and (88) that the time evolution of the fields \(\beta_{a}^{\pm IJ}\) is undetermined. It is unclear whether such solutions play any phenomenological role.

### The Algebra of Constraints

Having completed the calculation of the propagation of constraints, we can now classify the primary constraints in terms of whether they are first-class constraints (i.e. their Poisson bracket with all other constraints weakly vanishes) or second-class constraints (i.e. their Poisson bracket with some constraints does not weakly vanish). The character of the constraints depends on the values of \((g_{+},g_{-})\).

#### 4.2.1 Case \(g_{+}\neq g_{-}\)

For the general case \(g_{+}\neq g_{-}\), the classification of constraints is illustrated in Figure 1. Given the classification of constraints, we can now count how many (complex) degrees of freedom propagate on a general background.
The dimensionality of the phase space per spatial point is \[P=8\,(\phi^{I},\tilde{P}_{I})+18\,(\beta_{a}^{+IJ},\tilde{P}_{IJ}^{+a})+18\,(\beta_{a}^{-IJ},\tilde{P}_{IJ}^{-a})=44\] The number of first-class constraints is \[F=3\,(\tilde{\mathcal{G}}^{+}_{IJ})+3\,(\tilde{\mathcal{G}}^{-}_{IJ})+4\,(\tilde{\mathscr{H}}^{I})=10\] and the number of second-class constraints is \[S=9\,(\tilde{\mathcal{C}}_{IJ}^{+a})+9\,(\tilde{\mathcal{C}}_{IJ}^{-a})=18\] The number of degrees of freedom per spatial point is therefore \[DOF=\frac{1}{2}(P-2F-S)=3 \tag{93}\]

Figure 1: The structure of constraints in the case \(g_{+}\neq g_{-}\). First-class constraints are blue whilst constraints that are individually second-class are shown as green. The constraint analysis reveals that a linear combination of second-class constraints yields the first-class constraints \(\tilde{\mathscr{H}}^{I}\).

#### 4.2.2 Case \(g_{+}=g_{-}\)

For the special case \(g_{+}=g_{-}\), the classification of constraints is illustrated in Figure 2. As in the previous case, the dimensionality of the phase space per spatial point is \[P=8\,(\phi^{I},\tilde{P}_{I})+18\,(\beta_{a}^{+IJ},\tilde{P}_{IJ}^{+a})+18\,(\beta_{a}^{-IJ},\tilde{P}_{IJ}^{-a})=44\] However, now the number of first-class constraints is \[F=3\,(\tilde{\mathcal{G}}^{+}_{IJ})+3\,(\tilde{\mathcal{G}}^{-}_{IJ})+4\,(\tilde{\mathscr{H}}^{I})+3\,(\partial_{a}\phi^{2}\tilde{\mathcal{C}}_{IJ}^{+a})+3\,(\partial_{a}\phi^{2}\tilde{\mathcal{C}}_{IJ}^{-a})=16\] and the number of second-class constraints is \[S=6\,(\mathcal{P}^{a}{}_{b}\tilde{\mathcal{C}}_{IJ}^{+b})+6\,(\mathcal{P}^{a}{}_{b}\tilde{\mathcal{C}}_{IJ}^{-b})=12\] The number of degrees of freedom per spatial point is therefore \[DOF=\frac{1}{2}(P-2F-S)=0 \tag{94}\] Therefore the theory with \(g_{+}=g_{-}\) propagates no degrees of freedom and can be regarded as a topological field theory. The case \(g_{+}=g_{-}\) has more first-class constraints than the case \(g_{+}\neq g_{-}\) and so it is to be expected that this specific case has more symmetry than the general case. The precise additional symmetry that the theory possesses compared to the case \(g_{+}\neq g_{-}\) can be demonstrated in the Lagrangian formalism. It is useful to write the action for general \((g_{+},g_{-})\) in the language of differential forms: \[S=2\int\epsilon_{IJKL}D\phi^{I}\wedge D\phi^{J}\wedge\left(g_{+}R^{+KL}+g_{-}R^{-KL}\right) \tag{95}\] where we use \(D\) to denote the covariant derivative \(d+\omega\) according to the entire spin connection \(\omega=\omega^{+}+\omega^{-}\). Now consider the following field transformation: \[\phi^{I}\rightarrow\phi^{I},\quad\omega_{\mu}^{IJ\pm}\rightarrow\omega_{\mu}^{IJ\pm}+\partial_{\mu}\phi^{2}\xi^{\pm IJ} \tag{96}\] Under (96) the action changes as: \[\delta S =\int 4(g_{+}-g_{-})\epsilon_{IJKL}d\phi^{2}\wedge D\phi^{J}\wedge\bigg{(}\xi^{-IM}R^{+KL}+\xi^{+IM}R^{-KL}\bigg{)}\phi_{M} \tag{97}\] \[+2D\bigg{(}\epsilon_{IJKL}d\phi^{2}\wedge D\phi^{I}\wedge D\phi^{J}(g_{+}\xi^{+KL}+g_{-}\xi^{-KL})\bigg{)} \tag{98}\]

Figure 2: The structure of constraints in the case \(g_{+}=g_{-}\). First-class constraints are blue whilst second-class constraints are green. Unlike in the case \(g_{+}\neq g_{-}\), a subset of the individual \(\tilde{\mathcal{C}}_{IJ}^{\pm a}\) constraints are first class. As in the \(g_{+}\neq g_{-}\) case, the constraint analysis reveals that a linear combination of individually second-class constraints yields the first-class constraints \(\tilde{\mathscr{H}}^{I}\).
Therefore in the case \(g_{+}=g_{-}\), and only in this case, does the action change by a total derivative - and hence boundary term - under the transformation of fields (96). This result holds even 'off-shell' and therefore the field transformation is a symmetry of the theory [17]. Note that the transformation (96), when applied to the pullback of \(\omega_{\mu}^{\pm IJ}\) to surfaces of constant time \(\beta_{a}^{\pm IJ}\), agrees with the transformation generated by the first-class constraints \(\partial_{a}\phi^{2}\tilde{C}_{IJ}^{\pm a}\), i.e.

\[\delta\beta_{a}^{\pm IJ}=\{\beta_{a}^{\pm IJ},\partial_{b}\phi^{2}\tilde{C}_{KL}^{\pm b}[\xi^{\pm KL}]\}=\partial_{a}\phi^{2}\xi^{\pm IJ} \tag{99}\]

### Reality conditions

We have seen that models with \(g_{+}\neq g_{-}\) propagate three _complex_ degrees of freedom on general backgrounds. Because the theory is inherently complex, it is possible in principle that the Hamiltonian will evolve fields that are real at some initial moment \(t=t_{0}\) into complex ones. A standard requirement is that the spacetime metric be real. From (14) this is ensured if the fields \((N,N^{a},q_{ab})\) are real. From Appendix C it is clear that \((N,N^{a})\) are real if \(V^{I}\) and \(e^{I}_{a}\) are real 2. We will require that \(V^{I}\) is real, that \(q_{ab}=\eta_{IJ}e^{I}_{a}e^{J}_{b}\) is initially real, and that this realness is preserved by time evolution. Additionally, anticipating that the norm \(\phi^{2}=\phi_{I}\phi^{I}\) will have physical significance, this quantity should also be required to be real. Time evolution is generated by the Hamiltonian

\[H=-\tilde{\mathcal{G}}_{IJ}^{+}[\Omega^{+IJ}]-\tilde{\mathcal{G}}_{IJ}^{-}[\Omega^{-IJ}]+\tilde{\mathcal{C}}_{I}[V^{I}]+\tilde{\mathcal{C}}_{IJ}^{+d}[V_{d}^{+IJ}]+\tilde{\mathcal{C}}_{IJ}^{-d}[V_{d}^{-IJ}] \tag{100}\]

Footnote 2: The complex-valued fields that coordinatize the phase space may combine with complex \(V^{I}\) to produce real four dimensional metrics of Euclidean signature but we do not explore that possibility in this work.

Then, recalling the definition (55) of \(e^{I}_{b}\), we have for the general case \(g_{+}\neq g_{-}\) that

\[\partial_{t}q_{ab}=2\eta_{IJ}e^{I}_{a}\{\partial_{b}\phi^{J}+\beta^{J}{}_{Kb}\phi^{K},H\}=2\eta_{IJ}e^{I}_{a}\big{(}-\partial_{b}V^{J}+(\tilde{Z}_{b}^{+JKL}+\tilde{Z}_{b}^{-JKL})V_{L}\phi_{K}+\beta^{J}{}_{Kb}V^{K}\big{)} \tag{101}\]
\[\partial_{t}\phi^{2}=2\phi_{I}\{\phi^{I},H\}=2\phi_{I}V^{I} \tag{102}\]

From (102) we see that, given our assumption that \(V^{I}\) is real, an initially real \(\phi^{2}\) remains real if \(\phi_{I}\) is real. In the general case it is likely not possible to find closed expressions for \((\tilde{Z}_{b}^{+JKL},\tilde{Z}_{b}^{-JKL})\); however, they will depend on the generally complex \((g_{+},g_{-})\), which may create an imaginary part of \(\partial_{t}q_{ab}\) even if the initial data for the phase space fields are real. It is challenging to determine in the case of general \((g_{+},g_{-})\) whether maintaining the reality of \(q_{ab}\) generates further constraints on the complex phase space. We will see, however, that in the special cases \((g_{+}=1,g_{-}=0)\) and \((g_{+}=0,g_{-}=1)\) contact with familiar results from the Ashtekar model is possible. 
First, it is helpful to illustrate the challenge of finding reality conditions in the general case with a simple physical example: the propagation of linear metric perturbations on a Minkowski space background.

### The propagation of metric perturbations on Minkowski space

The Euler-Lagrange equations following from the Lagrangian density (50) have a solution \(R^{IJ}{}_{\mu\nu}(\omega)=0\)[5]. Thus a gauge can be found where \(\omega_{\mu}^{IJ}\stackrel{*}{=}0\) and \(g_{\mu\nu}\stackrel{*}{=}\eta_{IJ}\partial_{\mu}\phi^{I}\partial_{\nu}\phi^{J}\). If \(R^{IJ}{}_{\mu\nu}(\omega)=0\) then it turns out that \(\phi^{I}\) is otherwise undetermined by the field equations, and hence \(\phi^{I}\) can take a profile where it forms a set of Minkowski coordinates such that \(g_{\mu\nu}=\eta_{\mu\nu}\), i.e. Minkowski space is a solution to the theory for general values of \(\gamma\). Now we restrict ourselves to a wedge where \(\phi^{2}=\eta_{IJ}\phi^{I}\phi^{J}<0\) and adopt a Lorentz gauge where \(\phi^{I}\stackrel{*}{=}T\delta^{I}_{0}\). Then, using \(T\) as a time coordinate and denoting \(x^{a}\) as spatial coordinates on the surface of constant time we have

\[ds^{2}=-dT\otimes dT+T^{2}\delta_{ij}E_{a}^{i}E_{b}^{j}dx^{a}\otimes dx^{b} \tag{103}\]

where \(i,j=1,2,3\) index spatial coordinates, \(E_{a}^{1}dx^{a}=d\chi,E_{a}^{2}dx^{a}=\sinh(\chi)d\theta,E_{a}^{3}dx^{a}=\sinh(\chi)\sin(\theta)d\phi\). Therefore we can identify the region \(\phi_{I}\phi^{I}<0\) in Minkowski spacetime coordinatized by \((T,x^{a})\) as an open Friedmann-Robertson-Walker universe with scale factor \(a=T\). This is a Milne wedge, and without loss of generality we choose the upper Milne wedge of Minkowski spacetime. Now consider the following small perturbations to the spin connection

\[\delta\omega^{0i}=\frac{1}{2}H^{i}_{\ j}E^{j} \tag{104}\]
\[\delta\omega^{ij}=a^{ij}dt+\epsilon^{ijl}W_{l}^{\ k}E_{k} \tag{105}\]

where \((H^{ij},a^{ij},W^{ij})\) are symmetric and traceless. These perturbations produce a perturbed spatial metric:

\[\delta g_{ab}=H^{jk}E_{ja}E_{kb} \tag{106}\]

It can be shown that the perturbation \((a_{ij},W_{ij})\) can be solved for algebraically in the field equations in terms of first derivatives of \(H_{ij}\) and so eliminated from the variational principle to recover the following Lagrangian density

\[\frac{1}{2}\tilde{\mathcal{L}}_{(H)}=a^{3}\bigg{(}-\frac{1}{\gamma^{2}}\dot{H}_{ij}\dot{H}^{ij}-\frac{1}{a^{2}}h^{ab}\mathcal{D}_{a}H^{ij}\mathcal{D}_{b}H_{ij}-\frac{2k}{a^{2}}H_{ij}H^{ij}\bigg{)} \tag{107}\]

where \(h^{ab}\) is the unperturbed inverse spatial metric on the surface of constant \(T\) (with spatial curvature constant \(k=-1\)), \(a=T\) is the 'cosmological' scale factor, and \(H_{ij}\) is assumed to have support in the upper Milne wedge. In the case \(\gamma^{2}=-1\) (which corresponds to either \((g_{+}=1,g_{-}=0)\) or \((g_{+}=0,g_{-}=1)\)) the Lagrangian density reduces to that of General Relativity, describing the lightlike propagation of the spin-2 perturbation \(H_{ij}\) on this background. 
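To make this concrete, the following minimal numerical sketch (ours, not from the paper) evolves a single Fourier mode of \(H_{ij}\) with comoving wavenumber \(q\), using the mode equation \(\ddot{H}=\gamma^{2}(q^{2}+2k)H/a^{2}\) that follows from (107) when the background is frozen; holding \(a\) constant and the chosen value of \(q\) are simplifying assumptions. As elaborated next, \(\gamma^{2}=-1\) keeps an initially real mode real, while a \(\gamma^{2}\) with an imaginary part does not.

```python
# Sketch: one Fourier mode of H_ij on a frozen background a = 1 with k = -1.
# The mode equation H'' = gamma^2 (q^2 + 2k) H / a^2 is read off from (107)
# under these simplifying assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def max_imag_part(gamma2, q=3.0, k=-1.0, a=1.0, t_end=10.0):
    def rhs(t, y):
        H, Hdot = y
        return [Hdot, gamma2 * (q**2 + 2 * k) / a**2 * H]
    # Real initial data H(0) = 1, H'(0) = 0; the complex dtype lets solve_ivp
    # track any imaginary part generated by a complex gamma^2.
    sol = solve_ivp(rhs, (0.0, t_end), np.array([1.0, 0.0], dtype=complex),
                    rtol=1e-10, atol=1e-12)
    return float(np.max(np.abs(sol.y[0].imag)))

print(max_imag_part(-1.0))         # 0.0: the GR case, H stays real
print(max_imag_part(-1.0 + 0.1j))  # order one: H becomes complex
```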
For all other values of \(\gamma\), the propagation of \(H_{ij}\) is at a different speed. Moreover, it is readily seen that if \(\gamma^{2}\) has an imaginary component then an initially real perturbation \(H_{ij}(t=t_{0})\) will evolve by the equations of motion to become complex; hence, in this simple case, the only way to preserve the reality of the spatial metric would be to constrain \(H_{ij}(t=t_{0})=0\), showing that the reality conditions can reduce the number of propagating degrees of freedom.

## 5 An extension of General Relativity

We see from (107) that only for the case \(\gamma^{2}=-1\) does the propagation of gravitational waves correspond to that of General Relativity, and we will now focus on this case. The condition \(\gamma^{2}=-1\) encompasses two independent possibilities: \((g_{+}=1,g_{-}=0)\) and \((g_{+}=0,g_{-}=1)\). We will first focus on the former case and later demonstrate how the latter case can be straightforwardly recovered. When \((g_{+}=1,g_{-}=0)\), the primary Hamiltonian density simplifies considerably. Recalling its general form

\[\tilde{\mathcal{H}}=-\Omega^{+IJ}\tilde{\mathcal{G}}^{+}_{IJ}-\Omega^{-IJ}\tilde{\mathcal{G}}^{-}_{IJ}+V^{I}\tilde{C}_{I}+V_{a}^{+IJ}\tilde{C}^{+a}_{IJ}+V_{a}^{-IJ}\tilde{C}^{-a}_{IJ}, \tag{108}\]

the constraints now simplify to:

\[\tilde{\mathcal{G}}^{+}_{IJ}\equiv D_{c}^{(\beta^{+})}\tilde{P}^{+c}_{IJ}+\big{[}\tilde{P}_{I}\phi_{J}\big{]}^{+} \tag{109}\]
\[\tilde{\mathcal{G}}^{-}_{IJ}\equiv D_{c}^{(\beta^{-})}\tilde{P}^{-c}_{IJ}+\big{[}\tilde{P}_{I}\phi_{J}\big{]}^{-} \tag{110}\]
\[\tilde{C}_{I}=\tilde{P}_{I}-2\epsilon_{IJKL}\tilde{\varepsilon}^{abc}e_{a}^{J}R_{bc}^{+KL} \tag{111}\]
\[\tilde{C}^{+c}_{IJ}=\tilde{P}^{+c}_{IJ}-2\big{[}\epsilon_{IJKL}\tilde{\varepsilon}^{abc}e_{a}^{K}e_{b}^{L}\big{]}^{+} \tag{112}\]
\[\tilde{C}^{-c}_{IJ}=\tilde{P}^{-c}_{IJ} \tag{113}\]

Given these simplifications, it is possible to algebraically solve for the \((\beta_{a}^{-IJ},\tilde{P}_{IJ}^{-a})\) and eliminate them from the variational problem. Firstly, from (113) we have \(\tilde{P}_{IJ}^{-a}=0\). Then, recalling the definition (55), the constraint \(\tilde{C}_{IJ}^{+c}\) can be regarded as an equation that can be solved for \(\beta_{a}^{-IJ}=\beta_{a}^{-IJ}(\beta^{+},\tilde{P}^{+},\phi,\partial\phi)\). Therefore the second-class constraints can be solved. Given these solutions, the constraint \(\tilde{\mathcal{G}}_{IJ}^{-}\) simplifies to \(\left[\tilde{P}_{I}\phi_{J}\right]^{-}=0\); if we decompose \(\tilde{P}^{I}=\tilde{\Pi}\phi^{I}+\tilde{P}_{\perp}^{I}\), where \(\tilde{P}_{\perp}^{I}\phi_{I}=0\), then \(\tilde{\mathcal{G}}_{IJ}^{-}=0\) can be taken to imply the solution \(\tilde{P}_{\perp}^{I}=0\), which we now adopt. Additionally, the quantity \(V^{I}\tilde{C}_{I}\) can be expressed in terms of the quantities \((N,N^{a})\); this development is detailed in Appendix D. 
It can be checked that (116) is invariant under the transformation (96). In the cases \((g_{+}=1,g_{-}=0)\) and \((g_{+}=0,g_{-}=1)\) the effect of (116) is that of a cosmological constant term. We also now briefly consider other possible terms in the gravitational action. After integration by parts, the action (95) can be expressed as a linear combination of integrated Lagrangian four-forms \(\phi^{M}\phi_{M}\epsilon_{IJKL}R^{IJ}\wedge R^{KL}\) and \(\phi_{J}\phi_{L}\eta_{IK}R^{IJ}\wedge R^{KL}\)[5]. There is one additional, independent four-form which is quadratic in \(\phi^{I}\) and in \(R^{IJ}\): \(\phi^{K}\phi_{K}R_{IJ}\wedge R^{IJ}\). This term - and generally other terms involving \(\phi_{I}\phi^{I}\) - may be excluded by the requirement that the gravitational action have a symmetry under 'translations' \(\phi^{I}\rightarrow\phi^{I}+p^{I}\) subject to \(Dp^{I}=0\) (the action (95) is manifestly invariant under this transformation). The coupling of fields \((\phi^{I},\omega^{\pm IJ})\) to matter fields depends on the representation of the \(SL(2,C)_{+}\times SL(2,C)_{-}\) gravitational symmetry that the matter field belongs to. For fields in the trivial representation, such as spacetime scalar fields \(\varphi\) or one-forms \(A_{\mu}\), coupling to gravity is expected to be entirely via the spacetime metric \(g_{\mu\nu}=\eta_{IJ}D_{\mu}\phi^{I}D_{\nu}\phi^{J}\). In that case, in the canonical formalism, time derivatives of \(\phi^{I}\) but not of \(\omega^{\pm IJ}_{\mu}\) will appear in matter actions, leading to a modification of the definition of the momentum \(\tilde{P}_{I}\) via additional terms appearing in (60). Additional couplings between gravity and matter fields are necessary when the fields belong to non-trivial representations of the gravitational symmetry group, such as left and right-handed fermions \(\psi^{\pm}\). Here some of the gravitational gauge fields \(\omega^{\pm IJ}_{\mu}\) must couple to \(\psi^{\pm}\) so as to create covariant derivative terms for these fields; these couplings will introduce no new time derivatives of the fields \((\phi^{I},\omega^{\pm IJ}_{\mu})\) but will result in additional terms in the constraints \(\tilde{\mathcal{G}}^{\pm IJ}\).

## 7 Conclusions

We now briefly summarize the main results of the paper and discuss the potential for future work. In Sections 1 to 4 we introduced the models considered in this paper and produced the Hamiltonian form of the models, focusing on an analysis of the propagation and nature of the phase space constraints present. The Lagrangian density of the class of models considered is:

\[\tilde{\mathcal{L}}[\phi^{I},\omega^{\pm IJ}_{\mu}]=\tilde{\epsilon}^{\mu\nu\alpha\beta}\epsilon_{IJKL}D_{\mu}\phi^{I}D_{\nu}\phi^{J}\bigg{(}g_{+}R^{KL}{}_{\alpha\beta}(\omega^{+})+g_{-}R^{KL}{}_{\alpha\beta}(\omega^{-})\bigg{)} \tag{117}\]

It was found that in the general case \(g_{+}\neq g_{-}\) three complex degrees of freedom propagate in the theory, whereas for the particular case \(g_{+}=g_{-}\) no degrees of freedom propagate. 
Furthermore, it was shown that the cases \((g_{+}=1,g_{-}=0)\) and \((g_{+}=0,g_{-}=1)\) correspond to an extension of General Relativity that includes solutions corresponding to an additional effective pressureless perfect fluid matter source. Interestingly, such a matter source has been of prior interest as a possible solution to the problem of time in quantum gravity [19, 20, 21, 22, 23]. We find it encouraging that in the present models the perfect fluid arises 'naturally' from the theory and does not have to be independently posited. It is useful to compare these results to a more familiar set of models of gravity. Setting \(1/(32\pi G)=1\), the Palatini Lagrangian density (47) of Einstein-Cartan theory can be generalized to:

\[\tilde{\mathcal{L}}[e^{I}_{\mu},\omega^{\pm IJ}_{\mu}]=\tilde{\epsilon}^{\mu\nu\alpha\beta}\epsilon_{IJKL}e^{I}_{\mu}e^{J}_{\nu}\bigg{(}\bar{g}_{+}R^{KL}{}_{\alpha\beta}(\omega^{+})+\bar{g}_{-}R^{KL}{}_{\alpha\beta}(\omega^{-})\bigg{)} \tag{118}\]

where \(\bar{g}_{\pm}=(\bar{\gamma}\mp i)/\bar{\gamma}\) with \(\bar{\gamma}\) being the Barbero-Immirzi parameter. The Palatini action corresponds to \((\bar{g}_{+}=1/2,\bar{g}_{-}=1/2)\) (\(\bar{\gamma}\rightarrow\infty\)) whilst the chiral Ashtekar theory corresponds to the cases \((\bar{g}_{+}=1,\bar{g}_{-}=0)\) (\(\bar{\gamma}=i\)) and \((\bar{g}_{+}=0,\bar{g}_{-}=1)\) (\(\bar{\gamma}=-i\)) respectively. However, in contrast to the counterpart values of \((g_{+},g_{-})\), each of the previous three models propagates two complex degrees of freedom and describes identical solutions to Einstein's equations upon the imposition of reality conditions 3.

Footnote 3: However, there are indications that important phenomenology such as the power spectrum of different chirality gravitational waves generated during inflation may depend crucially on the value of \(\bar{\gamma}\)[24, 25].

A natural generalization of (117) would be the introduction of fields \((\psi_{+},\psi_{-})\) (potentially with non-trivial Lorentz index structure) such that the hitherto constant \((g_{+},g_{-})\) are reflective of expectation values of these fields. The General-Relativistic limits \((g_{+}=1,g_{-}=0)\) and \((g_{+}=0,g_{-}=1)\) would then potentially arise from spontaneous symmetry breaking (with the action formally symmetric under the transformations (96) and accompanying transformation of \((\psi_{+},\psi_{-})\)) and with time variation of the new dynamical fields being of significance in the early universe [26].

## Acknowledgements

We thank William Barker, Amel Durakovic, and Hans Westman for helpful discussions. MJ and TZ are supported by the project No. 2021/43/P/ST2/02141 co-funded by the Polish National Science Centre and the European Union Framework Programme for Research and Innovation Horizon 2020 under the Marie Sklodowska-Curie grant agreement No. 945339. 
## Appendix A Useful functional derivatives

Useful non-zero functional derivatives are as follows:

\[\frac{\delta\beta_{a}^{+IJ}(x)}{\delta\beta_{b}^{+KL}(x^{\prime})}=\frac{1}{2}\bigg{(}\delta_{K}^{[I}\delta_{L}^{J]}-\frac{i}{2}\epsilon_{KL}{}^{IJ}\bigg{)}\delta_{a}^{b}\delta(x-x^{\prime}) \tag{119}\]
\[\frac{\delta\tilde{P}_{IJ}^{+a}(x)}{\delta\tilde{P}_{KL}^{+b}(x^{\prime})}=\frac{1}{2}\bigg{(}\delta_{I}^{[K}\delta_{J}^{L]}-\frac{i}{2}\epsilon_{IJ}{}^{KL}\bigg{)}\delta_{b}^{a}\delta(x-x^{\prime}) \tag{120}\]
\[\frac{\delta\beta_{a}^{-IJ}(x)}{\delta\beta_{b}^{-KL}(x^{\prime})}=\frac{1}{2}\bigg{(}\delta_{K}^{[I}\delta_{L}^{J]}+\frac{i}{2}\epsilon_{KL}{}^{IJ}\bigg{)}\delta_{a}^{b}\delta(x-x^{\prime}) \tag{121}\]
\[\frac{\delta\tilde{P}_{IJ}^{-a}(x)}{\delta\tilde{P}_{KL}^{-b}(x^{\prime})}=\frac{1}{2}\bigg{(}\delta_{I}^{[K}\delta_{J}^{L]}+\frac{i}{2}\epsilon_{IJ}{}^{KL}\bigg{)}\delta_{b}^{a}\delta(x-x^{\prime}) \tag{122}\]
\[\frac{\delta\tilde{P}_{I}(x)}{\delta\tilde{P}_{J}(x^{\prime})}=\delta_{I}^{J}\delta(x-x^{\prime}) \tag{123}\]
\[\frac{\delta\phi^{I}(x)}{\delta\phi^{J}(x^{\prime})}=\delta_{J}^{I}\delta(x-x^{\prime}) \tag{124}\]

## Appendix B Detailed example of the evaluation of the Poisson bracket between two constraints

Making use of the functional derivatives defined in the previous section and the definition of the Poisson bracket (64), as an illustrative example we consider a detailed calculation of the Poisson bracket \(\{\tilde{C}_{IJ}^{-a}[A_{a}^{IJ}],\tilde{C}_{K}[V^{K}]\}\). To do so, we first calculate the functional derivative of each smeared constraint as follows:

### Functional derivatives of \(\tilde{C}_{IJ}^{-a}[A_{a}^{IJ}]\)

Given a test function \(A_{a}^{IJ}=A_{a}^{-IJ}\) (i.e. a Lorentz tensor which depends only on \(x^{a}\) and is considered independent of phase space fields), we can consider the following smeared constraint:

\[\tilde{C}_{IJ}^{-a}[A_{a}^{IJ}]=\int d^{3}xA_{c}^{IJ}\bigg{(}\tilde{P}_{IJ}^{-c}-2g_{-}\bigg{[}\epsilon_{IJKL}\tilde{\varepsilon}^{abc}e_{a}^{K}e_{b}^{L}\bigg{]}^{-}\bigg{)} \tag{125}\]

where recall that \(e_{a}^{I}=\partial_{a}\phi^{I}+\beta^{I}{}_{Ja}\phi^{J}\). 
Using the results of Section A we have:

\[\frac{\delta\tilde{C}_{KL}^{-a}[A_{a}^{KL}]}{\delta\tilde{P}_{IJ}^{+d}}=0 \tag{126}\]
\[\frac{\delta\tilde{C}_{KL}^{-a}[A_{a}^{KL}]}{\delta\beta_{d}^{+IJ}}=4g_{-}\bigg{[}A_{c}^{-KL}\epsilon_{MKL[I}\tilde{\varepsilon}^{dbc}\phi_{J]}e_{b}^{M}\bigg{]}^{+} \tag{127}\]
\[\frac{\delta\tilde{C}_{KL}^{-a}[A_{a}^{KL}]}{\delta\tilde{P}_{IJ}^{-d}}=A_{d}^{IJ} \tag{128}\]
\[\frac{\delta\tilde{C}_{KL}^{-a}[A_{a}^{KL}]}{\delta\beta_{d}^{-IJ}}=4g_{-}\bigg{[}A_{c}^{-KL}\epsilon_{MKL[I}\bar{\varepsilon}^{dbc}\phi_{J]}e_{b}^{M}\bigg{]}^{-} \tag{129}\]
\[\frac{\delta\tilde{C}_{KL}^{-a}[A_{a}^{KL}]}{\delta\phi^{I}}=4g_{-}D_{a}^{(\beta)}\bigg{(}A_{c}^{-KL}\epsilon_{IKLM}\bar{\varepsilon}^{abc}e_{b}^{M}\bigg{)} \tag{130}\]
\[\frac{\delta\tilde{C}_{KL}^{-a}[A_{a}^{KL}]}{\delta\tilde{P}_{I}}=0 \tag{131}\]

### Functional derivatives of \(\tilde{C}_{I}[A^{I}]\)

Similarly for the constraint \(\tilde{C}_{I}\) we may consider a test function \(A^{I}\) and smear the constraint as follows:

\[\tilde{C}_{I}[A^{I}]=\int d^{3}xA^{I}\bigg{[}\tilde{P}_{I}-2g_{+}\epsilon_{IJKL}\bar{\varepsilon}^{abc}e_{a}^{J}R_{bc}^{+KL}-2g_{-}\epsilon_{IJKL}\bar{\varepsilon}^{abc}e_{a}^{J}R_{bc}^{-KL}\bigg{]} \tag{132}\]

Again using the results of Section A we have:

\[\frac{\delta\tilde{C}_{K}[A^{K}]}{\delta\tilde{P}_{IJ}^{+d}}=0 \tag{133}\]
\[\frac{\delta\tilde{C}_{K}[A^{K}]}{\delta\beta_{d}^{+IJ}}=\bigg{[}-2A^{M}\epsilon_{MKL[I}\bar{\varepsilon}^{dbc}\phi_{J]}(g_{+}R_{bc}^{+KL}+g_{-}R_{bc}^{-KL})-4g_{+}\bar{\varepsilon}^{bad}D_{b}^{+}\bigg{(}\epsilon_{IJKL}A^{K}e_{a}^{L}\bigg{)}\bigg{]}^{+} \tag{134}\]
\[\frac{\delta\tilde{C}_{K}[A^{K}]}{\delta\tilde{P}_{IJ}^{-d}}=0 \tag{135}\]
\[\frac{\delta\tilde{C}_{K}[A^{K}]}{\delta\beta_{d}^{-IJ}}=\bigg{[}-2A^{M}\epsilon_{MKL[I}\bar{\varepsilon}^{dbc}\phi_{J]}(g_{+}R_{bc}^{+KL}+g_{-}R_{bc}^{-KL})-4g_{-}\bar{\varepsilon}^{bad}D_{b}^{-}\bigg{(}\epsilon_{IJKL}A^{K}e_{a}^{L}\bigg{)}\bigg{]}^{-} \tag{136}\]
\[\frac{\delta\tilde{C}_{K}[A^{K}]}{\delta\phi^{I}}=2g_{+}D_{a}^{(\beta)}\bigg{(}A^{M}\epsilon_{MIKL}\bar{\varepsilon}^{abc}R_{bc}^{+KL}\bigg{)}+2g_{-}D_{a}^{(\beta)}\bigg{(}A^{L}\epsilon_{LIJK}\bar{\varepsilon}^{abc}R_{bc}^{-JK}\bigg{)} \tag{137}\]
\[\frac{\delta\tilde{C}_{J}[A^{J}]}{\delta\tilde{P}_{I}}=A^{I} \tag{138}\]

### Poisson bracket \(\{\tilde{C}_{IJ}^{-a}[A_{a}^{IJ}],\tilde{C}_{K}[V^{K}]\}\)

Applying the results from Sections B.1 and B.2 we have that:

\[\{\tilde{C}_{IJ}^{-c}[A_{c}^{IJ}],\tilde{C}_{K}[V^{K}]\}\stackrel{b}{=}\int d^{3}xA_{d}^{IJ}V^{K}\bigg{[}2(g_{+}-g_{-})\bar{\varepsilon}^{dbc}\epsilon_{KLM[I}\phi_{J]}R_{bc}^{+LM}+g_{-}\bar{\varepsilon}^{dbc}\phi_{K}\epsilon_{IJMN}R_{bc}^{-MN}\bigg{]}^{-} \tag{139}\]

where \(\stackrel{b}{=}\) denotes equality up to a total derivative (and hence boundary) term.

## Appendix C Interpretation of the Lagrange multiplier \(V^{I}\)

From Hamilton's equations we have that

\[\dot{\phi}^{I}=V^{I}-(\Omega^{+IJ}+\Omega^{-IJ})\phi_{J} \tag{140}\]

Therefore, using the results of Section 2.4 we have that:

\[V^{I}=D_{t}^{(\omega)}\phi^{I}=e_{t}^{I}=NN^{I}+N^{a}e_{a}^{I} \tag{141}\]

and hence \(V^{I}\) is straightforwardly related to parts of the spacetime metric structure. 
## Appendix D Some useful results

### Self and anti-self dual parts of the Lorentz tensor \(Y^{de}_{IJKL}\)

Calculation shows that the self and anti-self dual parts of the Lorentz tensor \(Y^{de}_{IJKL}\) are given by:

\[Y^{de}_{[IJ]^{(1)}\pm[KL]^{(2)}\pm}=\bar{\varepsilon}^{dbe}e_{b}^{M}\epsilon_{MIJ[K}\phi_{L]}+(\pm)^{(1)}(\pm)^{(2)}\bar{\varepsilon}^{dbe}\phi^{M}e_{b[I}\epsilon_{J]MKL}\]
\[+2i\tilde{\varepsilon}^{dbe}e_{b}^{M}((\pm)^{(1)}\eta_{M[I}\eta_{J][K}\phi_{L]}+(\pm)^{(2)}\phi_{[I}\eta_{J][L}\eta_{K]M})\]
\[-(\pm)^{(2)}\bar{\varepsilon}^{dbe}e_{b}^{M}\phi_{M}\eta_{K[I}\eta_{J]L} \tag{142}\]

### Development of the constraint \(\tilde{C}^{I}\) in the case \((g_{+}=1,g_{-}=0)\)

Using (141) we see that the constraint \(\tilde{C}_{I}\) contributes to the Hamiltonian density via

\[V^{I}\tilde{C}_{I}=(NN^{I}+N^{a}e_{a}^{I})\bigg{(}\tilde{P}_{I}-2\epsilon_{IJKL}\bar{\varepsilon}^{abc}e_{a}^{J}R_{bc}^{+KL}\bigg{)} \tag{143}\]

Furthermore, when the constraints hold we have \(\tilde{P}_{IJ}^{+c}=2\big{[}\epsilon_{IJKL}\tilde{\varepsilon}^{abc}e_{a}^{K}e_{b}^{L}\big{]}^{+}\). We can therefore recover the following useful results. Firstly, the product of the determinant of \(q_{ab}\) and its matrix inverse can be expressed in terms of the momenta \(\tilde{P}_{IJ}^{+c}\) as:

\[-16qq^{cd}=\tilde{P}_{IJ}^{+c}\tilde{P}^{+dIJ} \tag{144}\]

Furthermore we can express individual parts of \(\tilde{C}^{I}\) as:

\[\epsilon_{IJKL}\tilde{\varepsilon}^{abc}e_{d}^{I}e_{a}^{J}R_{bc}^{+KL}=-\frac{1}{2}\tilde{P}_{IJ}^{+e}R_{de}^{+IJ} \tag{145}\]
\[-2\sqrt{q}N^{I}\epsilon_{IJKL}e_{a}^{J}\tilde{\varepsilon}^{abc}R_{bc}^{+KL}=-\frac{1}{4}\tilde{P}_{IK}^{+b}\tilde{P}^{+cK}{}_{J}R^{+IJ}{}_{bc} \tag{146}\]
\[N^{I}\tilde{P}_{I}=\xi\tilde{\Pi}\sqrt{-\phi^{2}+\frac{1}{4}q^{ab}\partial_{a}\phi^{2}\partial_{b}\phi^{2}} \tag{147}\]
\[e_{a}^{I}\tilde{P}_{I}=\frac{1}{2}\tilde{\Pi}\partial_{a}\phi^{2} \tag{148}\]

where we have used the result that \(\tilde{P}^{I}\propto\phi^{I}\) in the cases where \((g_{+}=1,g_{-}=0)\) and \((g_{+}=0,g_{-}=1)\).
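The (anti-)self-dual projections used throughout these appendices rest on the projector structure appearing in (119)-(122). The following standalone numerical check (a sketch of ours; the conventions \(\epsilon_{0123}=+1\) and \(\eta_{IJ}=\mathrm{diag}(-1,1,1,1)\) are assumptions, since the text does not spell them out) verifies that the duality operator squares to \(-1\) on antisymmetric Lorentz pairs, so that \(\frac{1}{2}(1\mp i\hat{\epsilon})\) are idempotent, as the form of (119)-(122) requires (modulo index placement).

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # assumed signature (-,+,+,+)

# Levi-Civita symbol with eps[0,1,2,3] = +1 (assumed convention)
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    sign = 1
    for i in range(4):                  # parity via inversion count
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    eps[p] = sign

# Raise the last index pair: eps_{IJ}^{KL} = eps_{IJMN} eta^{MK} eta^{NL}
eps_ud = np.einsum('ijmn,mk,nl->ijkl', eps, eta, eta)  # eta is its own inverse

def dual(F):
    """Duality operator on antisymmetric pairs: (eps-hat F)_{IJ}."""
    return 0.5 * np.einsum('ijkl,kl->ij', eps_ud, F)

A = np.random.rand(4, 4)
F = A - A.T                             # random antisymmetric test tensor
print(np.allclose(dual(dual(F)), -F))   # True: eps-hat squares to -1

P = lambda G: 0.5 * (G - 1j * dual(G))  # self-dual projector structure
print(np.allclose(P(P(F)), P(F)))       # True: idempotent
```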
2308.06667
Isolating Neighborhood Trajectory Computations in Non-Autonomous Systems Including the Elliptic Restricted Three-Body Problem
Isolating block and isolating neighborhood methods have previously been implemented to find transit trajectories and orbits around libration points in the autonomous circular restricted three-body problem. For some applications, the direct computation of these types of trajectories in non-autonomous models more closely approximating real-world ephemerides is beneficial. Here, we apply isolating neighborhood methods to non-autonomous systems, including the elliptic restricted three-body problem (ERTBP). Specifically, simplified isolating neighborhood boundaries are computed around libration points in the ERTBP. These boundaries are used in combination with a bisection method to compute the forward asymptotic trajectories of the isolated invariant set and track orbits around a libration point.
Rodney L. Anderson, Robert W. Easton, Martin W. Lo
2023-08-13T02:48:31Z
http://arxiv.org/abs/2308.06667v1
Isolating Neighborhood Trajectory Computations in Non-Autonomous Systems Including the Elliptic Restricted Three-Body Problem ###### Abstract Isolating block and isolating neighborhood methods have previously been implemented to find transit trajectories and orbits around libration points in the autonomous circular restricted three-body problem. For some applications, the direct computation of these types of trajectories in non-autonomous models more closely approximating real-world ephemerides is beneficial. Here, we apply isolating neighborhood methods to non-autonomous systems, including the elliptic restricted three-body problem (ERTBP). Specifically, simplified isolating neighborhood boundaries are computed around libration points in the ERTBP. These boundaries are used in combination with a bisection method to compute the forward asymptotic trajectories of the isolated invariant set and track orbits around a libration point. ## Lead Paragraph The computation of transit trajectories and orbits around libration points in the autonomous circular restricted three-body problem (CRTBP) has previously been enabled with the use of isolating blocks and isolating neighborhoods. The implementation of these methods in non-autonomous systems allows for a more straightforward application of these algorithms to real-world models, although their use with non-autonomous systems presents several challenges. We address these issues by implementing new techniques in the non-autonomous elliptic restricted three-body problem. Specifically, the required boundaries are tested across a range of energies that span the potential energies that trajectories used in the computations may have, and the orbit tracking algorithm is adapted to perform corrections at any required time. The use of these methods allows the tracking of quasiperiodic orbits having the characteristics of 2-tori and 3-tori in the ERTBP. ## I Introduction Isolating block and isolating neighborhood methods have previously been used to describe the dynamics near libration points in the circular restricted three-body problem (CRTBP) [1, 2, 3, 4]. Isolating blocks have been used to compute forward asymptotic trajectories approaching the L\({}_{2}\) isolated invariant set and closely track periodic and quasiperiodic orbits within the isolated invariant set [5, 6]. Isolating neighborhoods were then used to simplify the computations and compute trajectories closely tracking periodic and quasiperiodic orbits at higher energies including the halo orbit and quasi-halo orbit families [7]. Another approach was introduced to compute desired periodic and quasiperiodic orbits within the L\({}_{2}\) Lyapunov orbit in a surface of section directly without using asymptotic approach trajectories.[8] This method also allows the computation of trajectories within the chaotic region in this surface of section.[9] While the ability to compute these asymptotic trajectories in the autonomous CRTBP is very useful, our recent analyses have shown that it is possible to extend the isolating neighborhood methods used for the CRTBP to non-autonomous systems. In this study, we extend our methods, previously developed for use in the autonomous CRTBP, to non-autonomous problems, including the elliptic restricted three-body problem (ERTBP). The isolating neighborhood boundaries developed for use in the CRTBP may be used as a starting point for computing analogous isolating neighborhood boundaries in the ERTBP. 
In the CRTBP, a chosen boundary may be checked to determine whether it is an isolating block boundary for a selected region by computing tangent trajectories at a particular Jacobi constant and integrating them forward and backward in time.[7] In the ERTBP, there is no longer a Jacobi constant, but a similar boundary may be computed around regions such as the L\({}_{2}\) point. In order to find an isolating neighborhood, we first specify a closed set \(B\) in phase space around a region of interest, in this case the region around the L\({}_{2}\) point. The closed set \(B\) is a manifold with two boundary components - a "right" and a "left" boundary. We verify the exit behavior of orbits to check that those that are tangent to the right boundary exit through the right boundary and orbits tangent to the left boundary exit left. In this case, the isolating neighborhood boundary for the ERTBP must be checked at various epochs to ensure that it remains an isolating neighborhood boundary for each epoch. While checking the tangent trajectories at each epoch, velocities must be computed for the tangent trajectories. Since the Jacobi constant does not exist in the ERTBP, a range of velocity magnitudes may be used to provide evidence that this boundary still works as an isolating boundary in the non-autonomous system. Our approach using isolating neighborhoods serves as an alternative to standard approaches that typically use some combination of expansions and differential correction to compute quasiperiodic orbits in multibody systems. In the autonomous CRTBP, methods such as Fourier approaches,[10, 11] Lindstedt-Poincare methods,[12] parameterization methods,[13, 14] normal form or semianalytical methods,[15, 16] Poincare maps,[17, 18] and stroboscopic maps in combination with collocation[19, 20, 21, 22] have been used to compute quasiperiodic orbits. Although much of the work in the non-autonomous ERTBP has focused on the computation of periodic orbits,[23, 24, 25, 26, 27, 28, 29, 30, 31, 32] some work has been done to explore quasiperiodic tori in the ERTBP and other non-autonomous systems. More specifically, Capinski and Zgliczynski[33] examined the effect of perturbations in the planar ERTBP on Lyapunov orbits for small eccentricities and showed that these orbits are perturbed to quasiperiodic invariant tori. Kumar, Anderson, and de la Llave explored resonant tori in the planar ERTBP.[34, 35] Farres and Jorba[36] explored orbits within the elliptic Hill problem, and Jorba, Jorba-Cusco, and Rosales[37, 38] and Jorba and Villanueva[39] studied the bicircular problem. More recently, Fernandez, Haro, and Mondelo studied quasiperiodic orbits in the ERTBP.[40] In this paper, the isolating neighborhood boundaries are computed for a range of epochs and velocities. The implementation of bisection methods for the computation of the asymptotic trajectories approaching the isolated invariant set in the ERTBP is also discussed. To provide insight into these trajectories, we introduce several ERTBP coordinate systems. 
We first describe an ERTBP model using the standard constantly rotating coordinate system that is used to study the CRTBP.[41] We also use a non-uniformly rotating system with an axis that remains aligned with the primaries,[42, 43, 44] and finally, the commonly used ERTBP system with a non-uniformly rotating and pulsating frame.[45] Tracking methods for following periodic and quasiperiodic orbits within the isolated invariant set are also developed and used to compute quasiperiodic orbits in the ERTBP. The resulting orbits are described and compared to results from the CRTBP. ## Models The previous analyses using isolating blocks and isolating neighborhoods have focused on computations within the CRTBP. Once the primaries are allowed to move in orbits of non-zero eccentricity, the complexity of the problem increases significantly, and the use of different ERTBP models is useful to aid in both computations and visualization of the results. Four different models are introduced and described here. The standard CRTBP is described first, followed by an ERTBP model with a constantly rotating frame, an ERTBP model with a non-uniformly rotating coordinate frame, and finally an ERTBP model with both a non-uniformly rotating and pulsating coordinate frame. ### Circular Restricted Three-Body Problem While the primary focus of this work is on computing orbits within the ERTBP, it is useful to compare the results to the orbits computed within the CRTBP. In the CRTBP, the motion of a point mass is computed in a rotating frame aligned with the primary and secondary masses which are constrained to move on circular orbits. The primary mass (the Earth for this study) is located on the rotating \(x\) axis at \(E_{em}=(-\mu,0,0)\), while the secondary mass (the Moon) is located on the \(x\) axis at \(M_{em}=(\lambda,0,0)\) where \(\lambda=1-\mu\). The dimensionless mass \(\mu=m_{2}/(m_{1}+m_{2})\) is defined where \(m_{1}\) is the mass of the primary, and \(m_{2}\) is the mass of the secondary. The equations of motion in this model are \[\begin{split}\ddot{x}&=\partial_{x}\Phi\left(x,y,z \right)+2\dot{y}\\ \ddot{y}&=\partial_{y}\Phi\left(x,y,z\right)-2\dot{x} \\ \ddot{z}&=\partial_{z}\Phi\left(x,y,z\right)\end{split} \tag{1}\] where \[\Phi\left(x,y,z\right)=\frac{1}{2}(x^{2}+y^{2})+U(x,y,z) \tag{2}\] and \[U\left(x,y,z\right)=\lambda/r_{1}\left(x,y,z\right)+\mu/r_{2}\left(x,y,z\right). \tag{3}\] Computing the partials gives \[\begin{split}\ddot{x}&=2\dot{y}+x-\frac{\left(x+ \mu\right)\,\left(1-\mu\right)}{r_{1}^{3}}-\frac{\mu\,\left(x-1+\mu\right)}{r _{2}^{3}}\\ \ddot{y}&=-2\dot{x}+y\left[1-\frac{1-\mu}{r_{1}^{3}} -\frac{\mu}{r_{2}^{3}}\right]\\ \ddot{z}&=z\left[-\frac{1-\mu}{r_{1}^{3}}-\frac{\mu }{r_{2}^{3}}\right].\end{split} \tag{4}\] The distance between the point mass and the primary is \(r_{1}\left(x,y,z\right)\), and the distance from the point mass to the secondary is \(r_{2}\left(x,y,z\right)\). This study focused on the Earth-Moon system using a mass ratio of \(\mu=1.2150584270571545\times 10^{-2}\). A constant of motion (the Jacobi constant) exists in the CRTBP, and it is defined as \(C=-2J\) where \[J=\frac{1}{2}\left\langle\dot{q},\dot{q}\right\rangle-\Phi(q) \tag{5}\] \[C=-\dot{x}^{2}-\dot{y}^{2}-\dot{z}^{2}+x^{2}+y^{2}+\frac{2(1-\mu)}{r_{1}}+\frac{2 \mu}{r_{2}}. 
\tag{6}\] #### Elliptic Restricted Three-Body Problem One of the most common formulations of the ERTBP[45] uses a non-uniformly rotating coordinate system where the \(x\) axis is aligned with the primaries, and the length unit is chosen to be the varying distance between the primary and the secondary. The foundation for deriving the equations of motion for this system is laid here by first deriving the equations of motion in both uniformly and non-uniformly rotating coordinate systems. A new derivation for the non-uniformly rotating and pulsating coordinate system is then described. This results in the same equations of motion obtained using Szebehely's[45] derivation based on the work by Scheibner,[46] Petr and Nechvile,[47] Nechvile,[48] and Rein.[49] In the following, capital letters are used for inertial coordinates. Starting with the full three-body problem and making the third mass zero gives the equations of motion for a restricted three-body problem as \[\ddot{Q}_{1}=Gm_{2}|Q_{2}-Q_{1}|^{-3}(Q_{2}-Q_{1}) \tag{7}\] \[\ddot{Q}_{2}=Gm_{1}|Q_{2}-Q_{1}|^{-3}(Q_{1}-Q_{2}) \tag{8}\] \[\ddot{Q}_{3}=Gm_{1}|Q_{1}-Q_{3}|^{-3}(Q_{1}-Q_{3})+Gm_{2}|Q_{2}-Q_{3}|^{-3}(Q_{2}-Q_{3}). \tag{9}\] In these equations, \(Q\) is position, and \(V\) is velocity. The subscripts 1, 2, and 3 correspond to the primary, the secondary, and the infinitesimal mass, respectively. G is the universal gravitational constant, and the masses \(m_{1}\) and \(m_{2}\) are defined as they are in the CRTBP. Next, set the center of mass \(m_{1}Q_{1}+m_{2}Q_{2}=0\). Now, set \(K=Gm_{1}+Gm_{2}\) and define \(\mu=Gm_{2}/K\) as in the CRTBP. Equations 7 and 8 decouple from Equation 9 and can be combined as Kepler's equation: \[\ddot{Q}^{*}=-K|Q^{*}|^{-3}Q^{*}. \tag{10}\] If \(Q^{*}(t)\) is a solution of Equation 10 then \(Q_{1}^{*}(t)=-\mu Q^{*}(t)\) and \(Q_{2}^{*}(t)=(1-\mu)Q^{*}(t)\) are solutions of Equations 7 and 8. To derive a model for the ERTBP one chooses the eccentricity \(e\) and the semi-major axis \(a\) for the ellipse \(r(\nu)=\frac{a(1-e^{2})}{1+e\cos(\nu)}\). Then one solves for initial conditions for a solution \(Q^{*}(t)\) of Equation 10 that traverses the ellipse. Set \(Q_{1}^{*}(t)=-\mu Q^{*}(t)\) and \(Q_{2}^{*}(t)=(1-\mu)Q^{*}(t)\). Then with \(Q_{3}\) replaced by \(Q\), the elliptic model is given by the equation \[\ddot{Q}=Gm_{1}|Q_{1}^{*}-Q|^{-3}(Q_{1}^{*}-Q)+Gm_{2}|Q_{2}^{*}-Q|^{-3}(Q_{2}^{*}-Q). \tag{11}\] Elliptic parameters for a solution of Kepler's Equation 10 are related to the semi-major axis \(a\), the energy \(E\), and angular momentum \(\sigma\) of the solution. Using the relations \(e=\sqrt{1+2E\sigma^{2}/K^{2}}\) and \(a(1-e^{2})=\sigma^{2}/K\) one can choose initial conditions for a solution \((Q^{*},\dot{Q}^{*})\) of Equation 10 that traverses the ellipse. We choose the initial conditions \(Q^{*}(0)=[a(1-e);0;0]\) and \(\dot{Q}^{*}(0)=[0;\sqrt{\frac{K(1+e)}{a(1-e)}};0]\). A normalized distance unit can be set so that \(a(1-e)=1\). For this unit, \(Q^{*}(0)=[1;0;0]\) and \(\dot{Q}^{*}(0)=[0;\sqrt{K(1+e)};0]\). Alternately one may set \(a=1\). The CRTBP model results from the choice \(e=0\). A hyperbolic "fly-by" model results from the choice \(e>1\). ### Uniformly Rotating ERTBP Coordinate System A formulation of the ERTBP using a coordinate frame with fixed rotation was introduced by Easton.[41] This coordinate frame allows an easier comparison with the CRTBP since it uses the same frame. Both the primary and secondary will move in position in this frame. 
We will refer to Easton for the detailed derivation, but an overview is given here. The CRTBP is derived when a circular solution \(Q^{*}(t)\) of radius \(a\) of Kepler's equation is chosen, and a constantly rotating coordinate system is introduced. This system has time, position, and velocity coordinates \((t,q,\dot{q})\) and the rotation matrix \[R(\omega t)=\left(\begin{array}{ccc}\cos(\omega t)&-\sin(\omega t)&0\\ \sin(\omega t)&\cos(\omega t)&0\\ 0&0&1\end{array}\right). \tag{12}\] The circular solution is \(Q^{*}(t)=R(\omega t)q_{0}\), with \(q_{0}=(a,0,0)\) and \(\omega=\sqrt{K/a^{3}}\). The dynamics of the ERTBP in the uniformly rotating system require the primary and secondary masses to move in position in both the \(x\) and \(y\) directions. In order to derive the equations of motion in this system, we first introduce uniformly rotating coordinates \(T_{U}:(t,q,\dot{q})\rightarrow(t,Q,\dot{Q})\). Then we set \(Q=R(\omega t)q\) and \[\dot{R}=RA\omega \tag{13}\] where \[A=\left(\begin{array}{ccc}0&1&0\\ -1&0&0\\ 0&0&0\end{array}\right). \tag{14}\] Now, \[\ddot{Q}=R[\ddot{q}+2\omega A\dot{q}+\omega^{2}A^{2}q]. \tag{15}\] The right hand side of Equation 11 is \[Gm_{1}|Q_{1}^{*}-Q|^{-3}(Q_{1}^{*}-Q)+Gm_{2}|Q_{2}^{*}-Q|^{-3}(Q_{2}^{*}-Q). \tag{16}\] The elliptic solution \(Q_{e}^{*}(t)\) of Kepler's equation that we use is a perturbation of the circular solution \(Q^{*}\) used for the CRTBP model. \[Q_{e}^{*}(t)=R(\nu(t))r(\nu(t))Q_{0}^{*}=R(\omega t)R(\nu(t)-\omega t)q_{e}^{*}(t) \tag{17}\] with \(q_{e}^{*}(t)=r(\nu(t))Q_{0}^{*}\). \[Q_{j}^{*}-Q=R(\omega t)[R(\nu(t)-\omega t)(q_{j}^{*}-q)] \tag{18}\] with \(q_{1}^{*}=-\mu q_{e}^{*}\) and \(q_{2}^{*}=(1-\mu)q_{e}^{*}\). The rotation matrix \(R(\nu(t)-\omega t)\) compensates for the difference between the true anomaly as a function of time and the constant rotation rate. The elliptic model in uniformly rotating coordinates is \[\ddot{q}+2\omega A\dot{q}+\omega^{2}A^{2}q=R(\nu(t)-\omega t)[Gm_{1}d_{1}^{-3}(q_{1}^{*}-q)+Gm_{2}d_{2}^{-3}(q_{2}^{*}-q)]. \tag{19}\] The distances are \(d_{1}=|q_{1}^{*}-q|,d_{2}=|q_{2}^{*}-q|\). #### Non-Uniformly Rotating ERTBP Coordinate System We will use the true anomaly \(\nu(t)\) of the solution \(Q^{*}(t)\) to non-uniformly rotate coordinates and to keep the primary and secondary masses on the x-axis. The true anomaly will replace time as the independent variable. We use prime to denote differentiation with respect to the true anomaly. Note that even though time is replaced with true anomaly, the phase portrait of the system is still preserved. Refer to Appendix A for a description of an alternative formulation of the ERTBP in non-uniformly rotating coordinates using time as the independent variable. The true anomaly of the solution \(Q^{*}(t)\) satisfies the equation \[\dot{\nu}=\sigma/r^{2}(\nu). \tag{20}\] First, we define a rotation matrix \[R(\nu)=\left(\begin{array}{ccc}\cos(\nu)&-\sin(\nu)&0\\ \sin(\nu)&\cos(\nu)&0\\ 0&0&1\end{array}\right). \tag{21}\] Then \(R^{\prime}(\nu)=R(\nu)A\) with \[A=\left(\begin{array}{ccc}0&1&0\\ -1&0&0\\ 0&0&0\end{array}\right). \tag{22}\] Next, introduce non-uniformly rotating coordinates \(T_{N}:(t,X,Y)\rightarrow(t,Q,V)\), and set \(Q=R(\nu(t))X.\) Then \(\dot{R}=R^{\prime}\dot{\nu}=RA\dot{\nu},\ \dot{Q}=R^{\prime}\dot{\nu}X+R\dot{X}\). The true anomaly satisfies the conditions \[\dot{\nu}=\sigma r^{-2},\nu(0)=0,\ \sigma=\sqrt{(Gm_{1}+Gm_{2})a(1-e^{2})}. 
\tag{23}\] The elliptic solution \(Q^{*}(t)\) of Kepler's equation satisfies the equation \[Q^{*}(t)=R(\nu(t))r(\nu(t))Q^{*}(0) \tag{24}\] where \[r(\nu)=\frac{a(1-e^{2})}{\gamma(\nu)},\ \gamma(\nu)=1+e\cos(\nu). \tag{25}\] In the non-uniformly rotating system the varying positions of the primary and secondary masses on the \(x\)-axis are \(X_{1}^{*}(\nu(t))=-\mu r(\nu(t))Q^{*}(0)\) and \(X_{2}^{*}(\nu(t))=(1-\mu)r(\nu(t))Q^{*}(0).\) The equations for the ERTBP in non-uniformly rotating coordinates \((\nu,X,Y)\) are derived as follows. \[\ddot{Q}=R[\ddot{X}+2A\dot{\nu}\dot{X}+A\ddot{\nu}X+A^{2}\dot{\nu}^{2}X] \tag{26}\] with \(\dot{X}=X^{\prime}\dot{\nu}\) and \(\ddot{X}=X^{\prime\prime}\dot{\nu}^{2}+X^{\prime}\ddot{\nu}\). This gives the result \[\ddot{Q}=R\dot{\nu}^{2}[X^{\prime\prime}+2AX^{\prime}+A^{2}X]+R\ddot{\nu}[AX+X^{\prime}]. \tag{27}\] Using the definition of \(\nu\) we see that \(\ddot{\nu}=(2\gamma^{\prime}/\gamma)\dot{\nu}^{2}\) and \(\dot{\nu}^{2}=\sigma^{2}r^{-4}\). Equation 26 is now \[\ddot{Q}=R\sigma^{2}r^{-4}[X^{\prime\prime}+2AX^{\prime}+A^{2}X+(2\gamma^{\prime}/\gamma)(AX+X^{\prime})] \tag{28}\] Express Equation 16 in terms of \((\nu,X)\): \[F(\nu,X)=Gm_{1}\frac{RX_{1}^{*}(\nu)-RX}{r_{1}^{3}}+Gm_{2}\frac{RX_{2}^{*}(\nu)-RX}{r_{2}^{3}}. \tag{29}\] We have \(r_{1}=|X_{1}^{*}-X|,r_{2}=|X_{2}^{*}-X|\), \(r\sigma^{-2}Gm_{1}=\gamma^{-1}(1-\mu)\), \(r\sigma^{-2}Gm_{2}=\gamma^{-1}\mu\). These expressions are used to give the result that \[F(\nu,X)=R\sigma^{2}r^{-4}\left[r^{3}\gamma^{-1}(1-\mu)\frac{X_{1}^{*}(\nu)-X}{r_{1}^{3}}+r^{3}\gamma^{-1}\mu\frac{X_{2}^{*}(\nu)-X}{r_{2}^{3}}\right]. \tag{30}\] The final result is the equation \[X^{\prime\prime}+2AX^{\prime}+A^{2}X+(2\gamma^{\prime}\gamma^{-1})(AX+X^{\prime})=r^{3}\gamma^{-1}(1-\mu)\frac{X_{1}^{*}(\nu)-X}{r_{1}^{3}}+r^{3}\gamma^{-1}\mu\frac{X_{2}^{*}(\nu)-X}{r_{2}^{3}} \tag{31}\] with \(X_{1}^{*}(\nu)=-\mu r(\nu)Q_{0}^{*}(0),\ X_{2}^{*}(\nu)=(1-\mu)r(\nu)Q_{0}^{*}(0)\). For the circular problem the term \(\gamma^{-1}\gamma^{\prime}\) is zero, and \(r=\gamma=1\). The true anomaly and the time variables are equal in this case. #### Non-Uniformly Rotating and Pulsating ERTBP Coordinate System The derivation of the equations of motion in non-uniformly rotating, pulsating coordinates is new and uses the results for the non-uniformly rotating coordinate system. The formulas that we use in the derivation are these: \(\dot{\nu}=\sigma r^{-2}\), \(\ddot{\nu}=2\phi\dot{\nu}^{2}\), \(\gamma=1+e\cos{(\nu)}\), \(\phi=\gamma^{-1}\gamma^{\prime}\), \(\phi^{\prime}=-\phi^{2}+\gamma^{-1}\gamma^{\prime\prime}\), \(r=p\gamma^{-1}\), \(r^{\prime}=-r\phi\), \(r^{\prime\prime}=r(\phi^{2}-\phi^{\prime})\), \(R^{\prime}=RA\), \(\dot{R}=R^{\prime}\dot{\nu}\), \(\ddot{R}=RA^{2}\dot{\nu}^{2}+RA\ddot{\nu}\). Using the formulas we have \(X=ru\), \(X^{\prime}=r(u^{\prime}-\phi u)\), \(X^{\prime\prime}=-r\phi(u^{\prime}-\phi u)+r(u^{\prime\prime}-\phi^{\prime}u-\phi u^{\prime})\). Expanding, \(X^{\prime\prime}=r[-\phi u^{\prime}+\phi^{2}u+u^{\prime\prime}+\phi^{2}u-\gamma^{-1}\gamma^{\prime\prime}u-\phi u^{\prime}]\). After collecting terms we have \[X^{\prime\prime}=r[u^{\prime\prime}+2\phi^{2}u-2\phi u^{\prime}-\gamma^{-1}\gamma^{\prime\prime}u]. \tag{32}\] A short but key calculation shows that \([1-\gamma^{-1}]=-\gamma^{-1}\gamma^{\prime\prime}\). 
Replacing each of the terms on the lefthand side of Equation 31 and simplifying, the result is \[[X^{\prime\prime}+2AX^{\prime}+A^{2}X+(2\gamma^{\prime}\gamma^{-1})(AX+X^{\prime})]=r[u^{\prime\prime}+2Au^{\prime}+A^{2}u+(1-\gamma^{-1})u]. \tag{33}\] Express the righthand side \(F(\nu,X)\) of equation 31 in terms of \((\nu,u)\) as \[F(\nu,X)=r^{3}\gamma^{-1}(1-\mu)\frac{X_{1}^{*}(\nu)-X}{r_{1}^{3}}+r^{3}\gamma^{-1}\mu\frac{X_{2}^{*}(\nu)-X}{r_{2}^{3}} \tag{34}\] \[F(\nu,u)=r^{4}\gamma^{-1}(1-\mu)\frac{u_{1}^{*}-u}{r_{1}^{3}}+r^{4}\gamma^{-1}\mu\frac{u_{2}^{*}-u}{r_{2}^{3}} \tag{35}\] with \(u_{1}^{*}=-\mu Q^{*}(0),\ u_{2}^{*}=(1-\mu)Q^{*}(0)\). Note that \(r_{1}=rd_{1}=r|u_{1}^{*}-u|\) and similarly \(r_{2}=rd_{2}=r|u_{2}^{*}-u|\). Then \[F(\nu,u)=r[\gamma^{-1}(1-\mu)\frac{u_{1}^{*}-u}{d_{1}^{3}}+\gamma^{-1}\mu\frac{u_{2}^{*}-u}{d_{2}^{3}}]. \tag{36}\] The final result is \[u^{\prime\prime}+2Au^{\prime}+A^{2}u+u=(1+e\cos(\nu))^{-1}[u+(1-\mu)|u_{1}^{*}-u|^{-3}(u_{1}^{*}-u)+\mu|u_{2}^{*}-u|^{-3}(u_{2}^{*}-u)]. \tag{37}\] The result is formulated in vector-matrix notation as follows: \[\begin{bmatrix}x^{\prime\prime}\\ y^{\prime\prime}\\ z^{\prime\prime}\end{bmatrix}+2\begin{bmatrix}0&-1&0\\ 1&0&0\\ 0&0&0\end{bmatrix}\begin{bmatrix}x^{\prime}\\ y^{\prime}\\ z^{\prime}\end{bmatrix}+\left(\begin{bmatrix}-1&0&0\\ 0&-1&0\\ 0&0&0\end{bmatrix}\begin{bmatrix}x\\ y\\ z\end{bmatrix}+\begin{bmatrix}x\\ y\\ z\end{bmatrix}\right)=\\ \frac{1}{1+e\cos(\nu)}\left[\begin{bmatrix}x\\ y\\ z\end{bmatrix}+\frac{1-\mu}{r_{1}^{3}}\left(\begin{bmatrix}-\mu\\ 0\\ 0\end{bmatrix}-\begin{bmatrix}x\\ y\\ z\end{bmatrix}\right)-\frac{\mu}{r_{2}^{3}}\left(\begin{bmatrix}x\\ y\\ z\end{bmatrix}-\begin{bmatrix}1-\mu\\ 0\\ 0\end{bmatrix}\right)\right]. \tag{38}\] The equations of motion for the system may then be written in the usual form as \[x^{\prime\prime}=2y^{\prime}+\frac{1}{1+e\cos(\nu)}\left[x-\frac{1-\mu}{r_{1}^{3}}(x+\mu)-\frac{\mu}{r_{2}^{3}}(x-1+\mu)\right] \tag{39}\] \[y^{\prime\prime}=-2x^{\prime}+\frac{y}{1+e\cos(\nu)}\left[1-\frac{1-\mu}{r_{1}^{3}}-\frac{\mu}{r_{2}^{3}}\right] \tag{40}\] \[z^{\prime\prime}=-z+\frac{z}{1+e\cos(\nu)}\left[1-\frac{1-\mu}{r_{1}^{3}}-\frac{\mu}{r_{2}^{3}}\right]. \tag{41}\] The primary and secondary are fixed in this coordinate frame at \((-\mu,0,0)\) and \((1-\mu,0,0)\), respectively. ## Coordinate Systems Summary It is often convenient to transform states and trajectories between the different ERTBP coordinate systems that we have defined so far. We will name the various coordinates as

Inertial Coordinates: Variables \((t,Q,\dot{Q})\in R^{1}\times R^{3}\times R^{3}\)

Constant Rotating Coordinates: Variables \((t,X,\dot{X})\in R^{1}\times R^{3}\times R^{3}\)

Variable Rotating Coordinates: Variables \((\nu,X,X^{\prime})\in R^{1}\times R^{3}\times R^{3}\)

Rotating Pulsating Coordinates: Variables \((\nu,u,u^{\prime})\in R^{1}\times R^{3}\times R^{3}\)

Now, we summarize the transformations between the different coordinate systems. 1. Constant Rotating to Inertial: \(F:(t,X,\dot{X})\rightarrow(t,Q,\dot{Q})\) \[Q=R(\omega t)X,\ \dot{Q}=\dot{R}(\omega t)X+R(\omega t)\dot{X} \tag{42}\] \[R(t)=\begin{bmatrix}\cos(\omega t)&\sin(\omega t)&0\\ -\sin(\omega t)&\cos(\omega t)&0\\ 0&0&1\end{bmatrix} \tag{43}\] 2. 
Variable Rotating to Constant Rotating: \(F^{*}:(\nu,X,X^{\prime})\rightarrow(t,X,X^{\prime})\) Time is expressed as a function of the true anomaly by solving the differential equation \[\frac{dt}{d\nu}=r^{2}(\nu)/\sigma \tag{44}\] with initial condition \(t(0)=0\), \(\sigma=\sqrt{(Gm_{1}+Gm_{2})a(1-e^{2})}\), and \(r(\nu)=\frac{a(1-e^{2})}{1+e\cos(\nu)}\). 3. Pulsating to Variable Rotating: \(G:(\nu,u,u^{\prime})\rightarrow(\nu,X,X^{\prime})\) \[X=u/r=\frac{1+e\cos\nu}{1-e^{2}}u \tag{45}\] \[X^{\prime}=u^{\prime}/r-(u/r^{2})r^{\prime} \tag{46}\] Finally, it is often convenient to convert between time and true anomaly for different formulations of the ERTBP. These conversions may be performed using Equation 44 or standard procedures which are given in Appendix B for convenience. ## 3 Illustrative example using event functions, polyhedra, and Bisection The theory of normal forms for Hamiltonian systems spans more than a century with contributions from many great mathematicians. The analysis is beautiful, complex, and difficult. It requires years of research at the highest level to understand this material. A recent publication by Jorba et al. applies state of the art normal form techniques to calculate quasi-periodic solutions of the non-autonomous bicircular problem [37]. While these methods yield powerful results, the methods introduced here provide an alternative that we will use to obtain similar results that apply to spacecraft mission design. These methods also lay the foundation for analyses in more complicated non-autonomous systems such as the full ephemeris model. This model may be thought of as a small perturbation of the ERTBP in systems such as the Earth-Moon, Jupiter-Europa, or Saturn-Enceladus systems among others. To investigate the ERTBP in the vicinity of the collinear Lagrange points we use the topological concept of connectedness in combination with numerical solutions of equations of motion and event functions. In this work computer programs were used to compute exit times from regions of interest and to locate solutions that do not exit. A theoretical foundation motivating the method is based on the study of isolating blocks and the Conley index [50]. To illustrate our approach, we use a bisection method to find solutions of a non-autonomous system of differential equations that start in a specified region of phase space at a specified time and do not exit within a specified long time interval. The type of equation we use is a time-dependent vector field on \(R^{n}\): \[\dot{x}=F(t,x),x(t_{0})=x_{0}. \tag{47}\] An event function \(E_{k}\) is defined by using a base point \(b_{k}\) and a unit normal vector \(u_{k}\). \[E_{k}(x;b_{k},u_{k})=(x-b_{k})^{T}u_{k} \tag{48}\] The half space defined using the event function \(E_{k}\) is the space \(H_{k}=\{x:E_{k}(x;b_{k},u_{k})\geq 0\}\). A polyhedron \(B\) can be viewed as the intersection of half spaces. A boundary face of \(B\) is the set \(\partial_{k}B=B\cap\{E_{k}=0\}\). A convenient event function that signals the exit of a solution from the polyhedron \(B\) is the minimum of the event functions defining the polyhedron. Our problem is to find an initial condition \((t_{0},x_{0})\) in \(B\) so that the solution to the initial value problem does not exit within the time interval \([t_{0},t_{1}]\). The behavior of the vector field in Equation 47 on the boundary of \(B\) may be sufficient to guarantee a solution. 
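The sketch below (ours, in Python with SciPy; the function names and the choice of a placeholder vector field are assumptions) wires the half-space event functions of Equation 48 into a numerical integrator and performs the bisection just described between a left-exiting and a right-exiting initial condition. The right-hand side `F` is left as a parameter, so the example of Equation 49 introduced next can be plugged in directly; only the right and left faces are monitored here, since for that example orbits can only exit through those faces.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_event(b, u):
    """Event E_k(x) = (x - b_k)^T u_k that fires when a face E_k = 0 is hit."""
    def event(t, x):
        return np.dot(x - b, u)
    event.terminal = True
    return event

# Right and left faces of the square B = [-1,1] x [-1,1]; the normals point
# into B so E_k > 0 inside and the event fires on exit.
events = {'R': make_event(np.array([1.0, 0.0]), np.array([-1.0, 0.0])),
          'L': make_event(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))}

def exit_side(F, t0, x0, t_max=50.0):
    """Integrate from (t0, x0); return 'R', 'L', or None if no exit occurs."""
    sol = solve_ivp(F, (t0, t0 + t_max), x0,
                    events=list(events.values()), rtol=1e-10, atol=1e-12)
    for side, t_ev in zip(events, sol.t_events):
        if t_ev.size:
            return side
    return None

def bisect(F, t0, xL, xR, iters=60):
    """xL exits left and xR exits right; bisect the segment between them."""
    for _ in range(iters):
        xm = 0.5 * (xL + xR)
        side = exit_side(F, t0, xm)
        if side == 'L':
            xL = xm
        elif side == 'R':
            xR = xm
        else:           # no exit within t_max: already a candidate solution
            return xm
    return 0.5 * (xL + xR)
```

For the example that follows, `F` would be the right-hand side of Equation 49, and the segment endpoints \((-1,0.8)\) and \((1,0.8)\) reproduce the setup used to generate Figure 2.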
The specific differential equation that we use as an example is given by \[\begin{bmatrix}x_{1}^{\prime}\\ x_{2}^{\prime}\end{bmatrix}=\begin{bmatrix}x_{1}\\ -x_{2}\end{bmatrix}+\epsilon\begin{bmatrix}\cos(t)x_{1}+\sin(t)x_{2}\\ -\sin(t)x_{1}+\cos(t)x_{2}\end{bmatrix}. \tag{49}\] The problem is to find orbits that remain in the square \(B=[-1,1]\times[-1,1]\) for a long time. The square is defined as the intersection of four half-spaces given by four event functions. The event functions are \(E_{R}(x)=1-x_{1}\), \(E_{L}(x)=1+x_{1}\), \(E_{U}(x)=1-x_{2}\), \(E_{D}(x)=1+x_{2}\). When one of these functions is zero this indicates a right, left, up, or down exit. The vector field in Equation 49 for \(|\epsilon|<1/\sqrt{2}\) is never tangent to the boundary of \(B\) and points outward on the right and left boundaries and inward at the top and bottom. This can be verified from the definition of the vector field. The square \(B\) is an isolating block with corners. In later, much more complicated examples the exit behaviors of solutions tangent to boundaries will be examined numerically. For this example the exit behavior of solutions to the vector field with \(\epsilon=0.6\) is illustrated for several initial start times between 0 and \(2\pi\) as shown in Figure 1.

Figure 1: Vector fields for Equation 49 computed for different times.

An orbit starting in \(B\) and exiting must exit either through the right or left boundary faces. One can show that the set of initial conditions whose orbits exit right form an open set and so do the initial conditions whose orbits exit left. Given an initial time and a line segment in \(B\) with one endpoint exiting right and the other exiting left, a topological argument shows that there is a point \((x_{1}^{*},x_{2}^{*})\) on that segment such that the solution of Equation 49 remains in \(B\) forward in time. The topological reason is that the line segment is a connected set, and it cannot be the union of disjoint non-empty open sets. An approximation to the point \((x_{1}^{*},x_{2}^{*})\) can be found by using a bisection method together with event functions and a numerical method for solving the differential equation. It is worth noting that the bisection method will work for all small perturbations of the basic linear equation in this example. Normal form and parameterization methods generally require more work to apply than the bisection approach. This problem was solved numerically, and the results for several cases are described next. As a first step, a line segment must be selected with end points that exit left and right. A simple case is chosen with \(x_{2}=0.8\), and \(-1\leq x_{1}\leq 1\). A quick check shows that the point \((-1,0.8)\) exits left, and \((1,0.8)\) exits right. The next step is to select an initial time for the non-autonomous system, and then a bisection method may be performed to find the value of \(x_{1}\) that remains in the unit square as long as possible. (Note that this calculation is practically limited by the numerical precision of the computer.) The results for initial times \(t_{0}=0\) and \(t_{0}=\pi/2\) are shown in Figure 2. An initial guess at \((0,0.8)\) is shown in green in each plot, and the converged final solution from the bisection method 
The converged solutions do stay within the unit square much longer than the initial guess from the bisection method though. ## Isolating block/Neighborhood boundaries in the CRTBP and CRTBP In previous papers, we have explored the use of both isolating block and isolating neighborhood boundaries in the CRTBP for a variety of systems [51, 6, 7, 8, 9]. Isolating block boundaries require that tangent trajectories immediately leave, or "bounce off" the boundaries. Isolating neighborhood boundaries are less stringent in the requirement for the test on these tangent trajectories. In this case, given "left" and "right" exit boundaries obtained in combination with the Hill's region, the requirement is that tangent trajectories on the left boundary exit left, and tangent trajectories on the right boundary exit right. We focus on the use of isolating neighborhood boundaries here. Cylinders were used as isolating neighborhood boundaries in Anderson, Easton, and Lo [8], and a similar approach is used here. As a first step, isolating neighborhood boundaries are defined and checked for the CRTBP. In this case, the inner boundary \(\partial_{L}N\) is defined by \(r_{L}^{2}=x^{2}+y^{2}\) where \(r_{L}=0.993\), and the outer boundary \(\partial_{R}N\) is defined by \(r_{R}^{2}=\left(x-1+\mu\right)^{2}+y^{2}\) where \(r_{R}=0.35\). In each case, the validity of the boundaries is verified for the planar problem by integrating tangent trajectories both forward and backward until they exit the isolating neighborhood. If tangent trajectories on the grid on the right side exit right and tangent trajectories on the left side exit left, this numerically verifies that the boundaries we have selected form an isolating neighborhood. Of course, this depends on the chosen grid, but by choosing a very fine grid, we expect that this procedure provides a strong argument that we have found valid boundaries. Some of the results from the integration of the tangent trajectories are illustrated in Figure 3. Here, the trajectories are integrated from the boundary until they exit a region a little larger than the boundary for easier visualization. It can be seen that the outer tangent trajectories exit the region quickly, while those Figure 2: Converged solutions for \(x_{2}=0.8\) at \(t_{0}=0\) and \(t_{0}=\pi/2\). on the inner boundary wander into the isolating neighborhood before exiting left. In each case, it is verified that all tangent trajectories exit on the appropriate side. For the autonomous CRTBP, this procedure is sufficient to check the boundaries, but for the non-autonomous ERTBP, additional computations are required. In the non-autonomous case, different tangent trajectories will be computed based on which time, or epoch, is chosen. The Jacobi constant is also no longer available to simplify the computations. Given these differences, the boundaries must be checked using tangent trajectories at a range of time spans, and a range of energies (or velocities) must also be checked. Fortunately, for the ERTBP, the system has a dimensionless period of \(2\pi\), so it is only necessary to check the boundaries on the interval from 0 to \(2\pi\) before the configuration repeats. It is more difficult to determine the range of velocities or energies to check, but an instantaneous Jacobi constant may be used as a guide. This instantaneous Jacobi constant, and the equivalent Hill's region, is computed from the instantaneous state in the pulsating ERTBP frame when it is transferred to the CRTBP frame. 
We can define the tangency computations more precisely using the outer boundary as an example in the planar problem. The neighborhood boundary \(\partial_{R}N\) can be viewed in polar coordinates as \((r,\theta,s,\phi,\nu)\) where \(r\) is the radius of the cylinder, \(\theta\) gives the angular location on the cylinder, \(s\) is the speed, \(\phi\) gives the angular direction of the velocity, and \(\nu\) is the true anomaly. Here, the right exit boundary is again defined by the condition \(r=0.35\). The tangency condition is \(\phi=\theta\pm\pi/2\). A point \((0.35,\ \theta_{0},\ s,\ \phi_{0},\ \nu)\) in the tangency set has two free variables: speed and true anomaly. For the CRTBP the speed is determined by the Jacobi constant, and it may be calculated as \[s^{2}=2\Phi(r,\theta)-2C. \tag{50}\] For the ERTBP, the Jacobi constant is no longer conserved but varies slowly, and a Jacobi function may be defined. The speed of a solution at an exit point will be close to the speed of a solution to the CRTBP at the same exit position. The Jacobi function and the associated Hill's zero velocity curve restrict the spacecraft positions in the rotating coordinate system for a fixed Jacobi constant. If an interval \([C_{0},C_{1}]\) of Jacobi constants is used, then the smallest constant (or highest energy) determines a barrier to the positions of a solution starting from an initial condition with \(J=(C_{0}+C_{1})/2\), and the speed satisfies the constraint \[2\Phi(r,\theta)-2C_{0}\leq s^{2}\leq 2\Phi(r,\theta)-2C_{1}. \tag{51}\] The angle \(\theta\) is also constrained by the inequality \(0\leq\Phi(x,y)-C_{1}\). Here, a Jacobi function in polar coordinates may be defined as \[J(t,r,\theta,s,\phi)=\frac{1}{2}s^{2}-\Phi(r,\theta). \tag{52}\] This function can provide some insight into how the Jacobi constant varies, but we have found that it is generally sufficient to use the instantaneous Jacobi constant for our numerical computations.

An additional complication is the choice of boundaries to use relative to the CRTBP boundaries in each ERTBP model. In some of the models, the positions of the primary and secondary can vary quite noticeably, making the choice of boundary locations problematic. From this perspective, the non-uniformly rotating, pulsating ERTBP model provides the easiest gateway into studying this problem. In this model, the primary and secondary remain fixed, and we may start our analysis by using fixed boundaries similar to the approach we used in the CRTBP. An initial test using the CRTBP boundaries in the non-uniformly rotating, pulsating ERTBP revealed that the inner boundary failed the isolating neighborhood test when some of the tangent trajectories on this boundary exited right. By moving the inner boundary out to a radius of 1.02 centered on the barycenter, this problem was resolved for a range of energies or velocities. In this case, the boundaries for the ERTBP may now be defined as follows: the inner boundary \(\partial_{L}N\) is defined by \(r_{L}^{2}=x^{2}+y^{2}\) where \(r_{L}=1.02\), and the outer boundary \(\partial_{R}N\) is defined as before by \(r_{R}^{2}=(x-1+\mu)^{2}+y^{2}\) where \(r_{R}=0.35\).
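The bookkeeping of Equations (50)-(52) reduces to a few lines. The helper below is a sketch in which the effective potential \(\Phi\) is supplied by the caller in the paper's convention (its explicit form is not reproduced here, and the function names are ours):

```python
# Speed window implied by Equations (50)-(51) and the Jacobi function (52);
# Phi is a user-supplied callable Phi(r, theta).
import numpy as np

def speed_window(Phi, r, theta, C0, C1):
    """Admissible speeds s with 2*Phi - 2*Cmax <= s^2 <= 2*Phi - 2*Cmin."""
    hi2 = 2.0 * Phi(r, theta) - 2.0 * min(C0, C1)   # larger bound on s^2
    lo2 = 2.0 * Phi(r, theta) - 2.0 * max(C0, C1)   # smaller bound on s^2
    if hi2 <= 0.0:
        return None   # the point violates 0 <= Phi - C: behind the Hill barrier
    return np.sqrt(max(lo2, 0.0)), np.sqrt(hi2)

def jacobi_function(Phi, r, theta, s):
    """Instantaneous Jacobi function of Equation (52): J = s^2/2 - Phi."""
    return 0.5 * s**2 - Phi(r, theta)
```

The `None` branch implements the angular constraint \(0\leq\Phi-C_{1}\): boundary angles behind the instantaneous zero-velocity curve admit no tangent velocity and are skipped in the grid checks.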
The range of required energies is defined iteratively using the instantaneous CRTBP Jacobi constant as a convenient way to parameterize energy, keeping in mind that the Jacobi constant does not exist in the ERTBP. Subsequent calculations of trajectories tracking orbits around L\({}_{2}\) are made, and the energy, or Jacobi constant, of trajectories intersecting the isolating neighborhood boundaries is computed. It was determined from this process that a range of Jacobi constants from 3.10 to 3.17 will be sufficient for testing the isolating neighborhood boundaries.

In order to verify that the boundaries we have selected act as isolating neighborhood boundaries under all conditions required for the orbit tracking procedure, the boundaries are tested for a range of initial dimensionless times from 0 to \(2\pi\) and a range of energies, parameterized using the instantaneous CRTBP Jacobi constant, from C = 3.10 to 3.17. In each case, it is checked that the tangent trajectories on each boundary exit on that same boundary. The most stringent test is at the higher energy, corresponding to C = 3.10, and the results for this case using four different initial true anomalies are shown in Figure 4. Note that the instantaneous Hill's regions are provided for reference only since they do not exist in the ERTBP, and as expected, some ERTBP trajectories cross over into these regions. It can be seen from this figure that some of the trajectories on the left boundary occasionally travel into the interior before they exit left, but all tested trajectories meet the criteria. At lower energies, the trajectories generally exit from the isolating neighborhood more quickly. This result can be seen in Figure 5, where the tangent trajectories are shown for several energies for the \(\nu_{i}=0\) case.

Figure 4: **Test showing the tangent trajectories exiting the isolating neighborhood in the ERTBP. (The Hill’s regions for the CRTBP are plotted here for comparison only.)**

Figure 5: Test showing the tangent trajectories exiting the isolating neighborhood in the ERTBP for a range of energies with \(\nu_{i}=0\). (The Hill’s regions for the CRTBP are plotted here for comparison only.)

## Orbit Tracking Computations in the Planar ERTBP

In our previous work, we used a bisection method to compute forward asymptotic trajectories of the L\({}_{2}\) isolated invariant set in the CRTBP and then closely track periodic and quasiperiodic orbits [7]. In the planar CRTBP, this produces a periodic orbit (the planar Lyapunov orbit), but in the ERTBP, we expect to compute quasiperiodic orbits using this method. As a first step, we compare with the CRTBP by selecting a base point in the CRTBP at a chosen \((x,y)\) in the computed isolating neighborhood at C = 3.14. Various methods may be used as a basis for the bisection algorithm, but for this case, we compute a circle of velocities at the base point and find where they exit from the isolating neighborhood. The boundary between the sets of left and right exit trajectories in Figure 6 may be computed using bisection for the base point at (1.16, 0.0); a sketch of this step is given below. Two possible trajectories may be obtained which, in this case, lie on the forward asymptotic set of the L\({}_{2}\) isolated invariant set. Choosing one of these trajectories, and then computing corrections using the orbit tracking algorithm at each \(y=0\) crossing [51], produces the trajectory in Figure 6(b), which closely tracks the L\({}_{2}\) Lyapunov orbit.

Figure 6: CRTBP forward asymptotic trajectory and quasiperiodic orbit computed for a base point at (1.16, 0) with C = 3.14.
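The velocity-circle step can be expressed with the same machinery as the tangency sketch earlier in this paper. The listing below reuses the `Omega`, `rhs`, and `exit_events` stand-ins defined there (so the same caveats apply: illustrative mass ratio, assumed tolerances), and the bracketing angles are assumed to come from a coarse scan of the circle first:

```python
# Circle-of-velocities bisection at a base point; a sketch reusing the
# CRTBP stand-ins (Omega, rhs, exit_events) from the tangency listing above.
import numpy as np
from scipy.integrate import solve_ivp

def exit_side(x0, y0, phi, C, t_max=300.0):
    """Return 'R', 'L', or None for the velocity direction phi at (x0, y0)."""
    v = np.sqrt(2 * Omega(x0, y0) - C)   # speed fixed by the Jacobi constant
    s0 = [x0, y0, v * np.cos(phi), v * np.sin(phi)]
    sol = solve_ivp(rhs, (0.0, t_max), s0, events=exit_events(),
                    rtol=1e-10, atol=1e-10)
    if sol.t_events[0].size:
        return 'R'
    if sol.t_events[1].size:
        return 'L'
    return None                          # still inside N after t_max

def asymptotic_angle(x0, y0, C, phi_L, phi_R, iters=60):
    """Bisect between a left-exiting angle phi_L and a right-exiting phi_R."""
    for _ in range(iters):
        mid = 0.5 * (phi_L + phi_R)
        if exit_side(x0, y0, mid, C) == 'L':
            phi_L = mid
        else:                            # right exit, or no exit yet
            phi_R = mid
    return 0.5 * (phi_L + phi_R)

# e.g. phi_star = asymptotic_angle(1.16, 0.0, 3.14, phi_L0, phi_R0)
```

Velocities near `phi_star` generate the nearly forward-asymptotic trajectories from which the tracked orbit in Figure 6(b) is built.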
Now a similar procedure may be performed in the non-uniformly rotating, pulsating ERTBP model. For this case, the same base point used in the CRTBP is chosen, although it is now in a different frame. The same velocities computed in the CRTBP for C = 3.14 are also used at this point in the ERTBP, and a similar procedure is performed to find both the left and right exit sets along with the boundary between them. Next, one of the forward asymptotic trajectories on this boundary is selected, and an orbit tracking procedure is performed at the same \(y=0\) crossings as in the CRTBP. A slight modification is required for the ERTBP, however. In the CRTBP, the Jacobi constant is used to compute the velocity at each correction, ensuring that the Jacobi constant varies very little from the initial Jacobi constant. Since there is no Jacobi constant in the ERTBP, the speed at which the trajectory intersects \(\Sigma_{y=0}\) is used to compute the new corrected velocity using bisection. The results from this procedure are shown in Figure 7. Note that the initial approach is shown in Figure 7(a), and only the trajectories closely tracking the quasiperiodic orbit are plotted in Figure 7(b).

Figure 7: ERTBP forward asymptotic trajectory and quasiperiodic orbit computed for a base point at (1.16, 0) with C = 3.14.

In this case, the computed trajectory in Figure 7(b) approaches a quasiperiodic orbit, or a 2-torus, rather than the periodic Lyapunov orbit obtained in the CRTBP. These results are in line with what would be expected in the planar ERTBP from Capinski and Zgliczynski [33], and they are similar to results found in the bicircular problem [37, 38, 52]. It is worth mentioning that some of these results require that the perturbation be sufficiently small, but our method does not specifically require this. The characteristics of the quasiperiodic orbit may be seen more clearly by plotting a Poincare section using \(\Sigma_{y=0}\) as shown in Figure 8; a sketch of this bookkeeping is given at the end of this section.

Figure 8: Intersections of the quasiperiodic orbit in Figure 7(b) with the surface of section \(\Sigma_{y=0}\).

Now that we have implemented the algorithm to track the quasiperiodic orbit, we next check the trajectories used in the computation to ensure that they do not go outside the range of validity of the isolating neighborhood boundaries that we have already tested. To test this, we take all of the trajectories used in the bisection portion of the orbit tracking algorithm and compute their energies, parameterized by the Jacobi constant, as they intersect the boundaries. Performing this computation gives \(C_{min}\approx 3.1077\) and \(C_{max}\approx 3.1675\), which lies inside the range of energies for which the boundaries were checked earlier.

One additional factor to include in the computation of orbits in the ERTBP is the initial epoch of the trajectory. The orbit in Figure 7 was computed for an initial epoch with \(\nu_{i}=0\). If a different initial epoch is chosen, a different quasiperiodic orbit will be computed. Orbits for \(\nu_{i}=\pi/2,\pi\) are shown in Figure 9. In each case, the orbit computed at \(\nu_{i}=0\) is plotted in gray in the background for reference. The largest apparent difference is found for \(\nu_{i}=\pi\).

Figure 9: **Quasiperiodic orbit for base point at (1.16, 0) and C = 3.14 computed for different initial \(\nu_{i}\) values. The gray orbit in the background was computed at \(\nu_{i}=0\), and it is provided for reference.**
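Collecting the \(\Sigma_{y=0}\) intersections shown in Figure 8 is a one-event computation. The sketch below assumes the state is ordered \((x,y,\dots)\) and takes the right-hand side `rhs` as an argument, since the ERTBP equations of motion appear earlier in the paper and are not repeated here:

```python
# Surface-of-section bookkeeping for Sigma_{y=0}; a sketch.
import numpy as np
from scipy.integrate import solve_ivp

def section_points(rhs, s0, t_span, direction=+1):
    """Record crossings of y = 0 (with y' of the chosen sign) along an orbit."""
    def sigma(t, s):
        return s[1]                 # y = 0 defines the surface of section
    sigma.direction = direction     # keep only crossings in one direction
    sol = solve_ivp(rhs, t_span, s0, events=sigma, rtol=1e-12, atol=1e-12)
    return np.asarray(sol.y_events[0])   # rows are states on Sigma_{y=0}

# Example usage: pts = section_points(ertbp_rhs, s0, (nu0, nu0 + 200 * np.pi)),
# where `ertbp_rhs` is the user's implementation of the equations of motion.
```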
## Orbit Tracking Computations in the Spatial ERTBP

The methods implemented for the planar ERTBP may be extended to the spatial ERTBP by first using cylinders rather than circles for the isolating neighborhood boundaries. The boundaries may be checked using methods very similar to those implemented in Anderson, Easton, and Lo.[7] In brief, the checks first require the computation of a grid of points on the outer and inner cylinders. Tangent trajectories are integrated forward and backward at each of the grid points to ensure that the tangent trajectories exit on the side they started on. For the ERTBP, these conditions were checked for a range of initial times between 0 and \(2\pi\) and for \(3.10\leq C\leq 3.17\), where \(C\) is once again used here as a convenient way to parameterize the velocities. The outer and inner cylinders used for \(C=3.10\) are shown relative to the Hill's region computed from the instantaneous Jacobi constant in the CRTBP in Figure 10. In this case, it is important to verify that the cylinder intersects the Hill's region at the top and the bottom.

Figure 10: Schematics showing the inner and outer isolating neighborhood cylinder boundaries relative to the Hill’s region for \(C=3.10\). Note that the Hill’s region is computed in the CRTBP and is shown for convenience to visualize the location of the isolating neighborhood boundaries.

For the spatial ERTBP, the seven-dimensional phase space with coordinates \((\nu,x,y,z,x^{\prime},y^{\prime},z^{\prime})\) and the differential Equations 39 through 41 are used for the numerical computations. A Jacobi manifold \(M(C)\) is the set of all points in phase space satisfying Equation 6. The Jacobi manifolds form a layer in phase space: \[\Lambda=\bigcup\{M(C):3.10\leq C\leq 3.17\}. \tag{53}\] The neighborhood \(N\) in phase space in which we search for quasiperiodic solutions is defined by three conditions: \[r_{L}^{2}=x^{2}+y^{2}\geq 1.02^{2}, \tag{54}\] \[r_{R}^{2}=(x-1+\mu)^{2}+y^{2}\leq 0.35^{2}, \tag{55}\] \[(\nu,x,y,z,x^{\prime},y^{\prime},z^{\prime})\in\Lambda. \tag{56}\] As will be described next, trial solutions are generated by choosing initial conditions inside \(N\) which belong to the Jacobi manifold \(M(3.14)\). In our numerical experiments, we observe that these solutions do not exit the layer \(\Lambda\) before they exit the neighborhood \(N\).

Once the isolating neighborhood boundaries have been checked, specific base points for computing trial solutions within the isolating neighborhood may be examined. We explore several base points here in the \(z=0\) surface of section \((\Sigma_{z=0})\), focusing on the cases where \(\nu_{i}=0\). We know from previous analyses in the CRTBP that if the base point is in a region where periodic or quasiperiodic orbits exist, we can find points that stay in the isolating neighborhood forward and backward in time. The procedure for finding these points is detailed in Anderson, Easton, and Lo,[8] but a brief summary is given here.
As a first step, spheres of velocities are computed for a selected base point, and the velocities on these spheres that exit left and right, both forward and backward in time, are computed. The boundaries of the left and right exit trajectories are computed, and their intersections contain the states that remain in the isolating neighborhood both forward and backward in time (a schematic sketch of this classification is given below). In the ERTBP, the velocity magnitude used for the velocity spheres is chosen based upon a selected instantaneous Jacobi constant. These points are then states for trajectories contained within the isolated invariant set, and we may use them as initial conditions for tracking periodic or quasiperiodic orbits within the isolated invariant set.

Using the initial conditions obtained from the intersections on the velocity sphere at a particular base point, several trajectories are integrated forward and corrected to track orbits that stay in the region. In each case, the correction is applied to the spatial orbit as it intersects \(\Sigma_{z=0}\), and it compensates for the limited precision of the initial conditions and numerical integration. The correction in each case is applied to the velocity, and it is the smallest velocity correction necessary to move back to the asymptotic set. These corrections are generally very small, and specific correction magnitudes for much longer orbits will be discussed later. The selected base points are plotted in Figure 11 relative to the Lyapunov orbit computed at C = 3.14 in the CRTBP (plotted only for reference since our computations are in the ERTBP). Points across the region were selected so as to obtain different types of orbits corresponding to the Lissajous and quasi-halo orbits in the CRTBP.

The case with a base point at \((1.12,-0.04)\) is shown in Figure 12 with \(\nu_{i}=0.0\). (Unless otherwise specified, all of the orbit tracking results will start with \(\nu_{i}=0.0\).) Here, the orbit is similar to the Lissajous orbits found in the CRTBP. Examining the intersections with \(\Sigma_{z=0}\) reveals a similar topology to that found in the CRTBP, but now there are oscillations around a curve that would normally be smooth. In this case, the tori generated in the ERTBP are three-dimensional tori rather than the two-dimensional tori generated in the CRTBP. Next, three base points were tracked in the quasi-halo region, and the results are given in Figures 13 through 15. In each case, the first portions of the quasi-halo orbits are shown so as to be able to still observe the structure in the orbits. An initial set of intersections with \(\Sigma_{z=0}\) is also shown in the Poincare sections. These base points produce three quasi-halo orbits of different sizes, and in each case the Poincare sections show a similar overall structure to the ones observed in the CRTBP. In each case, there are also still oscillations about the expected curve.

If the orbit tracking algorithm is continued further for additional intersections with \(\Sigma_{z=0}\), some additional interesting behavior may be observed for different base points. The difference in behavior between different base points was explored, and three different results are shown for base points that are relatively close in Figures 16 through 18. Some characteristics of each of these cases are compiled in Table 1. Even with over 500 revolutions, the positions are continuous, and the sum of all velocity discontinuities from the orbit tracking algorithm was always less than \(6.2\times 10^{-5}\), or approximately 0.063 m/s. In each case, the initial base point was varied only slightly, but significantly different behavior was found. Note that only the portion of the Poincare section with \(x<0\) is shown to more easily see the structure in the intersecting points.
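The velocity-sphere classification summarized above can be sketched schematically. In the listing below, the classifier `exit_side(state, t_dir)` returning `'L'` or `'R'` is assumed to be supplied by the caller (it plays the role of the event-based exit tests in the earlier listings), so only the geometry is shown:

```python
# Velocity-sphere exit classification at a spatial base point; a sketch.
import numpy as np

def sphere_directions(n):
    """Roughly uniform unit vectors on the sphere via a Fibonacci lattice."""
    k = np.arange(n) + 0.5
    th = np.arccos(1.0 - 2.0 * k / n)      # polar angle
    ph = np.pi * (3.0 - 5**0.5) * k        # golden-angle azimuth
    return np.stack([np.sin(th) * np.cos(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(th)], axis=1)

def label_velocity_sphere(base_xyz, speed, exit_side, n=2000):
    """Label every direction by its forward and backward exit side."""
    labels = []
    for u in sphere_directions(n):
        state = np.concatenate([base_xyz, speed * u])
        labels.append((u, exit_side(state, +1), exit_side(state, -1)))
    return labels
```

Directions where the forward label flips between neighbors approximate the boundary of the forward exit sets, and likewise for the backward labels; directions lying on both boundaries are the initial conditions retained for orbit tracking. They can be refined by bisecting between oppositely labeled neighboring directions, exactly as in the planar case.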
Figure 16 uses the same initial base point as Figure 13, but in this case significantly more revolutions were added. There is initial oscillatory behavior that fills in as a band as the number of revolutions is increased. If the base point location is changed slightly to (1.130, -0.0902), as shown in Figure 17, then the points lie on a curve, and more structure is seen in the orbit in configuration space. Finally, if another nearby base point is examined at (1.130, -0.0903), as shown in Figure 18, another type of oscillatory behavior is found. These results depend of course on the initial time that the orbit tracking algorithm is initiated, and if a different initial time is selected, then different behavior is observed. The results for a base point at \((1.130,-0.0902)\) computed with an initial time \(\nu_{i}=\pi/2\) are shown in Figure 19. While the case with \(\nu_{i}=0\) formed a line in \(\Sigma_{z=0}\), with the new initial time the intersections form a band. This result is more similar to the results using base point (1.130, -0.0900) with \(\nu_{i}=0\), and it reinforces the importance of the initial epoch in the computation of the resulting quasiperiodic orbits.

Figure 11: **Overview of the base points used to track orbits in the ERTBP with C = 3.14. (The gray Lyapunov orbit is computed using the CRTBP and is provided for reference only.)**

Figure 12: **Quasiperiodic orbit tracked starting from base point \((1.12,-0.04)\) at C = 3.14.**

Figure 13: **Quasiperiodic orbit tracked starting from base point \((1.13,-0.09)\) at C = 3.14.**

Figure 14: **Quasiperiodic orbit tracked starting from base point \((1.13,-0.095)\) at C = 3.14.**

Figure 15: **Quasiperiodic orbit tracked starting from base point \((1.124,-0.0925)\) at C = 3.14.**

Figure 16: **Quasiperiodic orbit tracked starting from base point \((1.130,-0.0900)\) with \(\nu_{i}=0\) at C = 3.14 with additional points.**

Figure 17: **Quasiperiodic orbit tracked starting from base point \((1.130,-0.0902)\) at C = 3.14 with \(\nu_{i}=0\).**

Figure 18: **Quasiperiodic orbit tracked starting from base point \((1.130,-0.0903)\) at C = 3.14 with \(\nu_{i}=0\).**

Figure 19: **Quasiperiodic orbit tracked starting from base point \((1.130,-0.0902)\) at C = 3.14 with \(\nu_{i}=\pi/2\).**

\begin{table} \begin{tabular}{c c c c} \hline Base Point & Revolutions & Dimensionless Total \(\Delta\)V & Dimensional Total \(\Delta\)V (m/s) \\ \hline (1.130, -0.0900) & 650 & \(6.15\times 10^{-5}\) & \(\approx 0.063\) \\ (1.130, -0.0902) & 500 & \(4.68\times 10^{-5}\) & \(\approx 0.048\) \\ (1.130, -0.0903) & 500 & \(5.43\times 10^{-5}\) & \(\approx 0.056\) \\ \hline \end{tabular} \end{table} Table 1: Orbit tracking characteristics for large revolution orbit calculations

#### View of Spatial Orbits in Different Coordinate Frames

In the pulsating frame, the orbits do not appear too dissimilar from the orbits computed in the CRTBP. In real-world computations, the frame is not pulsating, and it is interesting to compute these orbits in different frames. An orbit computed in the non-pulsating, variable rotating frame corresponding to the orbit in Figure 17 is shown in Figure 20.
Here, the orbit still has the general characteristics of a quasi-halo orbit, but as might be expected, the orbit is elongated in the direction of the \(x\)-axis. This is easily seen in the Poincare section shown in Figure 20(b), where the curve from Figure 17(b) appears elongated as well, and structure is less easily discerned. It is also interesting to examine the orbit in the constant rotating, non-pulsating frame equivalent to the CRTBP's rotating frame. The orbit and \(\Sigma_{z=0}\) intersections are plotted in this frame in Figure 21, and some interesting characteristics may be observed. The intersections in Figure 21(b) now show some structure that was not apparent in the variable rotating, non-pulsating frame. As expected, the orbit also wanders further in the \(y\) direction as the frames are less aligned.

Figure 20: **Quasiperiodic orbit tracked starting from base point \((1.13,-0.0902)\) at C = 3.14 in the variable rotating, non-pulsating frame.**

Figure 21: **Quasiperiodic orbit tracked starting from base point \((1.13,-0.0902)\) at C = 3.14 in the constant rotating, non-pulsating frame.**

## Conclusions

A method for performing isolating neighborhood computations in non-autonomous systems was developed and applied to both a simple example and the ERTBP. In the case of the simple example, it was possible to compute more stringent isolating block boundaries across all times and, in combination with a bisection method, asymptotic target trajectories that would stay within the isolating block for different initial times. In the ERTBP, it was shown that it was possible to compute isolating neighborhood boundaries that could be verified across a range of energies. By verifying that trajectories used in our orbit tracking algorithm did not go outside of this range of energies, it was found that quasiperiodic orbits around the L\({}_{2}\) libration point could be closely tracked in the ERTBP. Using this approach in the planar ERTBP, it was possible to compute quasiperiodic orbits, or 2-tori, equivalent to the periodic planar Lyapunov orbit in the CRTBP, and these results were found to be consistent with other results in the literature that had been found for small eccentricities. The method was successfully extended to the spatial ERTBP, and orbits equivalent to the Lissajous and quasi-halo orbits in the CRTBP were followed. In these cases, the orbits had the characteristics of 3-tori, corresponding to the 2-tori in the CRTBP. Different types of behavior were observed for the quasi-halo orbits in a small region depending on the specific initial base point and initial time. The developed algorithms have now been applied in the CRTBP and ERTBP, and this approach lays the foundation for computations in other non-autonomous systems such as the bicircular and ephemeris models. This approach also allows for a relatively seamless application to these other systems with perturbations that avoids many of the difficulties inherent to other methods such as normal form or parameterization approaches.

## Acknowledgements

Part of the research presented in this paper has been carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). This work has been supported through funding by the Multi-mission Ground System and Services Office (MGSS) in support of the development of the Advanced Multi-Mission Operations System (AMMOS).
2301.12319
Hawking radiation inside a charged black hole
Here we analyze the Hawking radiation detected by an inertial observer in an arbitrary position in a Reissner-Nordstr\"om spacetime, with special emphasis on the asymptotic behavior of the Hawking spectrum as an observer approaches the inner or outer horizon. Two different methods are used to analyze the Hawking flux: first, we calculate an effective temperature quantifying the rate of exponential redshift experienced by an observer from an emitter's vacuum modes, which reproduces the Hawking effect provided the redshift is sufficiently adiabatic. Second, we compute the full Bogoliubov graybody spectrum observed in the three regimes where the wave equation can be solved analytically (at infinity and at the outer and inner horizons). We find that for an observer at the event horizon, the effective Hawking temperature is finite and becomes negative when $(Q/M)^2>8/9$, while at the inner horizon, the effective temperature is always negative and infinite in every direction the observer looks, coinciding with an ultraviolet-divergent spectrum.
Tyler McMaken, Andrew J. S. Hamilton
2023-01-29T02:04:28Z
http://arxiv.org/abs/2301.12319v2
# Hawking radiation inside a charged black hole ###### Abstract Here we analyze the Hawking radiation detected by an inertial observer in an arbitrary position in a Reissner-Nordstrom spacetime, with special emphasis on the asymptotic behavior of the Hawking spectrum as an observer approaches the inner or outer horizon. Two different methods are used to analyze the Hawking flux: first, we calculate an effective temperature quantifying the rate of exponential redshift experienced by an observer from an emitter's vacuum modes, which reproduces the Hawking effect provided the redshift is sufficiently adiabatic. Second, we compute the full Bogoliubov graybody spectrum observed in the three regimes where the wave equation can be solved analytically (at infinity and at the outer and inner horizons). We find that for an observer at the event horizon, the effective Hawking temperature is finite and becomes negative when \((Q/M)^{2}>8/9\), while at the inner horizon, the effective temperature is always negative and infinite in every direction the observer looks, coinciding with an ultraviolet-divergent spectrum. ## I Introduction Some of the most extraordinary effects in the study of quantum field theory in curved spacetime occur near the horizons of black holes. A black hole's event horizon is well-known to exhibit a characteristic peeling of null geodesics that results in the detection of field radiation asymptotically far from the black hole [1; 2]. This radiation, known as Hawking radiation, has a thermal distribution (in the geometric optics limit), with a temperature directly proportional to the black hole's surface gravity at the event horizon. Despite the success and relative robustness of Hawking's calculation, much debate has continued to this day concerning the nature, origin, and implications of Hawking radiation. If the steady Hawking flux at late times is to be taken literally as originating from the event horizon, one might expect a local infaller at that horizon to detect a diverging number of particles streaming out with an initially divergent blueshift. However, subsequent derivations of Hawking radiation that rely more heavily on local phenomena, such as particles tunneling across the horizon [3], pairs ripped apart by gravitational tidal forces [4], and particles created locally via the renormalized stress-energy tensor [5], find no divergence at the event horizon, but rather a modest "quantum atmosphere" of Hawking particles produced in its vicinity. Though Hawking's original calculation only applies for radiation seen asymptotically far from a black hole, a formalism to study the Hawking radiation detected by an arbitrary observer any distance from a black hole was introduced in Refs. [6; 7]. Those authors define an "effective temperature" function \(\kappa\) that reduces to the surface gravity when it is sufficiently adiabatic over an interval (as will be detailed in Sec. II.1). This effective temperature is simply a measure of the rate of exponential redshifting seen by an observer from the black hole's past horizon, and in the context of quantum field theory in curved spacetime, such a redshifting implies an excitation out of the vacuum state (i.e. particle production) due to the mixing of positive and negative frequencies.
For a Schwarzschild black hole, as a freely falling observer is taken to infinity, \(\kappa\) approaches the surface gravity \(1/(4M)\)1 predicted by Hawking, and as an observer approaches the event horizon, \(\kappa\) is coincidentally found to approach a value exactly four times the surface gravity [8]. One of the main goals of the present paper is to extend this formalism to a charged (Reissner-Nordstrom) black hole. Footnote 1: Throughout this paper we use the \((-+++)\) metric signature and geometric units where \(c=G=\hbar=k_{B}=1\). A key difference between a Schwarzschild and Reissner-Nordstrom black hole is the presence in the latter case of a Cauchy horizon, a null hypersurface in the black hole's interior beyond which an observer can see a timelike singularity. Such a horizon, which will subsequently be referred to as the "inner horizon," is of considerable interest both because of its presence in the astrophysically relevant Kerr metric and because of the open problem related to its stability in the presence of perturbative classical or quantum effects. In 1968, Penrose was the first to point out that the inner horizon is a surface of infinite blueshift [9; 10]. Any external perturbations to the spacetime will produce ingoing radiation that an outgoing observer approaching the inner horizon will detect with diverging energy. Poisson and Israel pioneered the first full nonlinear analysis of this instability in 1990 [11; 12], giving it the name "mass inflation" in reference to the exponentially inflated Misner-Sharp mass parameter [13] measured at the inner horizon. As a result, the spacetime geometry near the inner horizon will classically break down and collapse to form a strong, spacelike singularity [14; 15; 16; 17; 18; 19; 20; 21; 22; 23], potentially alongside a weak, null singularity at late times [24; 25; 26; 27; 28; 29; 30]. Because of the unstable nature of the inner horizon in classical models, studies of quantum effects at the inner horizon have been ongoing. Early studies modeled the quantum effect of pair creation from the black hole's electric field by replacing the near-inner-horizon regime with a Schwarzschild-type solution once the electric field exceeded a critical value [31; 32], and later numerical studies with dynamical evolution found that Schwinger pair creation does indeed cause the inner horizon to form a spacelike singularity, but with a weak, null portion still surviving depending on the critical value of the electric field [33]. Those same authors also pioneered the first numerical study of Hawking radiation at the inner horizon [34], with the result that a spacelike surface with divergent (super-Planckian) curvature forms [35]. However, these studies of Hawking radiation relied on the use of the 1+1D renormalized stress-energy tensor \(\langle T_{\mu\nu}\rangle\) of a quantized scalar field to estimate the semiclassical backreaction. Renormalization of the full 3+1D stress-energy tensor for a black hole spacetime is a difficult problem with no known analytic solution; only recently have numerical studies begun to calculate the quantum fluxes of \(\langle T_{uu}\rangle\) and \(\langle T_{vv}\rangle\) at the inner horizon, generically finding divergences [36; 37]. Of particular note for the present study is the finding in Ref. [36] that the flux components of \(\langle T_{\mu\nu}\rangle\) in double-null coordinates become negative for sufficiently large charge-to-mass ratios (\(Q/M\gtrsim 0.97\)).
When taking into account first-order back-reaction effects, this negative stress-energy implies the local abrupt expansion of the inner horizon geometry (see also [38; 39; 40; 41; 42; 43; 44]). Instead of focusing on the quantum renormalized stress-energy tensor, we here study the particle perception effects of Hawking radiation that do not rely on the ambiguities of renormalization in curved spacetime. Our choice of formalism also allows for a straightforward extension to the Hawking radiation seen in an arbitrary viewing direction, so that we may answer the question of whether the radial modes assumed in virtually all analyses of Hawking radiation are actually the dominant source of feedback at the inner horizon (especially since, for an outgoing observer across the inner horizon, only an exponentially small portion of the field of view is taken up by the blueshifting sky radially overhead). Following up on the study of the Schwarzschild interior in Ref. [45], we extend those results to the Reissner-Nordstrom interior and focus on the seemingly paradoxical result that the effective Hawking temperature seen by an inertial observer always becomes negative at the inner horizon when the black hole has nonzero charge. Before diving into the bulk of the paper, it is worth pausing to comment on the implications (and especially the non-implications) of a negative Hawking temperature. Hawking radiation is often pictured as a positive flux of particles escaping a black hole's horizon, coinciding with a negative flux of partner particles traveling inward to the black hole's singularity [2]. However, the negative-temperature Hawking flux analyzed here is not simply an observation of the inward-traveling negative-energy Hawking partners. In contrast, our negative temperature will be found in both the ingoing and outgoing radiation sectors, and further, our calculations do not involve any tunneling across horizons. It may still be possible to formulate a local picture for the global calculations done here, since virtual particle pairs everywhere will be perturbed by radial gravitational tidal forces, and these forces will begin compressing instead of stretching once an observer comes close enough to the inner horizon [46; 47]. How then should one interpret a negative Hawking temperature under the present formalism? The most straightforward answer is that the modes reaching an observer are blueshifting instead of redshifting, and this blueshift will result in a change in sign of the effective temperature of Eq. (4) below. However, the thermodynamic implications of such a change in sign are less apparent. Ref. [48] was the first to comment on the implications of the fact that the surface gravity \(\varkappa_{-}\) defined at the inner horizon is negative, and many authors since have attempted to provide a consistent thermodynamic picture of a black hole with a negative-temperature inner horizon [49; 50; 51; 52]. However, here we make no claims based on the Bekenstein-Hawking entropy or any thermodynamic laws, and we also do not rely on any assumptions about what happens beyond the inner horizon. It may well be that the negative surface gravity has some implication for the temperature of a purely mathematical, analytically extended white hole emerging from an inner horizon.
Nonetheless, the inner horizon effective temperature \(\kappa\) describing the experience of an infalling observer is distinct from the global surface gravity \(\varkappa_{-}\), and in fact \(\kappa\) will be found either to diverge at the inner horizon or to equal some constant multiple of \(\varkappa_{-}\) (see Sec. III.1.2), depending on whether the observer looks up or down. The structure of this paper is as follows: we begin in Sec. II with a preliminary discussion of the effective temperature formalism used to calculate the Hawking radiation, then we proceed to calculate the effective temperature for various charges and observer positions in Sec. III, commenting on the validity of the adiabatic approximation in Sec. III.2 and generalizing from radial modes to arbitrary viewing directions in Sec. III.3. Finally, in Sec. IV we extend beyond the geometric optics approximation to calculate the full Bogoliubov spectrum in the asymptotic regimes where the scattering modes become simple (namely at infinity, the event horizon, and the inner horizon), and we conclude with a discussion in Sec. V. ## II Formalism ### Defining an effective temperature as the rate of exponential redshift The Hawking flux perceived by a timelike geodesic observer in a black hole spacetime can be calculated through the use of an effective temperature function \[\kappa(u)\equiv-\frac{d}{du}\ln\left(\frac{dU}{du}\right) \tag{1}\] where the outgoing null coordinate \(u\) gives the observer's position and the null coordinate \(U\) gives the position of an emitter that defines the vacuum state [6; 7]. By a slight abuse of notation, the two worldlines labeled by coordinates \(U\) and \(u\) are connected by a null ray encoded by the function \(U(u)\), and as long as \(\kappa(u)\) remains approximately constant over a small interval around some point \(u_{*}\), it directly implies that the vacuum expectation value of the particle number operator is consistent with that of a Planckian spectrum with temperature \[T_{H}(u_{*})=\frac{\kappa(u_{*})}{2\pi}. \tag{2}\] The constancy condition can be quantified by the adiabatic control function \[\epsilon(u)\equiv\frac{1}{\kappa^{2}}\left|\frac{d\kappa}{du}\right|, \tag{3}\] which must satisfy \(\epsilon(u_{*})\ll 1\) in order for a thermal Hawking flux to be detected at \(u_{*}\)[8]. However, even if the adiabatic condition is not satisfied, a nonzero \(\kappa\) still implies the detection of particles corresponding to a nonzero Bogoliubov coefficient \(\beta\); the only difference is that the spectral content will generally be non-Planckian. Since both the observer and emitter can naturally use their proper times \(\tau_{\rm ob}\) and \(\tau_{\rm em}\) to label the different null rays they encounter throughout their journey, Eq. (1) can be recast in a more intuitive form: \[\kappa=-\frac{d}{d\tau_{\rm ob}}\ln\left(\frac{\omega_{\rm ob}}{\omega_{\rm em}}\right), \tag{4}\] where the frequency \(\omega\) (with either subscripts "ob" or "em," which will be dropped hereafter when either label could apply), defined by \[\omega\equiv-k^{\mu}\dot{x}_{\mu}, \tag{5}\] is the temporal component of a null particle's coordinate 4-velocity \(k^{\mu}\equiv dx^{\mu}/d\lambda\), measured in the frame of an observer or emitter with coordinate 4-velocity \(\dot{x}^{\mu}\equiv dx^{\mu}/d\tau\). Eq. (4) makes it apparent that the effective temperature \(\kappa\) is nothing more than a measure of the rate of frequency redshifting seen by an observer, an indicator of the exponential peeling of null rays first noted by Hawking as the crucial feature of black hole horizons responsible for particle creation [1; 2]. For black hole spacetimes with a Killing horizon, in the limit as an observer approaches future timelike infinity, the notion of the effective temperature \(\kappa(\tau)\) defined above coincides precisely with the notion of the surface gravity \(\varkappa\) used to define a black hole's Hawking temperature [6]. Thus, \(\kappa(\tau)\) provides a generalization of the Hawking effect for arbitrary observers around or inside of a black hole.
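These definitions are simple enough to check numerically. Below is a minimal sketch (ours, not from the references): for the Schwarzschild relation \(U(u)=-4Me^{-u/(4M)}\) quoted in footnote 2 of the next subsection, Eq. (1) must return the constant surface gravity \(1/(4M)\), with the adiabatic control function of Eq. (3) vanishing identically.

```python
# Numerical sketch of Eqs. (1)-(3) for the Schwarzschild Kruskal relation.
import numpy as np

M = 1.0
u = np.linspace(0.0, 40.0 * M, 4001)
U = -4.0 * M * np.exp(-u / (4.0 * M))

dUdu = np.gradient(U, u)                        # dU/du > 0, so the log is safe
kappa = -np.gradient(np.log(dUdu), u)           # Eq. (1)
eps = np.abs(np.gradient(kappa, u)) / kappa**2  # Eq. (3)

print(kappa[len(u) // 2] * 4.0 * M)  # ~1.000: kappa = 1/(4M) = varkappa
print(eps.max())                     # ~0 up to finite-difference noise
```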
### Vacuum states

Instead of performing calculations in a fully dynamical collapse spacetime, it is common to formulate an equivalent problem in an empty, eternal black hole spacetime like the Schwarzschild metric [53]. As a result, the collapsing body must be replaced by appropriate boundary conditions on the past horizon, and these boundary conditions define the quantum field's vacuum state in that spacetime. Three options are generally discussed in the literature: the Boulware state, in which the quantum field's modes are defined to be positive frequency with respect to the Killing vector \(\partial/\partial t\) on both the past horizon and past null infinity; the Hartle-Hawking state, in which modes are defined to be positive frequency with respect to the past boundaries' canonical affine coordinates2\(\partial/\partial U\) and \(\partial/\partial V\); and the (past) Unruh state [53], in which modes are defined to be positive frequency with respect to \(\partial/\partial U\) on the past horizon and \(\partial/\partial t\) at past null infinity. The last of these states is the one that is most physically relevant to the production of a Hawking flux to the future of a collapsing black hole and is the state that will be employed here. Footnote 2: For example, for a Schwarzschild black hole, \(U=-4Me^{-u/(4M)}\) is the outgoing Kruskal-Szekeres coordinate, whose vector field \(\partial/\partial U\) is of Killing type on the past horizon. Positive frequency modes are then defined to be the eigenfunctions of the Lie derivative of the field in the \(\partial/\partial U\) direction. In the effective temperature framework, the vacuum state is specified by the spacetime position and state of motion (the orbital parameters) of the emitter. For example, the Boulware state corresponds to a static emitter maintaining a constant radius \(r_{0}\). This state is thus only defined for the exterior portion of the black hole, since an emitter cannot remain static below the event horizon. A freely falling observer measuring in the Boulware state will see diverging stress-energy at the horizon, as a result of the diverging acceleration required for the Boulware emitter to remain static there. In contrast, the Unruh state is associated with a freely falling emitter, positioned either at the black hole's horizon or at infinity. The outgoing Unruh modes correspond to the limit \(r_{\rm em}\to r_{+}\), so that the observer sees the emitter frozen on the past horizon (one may equivalently take the Unruh emitter's descent into the black hole to have occurred sufficiently far into the past), and the ingoing Unruh modes correspond to the limit \(r_{\rm em}\rightarrow\infty\), so that the observer sees the emitter safely resting in the sky above.
Since the observer and the Unruh emitter are generally not located at the same spacetime coordinate (as in the Boulware state), their modes must be connected via a null geodesic, since the quantum field under study here is massless. To see how the choice of vacuum corresponds to the specification of the emitter's worldline, consider an emitter radially free-falling from rest at infinity3 into a static, asymptotically flat, spherically symmetric black hole, which is given by the line element Footnote 3: The same arguments should hold for any inertial free-faller; here we present the radial, \(E=1\) case for simplicity. \[ds^{2}=-\Delta(r)\ dt^{2}+\frac{dr^{2}}{\Delta(r)}+r^{2}\left(d\theta^{2}+\sin^{2}\!\theta d\phi^{2}\right). \tag{6}\] The horizon function \(\Delta(r)\) has the property that it vanishes linearly as \(r\) approaches a horizon, and it asymptotes to unity as \(r\to\infty\). Such an emitter will have coordinate 4-velocity with nonzero components \[\dot{t}\equiv\frac{dt}{d\tau}=\frac{1}{\Delta}, \tag{7a}\] \[\dot{r}\equiv\frac{dr}{d\tau}=-\sqrt{1-\Delta}. \tag{7b}\] When the emitter is at infinity (\(\Delta\to 1\)) sending modes inward, Eq. (7a) implies that the emitter's proper time \(\tau\) will tick at the same proportionate rate as the global timelike Killing coordinate \(t\). Thus, \(t\) will be the coordinate the emitter uses to define positive frequency, just as expected for ingoing Hawking modes originating from past null infinity. However, when the emitter reaches a horizon (\(\Delta\to 0\)), Eq. (7a) implies that the static Schwarzschild time \(t\) will tick at an infinitely faster rate than the emitter's proper time \(\tau\). So heuristically, instead of seeing wave modes of the form \(\exp(-i\omega t)\), the emitter should end up seeing modes of the form \(\exp[-i\omega\exp(-kt)]\) (for some constant \(k\)), so that even when \(t\) diverges, the emitter's proper time will still remain finite. The new time coordinate defined by these modes will be found to coincide with the oft-studied Kruskal-Szekeres coordinate \(U\). To make the above arguments more precise, and to extend the discussion to distinguish ingoing and outgoing modes (which depend on both the emitter's proper time and the proper distance between wavefronts), consider a set of eikonal waves in the emitter's locally orthonormal tetrad frame \(\{\mathbf{\gamma}_{0},\mathbf{\gamma}_{1},\mathbf{\gamma}_{2},\mathbf{\gamma}_{3}\}\), whose tangent-space coordinates will be labeled \(\xi^{0}\), \(\xi^{1}\), \(\xi^{2}\), and \(\xi^{3}\). This tetrad frame is constructed so that it is continuous across the event horizon and so that the time axis \(\mathbf{\gamma}_{0}\) is always timelike and future-directed, while the radial axis \(\mathbf{\gamma}_{1}\) is always spacelike and outward-directed. In the limit of large frequency \(\omega\), to leading order in \(1/\omega\), the ingoing (\(+\)) or outgoing (\(-\)) components of the eikonal wavefront will follow a null geodesic congruence with tetrad-frame 4-momentum (neglecting any normalization factors) \[k^{\dot{m}}\equiv\frac{d\xi^{\dot{m}}}{d\lambda}=\left(1,\ \pm 1,\ 0,\ 0\right). \tag{8}\] The transformation from the emitter's local tetrad frame to a coordinate frame can be accomplished through the use of the appropriate vierbein.
For an external4 radial free-faller with specific energy \(E\) (where \(E=1\) corresponds to rest at infinity), in the static polar spherical chart this vierbein reads Footnote 4: The case of a free-faller in the black hole interior follows the same line of reasoning as the exterior case presented here, _mutatis mutandis_. \[e^{\dot{m}}{}_{\mu}=\begin{pmatrix}E&\sqrt{E^{2}-\Delta}&0&0\\ \sqrt{E^{2}-\Delta}&E&0&0\\ 0&0&r&0\\ 0&0&0&r\sin\theta\end{pmatrix} \tag{9}\] (where rows label the coordinates \(\xi^{0}\), \(\xi^{1}\), \(\xi^{2}\), \(\xi^{3}\) of the emitter's locally inertial frame, and columns label the global coordinates \(t\), \(r^{*}\), \(\theta\), \(\varphi\)). Here we define the tortoise coordinate \(r^{*}\) by \[\frac{dr}{dr^{*}}=\Delta. \tag{10}\] The coordinate-frame 4-momentum \(k^{\mu}=k^{\dot{m}}e_{\dot{m}}{}^{\mu}\) then follows immediately: \[k^{\mu}=\left(\frac{E\mp\sqrt{E^{2}-\Delta}}{\Delta},\ \pm\frac{E\mp\sqrt{E^{2}-\Delta}}{\Delta},\ 0,\ 0\right). \tag{11}\] If the emitter defines some positive frequency \(\omega\) (along with the corresponding wavenumber \(\omega/c\)), then their natural choice of ingoing (upper sign) or outgoing (lower sign) modes will take the form \(\exp[-i\omega(\xi^{0}\pm\xi^{1})]\), which can be written in coordinate form by matching the affine distances of Eqs. (8) and (11): \[d\xi^{0}\pm d\xi^{1}=\frac{\Delta}{E\mp\sqrt{E^{2}-\Delta}}\left(dt\pm dr^{*}\right). \tag{12}\] Asymptotically, as a unit-energy emitter approaches infinity (\(\Delta\to 1\)), the fraction in Eq. (12) reduces to unity, so that the proper choice of coordinates to define Unruh modes at infinity is the Eddington-Finkelstein double null coordinate system, defined in both the exterior and interior as \[u\equiv t-r^{*},\quad v\equiv t+r^{*}, \tag{13}\] where \(u\) here is the same outgoing null coordinate as in Eq. (1). When the emitter is at a horizon (\(\Delta\to 0\)), the mode behavior depends on whether the waves are ingoing or outgoing. For the ingoing modes of a positive-energy free-faller or the outgoing modes of a negative-energy free-faller (neither of which are needed to define an Unruh emitter but will prove useful later to define the natural modes seen by horizon observers), the fraction in Eq. (12) reduces to \(2E\), so that the proper modes (after \(\omega\) is properly scaled) are once again the Eddington-Finkelstein modes \(\exp[-i\omega(t\pm r^{*})]\). But for the outgoing modes of a positive-energy free-faller or the ingoing modes of a negative-energy free-faller at the horizon, the fraction in Eq. (12) vanishes, so a more appropriate coordinate choice must be found. Define a new coordinate \(\bar{U}\) such that the outgoing Unruh modes at the horizon will be written as \(\exp[-i\omega\bar{U}]\). Then Eq. (12) implies that \(\bar{U}\) must satisfy \[\frac{d\bar{U}}{du}\underset{\Delta\to 0}{=}\frac{\Delta}{2E}\approx\frac{r-r_{\pm}}{2E}\frac{d\Delta}{dr}\bigg{|}_{r_{\pm}} \tag{14}\] in the near-horizon limit. From this expression one can identify the quantity \[\varkappa_{\pm}\equiv\frac{1}{2}\frac{d\Delta}{dr}\bigg{|}_{r_{\pm}} \tag{15}\] as the outer (\(+\)) or inner (\(-\)) horizon's surface gravity. For an emitter with \(E=1\), since from Eqs. (8), (10), and (11), the radius \(r\) is related to the horizon-limit outgoing proper null coordinate \(\bar{U}\) by \[\frac{dr}{d\bar{U}}=-\frac{1+\sqrt{1-\Delta}}{2}\underset{\Delta\to 0}{=}-1, \tag{16}\] then Eq. (14) solves as \[\bar{U}\propto\exp\left(-\varkappa_{\pm}u\right). \tag{17}\]
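To make the last step explicit (a short fill-in, using \(E=1\), Eq. (15), and the near-horizon limit \(dr/d\bar{U}\to-1\) of Eq. (16), which gives \(r-r_{\pm}\approx-\bar{U}\) once \(\bar{U}\) is set to vanish on the horizon): \[\frac{d\bar{U}}{du}\approx\varkappa_{\pm}\left(r-r_{\pm}\right)\approx-\varkappa_{\pm}\bar{U}\qquad\Longrightarrow\qquad\bar{U}\propto\exp\left(-\varkappa_{\pm}u\right).\]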
Eq. (17) assumes that \(\bar{U}\) is chosen to begin at \(0\) at the event horizon, when \(u\rightarrow\infty\). This form of the emitter's proper time (up to an irrelevant normalization factor) is precisely the form of the outgoing Kruskal-Szekeres coordinate \(U\) used by Unruh to define positive frequency on the past horizon [53]. Thus, the outgoing modes of the Unruh state correspond to those seen as positive frequency by an emitter in free fall asymptotically close to the past horizon. In some sense, we have done nothing more than "rederive the obvious" in showing how one may obtain past Unruh null boundary conditions. However, in addition to providing yet another way of understanding the validity of this choice of vacuum state, the generalized derivation above also provides a natural specification of ingoing and outgoing modes for freely falling observers at either horizon, without any reliance on global Killing vector fields or asymptotically Minkowski regimes. We will return to this idea when solving the wave equation in Sec. IV. As a final comment concerning the choice of vacuum state, an additional family of vacuum states was used by Ref. [8] to mimic the switching on of Hawking radiation as a black hole first forms during a collapse. These "collapse vacua" correspond to emitters in free fall from rest at infinity, each separated from the observer by a time delay \(\delta\tau\) (as in the Unruh state), but not necessarily in the limit as they approach the horizon or infinity. However, in the present work, we are not concerned with the initial transient collapse dynamics of a black hole; rather, we will focus on the late-time steady-state behavior once the black hole has settled down into the Unruh state, which should occur only a few light-crossing times after the black hole's formation. ## III Redshifting perceived by an infaller Here we examine the effective temperature seen by a freely falling inertial observer in a charged black hole spacetime with a quantized scalar field. In Sec. III.1 we calculate the effective temperature \(\kappa\) for an observer looking in the radial direction via Eq. (4), in Sec. III.2 we analyze when this \(\kappa\) satisfies adiabaticity, and in Sec. III.3 we generalize to an observer looking in an arbitrary direction. ### Radial effective temperature Consider the line element of Eq. (6), which describes the geometry of a charged, spherically symmetric black hole when the horizon function \(\Delta\) takes the form \[\Delta=\left(1-\frac{r_{+}}{r}\right)\left(1-\frac{r_{-}}{r}\right), \tag{18}\] \[r_{\pm}\equiv M\pm\sqrt{M^{2}-Q^{2}}. \tag{19}\] This is the spacetime known as the Reissner-Nordstrom black hole, which possesses a mass \(M\) and a charge \(Q\). The length scales \(r_{+}\) and \(r_{-}\) are referred to respectively as the outer (event) horizon and the inner (Cauchy) horizon. The rate of redshift seen by a radially infalling observer has already been calculated for the spacetime of Eq. (6) for arbitrary \(\Delta\) (see Appendix B of Ref. [45]), though that analysis was only carried out explicitly for Schwarzschild (\(Q/M=0\)). Here we quote the main results and specialize to Reissner-Nordstrom with a focus on the inner horizon.
The frequency \(\omega\) measured in the frame of an observer (\(\equiv\omega_{\rm ob}\)) or emitter (\(\equiv\omega_{\rm em}\)) with specific energy \(E\), normalized to the frequency \(\omega_{\infty}\) seen at rest at infinity, is \[\frac{\omega}{\omega_{\infty}}=\frac{E\pm\sqrt{E^{2}-\Delta}}{\Delta}, \tag{20}\] where the upper (lower) sign applies to outgoing (ingoing) null rays. The effective temperature \(\kappa\) can then be calculated with the help of the chain rule: \[\kappa =-\frac{d}{d\tau_{\rm ob}}\ln\left(\frac{\omega_{\rm ob}}{\omega_{\rm em}}\right)\] \[=-\omega_{\rm ob}\left(\frac{\dot{r}_{\rm ob}}{\omega_{\rm ob}}\frac{\partial\ln\omega_{\rm ob}}{\partial r_{\rm ob}}-\frac{\dot{r}_{\rm em}}{\omega_{\rm em}}\frac{\partial\ln\omega_{\rm em}}{\partial r_{\rm em}}\right)\] \[=\mp\frac{1}{2}\frac{\omega_{\rm ob}}{\omega_{\infty}}\left(\frac{d\Delta_{\rm ob}}{dr_{\rm ob}}-\frac{d\Delta_{\rm em}}{dr_{\rm em}}\right), \tag{21}\] where an overdot signifies differentiation with respect to the observer's or emitter's proper time \(\tau\). For outgoing modes (upper sign), the Unruh emitter must be placed at the event horizon (\(r_{\rm em}\to r_{+}\)), and for ingoing modes (lower sign), the Unruh emitter resides at infinity (\(r_{\rm em}\rightarrow\infty\)). The result, for an observer in free fall from rest at infinity (\(E_{\rm ob}=1\)), is the sensation of two independent effective temperatures corresponding to the outgoing (\(\kappa^{+}\)) and ingoing (\(\kappa^{-}\)) Hawking modes (throughout the rest of this paper, \(\pm\) superscripts will always refer to outgoing/ingoing quantities, while \(\pm\) subscripts will always refer to outer/inner horizon quantities): \[\kappa^{+}=\frac{Mr_{\rm ob}\left(1-r_{\rm ob}^{2}/r_{+}^{2}\right)-Q^{2}\left(1-r_{\rm ob}^{3}/r_{+}^{3}\right)}{r_{\rm ob}^{2}\left(-r_{\rm ob}+\sqrt{2Mr_{\rm ob}-Q^{2}}\right)}, \tag{22a}\] \[\kappa^{-}=\frac{Mr_{\rm ob}-Q^{2}}{r_{\rm ob}^{2}\left(r_{\rm ob}+\sqrt{2Mr_{\rm ob}-Q^{2}}\right)}. \tag{22b}\] The rest of this section will be devoted to exploring the implications of Eqs. (22). As a first comment, because of the square root in the denominator, both temperatures become imaginary when the observer is located close enough to the origin, specifically when \(r_{\rm ob}<Q^{2}/(2M)\). However, such values of \(r_{\rm ob}\) are strictly less than the inner horizon radius \(r_{-}\) for all choices of \(Q\), and the failure of Eqs. (22) in this region coincides with the failure of Gullstrand-Painleve coordinates in the same region, indicative of the presence of an unphysical negative interior mass \(M(r)\) (i.e. this is where an infaller would bounce back due to the effects of the repulsive charged singularity on the spacetime) [54]. Since the region below the inner horizon should be physically disregarded, it will not be explored any further here.

Figure 1: Outgoing effective temperature \(\kappa^{+}\) (red curve) and ingoing effective temperature \(\kappa^{-}\) (blue curve) as a function of observer radius \(r_{\rm ob}\) for various choices of the Reissner-Nordström black hole charge \(Q\). Solid curves indicate positive values on the log plot, and dashed curves indicate negative values. The inner and outer horizons are shown with gray, dotted vertical lines, and the unphysical region below the inner horizon is grayed out.
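Equations (22) are easy to transcribe and probe numerically. The following sketch (ours, in units with \(M=1\)) evaluates them and checks the limiting values quoted below in Eqs. (23) and (24):

```python
# Transcription of Eqs. (22a)-(22b) with simple checks of their limits.
import numpy as np

def horizons(Q, M=1.0):
    """Outer and inner horizon radii, Eq. (19)."""
    s = np.sqrt(M**2 - Q**2)
    return M + s, M - s

def kappa_plus(r, Q, M=1.0):
    """Outgoing effective temperature, Eq. (22a), for E_ob = 1."""
    rp, _ = horizons(Q, M)
    num = M * r * (1 - r**2 / rp**2) - Q**2 * (1 - r**3 / rp**3)
    return num / (r**2 * (-r + np.sqrt(2 * M * r - Q**2)))

def kappa_minus(r, Q, M=1.0):
    """Ingoing effective temperature, Eq. (22b), for E_ob = 1."""
    return (M * r - Q**2) / (r**2 * (r + np.sqrt(2 * M * r - Q**2)))

Q = 0.96
rp, rm = horizons(Q)
# Far from the hole, kappa_plus slowly approaches varkappa_+ of Eq. (23):
print(kappa_plus(1e6, Q), (rp - rm) / (2 * rp**2))
# Eq. (22a) is 0/0 exactly at r_+, so probe the event-horizon limit, Eq. (24):
print(kappa_plus(rp * (1 + 1e-6), Q), 2 * (rp - 2 * rm) / (rp * (rp - rm)))
print(kappa_minus(rp, Q), (rp - rm) / (4 * rp**2))
```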
Second, it should be noted that for an observer asymptotically far from the black hole, the above formulas reproduce familiar results: the outgoing sector's temperature asymptotically approaches the black hole's surface gravity \(\varkappa_{+}\) defined by Eq. (15), and the ingoing Hawking sector vanishes: \[\lim_{r_{\rm ob}\rightarrow\infty}\left\{\kappa^{+},\ \kappa^{-}\right\}=\left\{\frac{r_{+}-r_{-}}{2r_{+}^{2}},\ 0\right\}. \tag{23}\] As expected, \(\kappa^{+}\) approaches \(1/(4M)\) in the Schwarzschild \(Q/M=0\) limit and vanishes in the extremal \(Q/M=1\) limit. These limits can be seen in the respective panels of Fig. 1, which shows the full behavior of \(\kappa^{+}(r_{\rm ob})\) and \(\kappa^{-}(r_{\rm ob})\) for different choices of the black hole's charge-to-mass ratio. #### iii.1.1 Negative \(\kappa\) at the event horizon and beyond As an observer freely falling from infinity approaches the Reissner-Nordstrom event horizon and enters the black hole, the effective Hawking temperatures \(\kappa^{+}\) and \(\kappa^{-}\) grow from their initial values at infinity until reaching a maximum value, after which they quickly drop to zero and become negative (excepting the special cases \(Q/M=0,1\)). When the observer crosses the event horizon, the effective temperatures in the outgoing and ingoing sectors are \[\lim_{r_{\rm ob}\rightarrow r_{+}}\left\{\kappa^{+},\ \kappa^{-}\right\}=\left\{\frac{2\left(r_{+}-2r_{-}\right)}{r_{+}(r_{+}-r_{-})},\ \frac{r_{+}-r_{-}}{4r_{+}^{2}}\right\}. \tag{24}\] The most notable feature of Fig. 1 is the fact that \(\kappa^{+}\) and \(\kappa^{-}\) become negative (indicated by the dashed lines) if the observer is close enough to the inner horizon, corresponding to a blueshifting of the observed modes instead of the usual exponential redshifting. The exact regions with negative temperature depend heavily on the charge \(Q\), generally extending farther outward with increasing charge. The ingoing radiation (the blue curve) has negative temperature only below the event horizon, coinciding exactly with the change in sign of the Weyl scalar at \(r=Q^{2}/M\), but curiously enough, the outgoing radiation (red) can have negative temperature even above the event horizon, and in the extremal case, the effective temperature in the entire exterior is negative. How large a charge is necessary for a negative temperature to be detected outside the black hole? From Eq. (24), \(\kappa^{+}\) will be negative above the event horizon if the event horizon is less than double the inner horizon's radius, which occurs when \((Q/M)^{2}>8/9\).
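For completeness, the algebra behind this threshold follows in one line from Eq. (19): \[r_{+}<2r_{-}\;\Longleftrightarrow\;M+\sqrt{M^{2}-Q^{2}}<2M-2\sqrt{M^{2}-Q^{2}}\;\Longleftrightarrow\;3\sqrt{M^{2}-Q^{2}}<M\;\Longleftrightarrow\;\left(\frac{Q}{M}\right)^{2}>\frac{8}{9}.\]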
Similarly, the square of the free-fall temperature obtained by embedding the black hole in a six-dimensional flat spacetime and finding the Unruh temperature of the analogous observer was found to be negative for \((Q/M)^{2}>8/9\) [55], which those authors interpreted as a failure to detect any radiation. Finally, in the 1+1D case, the renormalized expectation values of the temporal and radial components of a scalar field's stress-energy tensor \(\langle T_{\mu}^{\ \nu}\rangle\) become negative at the event horizon in the exact same range [56].

#### iii.1.2 Diverging \(\kappa\) at the inner horizon

Figure 2: Regions of negative temperature in the Reissner-Nordström charge-radius parameter space. The black dotted curve shows the inner and outer horizons, which converge in the extremal limit \(Q/M=1\). The red (blue) curve shows regions where the effective temperature in the outgoing sector \(\kappa^{+}\) (ingoing sector \(\kappa^{-}\)) equals zero, and the red (blue) hatched shading shows regions where the effective temperature \(\kappa^{+}\) (\(\kappa^{-}\)) is negative. The red dot marks the charge \(Q/M=\sqrt{8/9}\) above which the effective temperature \(\kappa^{+}\) becomes negative outside the event horizon. As in Fig. 1, the unphysical region below the inner horizon is shaded out gray.

Now, consider the effective temperature seen when the observer reaches the inner horizon. As can be seen from Figs. 1 and 2, both the outgoing and ingoing effective temperatures \(\kappa^{+}\) and \(\kappa^{-}\) are always nonpositive at the inner horizon.5 The effective temperature \(\kappa^{-}\) for the ingoing sector remains finite for all nonzero values of \(Q\), but the outgoing temperature \(\kappa^{+}\) always diverges at the inner horizon. Defining a new coordinate \(z_{\rm ob}\equiv(r_{\rm ob}-r_{-})/(r_{+}-r_{-})\) representing the observer's dimensionless distance above the inner horizon, in the limit of small \(z_{\rm ob}\ll 1\), one has (to leading order in \(z_{\rm ob}\)): \[\lim_{z_{\rm ob}\to 0\atop E_{\rm ob}\to 1}\left\{\kappa^{+},\ \kappa^{-}\right\}=\left\{-{r_{+}^{2}+r_{-}^{2}\over r_{+}^{2}(r_{+}-r_{-})z_{\rm ob}},\ -{r_{+}-r_{-}\over 4r_{-}^{2}}\right\}. \tag{25}\]

Footnote 5: Here we treat values of \(\kappa\to\pm\infty\) as equivalent to maintain consistency with the standard entropic definition of temperature, where both coincide with zero inverse thermodynamic temperature \(\beta\).

From Eq. (25), one can see that the perceived temperature from outgoing radiation at the inner horizon (when the observer looks straight down at the past horizon) quickly approaches negative infinity, while the practically irrelevant perceived temperature from ingoing radiation (when the observer looks up at the sky above) equals half the inner horizon's surface gravity \(\varkappa_{-}\) of Eq. (15). Note that the above analysis applies only to an ingoing observer, who must pass through the left leg of the inner horizon (labeled \({\cal H}_{r_{-}}^{-}\) in Fig. 6). In order to reach the right leg of the inner horizon, an infalling observer must accelerate outward until they acquire negative energy as measured by another observer at infinity. For an observer with specific energy \(E_{\rm ob}=-1\) (who can exist only inside the event horizon, where the Killing time coordinate \(t\) is spacelike), the only change to Eqs. (22) that is needed is to swap their denominators.
With this change, the resulting effective temperatures for an outgoing observer at the inner horizon are: \[\lim_{z_{\rm ob}\to 0\atop E_{\rm ob}\to-1}\left\{\kappa^{+},\ \kappa^{-}\right\}=\left\{-{r_{+}-r_{-}\over 4r_{-}^{2}}\left(1-{r_{+}-r_{-}\over r_{+}^{2}}\right),\ -{1\over(r_{+}-r_{-})z_{\rm ob}}\right\}. \tag{26}\] Both effective temperatures are still negative. The main change to be noticed when traveling through the right portion of the inner horizon instead of the left portion is that the ingoing effective temperature \(\kappa^{-}\) seen from the sky above diverges instead of the outgoing temperature seen from the past horizon below. This divergence of \(\kappa^{-}\) is consistent with the inner horizon blueshift divergence first noted by Penrose [9]. In contrast, the outgoing effective temperature \(\kappa^{+}\) remains finite for large \(Q\), vanishing as \(Q/M\to 1\), though as \(Q/M\to 0\), \(\kappa^{+}\) diverges (just as \(\kappa^{-}\) does in the case of an ingoing observer at the inner horizon). One final special case is an observer with zero energy, who passes through the intersection of the left and right legs of the inner horizon (the uppermost point in Fig. 6). At this special location, the ingoing and outgoing effective temperatures both diverge: \[\lim_{z_{\rm ob}\to 0\atop E_{\rm ob}\to 0}\left\{\kappa^{+},\ \kappa^{-}\right\}=\left\{-{r_{+}^{2}+r_{-}^{2}\over 2r_{+}^{2}r_{-}\sqrt{z_{\rm ob}}},\ -{1\over 2r_{-}\sqrt{z_{\rm ob}}}\right\}. \tag{27}\] Thus, no matter what portion of the inner horizon the observer reaches, at least one of the Hawking sectors will always feature a divergent, negative temperature. Divergent semi-classical behavior at the Reissner-Nordstrom inner horizon is already well-anticipated in the literature. As early as 1980, it was argued that the renormalized expectation value of the stress-energy tensor in regular coordinates must diverge on at least one of the two legs of the inner horizon [57]. More recently, the renormalized stress-energy tensor in the Unruh state was computed explicitly at the inner horizon, and it was found generically to diverge [36]. There are a few differences between that study's results and the results found here; namely, the sign of \(\langle T_{uu}\rangle_{\rm ren}^{U}\) and \(\langle T_{vv}\rangle_{\rm ren}^{U}\) at the inner horizon can be either positive or negative depending on the charge \(Q\) (as opposed to the purely negative \(\kappa^{\pm}\) found here), and those stress-energy tensor fluxes both vanish in the extremal limit (while only \(\kappa^{+}\) vanishes as \(Q/M\to 1\) for outgoing observers) [43]. However, the effective temperature and the renormalized stress-energy tensor should not be expected to agree, since the former describes the perception by an infaller of a spectral distribution while the latter describes the tensorial flux and energy density of that radiation--a perceptual formulation of \(\langle T_{\mu\nu}\rangle\) would depend not only on \(\kappa\) but also on \(\dot{\kappa}\) [58].

#### iii.1.3 Dependence of \(\kappa\) on the observer's energy

Finally, consider how the effective temperatures \(\kappa^{\pm}\) given by Eq. (21) change for arbitrary observer energies. Can an observer eliminate the detection of Hawking radiation, or perhaps even change its sign, simply by Lorentz-boosting to a different frame? The only contribution to the effective temperatures of Eq.
(21) that depends on the observer's specific energy \(E_{\rm ob}\) is the factor \(\omega_{\rm ob}\), the observer-frame frequency. Thus, any Lorentz-boosting effects on the effective temperature seen by a radial observer are solely confined to those caused by a Doppler factor shift. This shift will never change the sign of \(\kappa^{\pm}\) for an observer at a given radius; it will only change the overall magnitude. In particular, as the observer speeds up, in the limit \(E_{\rm ob}\gg 1\) (or \(E_{\rm ob}\ll-1\)), the magnitude of \(\kappa^{+}\) (or \(\kappa^{-}\), respectively) will increase linearly with \(E_{\rm ob}\). Similarly, in the limit \(E_{\rm ob}\ll-1\) (or \(E_{\rm ob}\gg 1\)), the magnitude of \(\kappa^{+}\) (or \(\kappa^{-}\), respectively) will drop reciprocally to zero. Between these two limits, \(\kappa^{\pm}\) varies monotonically with \(E_{\rm ob}\), so even if an interior observer's energy passes through zero, \(\kappa^{\pm}\) will always remain the same sign. The change in sign in the radial effective temperature for an inertial observer is thus purely geometrical in origin. As an observer changes their energy (or even their viewing direction in a given patch of sky, as we shall see in Sec. III.3), they can never fully eliminate the presence of Hawking radiation, and the effective temperature will always change sign once they have entered a region of the spacetime geometry where their local surface gravity [governed by the radial gradient of the black hole's horizon function \(\Delta\), Eq. (15)] exceeds that of the Unruh emitter (or vice versa). This radiation in the radial direction can thus be regarded as "real" in the sense that it behaves in the same Lorentz-covariant way as any classical radiation detected by a free-faller would.

### Adiabaticity

As mentioned in Sec. II.1, the identification of the effective temperature \(\kappa\) with a thermal Hawking flux is strictly only valid in conjunction with the adiabatic condition, that \(\kappa\) must remain approximately constant over enough e-folds of the arriving modes [6; 7]. This condition is quantified by the adiabatic control function \(\epsilon\), which for radial modes in a static, spherically symmetric black hole can be written as \[\epsilon(r_{\rm ob})\equiv\left|\frac{\dot{\kappa}}{\kappa^{2}}\right|=\left|\frac{\dot{r}_{\rm ob}}{\kappa^{2}}\frac{d\kappa}{dr_{\rm ob}}\right|. \tag{28}\] Whenever \(\epsilon\ll 1\), the adiabatic condition is satisfied and a thermal Hawking spectrum is perceived by the observer. The exact analytic form of \(\epsilon(r_{\rm ob})\) for the Reissner-Nordstrom free-faller in the Unruh state is not too illuminating; nonetheless, several key features can be identified. As \(r_{\rm ob}\rightarrow\infty\), the adiabatic control function for the outgoing modes drops to zero (as anticipated to recover Hawking's original thermal calculation), except in the extremal case where \(\kappa\) itself is already zero and \(\epsilon\) therefore diverges. Similar diverging behavior in \(\epsilon\) is observed whenever the effective temperature \(\kappa\) vanishes, as a result of the \(\kappa^{2}\) term in the denominator of Eq. (28), since it is meaningless to define a thermal flux at zero temperature. Based on the above observations, one might expect that \(\epsilon\) would drop to zero whenever \(\kappa\) diverges, as it does for outgoing modes at the inner horizon.
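This expectation is straightforward to probe numerically. The sketch below (illustrative only; it reuses `kappa_radial` and the `numpy` import from the earlier snippet, and specializes to the \(E_{\rm ob}=1\) infall, for which \(\dot{r}_{\rm ob}=-\sqrt{2M/r_{\rm ob}-Q^{2}/r_{\rm ob}^{2}}\)) evaluates Eq. (28) for the outgoing sector by a centered finite difference:

```python
def epsilon_plus(r_ob, M, Q, h=1e-6):
    """Adiabatic control function of Eq. (28) for the outgoing sector,
    for a radial free-faller from rest at infinity (E_ob = 1), whose
    radial velocity is dr/dtau = -sqrt(E^2 - Delta) = -sqrt(2M/r - Q^2/r^2)."""
    kp = lambda r: kappa_radial(r, M, Q)[0]        # kappa+ from Eqs. (22)
    dk_dr = (kp(r_ob + h) - kp(r_ob - h)) / (2*h)  # centered difference
    rdot = -np.sqrt(2*M/r_ob - Q**2/r_ob**2)       # E_ob = 1 infall speed
    return abs(rdot * dk_dr / kp(r_ob)**2)
```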
However, the adiabatic control function at the inner horizon instead passes through a finite, nonzero value, which nonetheless is still usually smaller than unity for outgoing modes. Specifically, for an ingoing observer, \[\lim_{r_{\rm ob}\rightarrow r_{-}}\left\{\epsilon^{+},\ \epsilon^{-}\right\}=\left\{\frac{r_{+}^{2}}{2(2M^{2}-Q^{2})},\ \frac{5Q^{2}+4M\sqrt{M^{2}-Q^{2}}-3M^{2}}{M^{2}-Q^{2}}\right\}. \tag{29}\] This equality technically only holds when \(Q\neq 0\); in the Schwarzschild case, instead of approaching unity, both \(\epsilon^{+}\) and \(\epsilon^{-}\) will asymptotically approach 3 (see the left panel of Fig. 3). But for \(Q>0\), the value of \(\epsilon^{+}\) at the inner horizon is always less than 1, and \(\epsilon^{-}\) is always greater than 1. For large enough charge \(Q\), Eq. (29) thus implies that the outgoing temperature should be approximately thermal for an ingoing observer close enough to the inner horizon. This behavior holds even (and especially) for \(Q/M=1\), where the inflating negative temperature just above the merged horizons occurs in the black hole's exterior. For reference, the behaviors of \(\epsilon^{+}(r_{\rm ob})\) and \(\epsilon^{-}(r_{\rm ob})\) are plotted in Fig. 3 for two of the same values of \(Q\) used in Fig. 1. One may observe that for many choices of \(r_{\rm ob}\), \(\kappa\) behaves adiabatically and the thermal results fall into place. However, for much of the observer's trajectory, \(\epsilon\) far exceeds unity, and deeper analysis is required, as examined in Sec. IV.

### General viewing direction

The results of Secs. III.1 and III.2 apply to a radial infaller observing modes purely in the radial direction. Since the mass inflation instability involves radial focusing of all null geodesics, one may wonder whether the diverging acceleration seen by an infaller is confined to a single radial point on the sky. The goal of this section is to provide a generalization of Eq. (21) to account for photons reaching the observer from any direction. The photon's 4-momentum will now include additional angular terms with the conserved quantity \(b\equiv k_{\theta}/k_{t}\), the photon's impact parameter, which equals 0 for radial trajectories but in general can take any real value up to infinity. To translate \(b\) into a viewing angle on the observer's sky, it suffices to define a single parameter \(\chi\) that measures the angle in the observer's local tetrad frame between the radial direction and the direction the observer is facing. This viewing angle \(\chi\) ranges from 0 degrees (facing radially inward toward the past horizon) to 180 degrees (facing radially outward toward the sky above, at past null infinity). For an observer with specific energy \(E_{\rm ob}\) at radius \(r_{\rm ob}\), the impact parameter \(b\) is related to the viewing angle \(\chi\) by [45] \[b=\left|\frac{r_{\rm ob}\sin\chi}{E_{\rm ob}-\sqrt{E_{\rm ob}^{2}-\Delta(r_{\rm ob})}\cos\chi}\right|. \tag{30}\] The frequency \(\omega\) measured in the frame of an observer (\(\equiv\omega_{\rm ob}\)) or emitter (\(\equiv\omega_{\rm em}\)) with specific energy \(E\), normalized to the frequency \(\omega_{\infty}\) seen at rest at infinity, then generalizes to \[\frac{\omega}{\omega_{\infty}}=\frac{E\pm\sqrt{(E^{2}-\Delta)\left(1-b^{2}\Delta/r^{2}\right)}}{\Delta}, \tag{31}\] where, as in the radial case, the upper (lower) sign applies to outgoing (ingoing) null rays.
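As a small aid to reproducing these expressions, a hedged sketch of Eqs. (30) and (31) follows (the helper names are ours, introduced only for illustration; angles are in radians):

```python
import numpy as np

def impact_parameter(chi, r_ob, E_ob, M, Q):
    """Impact parameter b of Eq. (30) for viewing angle chi (radians)."""
    Delta = 1 - 2*M/r_ob + (Q/r_ob)**2
    return abs(r_ob*np.sin(chi)
               / (E_ob - np.sqrt(E_ob**2 - Delta)*np.cos(chi)))

def freq_ratio(r, E, b, M, Q, outgoing=True):
    """Frame frequency omega/omega_inf of Eq. (31); the upper (lower)
    sign, outgoing=True (False), applies to outgoing (ingoing) rays."""
    Delta = 1 - 2*M/r + (Q/r)**2
    sign = 1.0 if outgoing else -1.0
    return (E + sign*np.sqrt((E**2 - Delta)*(1 - b**2*Delta/r**2))) / Delta
```

Setting \(b=0\) in `freq_ratio` recovers the radial formula of Eq. (20), which serves as a quick sanity check on the implementation.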
The calculation of \(\kappa\) then follows as in the radial case, though great care must be taken to account for turnaround radii and ensure the correct sign for different viewing angles and observer positions. Since the perception of particle production is highly dependent on the choice of observer, one must take care to make an appropriate choice depending on the context of the calculation. For example, an observer staring in a fixed direction \(\chi\) as they fall inward is not the same as an observer staring at a single infalling emitter, whose position will constantly change in the observer's field of view. As argued in Ref. [45], the choice of observer that will introduce the least amount of non-inertial radiative effects (e.g. from the rotation of the observer's frame) and will reveal the most "pure" Hawking radiation is an observer staring in a fixed direction \(\chi\). Such an observer will see a family of infalling emitters as they fall inward, with each emitter connected to the observer by a null path with the same phase. If an observer stares at the sky above (corresponding to the ingoing Hawking sector, with a family of Unruh emitters at \(r_{\rm em}\rightarrow\infty\)), the generalization of Eq. (21) to account for the frequency of Eq. (31) seen from any viewing angle \(\chi\) is sufficient to satisfy the requirement from the previous paragraph of an inertial observer with fixed \(\chi\). However, if the observer stares at the past horizon below them (corresponding to the outgoing Hawking sector, with a family of Unruh emitters at \(r_{\rm em}\rightarrow r_{+}\)), the frequency seen by the emitter or the observer will diverge, as will the affine distance of the null geodesics connecting the two infallers. In order to ensure that the observer is seeing the same emitted in-modes as they follow along a geodesic staring in a fixed direction \(\chi\), the emitted affine distance \[\lambda_{\rm em}\equiv\omega_{\rm em}\lambda=\omega_{\rm em}\int_{r_{\rm em}}^{r_{\rm ob}}\frac{dr}{k^{r}} \tag{32}\] (where \(k^{r}\equiv dr/d\lambda\) is the radial component of the photon's coordinate-frame 4-momentum, given by Eq. (80) of Ref. [45]) must be held constant. The resulting effective temperature then takes the form: \[\kappa =-\frac{\partial}{\partial\tau_{\rm ob}}\ln\left(\frac{\omega_{\rm ob}\lambda}{\lambda_{\rm em}}\right)\Bigg{|}_{\chi,\lambda_{\rm em}}=-\dot{r}_{\rm ob}\left(\frac{\partial\ln\omega_{\rm ob}}{\partial r_{\rm ob}}\bigg{|}_{\chi}+\frac{\partial\ln\lambda}{\partial r_{\rm ob}}\bigg{|}_{\chi}\right)-\dot{r}_{\rm em}\frac{\omega_{\rm ob}}{\omega_{\rm em}}\frac{\partial\ln\lambda}{\partial r_{\rm em}}\Bigg{|}_{\chi}, \tag{33}\] where the derivatives of the affine distance (at constant \(\chi\)) can be expanded with the Leibniz integral rule: \[\frac{\partial\ln\lambda}{\partial r_{\rm ob}}\Bigg{|}_{\chi} =\frac{1}{\lambda}\left(\frac{1}{k_{\rm ob}^{r}}+\frac{\partial b}{\partial r_{\rm ob}}\bigg{|}_{\chi}\int_{r_{\rm em}}^{r_{\rm ob}}\!\!\!\!\!dr\frac{\partial}{\partial b}\frac{1}{k^{r}}\right), \tag{34a}\] \[\frac{\partial\ln\lambda}{\partial r_{\rm em}}\Bigg{|}_{\chi} =-\frac{1}{\lambda k_{\rm em}^{r}}. \tag{34b}\] The numerical solution to Eq. (33) for various values of \(r_{\rm ob}\) and \(Q\) is shown in Fig. 4. These plots show similar trends to those found in Ref. [45] for Schwarzschild black holes.
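Before turning to those trends, it may help to see the affine-distance bookkeeping of Eq. (32) in code. The sketch below is illustrative only: rather than reproducing Eq. (80) of Ref. [45], it uses the standard Reissner-Nordström null-geodesic relation \((k^{r})^{2}=\omega_{\infty}^{2}(1-b^{2}\Delta/r^{2})\), and it deliberately ignores the turning-point and sign care emphasized above (any path containing a radial turning point must be split there):

```python
import numpy as np
from scipy.integrate import quad

def k_r(r, b, M, Q, omega_inf=1.0):
    """Radial photon momentum dr/dlambda for impact parameter b, from the
    null condition (k^r)^2 = omega_inf^2 (1 - b^2 Delta / r^2)."""
    Delta = 1 - 2*M/r + (Q/r)**2
    return omega_inf*np.sqrt(1 - b**2*Delta/r**2)

def affine_distance(r_em, r_ob, b, M, Q):
    """lambda = integral dr / k^r of Eq. (32), assuming no radial turning
    point lies between r_em and r_ob."""
    lam, _ = quad(lambda r: 1.0/k_r(r, b, M, Q), r_em, r_ob)
    return lam
```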
Figure 3: Outgoing and ingoing adiabatic control functions \(\epsilon^{+}\) (red curve) and \(\epsilon^{-}\) (blue curve), respectively, as a function of an ingoing observer's position \(r_{\rm ob}\) for two choices of the Reissner-Nordström black hole charge \(Q\). As in Fig. 1, the inner and outer horizons are shown with gray, dotted vertical lines, and the unphysical region below the inner horizon is grayed out.

First, the outgoing Hawking radiation seen from the past horizon (left two panels) is actually weakest in the radial direction (except when the observer is very close to the inner horizon). As \(\chi\) increases from \(0^{\circ}\) and the observer looks farther away from the center of the black hole's shadow marking where the past horizon would be, \(\kappa^{+}\) increases until it diverges at the edge of the shadow.6 As the observer falls closer and closer to the inner horizon, the area of sky across which Hawking radiation is visible becomes larger (in conjunction with the growing apparent size of the black hole's shadow), and the Hawking radiation becomes more and more isotropic across the surface of the shadow. But once the observer falls close enough to the inner horizon, the apparent black hole size begins to decrease as the Hawking area shrinks to a small patch of sky ahead of the observer (this effect is most apparent in the lower left panel of Fig. 4, but even in the upper left panel, additional curves for smaller radii \(r_{\rm ob}\) would begin to shrink as the maximum angle \(\chi\) shifts down to \(0^{\circ}\) as \(r\to r_{-}\)).

Figure 4: Effective temperatures \(\kappa^{+}\) (left two panels) and \(\kappa^{-}\) (right two panels) seen by a radial, inertial, non-rotating observer falling from infinity to the left leg of the inner horizon, as a function of the observer's viewing angle \(\chi\) on the sky. Curves from green to magenta indicate radiation observed at radii \(r_{\rm ob}\to\infty\), \(8r_{+}\), \(4r_{+}\), \(2r_{+}\), \(r_{+}\) (thick line), \(r_{-}+0.5(r_{+}-r_{-})\), \(r_{-}+0.25(r_{+}-r_{-})\), \(r_{-}+10^{-1}(r_{+}-r_{-})\), and \(r_{-}+10^{-3}(r_{+}-r_{-})\). All curves are normalized so that the magnitude of \(\kappa^{+}\) or \(\kappa^{-}\) for a given radius when looking, respectively, straight down (\(\chi=0^{\circ}\)) or up (\(\chi=180^{\circ}\)), is 1. Solid curves indicate positive values on the log plot, and dashed curves indicate negative values.

When the black hole's charge \(Q\) is nonzero, the main effect on the outgoing effective temperature at arbitrary viewing angle is the same result found in Sec. III.1; namely, an observer close enough to the inner horizon will see a negative \(\kappa^{+}\), corresponding to modes that are exponentially blueshifting instead of redshifting. The higher the charge \(Q\), the farther out in physical space this blueshifting zone becomes, until it extends beyond the outer horizon and reaches infinity in the extremal case. Similarly, ingoing Hawking radiation seen from an observer looking up at the sky above (right two panels of Fig. 4) reproduces the same behavior found in Ref. [45] for the Schwarzschild case, with minimal modifications when \(Q\) is nonzero.
The rate of redshifting is strongest when the observer looks straight up to the sky (in the outward radial direction, \(\chi=180^{\circ}\)), and \(\kappa^{-}\) changes sign at \(90^{\circ}\), reflecting the fact that the infaller is accelerating away from the sky above (so that the upper hemisphere is redshifting) and accelerating towards the black hole below (so that the lower hemisphere is blueshifting). However, as with the outgoing effective temperature, the ingoing effective temperature changes sign once the observer falls close enough to the inner horizon [seen, e.g., with the dashed pink line at \(r_{\rm ob}=r_{-}+10^{-3}(r_{+}-r_{-})\) on the right half of the top right panel of Fig. 4], so that the upper hemisphere is blueshifting and the lower hemisphere is redshifting. But unlike the outgoing radiation, the sign change in the ingoing effective temperature is restricted only to infallers within the event horizon, regardless of the value of \(Q\). Aside from the sign reversal in every direction for observers close enough to the inner horizon, the main contribution that an addition of charge has on the angular distribution of Hawking radiation (for both \(\kappa^{+}\) and \(\kappa^{-}\)) is to smooth out the perceived temperature gradients across the sky--the higher the charge \(Q\), the less sharp the temperature cutoff is at the black hole shadow's boundaries, and therefore the less isotropic the temperature is across the observer's field of view for a given distance above the inner horizon. #### iii.2.1 Dependence on the observer's energy The dependence of the ingoing and outgoing effective temperatures \(\kappa^{-}\) and \(\kappa^{+}\) on the observer's specific energy \(E_{\rm ob}\) is shown in the upper two plots of Fig. 5. These plots only show one choice of black hole charge (\(Q/M=0.1\)) and observer position (\(r_{\rm ob}/M=1\)) so that the relevant qualitative trends can be observed. As a check on the consistency of the upper two plots in Fig. 5, one can find that the presence or absence of different constant-\(\chi\) curves at different observer energies exactly matches the position of the black hole silhouette in the observer's field of view. For example, for a black hole with \(Q/M=0.1\), an observer at \(r_{\rm ob}/M=1\) with \(E_{\rm ob}=1\) will see the past horizon below them (the black hole's "shadow") spanning from \(\chi=0^{\circ}\) to its border at approximately \(\chi\approx 53.2^{\circ}\), and in both upper plots at \(E_{\rm ob}=1\), the radiation \(\kappa^{-}\) from the sky exists only for \(\chi>53.2^{\circ}\) while the radiation \(\kappa^{+}\) from the horizon exists only for \(\chi<53.2^{\circ}\). This holds true for all observer energies--as an observer is Lorentz-boosted to \(E_{\rm ob}\to\infty\), the past horizon shrinks to a single point below them, and as they are boosted in the other direction (\(E_{\rm ob}\to-\infty\)), the sky shrinks to a single point above them. The lower two plots of Fig. 5 give a further check on the consistency of the formalism and help to show the degree to which the effective temperatures satisfy Lorentz covariance. As the observer's energy \(E_{\rm ob}\) changes, the observer is effectively Lorentz-boosting to a different frame, even though no restriction was imposed _a priori_ for the effective temperature to transform under the Lorentz group. As a test, the lower two plots of Fig. 5 start with the same calculations of \(\kappa^{+}\) and \(\kappa^{-}\) at \(E_{\rm ob}=0\), but instead of varying \(E_{\rm ob}\) in Eq. 
(33) to find the effective temperature at other observer energies, a Lorentz boost is applied to the observer and matched to the different energies. When beginning in the \(E_{\rm ob}=0\) frame, an interior observer boosted to a frame where they have energy \(E^{\prime}_{\rm ob}\) will possess the Lorentz factor \[\gamma=\sqrt{\frac{E^{\prime 2}_{\rm ob}-\Delta}{-\Delta}}. \tag{35}\] Such a boost will entail two important effects. First, the effective temperature will be Doppler-shifted by the frequency factor \(\omega_{\rm ob}\) from Eq. (31), normalized to the frequency seen in the \(E_{\rm ob}=0\) frame. And secondly, the observer's field of view will experience relativistic aberration, such that photons arriving at an angle \(\chi\) for the \(E_{\rm ob}=0\) observer will be shifted to the angle \[\chi^{\prime}=\cos^{-1}\left(\frac{\cos\chi+\beta}{1+\beta\cos\chi}\right) \tag{36}\] in the boosted frame (where \(\beta=\sqrt{1-\gamma^{-2}}\) is the observer's speed). If the Hawking radiation seen by the observer behaved purely classically and in a Lorentz-covariant fashion, the upper two plots of Fig. 5 would exactly match their lower counterparts. As anticipated by the radial case (see Sec. III.1.3), the Hawking radiation seen from the sky above (\(\kappa^{-}\), upper right plot in Fig. 5) in every direction varies reciprocally with \(E_{\rm ob}\) as the observer's energy asymptotically increases and varies approximately linearly as \(E_{\rm ob}\to-\infty\). Such a behavior is similar to what is expected for a Lorentz-boosted observer as in the lower right plot of Fig. 5. And just as in the radial case, changing the observer's specific energy \(E_{\rm ob}\) for fixed \(\chi\) will never change the sign of \(\kappa^{-}\). The ingoing effective temperature is always zero when \(\chi=90^{\circ}\), always positive (with this specific choice of observer halfway between the outer and inner horizons) for larger \(\chi\), and always negative for smaller \(\chi\). Such a delineation can be noticed in the upper right plot of Fig. 5 from the fact that the \(\chi<90^{\circ}\) curves (blue) are always negative (dashed), while the \(\chi>90^{\circ}\) curves (orange) are always positive (solid). This behavior is a consequence of forcing the observer to stare in a fixed direction; such an infaller will classically always see null geodesics from infinity blueshifting below them (when \(\chi<90^{\circ}\)) and redshifting above them as they decrease their radius. What about the outgoing Hawking radiation from the horizon? As shown in the upper left plot of Fig. 5, an interior observer can change the sign of \(\kappa^{+}\) by changing their energy \(E_{\rm ob}\) enough. When \(E_{\rm ob}=1\), the results of the upper left panel of Fig. 4 are reproduced; namely, a positive-temperature horizon is seen with brighter radiation at the edges (i.e. larger \(\kappa^{+}\) for larger \(\chi\)). However, as the observer boosts to smaller and smaller energies, the temperature at the ever-growing edge of the horizon begins to decrease until it drops below zero. The negative-temperature outer portion of the black hole's shadow then begins to grow inward until the entire horizon has a negative temperature, once again with the largest magnitude at the edges.
Figure 5: Effective temperatures \(\kappa^{+}\) (left plots) and \(\kappa^{-}\) (right plots) in units of \(M^{-1}\) as a function of the observer's specific energy \(E_{\rm ob}\), for various choices of the observer's viewing direction, with intervals of \(15^{\circ}\) from \(\chi=0^{\circ}\) (blue) to \(\chi=180^{\circ}\) (orange) (note that the left plots contain no \(\chi=180^{\circ}\) curves and the right plots contain no \(\chi=0^{\circ}\) curves). Solid curves indicate positive values and dashed curves indicate negative values. The black hole's charge-to-mass ratio is \(Q/M=0.1\), and the radiation is seen from an observer halfway between the inner and outer horizons, at \(r_{\rm ob}/M=1\). The upper two plots show the effective temperatures calculated from Eq. (33) directly as a function of \(E_{\rm ob}\), while the lower two plots calculate the effective temperatures only for \(E_{\rm ob}=0\) and infer the effective temperatures at other observer energies by Lorentz-boosting to the appropriate frame.

Though only one specific case is shown, an outgoing (i.e. negative-energy) observer in a black hole's interior will always see a completely negative-temperature horizon below them. One way that the upper left plot of Fig. 5 differs from the results of Sec. III.1.3 (and from the lower left plot of Fig. 5) is that \(\kappa^{+}\) diverges linearly as \(E_{\rm ob}\rightarrow-\infty\) instead of dropping to zero. As a reminder, the difference in the calculation done here versus that of previous sections is that here the affine distance is kept constant so that the family of emitters seen by the observer will always have the same phase, since the emitted wave's frequency appears to diverge as the emitter is taken to the horizon. Evidently such a restriction has a big impact not just in the evaluation of the horizon temperature for nonzero \(\chi\), but also for the evaluation of the horizon temperature for negative observer energies, even when \(\chi=0^{\circ}\). Finally, let us briefly give special attention to the case of an interior observer with \(E_{\rm ob}=0\). Classically, such an observer will begin at the event horizon seeing nothing but the past horizon in all directions, excepting a vanishingly small patch of sky directly above them at \(\chi=180^{\circ}\). Then, as they fall inwards, the sky above them will grow until it almost takes up a full hemisphere of the observer's field of view, after which the sky will quickly collapse back to a single point as the horizon grows. Semi-classically, in Sec. III.1.2 it was argued that the Hawking radiation in the \(E_{\rm ob}=0\) frame diverges as \(z_{\rm ob}^{-1/2}\) as an observer approaches the inner horizon looking both up (\(\chi=180^{\circ}\)) and down (\(\chi=0^{\circ}\)). What happens in other directions? When \(E_{\rm ob}=0\), the effective temperature from the sky above becomes isotropic and simplifies considerably: \[\lim_{E_{\rm ob}\to 0}\kappa^{-}(\chi)=\frac{1}{2\sqrt{-\Delta}}\frac{d\Delta}{dr_{\rm ob}}. \tag{37}\] This radiation extends across the entire sky visible to the observer, from \(\chi=180^{\circ}\) to the edge of the black hole shadow at \[\cos\chi=-\left[1-\frac{\Delta(r)}{r^{2}}\frac{r_{c}^{2}}{\Delta(r_{c})}\right]^{-1/2}, \tag{38}\] where \(r_{c}\equiv\frac{3M}{2}\left(1+\sqrt{1-\frac{8Q^{2}}{9M^{2}}}\right)\) is the critical radius of the photon sphere. This isotropy can be seen by the convergence of all the curves in the right plots of Fig. 5 as \(E_{\rm ob}\to 0\).
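Both closed-form expressions are easy to script. The following hedged sketch (the helper names are ours, introduced only for illustration, with \(r_{c}\) as defined above) evaluates Eq. (37) and the shadow-edge angle of Eq. (38) for an interior observer:

```python
import numpy as np

def kappa_minus_E0(r_ob, M, Q):
    """Isotropic ingoing effective temperature of Eq. (37) in the
    E_ob -> 0 limit; meaningful only inside the hole, where Delta < 0."""
    Delta = 1 - 2*M/r_ob + (Q/r_ob)**2
    dDelta_dr = 2*M/r_ob**2 - 2*Q**2/r_ob**3
    return dDelta_dr / (2*np.sqrt(-Delta))

def shadow_edge_E0(r_ob, M, Q):
    """Viewing angle chi (degrees) of the shadow edge, from Eq. (38)."""
    D = lambda r: 1 - 2*M/r + (Q/r)**2
    rc = 1.5*M*(1 + np.sqrt(1 - 8*Q**2/(9*M**2)))  # photon-sphere radius
    return np.degrees(np.arccos(
        -(1 - (D(r_ob)/r_ob**2)*(rc**2/D(rc)))**(-0.5)))
```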
The effective temperature \(\kappa^{+}\) from the horizon does not take on a simple analytic form like \(\kappa^{-}\) does, but its dependence on \(\chi\) for an observer with \(r_{\rm ob}/M=1\) can be ascertained from the left plots of Fig. 5. For various charges \(Q\) and observer positions \(r_{\rm ob}\), the effective temperature is usually negative in all directions, with the smallest magnitude for \(\kappa^{+}\) occurring when looking straight downward (\(\chi=0^{\circ}\)). Notably, as the observer reaches the inner horizon, while the temperature \(\kappa^{-}\) from Eq. (37) diverges isotropically as \((-\Delta)^{-1/2}\) (and therefore as \(z_{\rm ob}^{-1/2}\)), the temperature \(\kappa^{+}\) from the horizon also diverges as \((-\Delta)^{-1/2}\), with an even stronger divergence when \(\chi>0^{\circ}\).

## IV Bogoliubov spectrum

Since a variety of choices for the observer position \(r_{\rm ob}\) and black hole charge \(Q\) lead to a non-adiabatic effective temperature function, one may wonder how much trust can be placed on the physical validity of the results of Sec. III. As has been argued, even if the Hawking spectrum is non-thermal, there should in general still be particle production whenever \(\kappa\) is nonzero. To verify this claim, here we will perform a full wave mode analysis to find the particle spectrum seen by an infaller in the locations where the Klein-Gordon equation simplifies enough for such a calculation to be performed.

### Derivation

Consider the Bogoliubov coefficients between the vacuum state of an Unruh emitter and that of a freely falling observer in a Reissner-Nordstrom spacetime. In any spacetime with metric \(g_{\mu\nu}\), a canonically quantized, massless scalar field \(\Phi\) will satisfy the Klein-Gordon wave equation \[\frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^{\mu}}\left(\sqrt{-g}\ g^{\mu\nu}\frac{\partial\Phi}{\partial x^{\nu}}\right)=0. \tag{39}\] Motivated by the spacetime's symmetries, we choose to decompose this field \(\Phi\) into a complete set of modes \(\phi_{\omega\ell m}\), each accompanied by creation and annihilation operators \(a^{\dagger}\) and \(a\): \[\Phi=\int_{0}^{\infty}d\omega\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\left(\phi_{\omega\ell m}a_{\omega\ell m}+\phi_{\omega\ell m}^{*}a_{\omega\ell m}^{\dagger}\right). \tag{40}\] If these modes are separated as \[\phi_{\omega\ell m}=\frac{f_{\omega\ell}(t,r)Y_{\ell m}(\theta,\varphi)}{r\sqrt{4\pi\omega}}, \tag{41}\] then Eq. (39) implies that \(Y_{\ell m}\) will satisfy the spherical harmonic equation, while \(f_{\omega\ell}\) must satisfy \[\frac{\partial^{2}f_{\omega\ell}}{\partial r^{*2}}-\frac{\partial^{2}f_{\omega\ell}}{\partial t^{2}}=\Delta\left[\frac{\ell(\ell+1)}{r^{2}}+\frac{1}{r}\frac{d\Delta}{dr}\right]f_{\omega\ell}. \tag{42}\] The annihilation operators \(a_{\omega\ell m}\) of Eq. (40) define the vacuum state of the observer: \[a_{\rm ob}|0_{\rm ob}\rangle=0 \tag{43}\] (for convenience, the mode indices \(\omega\), \(\ell\), and \(m\) will hereafter be suppressed as needed). However, \(\Phi\) could just as easily be decomposed into any other complete set of modes \(\bar{\omega}\), \(\bar{\ell}\), and \(\bar{m}\), so a similar decomposition can be used to define an emitter's vacuum state as \[a_{\rm em}|0_{\rm em}\rangle=0.
\tag{44}\] The two vacuum states are related by a Bogoliubov transformation through the coefficients \(\alpha^{\omega\ell m}_{\bar{\omega}\bar{\ell}\bar{m}}\) and \(\beta^{\omega\ell m}_{\bar{\omega}\bar{\ell}\bar{m}}\) (and note that there should properly be a sum of two integrals for the emitter's ingoing and outgoing states, which are omitted here for simplicity): \[a_{\rm ob}=\int_{0}^{\infty}d\bar{\omega}\sum_{\bar{\ell}=0}^{\infty}\sum_{\bar{m}=-\bar{\ell}}^{\bar{\ell}}\left(\alpha\ a_{\rm em}+\beta^{*}a_{\rm em}^{\dagger}\right). \tag{45}\] It is then straightforward to show [59] that the vacuum expectation value of the observer's number operator in the emitter's vacuum state is related to the Bogoliubov coefficient \(\beta\): \[\langle 0_{\rm em}|a_{\rm ob}^{\dagger}a_{\rm ob}|0_{\rm em}\rangle=\int_{0}^{\infty}d\bar{\omega}\sum_{\bar{\ell}=0}^{\infty}\sum_{\bar{m}=-\bar{\ell}}^{\bar{\ell}}\left|\beta\right|^{2}=\int_{0}^{\infty}d\bar{\omega}\sum_{\bar{\ell}=0}^{\infty}\sum_{\bar{m}=-\bar{\ell}}^{\bar{\ell}}\left|\langle\phi_{\rm em}|\phi_{\rm ob}^{*}\rangle\right|^{2}, \tag{46}\] where bra-ket notation denotes the Lorentz-invariant Klein-Gordon inner product, which consists of a 3D integral over an arbitrary spacelike Cauchy hypersurface \(\Sigma\) that terminates at spacelike infinity and is orthogonal to a future-directed unit vector \(n^{\mu}\): \[\langle\phi_{1}|\phi_{2}\rangle\equiv-i\int_{\Sigma}d\Sigma\ n^{\mu}\sqrt{-g_{\Sigma}}\ \phi_{1}\stackrel{{\leftrightarrow}}{{\partial}}_{\mu}\phi_{2}^{*}. \tag{47}\] To determine the expected particle number seen by an observer, one thus needs only to specify the observer's and emitter's modes (usually via a set of boundary conditions), propagated through the spacetime via the wave equation so that they coincide on some Cauchy hypersurface. The Unruh emitter's ingoing (\(-\)) and outgoing (\(+\)) modes are defined with the following boundary conditions at past null infinity \(\mathscr{I}^{-}\) and the past horizon \(\mathcal{H}^{+}_{r_{+}}\equiv{}^{\rm int}\mathcal{H}^{+}_{r_{+}}\cup{}^{\rm ext}\mathcal{H}^{+}_{r_{+}}\) [here \(f\) is defined as in Eq. (41), with \(\omega\ell\) indices dropped for convenience]: \[f^{+}_{\rm em}\rightarrow\begin{cases}0,&\mathscr{I}^{-}\\ {\rm e}^{-i\omega U},&\mathcal{H}^{+}_{r_{+}}\end{cases}, \tag{48}\] \[f^{-}_{\rm em}\rightarrow\begin{cases}{\rm e}^{-i\omega(t+r^{*})},&\mathscr{I}^{-}\\ 0,&\mathcal{H}^{+}_{r_{+}}\end{cases}, \tag{49}\] where \(U\) is the outgoing Kruskal-Szekeres coordinate, defined in terms of the event horizon's surface gravity \(\varkappa_{+}\) from Eq. (15) by \[U\equiv\begin{cases}-{\rm e}^{-\varkappa_{+}(t-r^{*})}/\varkappa_{+},&r_{+}\leq r<\infty\\ +{\rm e}^{-\varkappa_{+}(t-r^{*})}/\varkappa_{+},&r_{-}\leq r<r_{+}\end{cases}. \tag{50}\] The relevant surfaces to which these boundary conditions correspond are shown schematically with dotted arrows in the Penrose diagram of Fig. 6. Note that, as shown in the diagram, the outgoing modes can be further split into a pair of substates via \(f^{+}\equiv\left({}^{\rm int}f^{+}\right)\cup\left({}^{\rm ext}f^{+}\right)\), each of whose boundary conditions are zero except on their respective null surfaces. As argued in Sec. II.2, the modes of Eqs. (48) and (49) are precisely those which are positive frequency with respect to the proper time of a freely falling observer skimming asymptotically close to those surfaces. The modes \(f^{\pm}_{\rm em}\) can then be extended to the entire spacetime by solving the wave Eq. (42).
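To make this propagation step concrete, the following sketch (a minimal illustration, not the code used for this work; it takes the plain numerical ODE route rather than the confluent Heun route discussed in the Appendix) integrates the time-separated form of Eq. (42), \(\chi_{,r^{*}r^{*}}=(V-\omega^{2})\chi\) with \(V=\Delta[\ell(\ell+1)/r^{2}+(1/r)\,d\Delta/dr]\), as a first-order system in \(r\) using \(d/dr^{*}=\Delta\,d/dr\). It starts just outside \(r_{+}\), where \(\Delta\to 0\) and the potential vanishes, so the mode there is approximately a unit-amplitude \(\mathrm{e}^{i\omega r^{*}}\) (phase referenced to the starting point):

```python
import numpy as np
from scipy.integrate import solve_ivp

def propagate_exterior(omega, ell, M, Q, r_far=2000.0, eps=1e-4):
    """Integrate chi'' (in r*) = (V - omega^2) chi from near r_+ out to
    r_far.  State vector: y = (chi, dchi/dr*).  Complex y0 is supported
    by scipy's Runge-Kutta integrators."""
    rp = M + np.sqrt(M**2 - Q**2)
    Delta = lambda r: 1 - 2*M/r + (Q/r)**2
    dDelta = lambda r: 2*M/r**2 - 2*Q**2/r**3

    def rhs(r, y):
        chi, dchi = y                                   # dchi = dchi/dr*
        V = Delta(r)*(ell*(ell + 1)/r**2 + dDelta(r)/r)
        return [dchi/Delta(r), (V - omega**2)*chi/Delta(r)]

    y0 = [1.0 + 0.0j, 1j*omega]      # unit outgoing mode near the horizon
    sol = solve_ivp(rhs, [rp*(1 + eps), r_far], y0, rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]   # chi and dchi/dr* at r_far
```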
Similarly, the observer's ingoing (\(-\)) and outgoing (\(+\)) modes can be defined via boundary conditions, in this case on the future null hypersurfaces. At future null infinity, the outgoing modes are positive frequency with respect to the outgoing Eddington-Finkelstein coordinate \(u\equiv t-r^{*}\), since an observer asymptotically close to that surface will define positive frequency with respect to that coordinate (as argued in Sec. II.2). The natural question is then how this vacuum state should be extended to the interior of the black hole. In studies of analogous acoustic black hole systems [60; 61], these interior modes are also defined with respect to the Eddington-Finkelstein coordinates, in part because the inner horizon of those systems is mimicked by a physically infinite asymptotic regime.

Figure 6: Penrose diagram showing the various boundaries for a Reissner-Nordström black hole on which modes are defined with nonzero values. Past (future) null infinity is labeled \(\mathscr{I}^{-}\) (\(\mathscr{I}^{+}\)), the outer (inner) horizons are labeled \(\mathcal{H}_{r_{+}}\) (\(\mathcal{H}_{r_{-}}\)), and the superscripts \(+\) (\(-\)) everywhere indicate whether modes traveling across a surface are outgoing (ingoing). The boundary conditions for the emitter's (observer's) modes at the locations of the dotted (solid) lines can then be propagated (back-propagated) numerically using the wave equation to define the modes throughout the entire spacetime.

For the Reissner-Nordstrom spacetime, an infaller will not reach an asymptotically steady state at the inner horizon; however, they will approach an asymptotic regime (albeit a transient one) where \(\Delta\to 0\) and the scattering potential of Eq. (42) vanishes. In the regime where this potential vanishes, as shown in Sec. II.2, freely falling observers experience a proper time proportional to the Eddington-Finkelstein coordinates: \[f^{+}_{\mathrm{ob}}\to\begin{cases}\mathrm{e}^{-i\omega(t-r^{*})},&\mathscr{I}^{+}\\ \mathrm{e}^{-i\omega(r^{*}-t)},&\mathcal{H}^{+}_{r_{-}}\\ 0,&\mathcal{H}^{-}_{r_{-}}\end{cases}, \tag{51}\] \[f^{-}_{\mathrm{ob}}\to\begin{cases}0,&\mathscr{I}^{+}\cup\mathcal{H}^{+}_{r_{-}}\\ \mathrm{e}^{-i\omega(r^{*}+t)},&\mathcal{H}^{-}_{r_{-}}\end{cases}. \tag{52}\] These modes are shown with solid arrows in Fig. 6. They represent the experience of any inertial observer with arbitrary energy \(E_{\mathrm{ob}}\) (up to a rescaling of the frequency \(\omega\)); without loss of generality, an observer with \(E_{\mathrm{ob}}=1\) is chosen for the left portion of the inner horizon in Fig. 6 (\(\mathcal{H}^{-}_{r_{-}}\)), while an observer with \(E_{\mathrm{ob}}=-1\) is chosen for the right portion (\(\mathcal{H}^{+}_{r_{-}}\)). Also, note that if the observer is placed at the outer horizon instead of the inner horizon, a similar complete set of modes can be defined _mutatis mutandis_. In what follows, we will present the results for both sets of modes simultaneously, though we will only closely follow the steps of analysis for the inner horizon observers' set of modes. Equipped with a complete set of modes for an Unruh emitter and an inertial observer, one may now proceed to calculate the expectation value of the particle number operator seen by the observer in the emitter's vacuum state via Eq. (46).
To do so, consider what will subsequently be referred to as the past null Cauchy hypersurface, consisting of the union of past null infinity with the exterior and interior past horizons (\(\mathscr{I}^{-}\cup\mathcal{H}^{+}_{r_{+}}\); see Fig. 6). On this surface, the emitter's modes are given by Eqs. (48) and (49), while the observer's modes can be found with scattering theory, as described below. Since the \(t\) coordinate used to define the observer's modes defines a global timelike Killing vector for the spacetime, the field's modes \(f_{\omega\ell}\) can be separated as \[f_{\omega\ell}(t,r^{*})\equiv\chi_{\omega\ell}(r^{*})~{}\mathrm{e}^{\pm i\omega t}. \tag{53}\] This separation puts the Klein-Gordon wave Eq. (42) into the form of a 1D scattering equation in \(r^{*}\). In the limits as \(\Delta\) approaches both \(0\) and \(1\), the scattering potential of Eq. (42) vanishes, leading to asymptotic eigenmode solutions of the form \(\exp(\pm i\omega r^{*})\). As such, the observer's modes \(\chi^{\pm}_{\mathrm{ob}}\) can be back-propagated to the past null Cauchy hypersurface--altogether, for an observer at future null infinity one has \[{}^{\mathrm{ext}}f^{+}_{\mathrm{ob}}\to\mathrm{e}^{-i\omega t}\begin{cases}\mathrm{e}^{i\omega r^{*}}+\mathcal{R}^{+}_{\mathrm{ext}}\mathrm{e}^{-i\omega r^{*}},&r^{*}_{\mathrm{ext}}\to\infty\\ \mathcal{T}^{+}_{\mathrm{ext}}\mathrm{e}^{i\omega r^{*}},&r^{*}_{\mathrm{ext}}\to-\infty\end{cases}, \tag{54}\] for an outgoing observer at the inner horizon, \[{}^{\mathrm{int}}f^{+}_{\mathrm{ob}}\to\mathrm{e}^{i\omega t}\begin{cases}\mathrm{e}^{-i\omega r^{*}},&r^{*}_{\mathrm{int}}\to\infty\\ \mathcal{T}^{+}_{\mathrm{int}}\mathrm{e}^{-i\omega r^{*}}+\mathcal{R}^{+}_{\mathrm{int}}\mathrm{e}^{i\omega r^{*}},&r^{*}_{\mathrm{int}}\to-\infty\\ \mathcal{R}^{+}_{\mathrm{int}}\mathcal{T}^{+}_{\mathrm{ext}}\mathrm{e}^{i\omega r^{*}},&r^{*}_{\mathrm{ext}}\to\infty\end{cases}, \tag{55}\] and for an ingoing observer at the inner horizon, \[{}^{\mathrm{int}}f^{-}_{\mathrm{ob}}\to\mathrm{e}^{-i\omega t}\begin{cases}\mathrm{e}^{-i\omega r^{*}},&r^{*}_{\mathrm{int}}\to\infty\\ \mathcal{T}^{-}_{\mathrm{int}}\mathrm{e}^{-i\omega r^{*}}+\mathcal{R}^{-}_{\mathrm{int}}\mathrm{e}^{i\omega r^{*}},&r^{*}_{\mathrm{int}}\to-\infty\\ \mathcal{T}^{-}_{\mathrm{int}}\mathcal{T}^{-}_{\mathrm{ext}}\mathrm{e}^{-i\omega r^{*}},&r^{*}_{\mathrm{ext}}\to\infty\\ \mathcal{T}^{-}_{\mathrm{int}}\left(\mathrm{e}^{-i\omega r^{*}}+\mathcal{R}^{-}_{\mathrm{ext}}\mathrm{e}^{i\omega r^{*}}\right),&r^{*}_{\mathrm{ext}}\to-\infty\end{cases}, \tag{56}\] where \(r^{*}_{\mathrm{int}}\) and \(r^{*}_{\mathrm{ext}}\) represent the radial tortoise coordinates \(r^{*}\) for the black hole's interior and exterior, respectively. The reflection coefficients \(\mathcal{R}^{\pm}_{\mathrm{int,ext}}\) and transmission coefficients \(\mathcal{T}^{\pm}_{\mathrm{int,ext}}\), which depend on the observer's mode numbers \(\omega\) and \(\ell\), can be computed numerically (or semi-analytically with confluent Heun functions) with the above boundary conditions on the wave Eq. (42); see the Appendix for more details. Defining annihilation operators \({}^{\mathrm{int,ext}}a^{\pm}_{\mathrm{ob,em}}\) for each respective set of modes \({}^{\mathrm{int,ext}}f^{\pm}_{\mathrm{ob,em}}\), we can now calculate the particle content seen by the observer.
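As a check that can be wired to the earlier propagation sketch, the exterior coefficients can be read off by matching onto \(\mathrm{e}^{\pm i\omega r^{*}}\) at large \(r\), using the closed-form exterior tortoise coordinate \(r^{*}=r+[r_{+}^{2}/(r_{+}-r_{-})]\ln|r-r_{+}|-[r_{-}^{2}/(r_{+}-r_{-})]\ln|r-r_{-}|\) (a standard partial-fraction integration of \(dr^{*}=dr/\Delta\)); the additive constant only shifts overall phases, and the exterior Wronskian condition quoted in the Appendix then serves as an accuracy test. Function names here are illustrative only:

```python
def tortoise_ext(r, M, Q):
    """Exterior tortoise coordinate r*(r), up to an additive constant."""
    s = np.sqrt(M**2 - Q**2)
    rp, rm = M + s, M - s
    return (r + rp**2/(rp - rm)*np.log(abs(r - rp))
              - rm**2/(rp - rm)*np.log(abs(r - rm)))

def extract_RT(omega, chi, dchi, rstar):
    """Match chi = A e^{i w r*} + B e^{-i w r*} at large r.  For a unit
    outgoing mode launched at the horizon (cf. Eq. (54), up to phases),
    T = 1/A and R = B/A."""
    A = 0.5*(chi + dchi/(1j*omega))*np.exp(-1j*omega*rstar)
    B = 0.5*(chi - dchi/(1j*omega))*np.exp(+1j*omega*rstar)
    return 1.0/A, B/A

# usage sketch: |T|^2 + |R|^2 should be close to 1 for the exterior
M, Q, omega, ell = 1.0, 0.5, 0.3, 0
chi, dchi = propagate_exterior(omega, ell, M, Q)
T, R = extract_RT(omega, chi, dchi, tortoise_ext(2000.0, M, Q))
print(abs(T)**2 + abs(R)**2)
```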
The vacuum expectation values of the number operators associated with each choice of observer are \[\langle N^{\pm}_{\mathrm{int,ext}}\rangle\equiv\left\langle 0_{\mathrm{em}}\left|\left({}^{\mathrm{int,ext}}a^{\pm}_{\mathrm{ob}}\right)^{\dagger}\left({}^{\mathrm{int,ext}}a^{\pm}_{\mathrm{ob}}\right)\right|0_{\mathrm{em}}\right\rangle, \tag{57}\] with \(\langle N^{+}_{\mathrm{ext}}\rangle\) for the expected particle number observed at future null infinity \(\mathscr{I}^{+}\), \(\langle N^{-}_{\mathrm{ext}}\rangle\) for an observer at the event horizon \(\mathcal{H}^{-}_{r_{+}}\), \(\langle N^{+}_{\mathrm{int}}\rangle\) for an outgoing observer at the inner horizon \(\mathcal{H}^{+}_{r_{-}}\), and \(\langle N^{-}_{\mathrm{int}}\rangle\) for an ingoing observer at the inner horizon \(\mathcal{H}^{-}_{r_{-}}\). Using Eqs. (46) and (47) and evaluating the inner product between the emitter's modes and the observer's back-propagated modes along the past null Cauchy hypersurface, the anticipated number operators can be calculated. After summing over the angular modes, the inner products that yield nontrivial (i.e. up to an irrelevant phase) contributions to the Bogoliubov coefficients all take the general form of Eq. (46), \[\langle N^{\pm}_{\mathrm{int,ext}}\rangle=\int_{0}^{\infty}d\bar{\omega}\,\left|\left\langle f_{\mathrm{em}}\middle|\left({}^{\mathrm{int,ext}}f^{\pm}_{\mathrm{ob}}\right)^{*}\right\rangle\right|^{2}, \tag{58}\] with the appropriate combination of the emitter's ingoing and outgoing modes entering each of the four cases [Eqs. (58a)\(-\)(58d)]. The spectrum \(\langle N^{-}_{\mathrm{ext}}\rangle\) is seen at the event horizon and therefore tied to \(\kappa^{-}\), although adiabaticity is not quite satisfied there. In principle, one may also calculate the spectrum of outgoing Hawking modes seen at the event horizon, corresponding to the effective temperature \(\kappa^{+}\) there, and indeed, an infalling observer will still see an exponentially redshifting past horizon below them after they cross the event horizon. However, calculating the outgoing modes for an ingoing horizon observer (and vice versa) requires Fourier-decomposing the observer's modes of Eq. (17) so that they can be back-propagated to the past horizon, which will be deferred to a future study; for more details, see, e.g., Ref. [63]. Though only frequencies as high as \(\omega M\!\sim\!0.6\) are shown for the horizon spectra of Fig. 7 (the \(\omega M\!\gg\!1\) regime is beyond our current numerical capabilities), any higher frequencies are all but irrelevant compared to the peaks of the blackbodies, which occur between \(\omega M\sim 0.2\) (for the lowest charge \(Q\)) and \(\omega M\sim 0.01\) (for the highest charge plotted).

Figure 7: Graybody \(s\)-mode factors from Eqs. (61) modifying the thermal Hawking spectra seen by an outgoing observer asymptotically far from the black hole looking downward (top left), an ingoing observer at the event horizon looking upward (top right), an ingoing observer at the inner horizon looking upward (bottom left), and an outgoing observer at the inner horizon looking downward (bottom right). Different black hole charges are shown with respective colors from dark blue to yellow: \(Q/M\) = 0.1, 0.5, 0.7, 0.9, 0.96, 0.99, and 0.999. Solid curves show the numerically computed spectra, while dotted curves show the positive-valued spectra obtained from a completely thermal distribution with temperatures \(\kappa^{+}/(2\pi)\) from Eq. (23) (upper left), \(\kappa^{-}/(2\pi)\) from Eq. (24) (upper right), \(\kappa^{-}/(2\pi)\) from Eq. (25) (lower left), or \(\kappa^{+}/(2\pi)\) from Eq. (26) (lower right).
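When reading Fig. 7, it may help to recall that the dotted comparison curves are Planckian occupations at temperature \(\kappa/(2\pi)\). A one-line sketch of that occupation, valid for either sign of \(\kappa\) (a negative \(\kappa\) yields a negative occupation of magnitude greater than unity, consistent with the caption's note that the plotted thermal curves are positive-valued), is:

```python
import numpy as np

def thermal_occupation(omega, kappa):
    """Planckian occupation 1/(exp(2*pi*omega/kappa) - 1) at effective
    temperature kappa/(2*pi); finite for any nonzero kappa and omega."""
    return 1.0/np.expm1(2*np.pi*omega/kappa)
```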
While the Hawking spectra seen at infinity and the event horizon contain straightforward graybody deviations from a thermal spectrum, the spectra seen at the inner horizon tell a different story. Two spectra for the left and right portions of the inner horizon are shown in the lower left and right panels of Fig. 7, respectively. These spectra bear little resemblance to the initial \(\varkappa_{+}\) blackbodies seen at infinity; nonetheless, we still present the spectra normalized to the \(\varkappa_{+}\) blackbodies due to the factors in the denominators of Eqs. (61). At the left leg of the inner horizon (lower left panel of Fig. 7), the particle spectra given by the Bogoliubov coefficients between the observer's and emitter's vacuum states all appear to be ultraviolet-divergent; if an exponential cutoff does occur, it must happen at frequencies higher than we are able to calculate. A qualitatively similar spectrum would occur for a Planckian distribution with negative temperature (albeit with an overall sign change), as anticipated in Secs. III.1 and III.3, and for reference, the corresponding negative-temperature \(\kappa^{-}\) blackbodies are shown by the dotted curves in Fig. 7. Notably, as \(Q/M\to 1\), the ultraviolet divergence grows stronger, though as \(Q/M\to 0\), the entire spectrum diverges (once \(Q/M\) goes below \(\sim 0.01\), the spectrum is too high to be seen on these lower two plots). Such a panchromatic divergence can be attributed to the fact that the inner horizon's surface gravity \(\varkappa_{-}\), and consequently the temperatures \(\kappa^{-}\) from Eq. (25) and \(\kappa^{+}\) from Eq. (26), grow to infinity in the Schwarzschild limit, since \(r_{-}\to 0\). At the inner horizon's right leg (lower right panel of Fig. 7), the curves once again diverge at higher frequencies, indicating quasi-temperatures much higher than the underlying \(\varkappa_{+}\) blackbodies. These temperatures may be high enough to be negative, though when the black hole charge is large enough, the spectra begin to deviate significantly from the dotted lines showing \(\kappa^{-}\) blackbodies. Nonetheless, the spectrum is still everywhere nonthermal as a result of the frequency-dependent additive final term in Eq. (61d).

#### iv.2.2 Spectra for higher spherical harmonics

The dependence of the Hawking spectra on the spherical harmonic mode number \(\ell\) is shown in Fig. 8. Instead of plotting the entire spectrum for each \(\ell\), we sample two points from each spectrum, one at a higher frequency (\(\omega M=0.5\), blue points) and one at a lower frequency (\(\omega M=0.05\), red points). In almost all cases (except the spectra for \(\langle N_{\rm ext}^{-}\rangle\); see the upper right panel of Fig. 7), the higher-frequency blue points exceed their lower-frequency red counterparts, indicating that the general qualitative trends of each spectrum in Fig. 7 remain intact for higher-\(\ell\) modes. For the Hawking radiation seen asymptotically far from the black hole, the \(\ell=0\) mode dominates over any higher harmonics [64], as can be seen from the drop-off of the solid circular points in Fig. 8. However, for radiation seen at the outer and inner horizons, the spectra do not seem to fall off as \(\ell\) is increased. It would appear that the ultraviolet-divergent Hawking spectra contain substantial contributions not only from the spherical \(\ell=0\) modes, but also from much higher harmonics.
One important implication of this result is that semi-classical calculations of the renormalized stress-energy tensor in the (1+1)D Polyakov approximation potentially miss out on key beyond-\(s\)-wave physics near the horizons.

## V Discussion

Two of the main questions underpinning this study are as follows: how would Hawking radiation appear for someone at a black hole's inner horizon? And what is meant by a negative Hawking temperature in this context? Ultimately, one may wish to understand the full quantum back-reaction near the inner horizon, and though we are not at a place to provide a definitive assertion regarding the fully dynamical, quantum gravitational back-reaction, the present analysis does shed further light on the nature of both Hawking radiation and semi-classical charged black holes.

Figure 8: Sampled points at \(\omega M=0.5\) (blue) and \(\omega M=0.05\) (red) for the four spectra of Fig. 7 when generalized to higher-\(\ell\) modes. All points use a black hole charge of \(Q/M=0.1\). The \(\ell=0\) mode dominates the spectrum \(\langle N_{\rm ext}^{+}\rangle\) seen at infinity, but higher-\(\ell\) modes make substantial contributions to the spectra seen at the horizons.

To study the Hawking radiation seen anywhere near or far from a black hole, we began with the effective temperature functions \(\kappa^{\pm}\) for an observer looking radially inward or outward [6; 7], as defined in Sec. II.1, which reproduce Hawking's original calculation in the geometric optics limit but generalize to an arbitrary inertial observer at a radius \(r_{\rm ob}\). This effective temperature, given by Eqs. (22) for an infaller from rest at infinity, diverges at the inner horizon, and regardless of the observer's orbital parameters, it becomes negative (indicative of modes that are blueshifting instead of redshifting) once the observer falls close enough to the inner horizon. As it turns out, this negative temperature is not merely confined to the black hole's interior that would remain inaccessible to the outside universe; instead, when the charge-to-mass ratio is high enough, specifically when \((Q/M)^{2}>8/9\), the inner horizon becomes close enough to the event horizon that a negative \(\kappa^{+}\) is detected _outside_ the black hole. The change in sign of the effective Hawking temperature for observers close enough to the inner horizon was found in Sec. III.3 to occur not just in the radial direction, but in every direction the observer looks in their field of view. The classical phenomenon of mass inflation involves a divergence only at a single radial point in the sky (as an outgoing observer approaches the inner horizon, the sky above them will shrink to a point and become infinitely blueshifted), but semi-classically, Hawking radiation originating from the past horizon will fill the observer's entire field of view with diverging, negative-temperature radiation as they approach the inner horizon. Are the approximations of the effective temperature formalism even valid whenever \(\kappa\) becomes negative? By studying the adiabatic control function \(\epsilon\) in Sec. III.2, we can learn whether \(\epsilon\) is small enough for the adiabatic condition to be satisfied and therefore for \(\kappa\) to reproduce approximately thermal Bogoliubov coefficients. We find that at the inner horizon, the outgoing modes for an ingoing observer are sufficiently adiabatic for a large enough black hole charge \(Q\), while the ingoing modes are never adiabatic there.
To complement these effective temperature results and provide a more rigorous calculation in the regimes where the adiabatic condition fails, we finally performed a full wave mode analysis in Sec. IV to determine the Bogoliubov spectrum at each of the asymptotic regimes. To do so, the observer's wave modes were back-propagated through the spacetime to the position of the Unruh emitter using Eq. (42) for a massless scalar Klein-Gordon field, and the inner product of the observer's and emitter's modes was computed. Asymptotically far from the black hole, the spectrum becomes completely thermal for high enough frequencies (i.e. in the geometric optics limit), which is consistent with the vanishing of the outgoing adiabatic control function \(\epsilon^{+}\) at infinity. In contrast, for an observer at the event horizon, \(\epsilon^{+}\) is almost never significantly smaller than unity, and the corresponding Bogoliubov spectrum does deviate significantly from thermality in the geometric optics limit. At the inner horizon, the spectrum of scalar particles appears quite different from that of a positive-temperature blackbody, and instead looks much more like the spectrum one would obtain (up to an overall change in sign) from a blackbody with a negative temperature. The spectra are thus mostly consistent with the effective temperature predictions, despite the general lack of adiabaticity in that regime. The familiar Rayleigh-Jeans power law is still present at lower frequencies, but at higher frequencies, the spectral intensity continues to climb even higher. Even if an exponential decay at higher frequencies never occurs with the present formalism, one may nonetheless suspect that some ultraviolet cutoff will exist once the semi-classical approximation breaks down at the Planck scale, or, more importantly, that the semi-classical back-reaction in a dynamical collapse would prevent such a spectrum from ever occurring in the first place. The right leg of the inner horizon is unique in that its spectrum contains substantial contributions not only from the outgoing Unruh modes originating from the past horizon below the observer (as in all the other cases), but also from the ingoing Unruh modes originating from the sky above; see Eq. (58d). The resulting spectra are even more divergent at low \(Q\) than those of the left leg of the inner horizon, though at higher \(Q\), the spectra appear much tamer (albeit still with a much higher graybody temperature than that observed asymptotically far away). If one wishes to compare these spectra with the effective temperature formalism of Sec. III, it should be noted that the adiabatic control functions of Fig. 3 are only valid for the left leg of the inner horizon--for the right leg, \(\epsilon^{-}\) always equals 1, while \(\epsilon^{+}\) is always greater than 1. When comparing the inner horizon values of the effective temperatures \(\kappa^{\pm}\) with their corresponding Bogoliubov spectra, it is important to note that the two spectra shown in the lower panels of Fig. 7 are associated with the Hawking sectors that are _not_ expected to yield diverging effective temperatures (but are nonetheless negative); namely, the ingoing temperature \(\kappa^{-}\) in Eq. (25) and the outgoing temperature \(\kappa^{+}\) in Eq. (26). If an ingoing (or outgoing) observer at the inner horizon looks downward (or upward, respectively), they should be met with an even stronger dose of diverging Hawking radiation. But what Fig.
7 communicates is that for an outgoing observer approaching the inner horizon, while they can look upward to see the Penrose blueshift singularity forming, if they look downward at the initially dimming and redshifting past horizon, even this surface will eventually begin to blueshift and produce a semi-classically divergent spectrum of Hawking radiation. The implications of these Hawking spectra are clear: the interaction of a quantum scalar field with a charged black hole results in runaway particle creation detected at the inner horizon. The particle spectrum diverges at all frequencies as \(Q/M\to 0\), since the inner horizon coincides with the \(r=0\) singularity that was already found to feature a diverging Hawking flux in Ref. [45]. But even for nonzero charge, the inner horizon spectrum becomes highly blueshifted and is potentially ultraviolet-divergent. Such a highly energetic source of radiation will quickly become amplified in the radial direction and provide an ongoing source for the Poisson-Israel mass inflation instability. Even if the observer is taken to be something as simple as a two-level atom, one may speculate that the implied Hawking flux would energize the atom to such an extent that the inevitable result is a positive feedback loop resulting in the collapse of the spacetime geometry into a spacelike singularity. Several important questions remain to be answered. While the effective temperature and Bogoliubov spectrum formalisms complement each other well in many regards, they nevertheless lead to some conceptual incongruities: for example, if an ingoing observer at either horizon looks up at the sky, will they see Hawking particles that originated from past null infinity? The effective temperature calculation says yes, and the Bogoliubov spectrum calculation says no. Additionally, one may wish to explore further the implications of the dominance of higher-\(\ell\) Hawking modes once an observer reaches either horizon (note that the higher-\(\ell\) modes of Fig. 8 cannot be directly compared with the observed angular modes of Fig. 4, since the latter describe angular modes with respect to the observer while the former describe angular modes with respect to the black hole's center). But the biggest question one may wish to ask is whether either calculation is able to predict the presence of "real" particles. We have not made use of any response functions, Unruh-DeWitt detectors, or renormalization schemes that would indicate the influence of a Hawking particle on an observer or on the underlying spacetime geometry. Nonetheless, in analyzing how the effective temperature depends on an observer's energy, it does appear to preserve Lorentz covariance in some regimes, and regardless, there is no doubt that the semi-classical effects predicted here should substantially alter the spacetime geometry near the inner horizon. ## Appendix A Back-scattering coefficients via confluent Heun functions In this appendix we outline the methodology to compute the back-scattering coefficients used in Sec. IV to find the graybody factors associated with the Hawking spectrum at infinity, the event horizon, and the inner horizon.
Eqs. (54)\(-\)(56) provide the boundary conditions for the observer's back-scattered mode functions in terms of the reflection coefficients \(\mathcal{R}^{\pm}_{\text{int,ext}}\) and transmission coefficients \(\mathcal{T}^{\pm}_{\text{int,ext}}\), where the subscript labels whether the scattering occurs in the black hole's interior ("int") or exterior ("ext"), and the superscript labels whether the modes are outgoing (\(+\)) or ingoing (\(-\)) prior to back-propagation, at the future null surface in the relevant spacetime sector. Conservation of the Wronskian dictates that these coefficients satisfy the following normalization conditions: \[\left|\mathcal{T}^{\pm}_{\text{int}}\right|^{2}-\left|\mathcal{R}^{\pm}_{\text{int}}\right|^{2}=1, \tag{A1a}\] \[\left|\mathcal{T}^{\pm}_{\text{ext}}\right|^{2}+\left|\mathcal{R}^{\pm}_{\text{ext}}\right|^{2}=1, \tag{A1b}\] which will provide a check to ensure the accuracy of the numerical scheme. The negative sign associated with \(\mathcal{R}^{\pm}_{\text{int}}\) in Eq. (A1a) is due to the fact that the corresponding substates have a negative norm; the scattering potential inside the black hole allows for the existence of both the observer's original modes \(\exp(-i\omega r^{*})\) (positive frequency with respect to the timelike coordinate \(r^{*}\)) and the anomalous modes \(\exp(+i\omega r^{*})\). The back-scattering coefficients can be calculated either by implementing an implicit numerical ODE method to solve the Klein-Gordon wave Eq. (42), or by matching analytic solutions to that equation. Here we will explore the latter option. Instead of the mode separation of Eq. (41), the Klein-Gordon scalar field can be separated as \[\phi_{\omega\ell m}=\frac{R_{\omega\ell}(r)\ \mathrm{e}^{\pm i\omega t}\ Y_{\ell m}(\theta,\varphi)}{\sqrt{4\pi\omega}}, \tag{A2}\] with the upper (\(+\)) sign in the exponential for the outgoing modes observed at the right leg of the inner horizon (which can be written as \({}^{\text{int}}R^{+}_{\text{ob}}\)) and the lower (\(-\)) sign for both the ingoing modes observed at the left leg of the inner horizon (\({}^{\text{int}}R^{-}_{\text{ob}}\)) as well as the outgoing modes observed at future null infinity (\({}^{\text{ext}}R^{+}_{\text{ob}}\)). In terms of the modes of Eq. (42), \(R_{\omega\ell}\) and \(f_{\omega\ell}\) are related by \[f_{\omega\ell}(t,r)=rR_{\omega\ell}(r)\ \mathrm{e}^{\pm i\omega t}. \tag{A3}\] The Klein-Gordon wave equation for the spatial modes \(R_{\omega\ell}\), or equivalently, the wave Eq. (42) for \(f_{\omega\ell}\), contains three singular points throughout the spacetime, which occur whenever \(r^{*}\rightarrow\pm\infty\). Two of these are the regular singularities located at the inner and outer horizons, and the third is an irregular, rank-1 singularity at spatial infinity. This structure suggests that the wave equation can be cast into confluent Heun form: first, apply a Möbius transformation to define the new coordinate \[z\equiv\frac{r-r_{-}}{r_{+}-r_{-}} \tag{A4}\] so that the singular points are shifted from \(r=(r_{-},r_{+},\infty)\) to \(z=(0,1,\infty)\). Then, apply a gauge transformation to the field variable that keeps the singular points fixed (such a shift in the Frobenius solution indices is known as an \(F\)-homotopic transformation): \[R(z)=z^{\frac{\gamma-1}{2}}|z-1|^{\frac{\delta-1}{2}}\ \mathrm{e}^{\frac{\varepsilon}{2}z}\ Z(z), \tag{A5}\] so that the Klein-Gordon wave Eq.
(39) reduces to: \[\frac{d^{2}Z}{dz^{2}}+\left(\frac{\gamma}{z}+\frac{\delta}{z-1}+\varepsilon\right)\frac{dZ}{dz}+\left(\frac{q}{z}+\frac{\alpha-q}{z-1}\right)Z=0 \tag{A6}\] provided \[q=\ell(\ell+1)+2i\omega\frac{r_{+}r_{-}}{r_{+}-r_{-}}-4\omega^{2}r_{-}^{2}+4\omega^{2}\left(\frac{r_{+}r_{-}}{r_{+}-r_{-}}\right)^{2}, \tag{A7a}\] \[\alpha=-2i\omega(r_{+}-r_{-})-4\omega^{2}r_{-}^{2}, \tag{A7b}\] \[\gamma=1-2i\omega\frac{r_{-}^{2}}{r_{+}-r_{-}}, \tag{A7c}\] \[\delta=1-2i\omega\frac{r_{+}^{2}}{r_{+}-r_{-}}, \tag{A7d}\] \[\varepsilon=-2i\omega(r_{+}-r_{-}). \tag{A7e}\] For the more general Kerr-Newman case, the corresponding version of these parameters can be inferred from, e.g. Ref. [65]. Also, note that the signs of the three exponents in Eq. (A5) can be either positive or negative, corresponding either to outgoing or ingoing waves at each of the singular points. Regardless of this gauge choice, both ingoing and outgoing modes will always be recovered by the choice of linear combinations of modes for \(Z(z)\). Two linearly independent solutions to Eq. (A6) that are regular at the inner horizon are given via confluent Heun functions for the equation's allowed \(F\)-homotopic automorphisms: \[Z_{(0)}(z)=A_{(0)}Z_{(0)}^{A}(z)+B_{(0)}Z_{(0)}^{B}(z), \tag{A8a}\] \[Z_{(0)}^{A}(z)=\text{HeunC}\left(q,\alpha,\gamma,\delta,\varepsilon;z\right), \tag{A8b}\] \[Z_{(0)}^{B}(z)=z^{1-\gamma}\text{HeunC}\left(q^{\prime},\alpha^{\prime},2-\gamma,\delta,\varepsilon;z\right), \tag{A8c}\] with arbitrary complex coefficients \(A_{(0)}\) and \(B_{(0)}\), with the definitions \[q^{\prime}=q-(\delta-\varepsilon)(1-\gamma), \tag{A9a}\] \[\alpha^{\prime}=\alpha+\varepsilon(1-\gamma), \tag{A9b}\] and with the functions' argument structure following the convention used in Wolfram Mathematica, which has newly implemented Heun functions in version 12.1. These negative- and positive-frequency solutions can be computed with a forward-stable set of power series that are convergent everywhere except at the singular points \(z=1,\infty\) and are linearly independent except when \(\gamma=1\), in which case the factor \(z^{1-\gamma}\) can be replaced with \(\ln(z)\). As a reminder, the goal here is to compute the values of the reflection and transmission coefficients \(\mathcal{R}_{\text{int,ext}}^{\pm}\) and \(\mathcal{T}_{\text{int,ext}}^{\pm}\), which can be used to calculate the observed spectra of Eq. (61).
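Since the parameter set (A7) recurs in everything that follows, it may help to see it spelled out as code. The following minimal Python sketch simply transcribes Eqs. (A7a)-(A7e); the horizon radii \(r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}}\) (geometric units) are the standard Reissner-Nordström values, which this appendix takes as given:

```python
import numpy as np

def heun_parameters(M, Q, omega, ell):
    """Confluent Heun parameters of Eqs. (A7a)-(A7e). Assumes the standard
    Reissner-Nordstrom horizon radii r_pm = M +/- sqrt(M^2 - Q^2)."""
    rp = M + np.sqrt(M**2 - Q**2)   # outer (event) horizon r_+
    rm = M - np.sqrt(M**2 - Q**2)   # inner (Cauchy) horizon r_-
    dr = rp - rm
    q = (ell * (ell + 1) + 2j * omega * rp * rm / dr
         - 4 * omega**2 * rm**2 + 4 * omega**2 * (rp * rm / dr)**2)
    alpha = -2j * omega * dr - 4 * omega**2 * rm**2
    gamma = 1 - 2j * omega * rm**2 / dr
    delta = 1 - 2j * omega * rp**2 / dr
    eps = -2j * omega * dr
    return q, alpha, gamma, delta, eps
```

For instance, `heun_parameters(1.0, 0.1, 0.5, 0)` gives the \(\ell=0\), \(\omega M=0.5\), \(Q/M=0.1\) parameter set corresponding to the blue points sampled in Fig. 8.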
These coefficients are tied to the asymptotic forms of the field modes given in Eqs. (54)\(-\)(56), which in the present notation take the form \[{}^{\text{ext}}R_{\text{ob}}^{+}(z)\to\begin{cases}\frac{\text{e}^{i\omega r_{-}}}{r_{+}-r_{-}}\text{e}^{i\omega(r_{+}-r_{-})z}|z|^{2i\omega-1}+\mathcal{R}_{\text{ext}}^{+}\frac{\text{e}^{-i\omega r_{-}}}{r_{+}-r_{-}}\text{e}^{-i\omega(r_{+}-r_{-})z}|z|^{-2i\omega-1},&z\to\infty\\ \mathcal{T}_{\text{ext}}^{+}\frac{\text{e}^{i\omega r_{+}}}{r_{+}}|z-1|^{i\omega\frac{r_{+}^{2}}{r_{+}-r_{-}}},&z\to 1\\ 0,&z\to 0\end{cases}\, \tag{A10a}\] \[{}^{\text{int}}R_{\text{ob}}^{+}(z)\to\begin{cases}\mathcal{R}_{\text{int}}^{+}\mathcal{T}_{\text{ext}}^{-}\frac{\text{e}^{i\omega r_{-}}}{r_{+}-r_{-}}\text{e}^{i\omega(r_{+}-r_{-})z}|z|^{2i\omega-1},&z\to\infty\\ \mathcal{R}_{\text{int}}^{+}\frac{\text{e}^{i\omega r_{+}}}{r_{+}}|z-1|^{i\omega\frac{r_{+}^{2}}{r_{+}-r_{-}}}+\left(\mathcal{T}_{\text{int}}^{+}+\mathcal{R}_{\text{int}}^{+}\mathcal{R}_{\text{ext}}^{-}\right)\frac{\text{e}^{-i\omega r_{+}}}{r_{+}}|z-1|^{-i\omega\frac{r_{+}^{2}}{r_{+}-r_{-}}},&z\to 1\\ \frac{\text{e}^{-i\omega r_{-}}}{r_{-}}|z|^{i\omega\frac{r_{-}^{2}}{r_{+}-r_{-}}},&z\to 0\end{cases}\, \tag{A10b}\] \[{}^{\text{int}}R_{\text{ob}}^{-}(z)\to\begin{cases}\mathcal{T}_{\text{int}}^{-}\mathcal{T}_{\text{ext}}^{-}\frac{\text{e}^{-i\omega r_{-}}}{r_{+}-r_{-}}\text{e}^{-i\omega(r_{+}-r_{-})z}|z|^{-2i\omega-1},&z\to\infty\\ \mathcal{T}_{\text{int}}^{-}\frac{\text{e}^{-i\omega r_{+}}}{r_{+}}|z-1|^{-i\omega\frac{r_{+}^{2}}{r_{+}-r_{-}}}+\left(\mathcal{R}_{\text{int}}^{-}+\mathcal{T}_{\text{int}}^{-}\mathcal{R}_{\text{ext}}^{-}\right)\frac{\text{e}^{i\omega r_{+}}}{r_{+}}|z-1|^{i\omega\frac{r_{+}^{2}}{r_{+}-r_{-}}},&z\to 1\\ \frac{\text{e}^{-i\omega r_{-}}}{r_{-}}|z|^{i\omega\frac{r_{-}^{2}}{r_{+}-r_{-}}},&z\to 0\end{cases}. \tag{A10c}\] Here the integration constant for the tortoise coordinate \(r^{*}\) of Eq. (10) is chosen so that \[r^{*}=r+\frac{r_{+}^{2}}{r_{+}-r_{-}}\ln|z-1|-\frac{r_{-}^{2}}{r_{+}-r_{-}}\ln|z|\,. \tag{A11}\] Asymptotically, the modes of Eq. (A8) at the inner horizon (\(z=0\)) reduce to \[R(z)\to A_{(0)}z^{-\frac{1-\gamma}{2}}+B_{(0)}z^{\frac{1-\gamma}{2}},\qquad z\to 0, \tag{A12}\] since the confluent Heun functions are normalized to unity when the independent variable equals zero, provided \(\gamma\) is not a nonpositive integer. This asymptotic form can then be matched to the modes of Eq. (A10) to find expressions for \(A_{(0)}\) and \(B_{(0)}\). One obtains \(A_{(0)}=0\) for all three sets of modes in Eq. (A10), since by definition the inner horizon observer only sees positive frequency waves there. For the interior observer modes \({}^{\rm int}R^{\pm}_{\rm ob}(z)\), \(B_{(0)}=\exp(-i\omega r_{-})/r_{-}\), while the exterior observer modes \({}^{\rm ext}R^{+}_{\rm ob}(z)\) are only defined for \(z\geq 1\) and must be treated separately. Unfortunately, analytic asymptotic forms for the modes of Eq. (A8) are not known at the spacetime's two other singular points. An explicit solution to the central two-point connection problem for confluent Heun functions is still outstanding and is directly related to the inverse of Hilbert's 21st problem; currently, analytic forms of the monodromy matrices have only been found for the reduced confluent Heun equation with \(\varepsilon=0\) [66]. Thus, we proceed by defining a new set of local Heun modes at each singular point and numerically matching their coefficients via the algorithm set forth in Ref. [67]. At the event horizon (\(z=1\)), a set of regular, linearly independent solutions to Eq.
(A6) that are convergent everywhere except at the singular points \(z=0,\infty\) can be written as: \[Z_{(1)}(z)=A_{(1)}Z_{(1)}^{A}(z)+B_{(1)}Z_{(1)}^{B}(z), \tag{A13a}\] \[Z_{(1)}^{A}(z)=\text{HeunC}\left(q-\alpha,-\alpha,\delta,\gamma,-\varepsilon;1-z\right), \tag{A13b}\] \[Z_{(1)}^{B}(z)=(1-z)^{1-\delta}\text{HeunC}\left(q^{\prime}-\alpha^{\prime},-\alpha^{\prime},2-\delta,\gamma,-\varepsilon;1-z\right), \tag{A13c}\] with arbitrary complex coefficients \(A_{(1)}\) and \(B_{(1)}\), and with the definitions \[q^{\prime}=q-\gamma(1-\delta), \tag{A14a}\] \[\alpha^{\prime}=\alpha+\varepsilon(1-\delta). \tag{A14b}\] Asymptotically, the modes of Eq. (A13) at the event horizon (\(z=1\)) reduce to \[R(z)\to\mathrm{e}^{\frac{\varepsilon}{2}}|z-1|^{-\frac{1-\delta}{2}}\left(A_{(1)}+B_{(1)}(1-z)^{1-\delta}\right), \tag{A15}\] which leads to the matching \[{}^{\rm ext}A_{(1)}^{+}=0,\qquad{}^{\rm ext}B_{(1)}^{+}=\mathcal{T}_{\rm ext}^{+}\frac{\mathrm{e}^{i\omega(2r_{+}-r_{-})}}{r_{+}}; \tag{A16a}\] \[{}^{\rm int}A_{(1)}^{+}=\left(\mathcal{T}_{\rm int}^{+}+\mathcal{R}_{\rm int}^{+}\mathcal{R}_{\rm ext}^{-}\right)\frac{\mathrm{e}^{-i\omega r_{-}}}{r_{+}},\qquad{}^{\rm int}B_{(1)}^{+}=\mathcal{R}_{\rm int}^{+}\frac{\mathrm{e}^{i\omega(2r_{+}-r_{-})}}{r_{+}}; \tag{A16b}\] \[{}^{\rm int}A_{(1)}^{-}=\mathcal{T}_{\rm int}^{-}\frac{\mathrm{e}^{-i\omega r_{-}}}{r_{+}},\qquad{}^{\rm int}B_{(1)}^{-}=\left(\mathcal{R}_{\rm int}^{-}+\mathcal{T}_{\rm int}^{-}\mathcal{R}_{\rm ext}^{-}\right)\frac{\mathrm{e}^{i\omega(2r_{+}-r_{-})}}{r_{+}}; \tag{A16c}\] for each respective set of modes; i.e., the coefficients from Eq. (A13) for \({}^{\rm ext,int}R^{\pm}_{\rm ob}\) are labeled \({}^{\rm ext,int}A_{(1)}^{\pm}\) and \({}^{\rm ext,int}B_{(1)}^{\pm}\). Eqs. (A16) are strictly only valid for \(z<1\); for the exterior (\(z>1\)), an additional factor of \(\exp[2\pi\omega r_{+}^{2}/(r_{+}-r_{-})]\) must be included in the right-hand side of the equations for each of the \(B\) coefficients to account for the lack of absolute values in the trailing factor of Eq. (A15). At some point \(z_{*}\) in the interior (we take \(z_{*}=0.5\) for simplicity), both Eqs. (A8) and (A13) provide regular solutions to the wave Eq. (A6). One can convert between them with the respective linear systems \[Z_{(1)}^{A,B}(z_{*})=C_{A}^{A,B}Z_{(0)}^{A}(z_{*})+C_{B}^{A,B}Z_{(0)}^{B}(z_{*}), \tag{A17a}\] \[(Z_{(1)}^{A,B})^{\prime}(z)\big{|}_{z_{*}}=C_{A}^{A,B}(Z_{(0)}^{A})^{\prime}(z)\big{|}_{z_{*}}+C_{B}^{A,B}(Z_{(0)}^{B})^{\prime}(z)\big{|}_{z_{*}}. \tag{A17b}\] The functions \(Z_{(0)}^{A}(z_{*})\), \(Z_{(0)}^{B}(z_{*})\), \(Z_{(1)}^{A}(z_{*})\) and \(Z_{(1)}^{B}(z_{*})\) can be computed numerically, and therefore the constants \(C_{A}^{A}\), \(C_{A}^{B}\), \(C_{B}^{A}\), and \(C_{B}^{B}\) can also be computed. Once these constants are known, the total eigenmodes \(Z_{(0)}(z)\) and \(Z_{(1)}(z)\) can be matched to solve for each of the back-scattering coefficients: \[A_{(0)}=A_{(1)}C_{A}^{A}+B_{(1)}C_{A}^{B}, \tag{A18a}\] \[B_{(0)}=A_{(1)}C_{B}^{A}+B_{(1)}C_{B}^{B}. \tag{A18b}\] Once the back-scattering coefficients connecting \(z=0\) to \(z=1\) are known, a similar process will yield the coefficients connecting \(z=1\) to \(z=\infty\).
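The matching in Eqs. (A17)-(A18) requires nothing more than values and first derivatives of the local solutions at \(z_{*}\). As an illustration of how this step can be reproduced without Mathematica, the following Python sketch evaluates \(\text{HeunC}\) by a direct power series about \(z=0\) (the three-term recurrence below follows from inserting \(Z=\sum_{n}c_{n}z^{n}\) into Eq. (A6); the series converges for \(|z|<1\), so \(z_{*}=0.5\) is admissible) and then solves the \(2\times 2\) systems (A17). This is a minimal sketch under these stated assumptions, not the authors' production code; a careful implementation would also monitor truncation and cancellation errors:

```python
import numpy as np

def heunc(q, a, g, d, e, z, nmax=400):
    """Power series of HeunC(q, a, g, d, e; z) about z = 0, normalized to
    HeunC(0) = 1; returns (value, derivative). The recurrence comes from
    inserting sum_n c_n z^n into Eq. (A6); assumes g is not a nonpositive
    integer and |z| < 1."""
    c_prev, c_curr = 0.0, 1.0          # c_{-1}, c_0
    val, dval, zpow = 1.0, 0.0, 1.0    # partial sums; zpow tracks z**n
    for n in range(nmax):
        c_next = ((n * (n - 1 + g + d - e) - q) * c_curr
                  + (e * (n - 1) + a) * c_prev) / ((n + 1) * (n + g))
        dval += (n + 1) * c_next * zpow
        zpow *= z
        val += c_next * zpow
        c_prev, c_curr = c_curr, c_next
    return val, dval

def Z0_pair(q, a, g, d, e, z):
    """The two solutions regular at the inner horizon, Eqs. (A8)-(A9)."""
    qp, ap = q - (d - e) * (1 - g), a + e * (1 - g)
    hA, dhA = heunc(q, a, g, d, e, z)
    hB, dhB = heunc(qp, ap, 2 - g, d, e, z)
    ZB = z**(1 - g) * hB
    dZB = (1 - g) * z**(-g) * hB + z**(1 - g) * dhB
    return (hA, dhA), (ZB, dZB)

def Z1_pair(q, a, g, d, e, z):
    """The two solutions regular at the event horizon, Eqs. (A13)-(A14);
    derivatives pick up a sign from the argument 1 - z."""
    qp, ap = q - g * (1 - d), a + e * (1 - d)
    hA, dhA = heunc(q - a, -a, d, g, -e, 1 - z)
    hB, dhB = heunc(qp - ap, -ap, 2 - d, g, -e, 1 - z)
    ZB = (1 - z)**(1 - d) * hB
    dZB = -(1 - d) * (1 - z)**(-d) * hB - (1 - z)**(1 - d) * dhB
    return (hA, -dhA), (ZB, dZB)

def connection_C(q, a, g, d, e, zs=0.5):
    """Solve Eqs. (A17) at z_* for the C coefficients; rows index the
    subscript (A, B) and columns the superscript (A, B)."""
    (Z0A, dZ0A), (Z0B, dZ0B) = Z0_pair(q, a, g, d, e, zs)
    (Z1A, dZ1A), (Z1B, dZ1B) = Z1_pair(q, a, g, d, e, zs)
    M = np.array([[Z0A, Z0B], [dZ0A, dZ0B]])
    rhs = np.array([[Z1A, Z1B], [dZ1A, dZ1B]])
    return np.linalg.solve(M, rhs)
```

With this convention, \(\widetilde{C}\) of Eq. (A24) below is simply the determinant of the returned matrix; `connection_C(*heun_parameters(1.0, 0.1, 0.5, 0))` chains with the earlier sketch.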
As \(z\) approaches infinity, the confluent Heun solutions to Eq. (A6) asymptotically (in a sector) take the form \[Z_{(\infty)}(z)=A_{(\infty)}Z_{(\infty)}^{A}(z)+B_{(\infty)}Z_{(\infty)}^{B}(z), \tag{A19a}\] \[Z_{(\infty)}^{A}(z)=z^{-\frac{\alpha}{\varepsilon}}, \tag{A19b}\] \[Z_{(\infty)}^{B}(z)=\mathrm{e}^{-\varepsilon z}z^{\frac{\alpha}{\varepsilon}-\gamma-\delta}, \tag{A19c}\] with arbitrary complex coefficients \(A_{(\infty)}\) and \(B_{(\infty)}\). Comparison with the asymptotic forms of Eq. (A10) reveals the following matched values for these coefficients: \[{}^{\rm ext}A_{(\infty)}^{+}=\mathcal{R}_{\rm ext}^{+}\frac{\mathrm{e}^{-i\omega r_{-}}}{r_{+}-r_{-}},\qquad{}^{\rm ext}B_{(\infty)}^{+}=\frac{\mathrm{e}^{i\omega r_{-}}}{r_{+}-r_{-}}; \tag{A20a}\] \[{}^{\rm int}A_{(\infty)}^{+}=0,\qquad{}^{\rm int}B_{(\infty)}^{+}=\mathcal{R}_{\rm int}^{+}\mathcal{T}_{\rm ext}^{-}\frac{\mathrm{e}^{i\omega r_{-}}}{r_{+}-r_{-}}; \tag{A20b}\] \[{}^{\rm int}A_{(\infty)}^{-}=\mathcal{T}_{\rm int}^{-}\mathcal{T}_{\rm ext}^{-}\frac{\mathrm{e}^{-i\omega r_{-}}}{r_{+}-r_{-}},\qquad{}^{\rm int}B_{(\infty)}^{-}=0, \tag{A20c}\] where the coefficient notation is the same as in Eq. (A16). For some sufficiently large radial coordinate \(z=z^{*}\) (we find heuristically that \(z^{*}=18/(\omega\sqrt{1-Q^{2}})\) is more than sufficient to ensure convergence at machine-level precision), both Eqs. (A13) and (A19) satisfy the wave Eq. (A6), and so the two sets of solutions can be matched. One has the system \[Z^{A,B}_{(1)}(z^{*})=D^{A,B}_{A}Z^{A}_{(\infty)}(z^{*})+D^{A,B}_{B}Z^{B}_{(\infty)}(z^{*}), \tag{A21a}\] \[(Z^{A,B}_{(1)})^{\prime}(z)\big{|}_{z^{*}}=D^{A,B}_{A}(Z^{A}_{(\infty)})^{\prime}(z)\big{|}_{z^{*}}+D^{A,B}_{B}(Z^{B}_{(\infty)})^{\prime}(z)\big{|}_{z^{*}} \tag{A21b}\] to solve for the constants \(D^{A}_{A}\), \(D^{B}_{A}\), \(D^{A}_{B}\), and \(D^{B}_{B}\), which can then be used to solve for the back-scattering coefficients with the system \[A_{(\infty)}=A_{(1)}D^{A}_{A}+B_{(1)}D^{B}_{A}, \tag{A22a}\] \[B_{(\infty)}=A_{(1)}D^{A}_{B}+B_{(1)}D^{B}_{B}. \tag{A22b}\] Altogether, the relevant back-scattering coefficients can be written as follows (note that multiple variations to the below equations are possible based on implicit relations between and among the \(C\) and \(D\) coefficients): \[\mathcal{T}^{+}_{\text{ext}}=\frac{r_{+}}{r_{+}-r_{-}}\frac{1}{D^{B}_{B}}\ \text{e}^{-2i\omega(r_{+}-r_{-})-\pi\omega/\varkappa_{+}}, \tag{A23a}\] \[\mathcal{T}^{-}_{\text{ext}}=\frac{r_{+}-r_{-}}{r_{+}}\frac{\widetilde{D}}{D^{B}_{B}}, \tag{A23b}\] \[\mathcal{R}^{+}_{\text{ext}}=\frac{D^{A}_{B}}{D^{B}_{B}}\ \text{e}^{2i\omega r_{-}}, \tag{A23c}\] \[\mathcal{R}^{-}_{\text{ext}}=-\frac{D^{A}_{B}}{D^{B}_{B}}\ \text{e}^{-2i\omega r_{+}-\pi\omega/\varkappa_{+}}, \tag{A23d}\] \[\mathcal{T}^{+}_{\text{int}}=-\frac{r_{+}}{r_{-}}\frac{C^{A}_{A}D^{B}_{B}-C^{A}_{A}D^{A}_{B}\ \text{e}^{-4i\omega r_{+}-\pi\omega/\varkappa_{+}}}{\widetilde{C}D^{B}_{B}}, \tag{A23e}\] \[\mathcal{T}^{-}_{\text{int}}=-\frac{r_{+}}{r_{-}}\frac{C^{A}_{A}}{\widetilde{C}}, \tag{A23f}\] \[\mathcal{R}^{+}_{\text{int}}=\frac{r_{+}}{r_{-}}\frac{C^{A}_{A}}{\widetilde{C}}\ \text{e}^{-2i\omega r_{+}}, \tag{A23g}\] \[\mathcal{R}^{-}_{\text{int}}=\frac{r_{+}}{r_{-}}\frac{C^{A}_{A}D^{B}_{B}-C^{A}_{A}D^{A}_{B}}{\widetilde{C}D^{B}_{B}}\ \text{e}^{-2i\omega r_{+}}, \tag{A23h}\] where \[\widetilde{C}\equiv C^{A}_{A}C^{B}_{B}-C^{B}_{A}C^{A}_{B}, \tag{A24}\] \[\widetilde{D}\equiv D^{A}_{A}D^{B}_{B}-D^{B}_{A}D^{A}_{B}. \tag{A25}\]
The resulting numerical values of the back-scattering coefficients are used to calculate the Hawking spectra of Fig. 7.
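The Wronskian conditions (A1) quoted at the start of this appendix give a direct numerical sanity check on the assembled coefficients. The sketch below (an illustration, not the authors' code) assembles the exterior coefficients from Eqs. (A23a)-(A23d) and tests Eq. (A1b); it assumes the standard Reissner-Nordström surface gravity \(\varkappa_{+}=(r_{+}-r_{-})/(2r_{+}^{2})\), which this excerpt does not restate, and the same matrix convention as in the connection sketch above:

```python
import numpy as np

def exterior_coefficients(D, rp, rm, omega):
    """Assemble T_ext^+/- and R_ext^+/- from Eqs. (A23a)-(A23d).
    D is the 2x2 matrix of Eqs. (A21) (rows = subscript, cols = superscript);
    kappa_+ = (r_+ - r_-) / (2 r_+^2) is the assumed RN surface gravity."""
    kappa_p = (rp - rm) / (2 * rp**2)
    D_tilde = np.linalg.det(D)        # Eq. (A25)
    DBB, DAB = D[1, 1], D[1, 0]       # D^B_B and D^A_B
    damp = np.exp(-np.pi * omega / kappa_p)
    T_plus = rp / (rp - rm) / DBB * np.exp(-2j * omega * (rp - rm)) * damp
    T_minus = (rp - rm) / rp * D_tilde / DBB
    R_plus = DAB / DBB * np.exp(2j * omega * rm)
    R_minus = -DAB / DBB * np.exp(-2j * omega * rp) * damp
    return T_plus, T_minus, R_plus, R_minus

def check_exterior_unitarity(T, R, tol=1e-6):
    """Eq. (A1b): |T_ext|^2 + |R_ext|^2 should equal 1."""
    return abs(abs(T)**2 + abs(R)**2 - 1.0) < tol
```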
2304.09859
pymovements: A Python Package for Eye Movement Data Processing
We introduce pymovements: a Python package for analyzing eye-tracking data that follows best practices in software development, including rigorous testing and adherence to coding standards. The package provides functionality for key processes along the entire preprocessing pipeline. This includes parsing of eye tracker data files, transforming positional data into velocity data, detecting gaze events like saccades and fixations, computing event properties like saccade amplitude and fixational dispersion and visualizing data and results with several types of plotting methods. Moreover, pymovements also provides an easily accessible interface for downloading and processing publicly available datasets. Additionally, we emphasize how rigorous testing in scientific software packages is critical to the reproducibility and transparency of research, enabling other researchers to verify and build upon previous findings.
Daniel G. Krakowczyk, David R. Reich, Jakob Chwastek, Deborah N. Jakobi, Paul Prasse, Assunta Süss, Oleksii Turuta, Paweł Kasprowski, Lena A. Jäger
2023-04-11T18:39:37Z
http://arxiv.org/abs/2304.09859v1
# pymovements: A Python Package for Eye Movement Data Processing ###### Abstract. We introduce _pymovements_: a Python package for analyzing eye-tracking data that follows best practices in software development, including rigorous testing and adherence to coding standards. The package provides functionality for key processes along the entire preprocessing pipeline. This includes parsing of eye tracker data files, transforming positional data into velocity data, detecting gaze events like saccades and fixations, computing event properties like saccade amplitude and fixational dispersion, and visualizing data and results with several types of plotting methods. Moreover, _pymovements_ also provides an easily accessible interface for downloading and processing publicly available datasets. Additionally, we emphasize how rigorous testing in scientific software packages is critical to the reproducibility and transparency of research, enabling other researchers to verify and build upon previous findings. eye movements, preprocessing, event detection, software packages, scientific computing
## 3. Package design _pymovements_ is structured around a central module, the dataset module, which provides a unified interface for working with eye movement datasets. This module makes it easy to load eye movement data and save relevant results for a given eye movement study. The dataset class provides access to the key functionality needed for preprocessing: from transforming positional signals into velocity signals, through detecting gaze events such as (micro-)saccades and fixations, to analyzing specific properties of detected events, such as saccade amplitude or fixational dispersion. The dataset class is flexible enough to allow seamless integration of a user's own local datasets. Each dataset object holds recording session parameters, gaze dataframes for raw eye gaze signals, and event dataframes, which contain the detected events within the eye gaze signals. These dataframes are organized in a consistent manner, making it easy to work with multiple datasets and compare results across studies. _pymovements_ also provides a simple interface to access publicly available datasets. Except for an example toy dataset, these public datasets are not part of the _pymovements_ package itself; rather, they are definitions of where to find dataset resources online, with additional details on how to load the input data consistently across several datasets. This implementation aims to be simple enough to facilitate the inclusion of additional datasets, even for researchers without any Python programming experience. The _pymovements_ package then automatically takes care of downloading the particular dataset.
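To make the dataset interface described above concrete, a hypothetical end-to-end sketch is given below. The workflow (download, load, convert positions to velocities, detect events) follows the description in this section, but the exact class, method, and argument names are assumptions and may differ between package versions; the online documentation and tutorials are the authoritative reference.

```python
import pymovements as pm

# Hypothetical pipeline; identifiers are illustrative and should be
# checked against the pymovements documentation for the installed version.
dataset = pm.Dataset('ToyDataset', path='data/ToyDataset')
dataset.download()  # fetch the publicly hosted resources for this dataset
dataset.load()      # parse the eye tracker files into gaze dataframes

dataset.pix2deg()   # pixel coordinates -> degrees of visual angle
dataset.pos2vel()   # positional signal -> velocity signal

# detect gaze events (e.g. microsaccades) into event dataframes
dataset.detect_events(pm.events.microsaccades)
```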
## 4. Documentation _pymovements_ comes with comprehensive online documentation that explains each function and module in detail along with examples of typical usage. In addition to this, we provide detailed and comprehensive step-by-step tutorials that guide users through the entire data processing pipeline, enabling them to use _pymovements_ to its full potential. These tutorials cover key use cases in eye movement data processing and are accompanied by a small example toy dataset. The documentation is automatically generated from the source code and uploaded to the web host with each new update, such that it always reflects the most recent state of the package. For each update all examples and tutorials are verified to run successfully and to produce the expected results. ## 5. Testing Testing is one of the most important software development processes to maintain high quality and consistency of the software. _pymovements_ has a robust testing pipeline to ensure that additions to the codebase are thoroughly tested and reviewed before being accepted. Every proposed change must have at least one approving review to be incorporated and must pass all existing tests and checks of the testing pipeline. We enforce 100% test coverage and a consistent coding style by mandatory linting and type checking. ## 6. Integration with existing software The _pymovements_ package supports native integration with other Python libraries, such as _Polars_, _Pandas_, _NumPy_, and _Matplotlib_. All of the data obtained during processing can be easily exported into commonly used file formats such as CSV and Feather files. Furthermore, _pymovements_ is compatible with the R programming language through the _reticulate_ package. This means that researchers can use _pymovements'_ functionality within the R environment, allowing for seamless integration with R's extensive library of statistical tools and visualization packages. ## 7. Future work _pymovements_ is an evolving package that is continuously being improved and expanded to meet the needs of researchers. We plan to add functionality to allow researchers to analyze reading behavior and expand the package's capabilities by including visual stimuli such as image and video data into the analysis pipeline. To increase the versatility of the package, we plan to add more event detection algorithms and public dataset definitions. Additionally, we are working on adding parsing support for more eye trackers, enabling researchers to process and analyze data from a wider range of eye-tracking systems. ## 8. Broader Impact One of the key benefits of _pymovements_ is its ability to unify the work across different research groups working with eye movement preprocessing pipelines. By providing a standardized, rigorously tested interface, _pymovements_ helps to ensure that analyses can be easily reproduced and verified by other researchers, increasing the transparency and reliability of eye movement research. Due to its permissive open-source license, researchers can contribute their own methods and datasets, increasing both the visibility of their own work and the feature set of the software. _pymovements_ thus has the potential to have a broad impact on the field of eye movement research, facilitating greater collaboration and reproducibility across different research groups. ###### Acknowledgements. This work was partially funded by the German Federal Ministry of Education and Research (grant 01IS20043) and is based upon work from COST Action MultiEYE, CA21131, supported by COST (European Cooperation in Science and Technology).
2310.02893
Photoelectron circular dichroism upon multiphoton ionization of a chiral alcohol
We present the first photoelectron circular dichroism (PECD) measurements of chiral alcohols, and in particular 1-Phenylethanol, using multiphoton ionization at 400 nm. Observed PECD values were rather small at $\sim2$%, but could be reliably extracted using both hemispherical integration and Abel inversion approaches. Experimental uncertainties of $<0.3$% (2$\sigma$) were achieved with a collection time of around 2 hours. All experiments were conducted in a new compact spectrometer, featuring a continuous flow supersonic expansion and velocity-map imaging detection. The latter is crucial to extract reliable PECD values, as it allows discrimination of different features in the photoelectron spectrum, which exhibit different and opposing PECD signals. The use of a tabletop multiphoton universal ionization scheme is an important step towards a viable analytical chiral spectrometer based on PECD.
Peter Krüger, Michiel Balster, Bhargava Ram Niraghatam, Maurice H. M. Janssen, Daniel A. Horke
2023-10-04T15:30:37Z
http://arxiv.org/abs/2310.02893v1
# Photoelectron circular dichroism upon multiphoton ionization of a chiral alcohol ###### Abstract We present the first photoelectron circular dichroism (PECD) measurements of chiral alcohols, and in particular 1-Phenylethanol, using multiphoton ionization at 400 nm. Observed PECD values were rather small at \(\sim 2\%\), but could be reliably extracted using both hemispherical integration and Abel inversion approaches. Experimental uncertainties of \(<0.3\%\) (\(2\sigma\)) were achieved with a collection time of around 2 hours. All experiments were conducted in a new compact spectrometer, featuring a continuous flow supersonic expansion and velocity-map imaging detection. The latter is crucial to extract reliable PECD values, as it allows discrimination of different features in the photoelectron spectrum, which exhibit different and opposing PECD signals. The use of a tabletop multiphoton universal ionization scheme is an important step towards a viable analytical chiral spectrometer based on PECD. ## Introduction The phenomenon of chiral molecules, that is molecules with non-superimposable mirror images, continues to fascinate scientists across a wide variety of disciplines. From the inherent chirality in biomolecules, leading to enantiomer-dependent bioactivities and the so-called _homochirality of life_[1; 2], to fundamental physics and the quest for parity violation in chiral molecules [3; 4]. Along with the general interest in chiral molecules ran the development of new chirally-sensitive spectroscopic techniques. While traditionally this was the domain of dichroic absorption spectroscopies, such as electronic or vibrational circular dichroism, several new approaches have emerged in the last decades, offering much higher sensitivities and chiral responses [5; 6; 7]. The concept of photoelectron circular dichroism (PECD) has been shown to be particularly suitable. This involves measuring the direction of outgoing photoelectrons following ionization with chiral (circularly polarized) light, with a PECD effect manifesting itself as an asymmetry in the forward and backward emission yield [8]. PECD measurements have emerged as an attractive option for chiral analysis, since they frequently offer much higher chiral responses (sometimes 10s of % [9]) than absorption-type spectroscopies, and can be combined with the high detection efficiency and throughput of charged-particle imaging [10; 11]. PECD was first theoretically described in the 1970s [12; 13]; this description was further refined in the early 2000s [14; 15], before the first experimental observation some 30 years after the initial theoretical predictions. These first experiments utilized circularly polarized vacuum ultraviolet (VUV) synchrotron light sources [16; 17]. Using single photon VUV photoionization for PECD detection has been a thriving field ever since [18], nowadays featuring dedicated endstations at synchrotron facilities [19; 20]. The realization of laboratory-based PECD measurements was shown around 10 years later [5; 21], utilizing femtosecond resonance-enhanced multiphoton ionization (REMPI) and coincidence detection of the produced photoion and photoelectron [22]. This was followed by the demonstration of the chiral analysis of complex mixtures using the same approach [23].
The laboratory-based PECD approach has also expanded significantly in the past years [24], and recent technological demonstrations include the use of nanosecond high-resolution REMPI as an ionization source [25; 26], high-harmonic based XUV light sources [27], the incorporation of novel molecular sources [28], and high-throughput experiments with fiber-based high repetition-rate femtosecond lasers [29; 30]. Chiral analysis using PECD has also been demonstrated for photodetachment from gas-phase anions recently [31; 32], including the first chirality measurement of larger biopolymers [33]. In this contribution we report the first PECD measurements of the two enantiomers of 1-Phenylethanol, as shown in Figure 1. Figure 1: Structures of 1-Phenylethanol enantiomers. These were performed using femtosecond multiphoton ionization (fs-MPI) at 400 nm, thereby demonstrating chiral analysis with a fully universal table-top ionization approach. The resulting PECD was rather small, \(\sim\)2%, but could be measured with an absolute error of <0.3% in a 2 hour long measurement with a 3 kHz repetition rate laser system. These measurements were conducted on a new and compact PECD spectrometer, specifically designed to be cost and space effective, while still making use of the benefits of a continuous cold supersonically-expanded molecular beam, and velocity-map imaging photoelectron detection [11]. Since the spectrometer makes use of a continuous molecular beam, the duty cycle is limited only by the repetition rate of the laser and hence could be improved by 2-3 orders of magnitude by the use of high repetition-rate fiber-based femtosecond light sources. ### Experimental Methods Experiments were conducted in a newly designed compact velocity-map imaging (VMI) spectrometer; a schematic of the new experimental setup is shown in Figure 2. This consists of two vacuum chambers, with differential pumping provided by the VMI electrodes that separate the chambers. The compact source and interaction chamber is based on a CF63 6-way cross and houses the VMI electrodes and gas nozzle. It is pumped by a small turbomolecular pump (HiPace 300, Pfeiffer Vacuum), with a typical operating pressure (under load) of \(1\times 10^{-5}\) mbar. The detection chamber provides a \(\mu\)-metal shielded field-free flight tube for electrons or ions of 400 mm, before particles are detected on a microchannel plate (MCP) detector, coupled to a fast phosphor screen (Photonis ADP 2PS, 40 mm, 1:60, P47 phosphor). This chamber is pumped by a 700 l/s pump (HiPace 700, Pfeiffer Vacuum), with typical operating pressures under load of \(<1\times 10^{-6}\) mbar. Sample molecules are introduced into the spectrometer by means of supersonic expansion from a handmade glass nozzle. The nozzle tip has an opening diameter of about 15 \(\mu\)m and is coated in silver to allow high voltages to be applied. The nozzle is placed equidistant between the extractor and repeller electrode of the VMI setup. A high voltage is applied to the silver coated nozzle tip to minimize distortions of the electric field due to the tip, and it is typically operated at the average voltage of repeller and extractor (i. e., \(V_{\mathrm{tip}}=\frac{1}{2}(V_{\mathrm{rep}}+V_{\mathrm{extr}})\)). The tip of the nozzle is located \(\sim 5\) mm from the interaction point in the center of the VMI, where the expanding gas plume is intersected orthogonally by a focussed laser beam.
The VMI is based on the classic 3-plate design from Eppink & Parker [11], with an outer diameter of 40 mm, and central orifices of 5 mm (extractor) and 10 mm (ground). Figure 2: Schematic of the experimental setup. Molecules are supersonically expanded into a vacuum through a 15 \(\upmu\)m orifice capillary, located 5 mm from the interaction point at the center of a velocity-map imaging (VMI) spectrometer. Molecules are ionized by circularly polarized 100 fs 400 nm laser pulses, with the helicity controlled by a motorized quarter-wave plate. Electrons or ions are accelerated by the VMI electrodes towards a position sensitive detector, _via_ a \(\mu\)-metal shielded field-free flight tube. For mass spectra the detector response is picked up by a constant-fraction discriminator and time stamped by a time-to-digital converter. Photoelectron images are recorded with a fast CMOS camera and centroided on-the-fly. Produced charged particles are accelerated towards the detector, where we either collect time-of-flight mass spectra or photoelectron images. For the former, the response from the MCP is counted after pre-amplification and constant-fraction-discrimination (Surface Concept 1-Channel Preamplifier-CFD) using a high-resolution time-to-digital card (chronologic xTDC4-PCIe, 16 ps bins). For electron detection, impact positions on the phosphor screen are recorded with a fast CMOS camera (Basler acA720-520um), operating at a frame rate of around 1 kHz and a resolution of \(256\times 256\) pixel. To increase resolution and overcome any spatial inhomogeneities of the detector response, events are centroided on-the-fly and only centroids retained for further analysis. As a light source a 3 kHz Ti:Sapphire laser (Spectra Physics SpitfireAce) was used, producing fundamental output centered at 800 nm and with 100 fs pulse duration. Photons at 400 nm were generated by frequency doubling in a beta-barium borate (BBO) crystal. The pulse energy was controlled by an attenuator consisting of a half-wave plate (CASIX WPZ1315) and a linear Glan laser polarizer. Circularly polarized light was then generated by an achromatic quarter-wave plate (B. Halle 300-470 nm). The produced helicity could be inverted by rotating the waveplate 90 degrees using a motorized rotation stage (Standa 8MRU-1). Both circular polarization states were characterized by rotating a linear polarizer (Thorlabs GL10B) behind the quarter-wave plate while measuring the transmitted power. Stokes vectors of (1, 0.06, 0.034, -0.998) and (1, 0.053, 0.039, 0.998) were determined for LCP and RCP, respectively (see supplementary information for further details). The laser is focussed into the VMI spectrometer using a plano-spherical lens (Thorlabs LA4904-UV, f = 15 cm), yielding typical spot sizes of 15 \(\upmu\)m (FWHM). A single PECD measurement consisted of 30000 camera frames for each of the two polarizations, and hence takes about 1 min. For the data presented here, the mean PECD and corresponding standard errors were determined from a total of 260 measurements; confidence intervals are given as two standard errors. During data acquisition the polarization state sequence is alternated to prevent systematic errors by long-term drifts, i.e. data is collected in the sequence (LCP-RCP)-(RCP-LCP)-.... Photoelectron images were analyzed using Python scripts including the pyAbel package [34], utilizing the rBasex method for inversion with finite differences regularization (Strength = 30000) [35].
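For the inversion step itself, the rBasex method with finite-difference regularization maps onto PyAbel's rbasex transform. The snippet below is a sketch of that call as we read the PyAbel documentation; the argument names (in particular `odd` and `reg`) are assumptions that should be verified against the installed version:

```python
import abel
import numpy as np

image = np.load('centroided_vmi_image.npy')  # hypothetical input image

# rBasex inversion keeping odd Legendre orders up to n = 5 (needed for
# PECD) with finite-difference regularization of strength 30000
recon, distr = abel.rbasex.rbasex_transform(
    image, order=5, odd=True, reg=('diff', 30000))
# distr exposes the radial intensity and Legendre coefficients beta_n
```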
Samples of (S)- and (R)-Phenylethanol, each with a nominal purity of 97%, were purchased from Merck Life Science and used without further purification. The sample vapour (at room temperature) is picked up and transported to the nozzle by a small Helium flow (300 mbar) for expansion into the vacuum chamber of the spectrometer. ## Results and Discussion An exemplary 400 nm multiphoton ionization mass spectrum of R-1-Phenylethanol is shown in the top half of Figure 3. The inset shows a detailed view of the m/z region \(120-124\), which yields an experimental mass resolution (FWHM) of \(\frac{M}{\Delta M}=523\). The mass spectrum is dominated by the parent ion at 122 Da, with main fragments appearing at m/z = 43, 77, 78, 79, and 107. For comparison, we also show a literature (NIST chemistry webbook) electron impact ionization mass spectrum in the lower half of Figure 3 [36]. It is clear that femtosecond multiphoton ionization (fs-MPI) is a relatively soft process, inducing significantly less fragmentation than electron impact ionization, and a much larger fraction of intact parent ions [37]. Apart from a small contamination appearing at m/z = 32, similar fragments are encountered in both spectra, allowing the clear identification of 1-Phenylethanol. Figure 3: Mass spectra of R-1-Phenylethanol. (a) Time-of-flight mass spectrum acquired in our new spectrometer using femtosecond multiphoton ionization at 400 nm. The inset shows in detail the mass region of the parent ion, indicating the achievable mass resolution of our spectrometer. (b) Reference electron impact ionization mass spectrum from the NIST database [36]. We now consider the photoelectron image and spectrum of 1-PE collected at 400 nm and shown in Figure 4, collected at laser pulse energies of 1.5 \(\upmu\)J (corresponding to \(\sim 2\times 10^{12}\) W/cm\({}^{2}\)). The photoelectron images exhibited two features: a sharp and intense feature at very low electron kinetic energy (eKE) in the center of the image, and a weaker outer ring at higher eKE. The latter feature peaks at around 0.35 eV electron kinetic energy. The (non-adiabatic) ionization energy for 1-PE has been reported as 8.89 eV [38]; ionization at 400 nm thus requires 3 photons which carry a total photon energy of 9.3 eV. Assuming such a 3-photon process, our data yields a vertical (non-adiabatic) ionization energy of \(8.95\pm 0.1\) eV, fully consistent with the literature value. The observed feature is significantly broader than our experimental resolution and bandwidth (see S.I. for details), most likely indicating a significant change in molecular geometry upon photoionization, as has been noted before [38]. From the high-energy cutoff at around 0.6 eV in Figure 4 we can estimate the adiabatic ionization energy as around 8.7 eV. To obtain enantiosensitive information, photoelectron images were obtained for both helicities of circularly polarized light. For these measurements we focus only on the weaker outer feature, since a feature in the center of the image does not yield information on the photoelectron angular distribution and hence photoemission asymmetry. The laser pulse energy was hence increased to 2 \(\upmu\)J (fluence of \(\sim 3\times 10^{12}\) W/cm\({}^{2}\)) to obtain sufficient count rates in the high eKE region. This caused centroiding artifacts in the image center due to overlapping events and this region has therefore been masked out in the images shown here. Figure 4: Photoelectron spectrum and raw photoelectron image (inset) of 1-Phenylethanol at reduced laser power to avoid saturation effects in the center of the image.
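The three-photon energy bookkeeping above is quick to verify:

```python
# 3-photon ionization energetics at 400 nm, using the values quoted above
h = 4.135667696e-15       # Planck constant in eV s
c = 2.99792458e8          # speed of light in m/s

E_photon = h * c / 400e-9     # ~3.10 eV per 400 nm photon
E_total = 3 * E_photon        # ~9.30 eV total in a 3-photon process
eKE = E_total - 8.95          # ~0.35 eV, matching the observed peak
print(f'{E_photon:.2f} eV -> {E_total:.2f} eV, eKE = {eKE:.2f} eV')
```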
Photoelectron images for both enantiomers and both helicities are shown in Figure 5, where the left half always corresponds to the antisymmetric difference image, and the right half to the symmetric sum image of the two enantiomers. The top panels (a,b) show raw images, whereas the bottom panels (c,d) show Abel-inverted images. The center of all images has been masked out, as explained above. The anti-symmetrized difference images (LCP-RCP) for R- and S-1PE show a clear forward-backward asymmetry along the laser propagation axis, with photoelectrons for R-1PE preferentially emitted in the forward direction for LCP light, and correspondingly in the backward direction for S-1PE. Figure 5: Antisymmetric difference (left) and symmetric sum (right) photoelectron images based on raw (a, b) as well as Abel-inverted data (c, d) for S- (a, c) and R-1PE (b, d), respectively. For enhanced contrast the raw images were 2\(\times\) binned, resulting in \(128\times 128\) pixel images. The central region was masked due to artifacts caused by abundant low kinetic energy electrons. Dashed circles indicate the radial region used for calculation of the PECD effect, see text for details. The laser propagation direction is upwards in all images. This is apparent both in the raw difference images as well as in the Abel-inverted images. This PECD effect can be quantified in a single value by either simple hemispherical integration or based on the Legendre coefficients of the photoelectron angular distributions. For this only the radial region of the images containing the high-eKE features was used, as indicated by the dashed yellow circles in Figure 5. The range used was 0.21-0.49 eV, corresponding to the FWHM of the high eKE feature. For the hemispherical integration approach, the PECD is then calculated from the respective electron yields in the forward (_fwd_) and backward (_bwd_) direction for both helicities of light (_rcp,lcp_) [5; 8]: \[\text{PECD}=2\left(\frac{Y_{lcp}^{fwd}-Y_{lcp}^{bwd}}{Y_{lcp}^{fwd}+Y_{lcp}^{bwd}}-\frac{Y_{rcp}^{fwd}-Y_{rcp}^{bwd}}{Y_{rcp}^{fwd}+Y_{rcp}^{bwd}}\right). \tag{1}\] This approach yielded PECD values of \(+1.67\%\pm 0.26\%\) and \(-2.25\%\pm 0.26\%\) for R- and S-1PE, respectively. Throughout, all given uncertainties correspond to 2 standard errors. Alternatively, we can define the PECD based on the (odd) Legendre coefficients (\(\beta_{n}\)) contributing to the Abel-inverted photoelectron image [39]. For a 3-photon process Legendre polynomials up to order \(n=5\) need to be taken into account, and hence we define the PECD as [5]: \[\text{PECD}=2\beta_{1}-\frac{1}{2}\beta_{3}+\frac{1}{4}\beta_{5}. \tag{2}\] This yielded PECD values of \(+1.63\%\pm 0.30\%\) and \(-2.08\%\pm 0.28\%\) for R- and S-1PE, respectively.
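Both definitions reduce to a few lines of arithmetic. The following sketch (the helper names are illustrative) implements Eqs. (1) and (2); multiplying by 100 gives the percentage values quoted below:

```python
def pecd_hemispherical(y_lcp_fwd, y_lcp_bwd, y_rcp_fwd, y_rcp_bwd):
    """Eq. (1): forward/backward electron yields for LCP and RCP light."""
    asym_lcp = (y_lcp_fwd - y_lcp_bwd) / (y_lcp_fwd + y_lcp_bwd)
    asym_rcp = (y_rcp_fwd - y_rcp_bwd) / (y_rcp_fwd + y_rcp_bwd)
    return 2 * (asym_lcp - asym_rcp)

def pecd_legendre(beta1, beta3, beta5):
    """Eq. (2): odd Legendre coefficients of the Abel-inverted image,
    valid for a 3-photon ionization process."""
    return 2 * beta1 - 0.5 * beta3 + 0.25 * beta5
```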
The corresponding values for the two enantiomers are also summarized in Table 1. PECD values extracted from both analysis frameworks are in excellent agreement and show the expected sign inversion. They furthermore agree with a recent PEELD study on chiral alcohols [40]. The S-1PE enantiomer consistently exhibited a larger PECD effect than measured for the R-1PE, potentially indicating that our sample of R-1PE had a reduced enantiopurity. \begin{table} \begin{tabular}{l l l} \hline Sample & PECD (Abel inversion) & PECD (hemisph. int.) \\ \hline R-1PE & \(+1.63\pm 0.30\) & \(+1.67\pm 0.26\) \\ S-1PE & \(-2.08\pm 0.28\) & \(-2.25\pm 0.26\) \\ \hline \end{tabular} \end{table} Table 1: PECD values of enantiopure 1-PE samples determined by hemispherical integration of raw images and PAD fitting of Abel-inverted images. Uncertainties are given as \(2\sigma\). This direct comparison of the two analysis pathways shows that for reliable quantitative PECD determination a full Abel analysis of the VMI images is not required (at least as long as there are no strongly overlapping features), and the simpler hemispherical integration is sufficient. Indeed the slightly larger uncertainties for Abel inversion are likely due to the reduced number of electrons taken into account when only considering the central slice of the distribution. The use of the VMI technique, and hence the ability to define electron kinetic energy ranges over which to determine the PECD, is highly beneficial compared to simple integrating detectors such as half-moon anodes or electron multipliers. It allows differentiation of different features in the photoelectron spectrum that frequently correspond to different electronic states in the cation, with potentially very different PECD effects that cannot be distinguished in simple integrating detectors. Indeed, this is also the case for 1PE here. Abel inverted images in Figure 5 clearly show a feature with inverted PECD asymmetry at lower radii, albeit with much reduced intensity. Nonetheless, a simple integrating detector would have recorded much lower PECD values of 1.00 (R-1PE) and -1.42 (S-1PE). Abel analysis furthermore lets us analyze the contributions from individual Legendre coefficients to the overall observed PAD. The PECD effect is contained only in the odd Legendre coefficients, and the extracted values for the two enantiomers are shown in Figure 6. Numerical values for all extracted coefficients are given in the supplementary information. This clearly shows that the main contributors to the PECD are the \(\beta_{1}\) and \(\beta_{3}\) terms, which also show the expected mirroring behavior. While the magnitude of the PECD effect of only 2 % is small compared to many previously investigated substances, it is clearly measurable with high fidelity in our compact spectrometer. The extracted values are furthermore comparable with a recently published photoelectron elliptical dichroism (PEELD) study, which utilized (3+1) multiphoton ionization at 520 nm to study (amongst others) chiral alcohols [40]. A previous study already demonstrated the recording of PECD signals in the region of above-threshold ionization (ATI) and tunnel ionization [41]; with the current study we are within the multiphoton ionization region (Keldysh parameter \(\gamma\sim 2\)). This confirms the earlier observation that even with strong-field ionization, whether it is ATI, tunneling or MPI, the PECD effect still persists. ## Conclusion We have shown the first PECD measurements of 1-Phenylethanol using femtosecond multiphoton ionization. This highlights the universality of the fs-MPI approach for chiral analysis. The PECD for 1-PE, albeit small at only \(\sim 2\%\), was reliably extracted using both hemispherical integration and Abel analysis, and showed the expected mirroring behavior.
Experiments were conducted on a new compact PECD spectrometer, featuring a small diameter nozzle and optimized pumping geometry, which enables a small footprint mobile machine, with much reduced pumping requirements compared to conventional gas-phase photoelectron spectrometers. The use of VMI detection yields additional spectroscopic information on the system and allows the filtering out of particular photoelectron features for reliable PECD determination. In the current iteration using a 3 kHz laser, reliable quantitative PECD analysis can be performed in around one hour. We are currently working on implementing a high repetition-rate femtosecond laser that will improve the duty cycle, and hence reduce the measurement time, by 2-3 orders of magnitude. This will pave the way for integration of PECD-based chiral analysis with standard analytical separation techniques such as chromatography. Figure 6: Amplitudes of the odd Legendre coefficients for both enantiomers extracted from the PAD fitting. The main contribution to the PECD comes from the \(\beta_{1}\) and \(\beta_{3}\) terms, with all orders showing the expected sign inversion between the enantiomers. ## Acknowledgements This work was supported by the Netherlands Organization for Scientific Research (NWO) under grant numbers STU.019.009, VIDI.193.037 and 712.018.004, and the European Regional Development fund (EFRO, OP Oost) under project number PROJ-00949. We furthermore thank the Spectroscopy of Cold Molecules Department, and in particular Prof. Bas van de Meerakker, for continued support. ## Conflicts of Interest MHMJ is founder and CEO of _MassSpecpecD BV_ (www.MassSpecpecD.com).
2305.17796
Comparison Problems for Radon Transforms
Given two non-negative functions $f$ and $g$ such that the Radon transform of $f$ is pointwise smaller than the Radon transform of $g$, does it follow that the $L^p$-norm of $f$ is smaller than the $L^p$-norm of $g$ for a given $p>0$? We consider this problem for the classical and spherical Radon transforms. In both cases we point out classes of functions for which the answer is affirmative, and show that in general the answer is negative if the functions do not belong to these classes. The results are in the spirit of the solution of the Busemann-Petty problem from convex geometry, and the classes of functions that we introduce generalize the class of intersection bodies introduced by Lutwak in 1988. We also deduce slicing inequalities that are related to the well-known Oberlin-Stein type estimates for the Radon transform.
Alexander Koldobsky, Michael Roysdon, Artem Zvavitch
2023-05-28T18:56:06Z
http://arxiv.org/abs/2305.17796v1
# Comparison problems for Radon transforms ###### Abstract. Given two non-negative functions \(f\) and \(g\) such that the Radon transform of \(f\) is pointwise smaller than the Radon transform of \(g\), does it follow that the \(L^{p}\)-norm of \(f\) is smaller than the \(L^{p}\)-norm of \(g\) for a given \(p>0\)? We consider this problem for the classical and spherical Radon transforms. In both cases we point out classes of functions for which the answer is affirmative, and show that in general the answer is negative if the functions do not belong to these classes. The results are in the spirit of the solution of the Busemann-Petty problem from convex geometry, and the classes of functions that we introduce generalize the class of intersection bodies introduced by Lutwak in 1988. We also deduce slicing inequalities that are related to the well-known Oberlin-Stein type estimates for the Radon transform. Key words and phrases: Radon transform, spherical Radon transform, star bodies, intersection functions, intersection bodies 2020 Mathematics Subject Classification: Primary: 52A20, 42B10, 46F12; Secondary: 44A12 A.K. was supported in part by the National Science Foundation grant DMS-2054068. M.R. was supported in part by the Zuckerman STEM Leadership program. A.Z. was supported in part by the U.S. National Science Foundation Grant DMS-2000304, the United States - Israel Binational Science Foundation (BSF) Grant 2018115, and in part by Bezout Labex funded by ANR, reference ANR-10-LABX-58. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1929284 and the Simons Foundation under Grant No. 815891 while the authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the _Harmonic Analysis and Convexity_ semester program, and while the second named author was in residence at the Institute for Computational and Experimental Research in Mathematics during the _Discrete Optimization: Mathematics, Algorithms, and Computation_ semester program, during the 2022-2023 academic year. The spherical Radon transform of a continuous function \(f\) on the sphere \(S^{n-1}\) is a continuous function \(Rf\) on the sphere defined by \[Rf(\theta)=\int_{S^{n-1}\cap\theta^{\perp}}f(\xi)d\xi,\quad\theta\in S^{n-1}.\] Here \(\theta^{\perp}\) denotes the central hyperplane orthogonal to the direction \(\theta\). For the properties of the Radon and spherical Radon transforms and their various geometric applications, we refer the reader to the monographs [10, 20, 21, 22, 31, 46]. We consider the following comparison problems. **Problem 1.1** (Comparison problem for the spherical Radon transform).: _Consider two even, continuous, positive functions \(f,g\) on \(S^{n-1},\ n\geq 3\), and let \(p>0\). If_ \[Rf(\theta)\leq Rg(\theta)\quad\text{for all $\theta\in S^{n-1}$}, \tag{1}\] _does it follow that \(\|f\|_{L^{p}(S^{n-1})}\leq\|g\|_{L^{p}(S^{n-1})}\)?_ **Problem 1.2** (Comparison problem for the Radon transform).: _Let \(p>0\).
Given a pair of even, continuous functions \(\varphi,\psi\colon\mathbb{R}^{n}\to\mathbb{R}_{+},\ n\geq 2\), each of which is integrable and integrable over all affine hyperplanes, satisfying the condition:_ \[\mathcal{R}\varphi(t,\theta)\leq\mathcal{R}\psi(t,\theta),\quad\text{for all $(t,\theta)\in\mathbb{R}\times S^{n-1}$}, \tag{2}\] _does it follow that \(\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\leq\|\psi\|_{L^{p}(\mathbb{R}^{n})}\)?_ By a well-known integration formula on the sphere (see for example [31, p. 28]) and by Cavalieri's principle, the answers to both problems are affirmative for \(p=1\). However, for \(p\neq 1\) the conclusions of the above problems may fail to be true in general. For Problem 1.2 we demonstrate this with the following simple example. **Example 1.3**.: _Let \(n\geq 2\), \(M>1\), \(c=M^{-n+1}\), and set \(p>\frac{n}{n-1}\). Consider the functions \(\varphi(x)=\chi_{B_{2}^{n}}(x)\) and \(\psi(x)=c\chi_{MB_{2}^{n}}(x)\). Then it is clear that the inequality (2) holds, namely that_ \[\mathcal{R}\varphi(t,\theta)\leq\mathcal{R}\psi(t,\theta),\quad\text{for all $(t,\theta)\in\mathbb{R}\times S^{n-1}$},\] _while \(\|\varphi\|_{L^{p}(\mathbb{R}^{n})}>\|\psi\|_{L^{p}(\mathbb{R}^{n})}\)._
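Since Example 1.3 involves only closed-form quantities, a quick numerical sanity check is possible. The Python sketch below (an illustration added here, not part of the argument; the particular values \(n=3\), \(M=2\), \(p=2\) are arbitrary admissible choices) uses nothing beyond the standard formulas for the volume of a Euclidean ball and of its hyperplane sections.

```python
import numpy as np
from scipy.special import gamma

def ball_volume(n, R=1.0):
    # volume of the n-dimensional Euclidean ball of radius R
    return np.pi ** (n / 2) / gamma(n / 2 + 1) * R ** n

n, M = 3, 2.0
p = n / (n - 1) + 0.5              # any p > n/(n-1) will do
c = M ** (-(n - 1))

# Radon transforms of the two indicator functions:
#   R phi(t, theta) = kappa_{n-1} (1 - t^2)^{(n-1)/2}            for |t| <= 1,
#   R psi(t, theta) = c kappa_{n-1} (M^2 - t^2)^{(n-1)/2}
#                   = kappa_{n-1} (1 - t^2 / M^2)^{(n-1)/2},
# where kappa_{n-1} = |B_2^{n-1}|; both vanish for larger |t|.
kappa = ball_volume(n - 1)
t = np.linspace(-1, 1, 1001)
R_phi = kappa * (1 - t ** 2) ** ((n - 1) / 2)
R_psi = kappa * (1 - (t / M) ** 2) ** ((n - 1) / 2)
assert np.all(R_phi <= R_psi + 1e-12)      # condition (2) holds

# L^p norms:  ||phi||_p^p = |B_2^n|,  ||psi||_p^p = c^p M^n |B_2^n|
norm_phi = ball_volume(n) ** (1 / p)
norm_psi = (c ** p * ball_volume(n, M)) ** (1 / p)
assert norm_phi > norm_psi                 # ...yet the L^p norms reverse
print(norm_phi, norm_psi)                  # 2.0466... > 1.4472...
```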
Our main source of motivation and guidance is the Busemann-Petty problem in convex geometry, which was introduced in 1956 in [5] and solved at the end of the 1990s. Suppose \(K,L\subset\mathbb{R}^{n}\) are two origin-symmetric convex bodies so that \[|K\cap\theta^{\perp}|\leq|L\cap\theta^{\perp}| \tag{3}\] for every direction \(\theta\in S^{n-1}\). Does it necessarily follow that \(|K|\leq|L|\)? Here \(|\cdot|\) denotes the volume of the appropriate dimension. It was proven that the answer is affirmative when \(n\leq 4\) and negative when \(n\geq 5\); see [10, 31] for the solution and its history. Note that Problem 1.1 is a generalization of the Busemann-Petty problem (choose \(f=\|\cdot\|_{K}^{-n+1}\), \(g=\|\cdot\|_{L}^{-n+1}\) and \(p=\frac{n}{n-1}\)). Also, it was shown in [50, 51] that if one considers the Busemann-Petty problem with volume replaced by an arbitrary measure with even, continuous, and positive density, then the answer remains the same. One of the critical ingredients in the solution of the Busemann-Petty problem is the notion of an intersection body introduced by Lutwak [17, 40]; see Section 2 below for a definition. Lutwak showed that if the body \(K\) in (3) is an intersection body, then the answer to the Busemann-Petty problem is affirmative. On the other hand, every origin-symmetric convex non-intersection body can be perturbed to construct a counterexample. Therefore, the answer to the Busemann-Petty problem in \(\mathbb{R}^{n}\) is affirmative if, and only if, every origin-symmetric convex body in \(\mathbb{R}^{n}\) is an intersection body. Another ingredient in the Fourier analytic solution of the Busemann-Petty problem in [11] is the characterization of intersection bodies in terms of the Fourier transform. It was proven in [35] that an origin-symmetric star body \(K\subset\mathbb{R}^{n}\) is an intersection body if, and only if, \(\|\cdot\|_{K}^{-1}\) represents a positive definite distribution on \(\mathbb{R}^{n}\). Our approach to the comparison problems is based on these two ideas. We introduce special classes of functions that play the role of intersection bodies. For the spherical comparison problem, this is the class of functions \(f\) on \(S^{n-1}\) for which the extension of \(f^{p-1}\) to an even homogeneous function of degree \(-1\) on \(\mathbb{R}^{n}\) represents a positive definite distribution. The results resemble Lutwak's connections in the Busemann-Petty problem. **Theorem 1.4**.: _Let \(f,g\) be even continuous positive functions on the sphere \(S^{n-1}\), and suppose that_ \[Rf(\theta)\leq Rg(\theta),\qquad\text{for all }\theta\in S^{n-1}. \tag{4}\] _Then:_ * _Suppose that for some_ \(p>1\) _the function_ \(|x|_{2}^{-1}f^{p-1}\left(\frac{x}{|x|_{2}}\right)\) _represents a positive definite distribution on_ \(\mathbb{R}^{n}\)_. Then_ \(\|f\|_{L^{p}(S^{n-1})}\leq\|g\|_{L^{p}(S^{n-1})}\)_._ * _Suppose that for some_ \(0<p<1\) _the function_ \(|x|_{2}^{-1}g^{p-1}\left(\frac{x}{|x|_{2}}\right)\) _represents a positive definite distribution on_ \(\mathbb{R}^{n}\)_. Then_ \(\|f\|_{L^{p}(S^{n-1})}\leq\|g\|_{L^{p}(S^{n-1})}\)_._ **Theorem 1.5**.: _The following hold true:_ * _Let_ \(g\) _be an infinitely smooth strictly positive even function on_ \(S^{n-1}\) _and_ \(p>1\)_. Suppose that the distribution_ \(|x|_{2}^{-1}g^{p-1}\left(\frac{x}{|x|_{2}}\right)\) _is not positive definite on_ \(\mathbb{R}^{n}\)_. Then there exists an infinitely smooth even function_ \(f\) _on_ \(S^{n-1}\) _so that the condition (_4_) holds, but_ \(\|f\|_{L^{p}(S^{n-1})}>\|g\|_{L^{p}(S^{n-1})}\)_._ * _Let_ \(f\) _be an infinitely smooth strictly positive even function on_ \(S^{n-1}\) _and_ \(0<p<1\)_. Suppose that the distribution_ \(|x|_{2}^{-1}f^{p-1}\left(\frac{x}{|x|_{2}}\right)\) _is not positive definite on_ \(\mathbb{R}^{n}\)_. Then there exists an infinitely smooth even function_ \(g\) _on_ \(S^{n-1}\) _so that the condition (_4_) holds, but_ \(\|f\|_{L^{p}(S^{n-1})}>\|g\|_{L^{p}(S^{n-1})}\)_._ To further investigate Problem 1.2, we introduce the class of intersection functions. **Definition 1.6**.: _An even, continuous, non-negative, and integrable function \(f\) defined on \(\mathbb{R}^{n}\) is called an intersection function if, for every direction \(\theta\in S^{n-1}\), the function_ \[r\in\mathbb{R}\mapsto|r|^{n-1}\hat{f}(r\theta)\] _is positive definite, where \(\widehat{f}\) denotes the Fourier transform of \(f\) on \(\mathbb{R}^{n}\)._ We chose the Fourier definition rather than a more geometric one that is similar to the original definition of intersection bodies in [17]. The geometric definition now becomes a theorem, as follows. **Theorem 1.7**.: _An even, continuous, non-negative, and integrable function \(f\) defined on \(\mathbb{R}^{n}\) is an intersection function if, and only if, for every direction \(\theta\in S^{n-1}\), there exists a non-negative, even, finite Borel measure \(\mu_{\theta}\) on \(\mathbb{R}\) such that_ * _the function_ \[\theta\in S^{n-1}\mapsto\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)d\mu_{\theta}(t)\] _belongs to_ \(L^{1}(S^{n-1})\) _whenever_ \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\)_, and_ * \[\int_{\mathbb{R}^{n}}f\varphi=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)d\mu_{\theta}(t)d\theta.\] (5) _holds for all_ \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\)_._ We will use both the Fourier definition and the geometric characterization to point out many examples of intersection functions. In particular, we will see that the class of intersection bodies of star bodies in \(\mathbb{R}^{n}\) can be identified as part of the class of intersection functions.
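Positive definiteness of a function of one real variable can be probed numerically via Bochner's theorem: sample the function, approximate its Fourier transform, and check the sign. The crude Python sketch below (an illustration added here, not a proof; the grid parameters are arbitrary) applies this to \(e^{-|r|^{q}}\), which is positive definite exactly when \(0<q\leq 2\), a classical fact recalled in Section 4 via [31, Lemma 2.27] and Example 4.7.

```python
import numpy as np

def min_fourier_value(h, L=40.0, N=2 ** 14):
    """Bochner-type probe: sample an even function h on [-L, L), approximate
    its Fourier transform by an FFT, and return the minimum of the real
    part.  A clearly negative minimum certifies that h is not positive
    definite; a minimum ~ 0 is consistent with positive definiteness."""
    r = np.linspace(-L, L, N, endpoint=False)
    dr = r[1] - r[0]
    # FFT approximation of  int h(r) e^{-i r xi} dr  on a symmetric grid
    ft = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(h(r)))) * dr
    return ft.real.min()

print(min_fourier_value(lambda r: np.exp(-np.abs(r) ** 1.5)))  # ~ 0: consistent
print(min_fourier_value(lambda r: np.exp(-np.abs(r) ** 3.0)))  # < 0: not pos. def.
```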
We now formulate analogs of Lutwak's connections for Problem 1.2. **Theorem 1.8**.: _Let \(p>0\) and consider a pair of continuous, non-negative even functions \(\varphi,\psi\in L^{1}(\mathbb{R}^{n})\cap L^{p}(\mathbb{R}^{n})\) satisfying the condition_ \[\mathcal{R}\varphi(t,\theta)\leq\mathcal{R}\psi(t,\theta)\quad\text{for all }(t,\theta)\in\mathbb{R}\times S^{n-1}. \tag{6}\] _Then:_ * _if_ \(p>1\) _and_ \(\varphi^{p-1}\) _is an intersection function, then_ \(\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\leq\|\psi\|_{L^{p}(\mathbb{R}^{n})}\)_, and_ * _if_ \(0<p<1\) _and_ \(\psi^{p-1}\) _is an intersection function, then_ \(\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\leq\|\psi\|_{L^{p}(\mathbb{R}^{n})}\)_._ We also give a counterexample to Problem 1.2. **Theorem 1.9**.: _The following hold:_ * _Fix_ \(p>1\) _and let_ \(\psi\in\mathcal{S}(\mathbb{R}^{n})\) _be non-negative and even. If_ \(\psi^{p-1}\) _is not an intersection function, then there exists an even, non-negative_ \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\) _such that_ \[\mathcal{R}\varphi(t,\theta)\leq\mathcal{R}\psi(t,\theta)\quad\text{for all }(t,\theta)\in\mathbb{R}\times S^{n-1},\] _but with_ \(\|\psi\|_{L^{p}(\mathbb{R}^{n})}<\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\)_._ * _Fix_ \(0<p<1\) _and let_ \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\) _be non-negative and even. If_ \(\varphi^{p-1}\) _is not an intersection function, then there exists a non-negative, even_ \(\psi\in\mathcal{S}(\mathbb{R}^{n})\) _such that_ \(\mathcal{R}\varphi\leq\mathcal{R}\psi\)_, but with_ \(\|\psi\|_{L^{p}(\mathbb{R}^{n})}<\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\)_._ Since the answer to the Busemann-Petty problem is negative in most dimensions, it makes sense to ask whether it holds up to some absolute constant. This is the so-called _isomorphic Busemann-Petty problem_ and was introduced in [42]: Given any pair of origin-symmetric convex bodies \(K,L\subset\mathbb{R}^{n}\) satisfying the condition (3), does it follow that \(|K|\leq C|L|\) for some absolute constant \(C>0\)? As shown in [42], the isomorphic Busemann-Petty problem is equivalent to the slicing problem of Bourgain [3, 4]: Does there exist an absolute constant \(C>0\) such that, for any \(n\in\mathbb{N}\) and for any origin-symmetric convex body \(K\) in \(\mathbb{R}^{n}\), \[|K|^{\frac{n-1}{n}}\leq C\max_{\theta\in S^{n-1}}|K\cap\theta^{\perp}|?\] Both the isomorphic Busemann-Petty problem and the slicing problem remain open. In [4] Bourgain showed that \(C\leq O(n^{1/4}\log(n))\). Klartag [26] removed the logarithmic term in Bourgain's estimate. Chen [7] proved that \(C\leq O(n^{\epsilon})\) for every \(\epsilon>0\) as \(n\) tends to infinity. Klartag and Lehec [29] established a polylog bound \(C\leq O(\log^{4}n)\). The proof of Klartag and Lehec was slightly refined in [23] to get \(C\leq\log^{2.2226}n\). Finally, in [27] Klartag improved the estimate to \(C\leq\sqrt{\log n}\). Extensions and analogs of the slicing problem to arbitrary functions were studied in [6, 15, 18, 28, 30, 32, 33, 34, 38]. In particular, it was proved in [33] that for any \(n\in\mathbb{N}\), any star body \(K\) in \(\mathbb{R}^{n}\), and any non-negative continuous function \(f\) on \(K\), one has \[\int_{K}f\leq 2d_{\mathrm{ovr}}(K,\mathcal{I}_{n})\max_{\theta\in S^{n-1}}\int_{K\cap\theta^{\perp}}f.\] Here \(\mathcal{I}_{n}\) denotes the class of intersection bodies in \(\mathbb{R}^{n}\), and \(d_{\mathrm{ovr}}(K,\mathcal{I}_{n})\) is the outer volume ratio distance.
In the case when \(f\) is even and \(K\) is origin-symmetric and convex, we may apply John's theorem [24] to conclude that \(d_{\mathrm{ovr}}(K,\mathcal{I}_{n})\leq\sqrt{n}\). An isomorphic version of the measure theoretic Busemann-Petty problem from [51] was proved in [39]: Given a non-negative, continuous function \(f\colon\mathbb{R}^{n}\to\mathbb{R}_{+}\), and a pair of origin-symmetric convex bodies \(K,L\) in \(\mathbb{R}^{n}\) satisfying \(\int_{K\cap\theta^{\perp}}f\leq\int_{L\cap\theta^{\perp}}f\) for every \(\theta\in S^{n-1}\), one has \[\int_{K}f\leq\sqrt{n}\int_{L}f. \tag{7}\] It is still an open problem to determine whether the constant \(\sqrt{n}\) is optimal. In [15] the following extension of the inequality (7) was established: Let \(K\) and \(L\) be star bodies in \(\mathbb{R}^{n}\) and let \(f,g\colon\mathbb{R}^{n}\to\mathbb{R}_{+}\) be non-negative, continuous functions on \(K\) and \(L\), respectively, so that \(\|g\|_{\infty}=g(0)=1\). Then \[\int_{K}f\leq\frac{n}{n-1}d_{\mathrm{ovr}}(K,\mathcal{I}_{n})\max_{\theta\in S^{n-1}}\left(\frac{\int_{K\cap\theta^{\perp}}f}{\int_{L\cap\theta^{\perp}}g}\right)|K|^{\frac{1}{n}}\left(\int_{L}g\right)^{\frac{n-1}{n}}. \tag{8}\] For the current state of the Busemann-Petty and slicing problems for functions see the survey [16]. We get a slicing inequality for \(p>1\) from Theorem 1.4. In fact, if the function \(g\) is constant with the value \[g\equiv\frac{1}{|S^{n-2}|}\max_{\xi\in S^{n-1}}\int_{S^{n-1}\cap\xi^{\perp}}f(\theta)d\theta,\] then \(f\) and \(g\) satisfy the conditions of Theorem 1.4, and the conclusion reads as follows. **Theorem 1.10**.: _Let \(f\) be a positive, even, continuous function on the sphere \(S^{n-1}\). Assume that \(p>1\) and that \(|x|_{2}^{-1}f^{p-1}\left(\frac{x}{|x|_{2}}\right)\) represents a positive definite distribution on \(\mathbb{R}^{n}\). Then_ \[\|f\|_{L^{p}(S^{n-1})}\leq\frac{|S^{n-1}|^{\frac{1}{p}}}{|S^{n-2}|}\max_{\xi\in S^{n-1}}Rf(\xi). \tag{9}\] Similarly, in the case \(0<p<1\), by choosing the function \[f\equiv\frac{1}{|S^{n-2}|}\min_{\xi\in S^{n-1}}\int_{S^{n-1}\cap\xi^{\perp}}g(\theta)d\theta\] in Theorem 1.4, we obtain: **Theorem 1.11**.: _Let \(g\) be a positive, even, continuous function on the sphere \(S^{n-1}\). Assume that \(0<p<1\) and that \(|x|_{2}^{-1}g^{p-1}\left(\frac{x}{|x|_{2}}\right)\) represents a positive definite distribution on \(\mathbb{R}^{n}\). Then_ \[\|g\|_{L^{p}(S^{n-1})}\geq\frac{|S^{n-1}|^{\frac{1}{p}}}{|S^{n-2}|}\min_{\xi\in S^{n-1}}Rg(\xi).\] As proved in [35], an origin-symmetric star body \(K\subset\mathbb{R}^{n}\) is an intersection body if, and only if, \(\|\cdot\|_{K}^{-1}\) represents a positive definite distribution on \(\mathbb{R}^{n}\). Therefore, a positive continuous function \(f\) on the sphere has the property that the distribution \(f^{p-1}\cdot r^{-1}\) is positive definite if, and only if, \(f=\|\cdot\|_{K}^{-\frac{1}{p-1}}\) for some intersection body \(K\). Combining this observation with Theorem 1.10, we get that for any intersection body \(K\) in \(\mathbb{R}^{n}\) and any \(p>1\), \[\left(\int_{S^{n-1}}\|x\|_{K}^{-\frac{p}{p-1}}dx\right)^{\frac{1}{p}}\leq\frac{|S^{n-1}|^{\frac{1}{p}}}{|S^{n-2}|}\max_{\xi\in S^{n-1}}\left(\int_{S^{n-1}\cap\xi^{\perp}}\|x\|_{K}^{-\frac{1}{p-1}}dx\right). \tag{10}\] When \(p=\frac{n}{n-1}\), the latter inequality turns into Bourgain's slicing inequality for intersection bodies. It would be interesting to see whether inequality (10) holds for other classes of bodies, maybe with a different constant.
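For the reader's convenience, here is the short computation behind Theorem 1.10 (Theorem 1.11 is identical, with the minimum in place of the maximum and part (b) of Theorem 1.4 in place of part (a)). With \(g\) the constant function above, \[Rg(\xi)=\int_{S^{n-1}\cap\xi^{\perp}}g\,d\theta=|S^{n-2}|\,g=\max_{\eta\in S^{n-1}}Rf(\eta)\geq Rf(\xi)\quad\text{for all }\xi\in S^{n-1},\] so Theorem 1.4(a) applies (its hypothesis concerns only \(f\)) and yields \[\|f\|_{L^{p}(S^{n-1})}\leq\|g\|_{L^{p}(S^{n-1})}=|S^{n-1}|^{\frac{1}{p}}\,g=\frac{|S^{n-1}|^{\frac{1}{p}}}{|S^{n-2}|}\max_{\xi\in S^{n-1}}Rf(\xi).\]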
Note that the class of intersection bodies contains ellipsoids, unit balls of finite-dimensional subspaces of \(L^{p}\) with \(0<p\leq 2\), among others; see [31, Chapter 4]. We would like to point out that the inequality of Theorem 1.10 goes in the opposite direction to the well-known \(L^{p}\)-\(L^{q}\)-estimates for the Radon transform; see [8, 9, 45] for a historical account of such results. The first result of this kind was established by Oberlin and Stein in [44]: Given any function \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) belonging to \(L^{p}(\mathbb{R}^{n})\), one has that \[\left(\int_{S^{n-1}}\left(\int_{\mathbb{R}}|\mathcal{R}f(t,\theta)|^{r}dt\right)^{\frac{q}{r}}d\theta\right)^{\frac{1}{q}}\leq C_{n,p,q}\|f\|_{L^{p}(\mathbb{R}^{n})}, \tag{11}\] if, and only if, \(1\leq p<\frac{n}{n-1},\,q\leq p^{\prime}\)\((p^{-1}+{p^{\prime}}^{-1}=1)\), and \(\frac{1}{r}=\frac{n}{p}-n+1\). In particular, inequality (11) implies that \(\mathcal{R}f\) is finite almost everywhere on \(\mathbb{R}\times S^{n-1}\) provided \(f\in L^{p}(\mathbb{R}^{n})\) for some \(1\leq p<\frac{n}{n-1}\). It was also proved in [44] that for every \(n\geq 3\) one has \[\left(\int_{S^{n-1}}\sup_{t\in\mathbb{R}}|\mathcal{R}f(t,\theta)|^{s}d\theta\right)^{\frac{1}{s}}\leq C_{p_{1},p_{2},s}\|f\|_{L^{p_{1}}(\mathbb{R}^{n})}^{\alpha}\|f\|_{L^{p_{2}}(\mathbb{R}^{n})}^{1-\alpha} \tag{12}\] whenever \(s\leq n\), \(1\leq p_{1}<\frac{n}{n-1}<p_{2}\leq\infty\), and \[\frac{\alpha}{p_{1}}+\frac{1-\alpha}{p_{2}}=\frac{n-1}{n}.\] Of particular interest to us is the limiting case of (12) due to its geometric content: If \(\chi_{A}\) is the characteristic function of a measurable set \(A\subset\mathbb{R}^{n}\) and \(s=n\), then as \(p\to\frac{n}{n-1}\), inequality (12) becomes \[\left(\int_{S^{n-1}}\left(\sup_{t\in\mathbb{R}}|A\cap(\theta^{\perp}+t\theta)|\right)^{n}d\theta\right)^{\frac{1}{n}}\leq C_{n}|A|^{\frac{n-1}{n}}. \tag{13}\] If \(A\) is an origin-symmetric convex body in \(\mathbb{R}^{n}\), by the Brunn concavity principle (see [10, 19]) the supremum is achieved at \(t=0\), and one gets the Busemann intersection inequality. This connection was first observed by Lutwak in [41]. The paper is organized as follows. Section 2 details notations and concepts we need from harmonic analysis and convex geometry. In Section 3, we give a detailed solution to Problem 1.1. In Section 4, we introduce, as an intuitive step, the notion of an intersection function of a given function, provide several examples, and prove a characterization theorem for this class of functions. Section 5 is dedicated to the introduction of the notion of intersection functions, which serve as a natural extension of intersection bodies. In Section 6 we prove Theorem 1.8 and Theorem 1.9.

## 2. Preliminaries

In this section we will recall several facts from harmonic analysis and convex geometry that will be used throughout the paper. ### Notions from harmonic analysis We will work in the \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\) equipped with its usual inner product structure \(\langle\cdot,\cdot\rangle\) and induced norm \(|\cdot|_{2}=\sqrt{\langle\cdot,\cdot\rangle}\). We denote the Lebesgue measure of a measurable subset \(A\) of \(\mathbb{R}^{n}\) of appropriate dimension by \(|A|\). The \(n\)-dimensional Euclidean unit ball shall be denoted by \(B_{2}^{n}\), and its boundary, the unit sphere, by \(S^{n-1}\).
For any fixed unit vector \(\theta\in S^{n-1}\) we denote by \(\theta^{\perp}\) the orthogonal complement of \(\{\theta\}\), that is, \(\theta^{\perp}=\{x\in\mathbb{R}^{n}\colon\langle x,\theta\rangle=0\}\). More generally, for any fixed \(\theta\in S^{n-1}\) and \(t\in\mathbb{R}\) we set \[\theta^{\perp}+t\theta:=\{x\in\mathbb{R}^{n}\colon\langle x,\theta\rangle=t\}\] to be the hyperplane parallel to \(\theta^{\perp}\) at distance \(t\) from the origin. We will often make use of the notation \(\langle x,\theta\rangle=t\) to denote such hyperplanes in our computations below. Given a metric measure space \((X,d,\mu)\) and \(p>0\), we say that a real-valued function \(h\colon X\to\mathbb{R}\) belongs to \(L^{p}(X,\mu)\) if \[\int_{X}|h(x)|^{p}d\mu(x)<\infty.\] We define the \(L^{p}(X)\)-norm of a function \(h\colon X\to\mathbb{R}\) to be \[\|h\|_{L^{p}(X)}=\left(\int_{X}|h(x)|^{p}dx\right)^{\frac{1}{p}}.\] We will make frequent use of the following reverse Holder inequality [43, pg. 135, Theorem 1]: given a measure space \((X,\mu)\) with \(\mu(X)\neq 0\), \(1<r<\infty\), and any pair of non-negative functions \(h,w\in L^{1}(X)\), each having a positive integral, one has \[\|hw\|_{L^{1}(X)}\geq\|h\|_{L^{\frac{1}{r}}(X)}\|w\|_{L^{-\frac{1}{r-1}}(X)}. \tag{14}\] We define the Fourier transform of a function \(\varphi\in L^{1}(\mathbb{R}^{n})\) by \[\mathcal{F}\varphi(\xi)=\hat{\varphi}(\xi)=\int_{\mathbb{R}^{n}}\varphi(x)e^{-i(x,\xi)}dx,\quad\xi\in\mathbb{R}^{n}.\] For the following notions, we follow the presentation of [31] (see also [12, 47]). By \(\mathcal{S}:=\mathcal{S}(\mathbb{R}^{n})\) we denote the Schwartz space of rapidly decreasing infinitely differentiable test functions, and by \(\mathcal{S}^{\prime}\) the space of continuous linear functionals (distributions) acting on \(\mathcal{S}\). The action of a distribution \(f\in\mathcal{S}^{\prime}\) on a test function \(\varphi\in\mathcal{S}\) is given by integration: \[\langle f,\varphi\rangle=\int_{\mathbb{R}^{n}}f(x)\varphi(x)dx.\] If \(\varphi\) is a test function, then so is its Fourier transform \(\hat{\varphi}\). Moreover, the Fourier transform is invertible on \(\mathcal{S}\) and its inverse is given by \[\tilde{\mathcal{F}}\varphi(\xi)=(2\pi)^{-n}\int_{\mathbb{R}^{n}}\varphi(x)e^{i(x,\xi)}dx.\] Consequently, for every \(\varphi\in\mathcal{S}\), \((\hat{\varphi})^{\wedge}(\xi)=(2\pi)^{n}\varphi(-\xi)\). Also, the Fourier transform and its inverse are continuous operators on \(\mathcal{S}\). By the Fubini theorem, we also have the following Parseval identity: for any pair \(\varphi,\psi\in\mathcal{S}\), \[\int_{\mathbb{R}^{n}}\hat{\varphi}(x)\psi(x)dx=\int_{\mathbb{R}^{n}}\varphi(\xi)\hat{\psi}(\xi)d\xi.\] With this in mind, we define the Fourier transform of a distribution \(f\) as a distribution \(\hat{f}\) acting by \(\langle\hat{f},\varphi\rangle=\langle f,\hat{\varphi}\rangle\). If \(\varphi\) is an even test function, then \[(\hat{\varphi})^{\wedge}=(2\pi)^{n}\varphi\quad\text{ and }\quad\langle\hat{f},\hat{\varphi}\rangle=(2\pi)^{n}\langle f,\varphi\rangle.\] We say a distribution \(f\) is a positive definite distribution if its Fourier transform is a positive distribution, i.e.
for every \(\varphi\in\mathcal{S}\) one has \(\langle\hat{f},\varphi\rangle\geq 0\) whenever \(\varphi\geq 0\). We say that a complex-valued function \(f\) defined on \(\mathbb{R}^{n}\) is a positive definite function on \(\mathbb{R}^{n}\) if, for every finite sequence \(\{x_{j}\}_{1}^{m}\) in \(\mathbb{R}^{n}\) and every choice of complex numbers \(\{c_{j}\}_{1}^{m}\), we have \[\sum_{\ell=1}^{m}\sum_{j=1}^{m}c_{\ell}\bar{c_{j}}f(x_{\ell}-x_{j})\geq 0.\] By Bochner's theorem, a function on \(\mathbb{R}^{n}\) is positive definite if, and only if, it is the Fourier transform of a finite, positive Borel measure \(\mu\) on \(\mathbb{R}^{n}\) (see [13]). From this it follows that products of positive definite functions are again positive definite functions. More generally, Schwartz's generalization of Bochner's theorem asserts that a distribution is positive definite if, and only if, it is the Fourier transform of a tempered measure on \(\mathbb{R}^{n}\). The Radon transform of a function \(\varphi\in L^{1}(\mathbb{R}^{n})\) that is integrable over every affine hyperplane is defined as \[\mathcal{R}\varphi(t,\theta)=\int_{\theta^{\perp}+t\theta}\varphi(x)dx.\] Moreover, we will make frequent use of the relationship between the Fourier transform and the Radon transform: Given \(\varphi\in L^{1}(\mathbb{R}^{n})\), for every fixed direction \(\xi\in S^{n-1}\), the Fourier transform of the map \(t\in\mathbb{R}\mapsto\mathcal{R}\varphi(t,\xi)\) is equal to the function \(z\in\mathbb{R}\mapsto\hat{\varphi}(z\xi)\); see for example [31, Lemma 2.11]. The spherical Radon transform \(R\colon C(S^{n-1})\to C(S^{n-1})\) is a linear operator defined by \[Rf(\xi)=\int_{S^{n-1}\cap\xi^{\perp}}f(x)dx,\quad\xi\in S^{n-1},\] for every function \(f\in C(S^{n-1})\). The spherical Radon transform is self-dual, that is, for any pair \(f,g\in C(S^{n-1})\), one has \(\langle Rf,g\rangle=\langle f,Rg\rangle\). We define the spherical Radon transform of a measure \(\mu\) as a functional \(R\mu\) on the space \(C(S^{n-1})\) acting by \[\langle R\mu,f\rangle=\langle\mu,Rf\rangle=\int_{S^{n-1}}Rf(x)d\mu(x).\] Denote by \(\mathbb{P}^{n}\) the space of all affine hyperplanes contained in \(\mathbb{R}^{n}\). Along with the Radon transform, we consider the dual Radon transform: for an even continuous function \(g\colon\mathbb{P}^{n}\to\mathbb{R}\), its dual Radon transform \(\mathcal{R}^{*}g\) is defined by \[(\mathcal{R}^{*}g)(x)=\int_{\{H\in\mathbb{P}^{n}\colon x\in H\}}g(H)d\nu_{n,n-1}(H),\] where \(\nu_{n,n-1}\) denotes the rotation invariant Haar probability measure on the compact set \(\{H\in\mathbb{P}^{n}\colon x\in H\}\). Following [21], one can identify \(C(\mathbb{P}^{n})\) with the class of even functions belonging to \(C(\mathbb{R}\times S^{n-1})\). For more information on the Radon transform, see the books of Helgason [21, 22]. ### Notions from Convexity We say that a compact subset \(K\) of \(\mathbb{R}^{n}\) is a star body if the origin \(o\) belongs to the interior of \(K\); if, for every \(x\in K\), each point of the interval \([o,x)\) is an interior point of \(K\); and if the boundary of \(K\) is continuous in the sense that the Minkowski functional of \(K\) defined by \[\|x\|_{K}=\min\{s\geq 0\colon x\in sK\}\] is a continuous function on \(\mathbb{R}^{n}\). A star body \(K\) is called a convex body if in addition it is a convex set.
Moreover, any star body \(K\) satisfies \[K=\{x\in\mathbb{R}^{n}\colon\|x\|_{K}\leq 1\}.\] A star body \(K\) is said to be origin-symmetric if \(K=-K\). The radial function of a star body \(K\) is defined as \(\rho_{K}(\cdot)=\|\cdot\|_{K}^{-1}\); it is positive and continuous outside of the origin. For every direction \(\theta\in S^{n-1}\), \(\rho_{K}(\theta)\) is the distance from the origin to the boundary of \(K\) in the direction of \(\theta\). Given a measure \(\mu\) on \(\mathbb{R}^{n}\) with non-negative continuous density \(f\) and a star body \(K\) in \(\mathbb{R}^{n}\), we have \[\mu(K)=\int_{S^{n-1}}\int_{0}^{\|\theta\|_{K}^{-1}}r^{n-1}f(r\theta)\,dr\,d\theta, \tag{15}\] which, in the case of the volume, becomes \[|K|=\frac{1}{n}\int_{S^{n-1}}\|\theta\|_{K}^{-n}d\theta.\] Given any hyperplane \(\xi^{\perp}\), the polar formula for the measure of the section \(K\cap\xi^{\perp}\) is given by \[\begin{split}\mu(K\cap\xi^{\perp})&=\int_{S^{n-1}\cap\xi^{\perp}}\int_{0}^{\|\theta\|_{K}^{-1}}r^{n-2}f(r\theta)drd\theta\\ &=R\left(\int_{0}^{\|\cdot\|_{K}^{-1}}r^{n-2}f(r\cdot)dr\right)(\xi),\end{split} \tag{16}\] and for the volume: \[|K\cap\xi^{\perp}|=\frac{1}{n-1}R\left(\|\cdot\|_{K}^{-n+1}\right)(\xi).\] If \(f\) is a continuous function on \(S^{n-1}\) and \(0<p<n\), we denote by \(f\cdot r^{-p}\) the extension of \(f\) to an even homogeneous function of degree \(-p\) on \(\mathbb{R}^{n}\): \[f\cdot r^{-p}(x)=|x|_{2}^{-p}f\left(\frac{x}{|x|_{2}}\right)\quad x\in\mathbb{R}^{n}\setminus\{0\}.\] Since \(0<p<n\), this function is locally integrable on \(\mathbb{R}^{n}\) and represents a distribution. Suppose that \(f\) is infinitely smooth, i.e. \(f\in C^{\infty}(S^{n-1})\). Then by [31, Lemma 3.16], the Fourier transform in the sense of distributions \[(f\cdot r^{-p})^{\wedge}=g\cdot r^{-n+p},\] for some function \(g\in C^{\infty}(S^{n-1})\). When we write \((f\cdot r^{-p})^{\wedge}(\xi)\), we mean \(g(\xi),\ \xi\in S^{n-1}\). If \(f,g\) are infinitely smooth functions on \(S^{n-1}\), we have the following spherical version of Parseval's formula (see [31, Lemma 3.22]): for any \(p\in(0,n)\), \[\int_{S^{n-1}}(f\cdot r^{-p})^{\wedge}(\xi)(g\cdot r^{-n+p})^{\wedge}(\xi)\,d\xi=(2\pi)^{n}\int_{S^{n-1}}f(\theta)g(\theta)\ d\theta. \tag{17}\] Suppose \(f\) is a continuous function on \(S^{n-1}\). The Fourier transform of \(f\cdot r^{-n+1}\) is a continuous function on the sphere. More precisely, by [31, Lemma 3.7], \[(f\cdot r^{-n+1})^{\wedge}=\pi Rf\cdot r^{-1}. \tag{18}\] We will also use a non-smooth version of Parseval's formula from [31, Corollary 3.23]. If \(g\in C(S^{n-1})\) and \(g\cdot r^{-1}\) is a positive definite distribution, then there exists a finite Borel measure \(\mu_{g}\) on \(S^{n-1}\) so that for every \(f\in C(S^{n-1})\) \[\int_{S^{n-1}}(f\cdot r^{-n+1})^{\wedge}(\xi)d\mu_{g}(\xi)=(2\pi)^{n}\int_{S^{n-1}}f(\theta)g(\theta)d\theta. \tag{19}\] Note that (17) and (19) are formulated in [31] specifically for Minkowski functionals, not for general functions. However, an arbitrary continuous positive function on \(S^{n-1}\) is the Minkowski functional of some star body, so this is not a restriction. The class of intersection bodies of star bodies was introduced by Lutwak in [40] during his investigation of the Busemann-Petty problem.
We say that a star body \(K\) is an _intersection body of a star body \(L\)_ if, for every direction \(\xi\in S^{n-1}\), one has \[\|\xi\|_{K}^{-1}=|L\cap\xi^{\perp}|=\frac{1}{n-1}R\left(\|\cdot\|_{L}^{-n+1}\right)(\xi).\] Following [17], we say that a star body \(K\) is an _intersection body_ if there exists a finite, positive Borel measure \(\mu\) on the sphere \(S^{n-1}\) so that \(\|\cdot\|_{K}^{-1}=R\mu\) as functionals on \(C(S^{n-1})\); that is, for every continuous function \(f\) on \(S^{n-1}\), \[\int_{S^{n-1}}\|\theta\|_{K}^{-1}f(\theta)d\theta=\int_{S^{n-1}}Rf(x)d\mu(x). \tag{20}\] We denote by \(\mathcal{I}_{n}\) the class of intersection bodies in \(\mathbb{R}^{n}\); it is immediately clear that \(\mathcal{I}_{n}\) contains the class of intersection bodies of star bodies. In [35] it was proved that an origin-symmetric star body \(K\) in \(\mathbb{R}^{n}\) is an intersection body if, and only if, \(\|\cdot\|_{K}^{-1}\) is a positive definite distribution on \(\mathbb{R}^{n}\). This result was used in the solution of the Busemann-Petty problem in [11].

## 3. The spherical case

In this section we prove Theorems 1.4 and 1.5. We begin by recalling the statement of Theorem 1.4: **Theorem 1.4**.: _Let \(f,g\) be even continuous positive functions on the sphere \(S^{n-1}\), and suppose that_ \[Rf(\theta)\leq Rg(\theta),\qquad\text{for all }\theta\in S^{n-1}.\] _Then:_ 1. _Suppose that for some_ \(p>1\) _the function_ \(|x|_{2}^{-1}f^{p-1}\left(\frac{x}{|x|_{2}}\right)\) _represents a positive definite distribution on_ \(\mathbb{R}^{n}\)_. Then_ \(\|f\|_{L^{p}(S^{n-1})}\leq\|g\|_{L^{p}(S^{n-1})}\)_._ 2. _Suppose that for some_ \(0<p<1\) _the function_ \(|x|_{2}^{-1}g^{p-1}\left(\frac{x}{|x|_{2}}\right)\) _represents a positive definite distribution on_ \(\mathbb{R}^{n}\)_. Then_ \(\|f\|_{L^{p}(S^{n-1})}\leq\|g\|_{L^{p}(S^{n-1})}\)_._ Proof.: We begin with the proof of part (a). By (18), the inequality \(Rf\leq Rg\) can be written as \[(f\cdot r^{-n+1})^{\wedge}(\theta)\leq(g\cdot r^{-n+1})^{\wedge}(\theta),\qquad\forall\theta\in S^{n-1}. \tag{21}\] Integrating both sides of the latter inequality over \(S^{n-1}\) with respect to the non-negative measure \(\mu_{f^{p-1}}\) corresponding to the positive definite distribution \(f^{p-1}\cdot r^{-1}\) by (19), we get \[\int_{S^{n-1}}(f\cdot r^{-n+1})^{\wedge}(\theta)d\mu_{f^{p-1}}(\theta)\leq\int_{S^{n-1}}(g\cdot r^{-n+1})^{\wedge}(\theta)d\mu_{f^{p-1}}(\theta).\] Applying Parseval's identity (19) we get \[\int_{S^{n-1}}f^{p}(\theta)\ d\theta\leq\int_{S^{n-1}}f^{p-1}(\theta)g(\theta)\ d\theta,\] and using Holder's inequality we get \[\int_{S^{n-1}}f^{p}(\theta)\ d\theta\leq\int_{S^{n-1}}f^{p-1}(\theta)g(\theta)\ d\theta\leq\left(\int_{S^{n-1}}f^{p}(\theta)d\theta\right)^{\frac{p-1}{p}}\left(\int_{S^{n-1}}g^{p}(\theta)d\theta\right)^{\frac{1}{p}},\] and the result follows. This completes the proof of part (a). The proof of (b) is more or less identical to that of part (a), except that the use of Holder's inequality below is replaced with its reverse (14) for \(0<p<1\).
Integrating both sides of the inequality (21) over \(S^{n-1}\) with respect to the non-negative measure \(\mu_{g^{p-1}}\) corresponding to the positive definite distribution \(g^{p-1}\cdot r^{-1}\) by (19), and applying Parseval's identity (19), we get \[\int_{S^{n-1}}f(\theta)g^{p-1}(\theta)d\theta\leq\int_{S^{n-1}}g^{p}(\theta)\ d\theta,\] and using the reverse Holder's inequality (14) with \(X=S^{n-1}\), \(d\mu=d\theta\), \(h=f\), \(w=g^{p-1}\) and \(r=1/p\), we get \[\int_{S^{n-1}}g^{p}(\theta)\ d\theta\geq\int_{S^{n-1}}f(\theta)g^{p-1}(\theta)\ d\theta\geq\left(\int_{S^{n-1}}f^{p}(\theta)d\theta\right)^{\frac{1}{p}}\left(\int_{S^{n-1}}g^{p}(\theta)d\theta\right)^{\frac{p-1}{p}},\] and the result follows. This completes the proof of part (b). To conclude this section, we provide a proof of Theorem 1.5. We restate it here for the convenience of the reader. **Theorem 1.5**.: _The following hold true:_ * _Let_ \(g\) _be an infinitely smooth strictly positive even function on_ \(S^{n-1}\) _and_ \(p>1\)_. Suppose that the distribution_ \(|x|_{2}^{-1}g^{p-1}\left(\frac{x}{|x|_{2}}\right)\) _is not positive definite on_ \(\mathbb{R}^{n}\)_. Then there exists an infinitely smooth even function_ \(f\) _on_ \(S^{n-1}\) _so that the condition (_4_) holds, but_ \(\|f\|_{L^{p}(S^{n-1})}>\|g\|_{L^{p}(S^{n-1})}\)_._ * _Let_ \(f\) _be an infinitely smooth strictly positive even function on_ \(S^{n-1}\) _and_ \(0<p<1\)_. Suppose that the distribution_ \(|x|_{2}^{-1}f^{p-1}\left(\frac{x}{|x|_{2}}\right)\) _is not positive definite on_ \(\mathbb{R}^{n}\)_. Then there exists an infinitely smooth even function_ \(g\) _on_ \(S^{n-1}\) _so that the condition (_4_) holds, but_ \(\|f\|_{L^{p}(S^{n-1})}>\|g\|_{L^{p}(S^{n-1})}\)_._ Proof.: We begin with the proof of part (a). Since \(g\) is infinitely differentiable and positive, the Fourier transform of \(g^{p-1}\cdot r^{-1}\) is of the form \(h\cdot r^{-n+1}\), where \(h\) is an even infinitely differentiable function on the sphere; see [31, Lemma 3.16]. This function is negative on some open symmetric set \(\Omega\). Choose a function \(\psi\in C^{\infty}(S^{n-1})\) so that \(\psi\geq 0\) everywhere on \(S^{n-1}\) and \(\psi>0\) only on some non-empty open subset of \(\Omega\). The Fourier transform of \(\psi\cdot r^{-1}\) is a function \(\varphi\cdot r^{-n+1}\) where \(\varphi\in C^{\infty}(S^{n-1})\), again by [31, Lemma 3.16]. Then \((\varphi\cdot r^{-n+1})^{\wedge}=(2\pi)^{n}\psi\cdot r^{-1}\). Define the function \(f\) on \(S^{n-1}\) by \[f(\theta)=g(\theta)-\epsilon\,\varphi(\theta),\qquad\forall\theta\in S^{n-1},\] where \(\epsilon\) is small enough so that \(f>0\) on the sphere. By extending the functions \(f,g,\varphi\) to \(\mathbb{R}^{n}\) homogeneously of degree \(-n+1\), we then have \[(f\cdot r^{-n+1})^{\wedge}=(g\cdot r^{-n+1})^{\wedge}-\epsilon(2\pi)^{n}\psi\cdot r^{-1}.\] Since \(\psi\) is non-negative everywhere on the sphere, by (18) we conclude that the functions \(f\) and \(g\) satisfy the condition (4).
Multiplying the latter equality by \((g^{p-1}\cdot r^{-1})^{\wedge}\) and integrating over the sphere we get \[\int_{S^{n-1}}(f\cdot r^{-n+1})^{\wedge}(\theta)(g^{p-1}\cdot r^{-1})^{\wedge}(\theta)\ d\theta\] \[=\int_{S^{n-1}}(g\cdot r^{-n+1})^{\wedge}(\theta)(g^{p-1}\cdot r^{-1})^{\wedge}(\theta)\ d\theta-\epsilon(2\pi)^{n}\int_{S^{n-1}}\psi(\theta)(g^{p-1}\cdot r^{-1})^{\wedge}(\theta)\ d\theta.\] Parseval's formula (17) implies that \[\int_{S^{n-1}}f(\theta)g^{p-1}(\theta)\ d\theta=\int_{S^{n-1}}g^{p}(\theta)\ d\theta-\epsilon(2\pi)^{n}\int_{S^{n-1}}\psi(\theta)(g^{p-1}\cdot r^{-1})^{\wedge}(\theta)\ d\theta.\] Since \(\psi\) can be positive only where \((g^{p-1}\cdot r^{-1})^{\wedge}\) is negative, we get \[\int_{S^{n-1}}g^{p}(\theta)\ d\theta<\int_{S^{n-1}}f(\theta)g^{p-1}(\theta)\ d\theta,\] and by Holder's inequality \[\int_{S^{n-1}}g^{p}(\theta)\ d\theta<\int_{S^{n-1}}f^{p}(\theta)\ d\theta,\] which completes the proof of part (a). Now we move to the proof of (b). Since \(f\) is infinitely differentiable and positive, the Fourier transform of \(f^{p-1}\cdot r^{-1}\) is of the form \(h\cdot r^{-n+1}\), where \(h\) is an even infinitely differentiable function on the sphere; see [31, Lemma 3.16]. This function is negative on some open symmetric set \(\Omega\) in the sphere. Choose a function \(\psi\in C^{\infty}(S^{n-1})\) so that \(\psi\geq 0\) everywhere on \(S^{n-1}\) and \(\psi>0\) only on some non-empty open subset of \(\Omega\). The Fourier transform of \(\psi\cdot r^{-1}\) is a function \(\varphi\cdot r^{-n+1}\) where \(\varphi\in C^{\infty}(S^{n-1})\), again by [31, Lemma 3.16]. Then \((\varphi\cdot r^{-n+1})^{\wedge}=(2\pi)^{n}\psi\cdot r^{-1}\). Define the function \(g\) on \(S^{n-1}\) by \[g(\theta)=f(\theta)+\epsilon\,\varphi(\theta),\qquad\forall\theta\in S^{n-1},\] where \(\epsilon\) is small enough so that \(g>0\) on the sphere. By extending the functions \(f,g,\varphi\) to \(\mathbb{R}^{n}\) homogeneously of degree \(-n+1\), we then have \[(g\cdot r^{-n+1})^{\wedge}=(f\cdot r^{-n+1})^{\wedge}+\epsilon(2\pi)^{n}\psi\cdot r^{-1}.\] Since \(\psi\) is non-negative everywhere on the sphere, by (18) we conclude that the functions \(f\) and \(g\) satisfy the condition (4). Multiplying the latter equality by \((f^{p-1}\cdot r^{-1})^{\wedge}\) and integrating over the sphere we get \[\int_{S^{n-1}}(g\cdot r^{-n+1})^{\wedge}(\theta)(f^{p-1}\cdot r^{-1})^{\wedge}(\theta)\ d\theta\] \[=\int_{S^{n-1}}(f\cdot r^{-n+1})^{\wedge}(\theta)(f^{p-1}\cdot r^{-1})^{\wedge}(\theta)\ d\theta+\epsilon(2\pi)^{n}\int_{S^{n-1}}\psi(\theta)(f^{p-1}\cdot r^{-1})^{\wedge}(\theta)\ d\theta.\] Parseval's formula (17) implies that \[\int_{S^{n-1}}g(\theta)f^{p-1}(\theta)\ d\theta=\int_{S^{n-1}}f^{p}(\theta)\ d\theta+\epsilon(2\pi)^{n}\int_{S^{n-1}}\psi(\theta)(f^{p-1}\cdot r^{-1})^{\wedge}(\theta)\ d\theta.\] Since \(\psi\) can be positive only where \((f^{p-1}\cdot r^{-1})^{\wedge}\) is negative, we get \[\int_{S^{n-1}}g(\theta)f^{p-1}(\theta)\ d\theta<\int_{S^{n-1}}f^{p}(\theta)\ d\theta,\] and by the reverse Holder's inequality (14) we get \[\int_{S^{n-1}}g^{p}(\theta)\ d\theta<\int_{S^{n-1}}f^{p}(\theta)\ d\theta.\] Note that the condition of Theorem 1.5 that \(g\) is infinitely smooth can be removed using the approximation argument of [31, Lemma 4.10], but then \(g\) needs to be perturbed twice to construct a counterexample.

## 4. The intersection function of a function

The concept of an intersection body was introduced by Lutwak in two steps [17, 40].
First, he gave a geometrically clear definition of an intersection body of a star body, and then replaced star bodies by measures to define the general concept of an intersection body. We proceed in a similar way for intersection functions. First, we introduce the intersection function of a positive function. Denote by \(\mathcal{L}^{n}\) the class of positive, continuous, integrable functions on \(\mathbb{R}\times S^{n-1}\) that are even in the first variable. **Definition 4.1**.: _Given \(g\in\mathcal{L}^{n}\), we say that a function \(f\) on \(\mathbb{R}^{n}\) is an intersection function of \(g\) if, for any Schwartz test function \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\),_ \[\int_{\mathbb{R}^{n}}\varphi(x)f(x)\ dx=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)g(t,\theta)\ dt\ d\theta. \tag{22}\] Essentially, this means that \(f=\mathcal{R}^{*}g\) is the dual Radon transform of the positive function \(g\); see [22, p. 3]. The existence of an intersection function is guaranteed by the well-known formula for the dual Radon transform. **Proposition 4.2**.: _Let \(g\in\mathcal{L}^{n}\); then the function \(f\colon\mathbb{R}^{n}\to\mathbb{R}_{+}\) defined by_ \[f(x)=\int_{S^{n-1}}g(\langle x,\theta\rangle,\theta)d\theta\] _is an intersection function of \(g\)._ Proof.: By Fubini's theorem, we have that \[\langle f,\varphi\rangle =\int_{\mathbb{R}^{n}}f(x)\varphi(x)\ dx\] \[=\int_{\mathbb{R}^{n}}\left(\int_{S^{n-1}}g(\langle x,\theta\rangle,\theta)d\theta\right)\varphi(x)\ dx\] \[=\int_{S^{n-1}}\int_{\mathbb{R}^{n}}\varphi(x)g(\langle x,\theta\rangle,\theta)\ dx\ d\theta\] \[=\int_{S^{n-1}}\int_{\mathbb{R}}\left(\int_{\theta^{\perp}+t\theta}\varphi(x)dx\right)g(t,\theta)\ dt\ d\theta\] \[=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)g(t,\theta)\ dt\ d\theta,\] whenever \(\varphi\) is a Schwartz test function on \(\mathbb{R}^{n}\). This means that the function \(f\), as defined above, satisfies the condition (22) for any Schwartz test function on \(\mathbb{R}^{n}\), and so it must be an intersection function of the function \(g\).
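As a concrete sanity check of Definition 4.1 and Proposition 4.2, one can take Gaussian data in the plane, where everything is explicit: for \(\varphi(x)=e^{-|x|_{2}^{2}}\) on \(\mathbb{R}^{2}\) one has \(\mathcal{R}\varphi(t,\theta)=\sqrt{\pi}\,e^{-t^{2}}\), and with \(g(t,\theta)=e^{-t^{2}}\) both sides of (22) equal \(\sqrt{2}\,\pi^{2}\). The Python sketch below (an illustration added here, with arbitrary grid sizes) verifies this by quadrature.

```python
import numpy as np

# n = 2:  g(t, theta) = exp(-t^2),  phi(x) = exp(-|x|^2),
# so that R phi(t, theta) = sqrt(pi) exp(-t^2) in closed form.
m = 200
thetas = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # points of S^1
dth = 2.0 * np.pi / m

def f(pts):
    """Intersection function of g from Proposition 4.2:
    f(x) = int_{S^1} g(<x, theta>, theta) dtheta."""
    return np.exp(-(pts @ dirs.T) ** 2).sum(axis=-1) * dth

# left-hand side of (22): int_{R^2} f(x) phi(x) dx, by a tensor grid
s = np.linspace(-5.0, 5.0, 201)
ds = s[1] - s[0]
grid = np.stack(np.meshgrid(s, s), axis=-1).reshape(-1, 2)
lhs = (f(grid) * np.exp(-(grid ** 2).sum(axis=1))).sum() * ds ** 2

# right-hand side of (22): int_{S^1} int_R R phi(t, theta) g(t, theta) dt dtheta
t = np.linspace(-5.0, 5.0, 2001)
dt = t[1] - t[0]
rhs = 2.0 * np.pi * (np.sqrt(np.pi) * np.exp(-2.0 * t ** 2)).sum() * dt

print(lhs, rhs, np.sqrt(2.0) * np.pi ** 2)   # all ~ 13.9577
```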
This simple formula is not very effective if one wants to know whether a given function \(f\) is an intersection function of a function. As was done in the case of intersection bodies, we establish a Fourier characterization of intersection functions, which works better for our purposes. **Proposition 4.3**.: _Let \(g\in\mathcal{L}^{n}\). A function \(f\) on \(\mathbb{R}^{n}\) is an intersection function of \(g\) if, and only if,_ \[f=\frac{1}{\pi}\left(|x|_{2}^{-n+1}\left(g\left(t,\frac{x}{|x|_{2}}\right)\right)_{t}^{\wedge}(|x|_{2})\right)_{x}^{\wedge},\] _where the interior Fourier transform is taken with respect to \(t\in\mathbb{R}\), and the exterior Fourier transform is with respect to \(x\in\mathbb{R}^{n}\)._ Proof.: Note that for fixed \(\theta\in S^{n-1}\) the function \(t\in\mathbb{R}\to\mathcal{R}\hat{\varphi}(t,\theta)\) is the Fourier transform of the function \(z\in\mathbb{R}\to(2\pi)^{n-1}\varphi(z\theta)\). Therefore, for any test function \(\varphi\), applying Parseval to the inner integral by \(dt\), we get \[\langle\hat{f},\varphi\rangle =\int_{\mathbb{R}^{n}}f(x)\hat{\varphi}(x)\ dx=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\hat{\varphi}(t,\theta)g(t,\theta)\ dt\ d\theta\] \[=(2\pi)^{n-1}\int_{S^{n-1}}\int_{\mathbb{R}}\varphi(z\theta)(g(t,\theta))_{t}^{\wedge}(z)\ dz\ d\theta\] \[=2(2\pi)^{n-1}\Big{\langle}|x|_{2}^{-n+1}\left(g\left(t,\frac{x}{|x|_{2}}\right)\right)_{t}^{\wedge}(|x|_{2}),\varphi(x)\Big{\rangle}.\] From Propositions 4.2 and 4.3, respectively, we have the following corollary. **Corollary 4.4**.: _For any \(g\in\mathcal{L}^{n}\), one has_ \[\frac{1}{\pi}\left(|x|_{2}^{-n+1}\left(g\left(t,\frac{x}{|x|_{2}}\right)\right)_{t}^{\wedge}(|x|_{2})\right)_{x}^{\wedge}(\xi)=\int_{S^{n-1}}g\left(\langle\xi,\theta\rangle,\theta\right)\ d\theta,\quad\forall\xi\in\mathbb{R}^{n}.\] _Moreover, for every \(r\in\mathbb{R}\) and \(\theta\in S^{n-1}\), the following identity holds:_ \[(g(t,\theta))_{t}^{\wedge}(r)=|r|^{n-1}\hat{f}(r\theta), \tag{23}\] _where \(f\) is the intersection function of \(g\)._ We also have the following uniqueness theorem: **Corollary 4.5**.: _Given a pair \(g_{1},g_{2}\in\mathcal{L}^{n}\) such that_ \[\int_{S^{n-1}}g_{1}(\langle x,\theta\rangle,\theta)d\theta=\int_{S^{n-1}}g_{2}(\langle x,\theta\rangle,\theta)d\theta\quad\text{for all }x\in\mathbb{R}^{n},\] _one has that \(g_{1}=g_{2}\)._ The condition (23) means that the function \(r\to|r|^{n-1}\hat{f}(r\theta)\) is positive definite. We will use this property to define a more general class of intersection functions. Also, this condition allows us to point out several examples of functions which are and are not intersection functions, as follows. **Example 4.6**.: _Fix \(\alpha,\beta>0\) and \(\ell\in C(S^{n-1})\) even and strictly positive._ 1. _For each_ \(\theta\in S^{n-1}\) _consider the function_ \(h_{\theta}(r)=\alpha\exp(-|r|^{2}\ell(\theta))\)_. It can be checked that_ \(h_{\theta}\) _is the Fourier transform of the non-negative function:_ \[(h_{\theta})_{r}^{\wedge}(t)=\alpha\sqrt{\frac{\pi}{\ell(\theta)}}e^{-\frac{1}{4\ell(\theta)}|t|^{2}}\geq 0.\] _So, by Proposition_ 4.3_,_ \[f(\xi)=\frac{1}{\pi}\left[|x|_{2}^{-n+1}\alpha e^{-|x|_{2}^{2}\ell\left(\frac{x}{|x|_{2}}\right)}\right]_{x}^{\wedge}(\xi)\] _is the intersection function of the function_ \[g(t,\theta)=\alpha\sqrt{\frac{\pi}{\ell(\theta)}}e^{-\frac{1}{4\ell(\theta)}|t|^{2}}.\] 2.
_For each_ \(\theta\in S^{n-1}\) _consider the function_ \(h_{\theta}(r)=\alpha\sqrt{\frac{\pi}{\beta}}e^{-\frac{1}{4\beta}|r|^{2}}\ell(\theta)\)_. Note that, as above,_ \[(h_{\theta})_{r}^{\wedge}(t)=\alpha\ell(\theta)e^{-\beta|t|^{2}}\geq 0.\] _Again, according to Proposition_ 4.3_,_ \[f(\xi)=\frac{1}{\pi}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)\alpha\sqrt{\frac{\pi}{\beta}}e^{-\frac{1}{4\beta}|x|_{2}^{2}}\right]_{x}^{\wedge}(\xi)\] _is the intersection function of the function_ \[g(t,\theta)=\alpha\ell(\theta)e^{-\beta|t|^{2}}.\] 3. _For each_ \(\theta\in S^{n-1}\) _consider the function_ \(h_{\theta}(r)=\exp(-|r|\ell(\theta))\)_. Notice that_ \[(h_{\theta})_{r}^{\wedge}(t)=\frac{2\ell(\theta)}{t^{2}+[\ell(\theta)]^{2}}\geq 0.\] _Consequently, the function_ \[f(\xi)=\frac{1}{\pi}\left[|x|_{2}^{-n+1}e^{-|x|_{2}\ell\left(\frac{x}{|x|_{2}}\right)}\right]_{x}^{\wedge}(\xi)\] _is the intersection function of_ \[g(t,\theta)=\frac{2\ell(\theta)}{t^{2}+[\ell(\theta)]^{2}}.\] 4. _More generally, fix_ \(q\in(0,2]\)_, and for each_ \(\theta\in S^{n-1}\)_, set_ \[h_{\theta}(r)=\ell(\theta)e^{-|r|^{q}}.\] _According to_ _[_31_, Lemma 2.27]_ \[(h_{\theta})_{r}^{\wedge}(t)=\ell(\theta)\left(e^{-|r|^{q}}\right)_{r}^{\wedge}(t):=\ell(\theta)\gamma_{q}(t)\] _is a positive function on_ \(\mathbb{R}\)_. Consequently, the function_ \[f_{q}(\xi)=\frac{1}{\pi}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)e^{-|x|_{2}^{q}}\right]_{x}^{\wedge}(\xi)\] _is the intersection function of_ \[g_{q}(t,\theta)=\ell(\theta)\gamma_{q}(t).\] **Example 4.7**.: _To provide examples of functions which are not intersection functions, for any \(\theta\in S^{n-1}\) and \(q>2\), consider functions of the form \(h_{\theta}(r)=\ell(\theta)\exp(-|r|^{q})\), where \(\ell\in C(S^{n-1})\) is strictly positive. Taking the Fourier transform in \(r\in\mathbb{R}\), we see that_ \[(h_{\theta})_{r}^{\wedge}(t)=\ell(\theta)(e^{-|r|^{q}})_{r}^{\wedge}(t):=\ell(\theta)\gamma_{q}(t).\] _But \(\gamma_{q}(t)\) is not always non-negative (see [31]), so according to Corollary 4.4, the function \(f\) given by_ \[f(x)=(2\pi)^{-n}\left[|\xi|_{2}^{-n+1}\ell\left(\frac{\xi}{|\xi|_{2}}\right)e^{-|\xi|_{2}^{q}}\right]_{\xi}^{\wedge}(x)\] _fails to be an intersection function of any member of \(\mathcal{L}^{n}\)._

## 5. Intersection Functions

The next step is to define a class of functions which includes both intersection functions of functions and intersection bodies. We base our definition on the result of Corollary 4.4, rather than on a more geometric definition in the spirit of Lutwak's approach to intersection bodies, which is based on the extension of the dual Radon transform to measures. **Definition 5.1**.: _A non-negative, even, continuous, integrable function \(f\) on \(\mathbb{R}^{n}\) is called an intersection function if, for every direction \(\theta\in S^{n-1}\), the function_ \[r\in\mathbb{R}\mapsto|r|^{n-1}\hat{f}(r\theta)\] _is a positive definite function on \(\mathbb{R}\)._ The geometric definition now becomes a theorem, as follows. The theorem was formulated in the Introduction; we recall the statement.
**Theorem 1.7**.: _An even, continuous, non-negative, and integrable function \(f\) defined on \(\mathbb{R}^{n}\) is an intersection function if, and only if, for every direction \(\theta\in S^{n-1}\), there exists a non-negative, even, finite Borel measure \(\mu_{\theta}\) on \(\mathbb{R}\) such that_ * _the function_ \[\theta\in S^{n-1}\mapsto\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)d\mu_{\theta}(t)\] _belongs to_ \(L^{1}(S^{n-1})\) _whenever_ \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\)_, and_ * \[\int_{\mathbb{R}^{n}}f\varphi=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)d\mu_{\theta}(t)d\theta.\] _holds for all_ \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\)_._ Proof.: Begin by recalling the connection between the Radon transform and the Fourier transform: For any fixed direction \(\theta\in S^{n-1}\), the function \(g(t)=\mathcal{R}\hat{\varphi}(t,\theta)\) is the Fourier transform of the function \(h(z)=(2\pi)^{n-1}\varphi(z\theta)\), \(t,z\in\mathbb{R}\), whenever \(\varphi\) is a test function on \(\mathbb{R}^{n}\). Assume that \(f\) is an intersection function. For each \(\theta\in S^{n-1}\) we are tasked with finding a finite, positive Borel measure \(\mu_{\theta}\) on \(\mathbb{R}\) for which \[\int_{\mathbb{R}^{n}}f(x)\varphi(x)dx=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)d\mu_{\theta}(t)d\theta\] holds whenever \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\). Since \(f\) is an intersection function on \(\mathbb{R}^{n}\), for every direction \(\theta\in S^{n-1}\), the function \(h_{\theta}(r)=|r|^{n-1}\hat{f}(r\theta)\) is a positive definite function on \(\mathbb{R}\). Therefore, by Bochner's theorem, for each \(\theta\in S^{n-1}\), there exists a finite, positive Borel measure \(\nu_{\theta}\) on \(\mathbb{R}\) such that the Fourier transform of \(\nu_{\theta}\) is equal to \(h_{\theta}\). Notice that, by applying Parseval's identity on \(\mathbb{R}^{n}\) and then again on \(\mathbb{R}\), we have \[\langle f,\varphi\rangle =(2\pi)^{n}\langle\hat{f},\hat{\varphi}\rangle=\frac{(2\pi)^{n}}{2}\int_{S^{n-1}}\int_{\mathbb{R}}|r|^{n-1}\hat{f}(r\theta)\hat{\varphi}(r\theta)drd\theta\] \[=\frac{(2\pi)^{n}}{2}\int_{S^{n-1}}\int_{\mathbb{R}}(|\cdot|^{n-1}\hat{f}(\cdot\theta))^{\wedge}_{r}(z)(\hat{\varphi}(\cdot\theta))^{\wedge}_{r}(z)dzd\theta\] \[=\frac{(2\pi)^{n}}{2}\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\varphi(s,\theta)d\nu_{\theta}(s)d\theta\] whenever \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\) is even, which is exactly the condition (5). Conversely, assume that the condition (5) holds. For every fixed direction \(\theta\), by Bochner's theorem, the Fourier transform of the measure \(\mu_{\theta}\) is a continuous, positive definite function \(f_{\theta}\) defined on \(\mathbb{R}\).
Consequently, for any even test function \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\), applying Parseval's identity to the integral by \(dt\), we have that \[\langle\hat{f},\varphi\rangle =\langle f,\hat{\varphi}\rangle=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\hat{\varphi}(t,\theta)d\mu_{\theta}(t)d\theta\] \[=\int_{S^{n-1}}\langle\mathcal{R}\hat{\varphi}(\cdot,\theta),\mu_{\theta}(\cdot)\rangle d\theta=\int_{S^{n-1}}\langle[\mathcal{R}\hat{\varphi}(\cdot,\theta)]^{\wedge}_{t},[\mu_{\theta}]^{\wedge}_{t}\rangle d\theta\] \[=(2\pi)^{n-1}\int_{S^{n-1}}\int_{\mathbb{R}}\varphi(z\theta)f_{\theta}(z)dzd\theta\] \[=2(2\pi)^{n-1}\int_{S^{n-1}}\int_{0}^{\infty}\frac{|z|^{n-1}}{|z|^{n-1}}\varphi(z\theta)f_{\theta}(z)dzd\theta\] \[=2(2\pi)^{n-1}\int_{\mathbb{R}^{n}}|x|_{2}^{-n+1}\varphi(x)f_{\frac{x}{|x|_{2}}}(|x|_{2})dx\] \[=2(2\pi)^{n-1}\langle|x|_{2}^{-n+1}f_{\frac{x}{|x|_{2}}}(|x|_{2}),\varphi\rangle.\] So it must be the case that \[\hat{f}(x)=|x|_{2}^{-n+1}f_{\frac{x}{|x|_{2}}}(|x|_{2})\] as distributions. Hence, for any fixed direction \(\theta\in S^{n-1}\), using the positive definiteness of the function \(f_{\theta}\), one has \[\langle[|\cdot|^{n-1}\hat{f}(\cdot\theta)]_{r}^{\wedge},\psi\rangle =\langle|\cdot|^{n-1}\hat{f}(\cdot\theta),\hat{\psi}\rangle\] \[=\int_{\mathbb{R}}f_{\theta}(z)(\psi)_{t}^{\wedge}(z)dz\] \[=\int_{\mathbb{R}}(f_{\theta})_{z}^{\wedge}(s)\psi(s)ds\geq 0,\] whenever \(\psi\in\mathcal{S}(\mathbb{R})\) is even and non-negative. Therefore, the function \(f\) is an intersection function. In the following subsections we will examine some examples of intersection functions. In particular, we will see that the class of intersection functions contains the class of intersection bodies of star bodies. ### The spherical Radon transform Let \(\ell\in C(S^{n-1})\) be strictly positive. Given \(\epsilon>0\), consider the function \[g_{\epsilon}(t,\theta)=\ell(\theta)\frac{1}{\sqrt{\pi}\,\epsilon}e^{-|t|^{2}/\epsilon^{2}},\] which is an approximate identity in the variable \(t\). As in Example 4.6 (or directly from Proposition 4.3, since \(\big(\frac{1}{\sqrt{\pi}\,\epsilon}e^{-|t|^{2}/\epsilon^{2}}\big)_{t}^{\wedge}(r)=e^{-\frac{\epsilon^{2}}{4}r^{2}}\)), for each \(\epsilon>0\) the intersection function \(f_{\epsilon}\) of \(g_{\epsilon}\) is \[f_{\epsilon}(\xi)=\frac{1}{\pi}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)e^{-\frac{\epsilon^{2}}{4}|x|_{2}^{2}}\right]_{x}^{\wedge}(\xi).\] By the definition of an intersection function of \(g\), for every even \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\) and any fixed \(\epsilon>0\), \[\int_{\mathbb{R}^{n}}\frac{1}{\pi}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)e^{-\frac{\epsilon^{2}}{4}|x|_{2}^{2}}\right]_{x}^{\wedge}(\xi)\varphi(\xi)d\xi=\int_{\mathbb{R}^{n}}f_{\epsilon}(\xi)\varphi(\xi)d\xi\] \[=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)g_{\epsilon}(t,\theta)dtd\theta\] \[=\int_{S^{n-1}}\int_{\mathbb{R}}\mathcal{R}\varphi(t,\theta)\ell(\theta)\frac{1}{\sqrt{\pi}\,\epsilon}e^{-|t|^{2}/\epsilon^{2}}dtd\theta.\] Sending \(\epsilon\to 0\), the left-hand side of the above equality tends to \[\frac{1}{\pi}\int_{\mathbb{R}^{n}}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)(\delta_{0}(t))_{t}^{\wedge}(|x|_{2})\right]_{x}^{\wedge}(\xi)\varphi(\xi)d\xi\] \[=\frac{1}{\pi}\int_{\mathbb{R}^{n}}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)\right]_{x}^{\wedge}(\xi)\varphi(\xi)d\xi\] whenever \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\) is even, where \(\delta_{0}\) is the delta function. Here we have used the fact that \(\hat{\delta_{0}}\equiv 1\).
Similarly, as \(\epsilon\to 0\), the right-hand side tends to \[\int_{S^{n-1}}\mathcal{R}\varphi(0,\theta)\ell(\theta)d\theta=\int_{S^{n-1}}R\varphi(\theta)\ell(\theta)d\theta,\] where \(R\colon C(S^{n-1})\to C(S^{n-1})\) denotes the spherical Radon transform. Consequently, we have shown that \[\frac{1}{\pi}\int_{\mathbb{R}^{n}}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)\right]_{x}^{\wedge}(\xi)\varphi(\xi)d\xi=\int_{S^{n-1}}R\varphi(\theta)\ell(\theta)d\theta \tag{24}\] whenever \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\) is even. From Theorem 1.7 paired with Bochner's theorem, limits of intersection functions are themselves intersection functions, so it follows that \[f(\xi)=\frac{1}{\pi}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)\right]_{x}^{\wedge}(\xi)\] is an intersection function. In particular, \(f\) is a continuous function on the sphere extended to a homogeneous function of order \(-1\) on \(\mathbb{R}^{n}\setminus\{0\}\), which recovers [31, Lemma 3.7]: For every \(\theta\in S^{n-1}\), \[\frac{1}{\pi}\left[|x|_{2}^{-n+1}\ell\left(\frac{x}{|x|_{2}}\right)\right]_{x}^{\wedge}(\theta)=R\ell(\theta).\] Next, we can rewrite equality (24) to get \[\int_{\mathbb{R}^{n}}f(x)\varphi(x)dx =\int_{S^{n-1}}R\varphi(\theta)\ell(\theta)d\theta\] \[=\int_{S^{n-1}}\left(\int_{\langle x,\theta\rangle=0}\varphi(x)dx\right)\ell(\theta)d\theta\] \[=\int_{S^{n-1}}\left(\int_{S^{n-1}\cap\theta^{\perp}}\int_{0}^{\infty}s^{n-2}\varphi(s\omega)dsd\omega\right)\ell(\theta)d\theta.\] If we set \(h(\theta)=\int_{0}^{\infty}r^{n-2}\varphi(r\theta)dr\), then we recover the well-known self-duality property of the spherical Radon transform: For any \(\ell,h\in C(S^{n-1})\) \[\int_{S^{n-1}}R\ell(\theta)h(\theta)d\theta=\int_{S^{n-1}}Rh(\theta)\ell(\theta)d\theta.\] ### Intersection bodies In the previous example, set \(\ell(\theta)=\left\|\theta\right\|_{L}^{-n+1}\), where \(L\) is an origin-symmetric star body in \(\mathbb{R}^{n}\). Then we recover the Fourier formula for the volume of a section, [31, Th. 3.8]: \[f(x)=(n-1)|x|_{2}^{-1}\left|L\cap\left(\frac{x}{|x|_{2}}\right)^{\perp}\right|=\frac{1}{\pi}(\|\cdot\|_{L}^{-n+1})^{\wedge}(x).\] In fact, we have shown that the concept of an intersection function as described in Definition 5.1 extends the notion of intersection bodies. Moreover, we have \(f(x)=(n-1)\|x\|_{IL}^{-1}\), so we recover the result of [31, p. 72]: For every \(\xi\in S^{n-1}\) \[\left(\|x\|_{IL}^{-1}\right)^{\wedge}(\xi)=\frac{(2\pi)^{n}}{\pi(n-1)}\|\xi\|_{L}^{-n+1}.\] In particular, an origin-symmetric star body \(K\) is an intersection body of a star body if, and only if, the Fourier transform of \(\|\cdot\|_{K}^{-1}\) is a \((-n+1)\)-homogeneous function on \(\mathbb{R}^{n}\) whose restriction to the sphere is continuous and strictly positive, cf. [31, Th. 4.1].

## 6. The case of the Radon transform

In this section, we prove Theorem 1.8 and Theorem 1.9. **Theorem 1.8**.: _Let \(p>0\) and consider a pair of continuous, non-negative even functions \(\varphi,\psi\in L^{1}(\mathbb{R}^{n})\cap L^{p}(\mathbb{R}^{n})\) satisfying the condition_ \[\mathcal{R}\varphi(t,\theta)\leq\mathcal{R}\psi(t,\theta)\quad\text{for all }(t,\theta)\in\mathbb{R}\times S^{n-1}.\] _Then:_ 1. _if_ \(p>1\) _and_ \(\varphi^{p-1}\) _is an intersection function, then_ \(\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\leq\|\psi\|_{L^{p}(\mathbb{R}^{n})}\)_, and_ 2.
_if_ \(0<p<1\) _and_ \(\psi^{p-1}\) _is an intersection function, then_ \(\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\leq\|\psi\|_{L^{p}(\mathbb{R}^{n})}\)_._ Proof.: Without loss of generality, we may assume that \(\varphi,\psi\in\mathcal{S}(\mathbb{R}^{n})\). We begin with the proof of (a). Since \(\varphi^{p-1}\) is an intersection function, by Theorem 1.7, for each \(\theta\in S^{n-1}\) there exists a non-negative, even, finite Borel measure \(\mu_{\theta}\) on \(\mathbb{R}\) such that the function \[\alpha_{\theta}:=\int_{\mathbb{R}}\mathcal{R}\alpha(t,\theta)d\mu_{\theta}(t)\] is integrable on \(S^{n-1}\) for any \(\alpha\in\mathcal{S}(\mathbb{R}^{n})\). Integrating both sides of the assumption (6) over \(\mathbb{R}\) with respect to the measure \(\mu_{\theta}\), we obtain the inequality \[\varphi_{\theta}\leq\psi_{\theta}\quad\text{for all }\theta\in S^{n-1}.\] Integrating this inequality over \(S^{n-1}\) and applying the identity (5) of Theorem 1.7, we obtain \[\int_{\mathbb{R}^{n}}\varphi(x)^{p}dx =\int_{\mathbb{R}\times S^{n-1}}\mathcal{R}\varphi(t,\theta)d\mu_{\theta}(t)d\theta\] \[=\int_{S^{n-1}}\varphi_{\theta}d\theta\leq\int_{S^{n-1}}\psi_{ \theta}d\theta\] \[=\int_{\mathbb{R}\times S^{n-1}}\mathcal{R}\psi(t,\theta)d\mu_{ \theta}(t)d\theta=\int_{\mathbb{R}^{n}}\varphi(x)^{p-1}\psi(x)dx\] \[\leq\left(\int_{\mathbb{R}^{n}}\varphi(x)^{p}dx\right)^{\frac{p-1 }{p}}\left(\int_{\mathbb{R}^{n}}\psi(x)^{p}dx\right)^{\frac{1}{p}},\] where in the last line we applied Hölder's inequality. Dividing both sides by \(\left(\int_{\mathbb{R}^{n}}\varphi(x)^{p}dx\right)^{\frac{p-1}{p}}\) yields \(\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\leq\|\psi\|_{L^{p}(\mathbb{R}^{n})}\). The proof of part (b) is the same as (a) with a minor adjustment akin to the proof of Theorem 1.4(b). Next, we treat the second part of Problem 1.2, Theorem 1.9, which we restate here. **Theorem 1.9**.: _The following hold:_ 1. _Fix_ \(p>1\) _and let_ \(\psi\in\mathcal{S}(\mathbb{R}^{n})\) _be non-negative and even. If_ \(\psi^{p-1}\) _is not an intersection function, then there exists an even, non-negative_ \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\) _such that_ \[\mathcal{R}\varphi(t,\theta)\leq\mathcal{R}\psi(t,\theta)\quad\text{for all }(t,\theta)\in\mathbb{R}\times S^{n-1},\] _but_ \(\|\psi\|_{L^{p}(\mathbb{R}^{n})}<\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\)_._ 2. _Fix_ \(0<p<1\) _and let_ \(\varphi\in\mathcal{S}(\mathbb{R}^{n})\) _be non-negative and even. If_ \(\varphi^{p-1}\) _is not an intersection function, then there exists a non-negative, even_ \(\psi\in\mathcal{S}(\mathbb{R}^{n})\) _such that_ \(\mathcal{R}\varphi\leq\mathcal{R}\psi\)_, but_ \(\|\psi\|_{L^{p}(\mathbb{R}^{n})}<\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\)_._ Proof.: We will present the proof of part (a). For brevity, set \(\psi_{\theta}(t)=|t|^{n-1}\widehat{\psi^{p-1}}(t\theta)\) whenever \(\theta\in S^{n-1}\). To begin, we will show that there is a symmetric set \(\Gamma\subset S^{n-1}\) of positive spherical measure such that \(\psi_{\omega}\) is not a positive definite function of \(t\in\mathbb{R}\) whenever \(\omega\in\Gamma\). Since \(\psi\in\mathcal{S}(\mathbb{R}^{n})\), we get that \(\widehat{\psi^{p-1}}\in\mathcal{S}(\mathbb{R}^{n})\). Since \(\psi^{p-1}\) is not an intersection function, there exists a direction \(\nu\in S^{n-1}\) such that \(\psi_{\nu}\) is not positive definite on the line \(\mathbb{R}\nu:=\{t\nu\colon t\in\mathbb{R}\}\).
In particular, since \(\psi_{\nu}\) is not positive definite on \(\mathbb{R}\nu\), there exist a non-negative, even \(f\in\mathcal{S}(\mathbb{R})\) and a constant \(\delta>0\) such that \[\langle\psi_{\nu},\widehat{f}\rangle=\int_{\mathbb{R}}\psi_{\nu}(t)\widehat{ f}(t)dt=-\delta<0.\] Our goal is to show that there is a small neighborhood \(\Omega\) of \(\nu\) in \(S^{n-1}\) such that \[|\langle\psi_{\nu}-\psi_{\omega},\widehat{f}\rangle|\leq\frac{\delta}{2}\] for all \(\omega\in\Omega\). By the Cauchy-Schwarz inequality, \[|\langle\psi_{\nu}-\psi_{\omega},\widehat{f}\rangle|\leq\|\psi_{\nu}-\psi_{ \omega}\|_{L^{2}(\mathbb{R})}\|f\|_{L^{2}(\mathbb{R})}.\] Since \(\|f\|_{L^{2}(\mathbb{R})}\) is finite, it is enough to show that \(\|\psi_{\nu}-\psi_{\omega}\|_{L^{2}(\mathbb{R})}\) is small whenever \(\omega,\nu\in S^{n-1}\) are sufficiently close. Since \(\widehat{\psi^{p-1}}\in\mathcal{S}(\mathbb{R}^{n})\), it is locally Lipschitz and decays rapidly. In particular, there exist \(m\in\mathbb{N}\) and \(M\geq 1\) such that \[|\widehat{\psi^{p-1}}(x)|\leq\frac{1}{|x|_{2}^{m}}\quad\text{for all }|x|_{2}>M,\quad\text{and}\quad\int_{M}^{\infty}t^{2(n-1)}|\widehat{\psi^{p-1}}(t\nu)-\widehat{\psi^{p-1}}(t\omega)|^{2}dt\leq\frac{ \delta^{2}}{16\|f\|_{L^{2}(\mathbb{R})}^{2}}\] holds for all \(\nu,\omega\in S^{n-1}\). Let \(C(\psi)=2M^{2n+2}L^{2}_{\widehat{\psi^{p-1}}}\), where \(L_{\widehat{\psi^{p-1}}}\) is the Lipschitz constant of \(\widehat{\psi^{p-1}}\) on the ball of radius \(M\). Now, if \(\omega\in S^{n-1}\) satisfies \(|\nu-\omega|_{2}<\sqrt{\frac{\delta^{2}}{8C(\psi)\|f\|_{L^{2}(\mathbb{R})}^{2}}}\), then \[\|\psi_{\nu}-\psi_{\omega}\|_{L^{2}(\mathbb{R})}^{2}=\int_{\mathbb{ R}}|\psi_{\nu}(t)-\psi_{\omega}(t)|^{2}dt\] \[=2\int_{0}^{M}t^{2(n-1)}|\widehat{\psi^{p-1}}(t\nu)-\widehat{\psi^{p-1}}(t\omega)|^{2}dt+2\int_{M}^{\infty}t^{2(n-1)}|\widehat{\psi^{p-1 }}(t\nu)-\widehat{\psi^{p-1}}(t\omega)|^{2}dt\] \[\leq 2\int_{0}^{M}t^{2(n-1)}|\widehat{\psi^{p-1}}(t\nu)-\widehat{\psi^{p-1}}(t\omega)|^{2}dt+\frac{\delta^{2}}{8\|f\|_{L^{2}(\mathbb{R })}^{2}}\] \[\leq 2M^{2n+2}L^{2}_{\widehat{\psi^{p-1}}}|\nu-\omega|_{2}^{2}+ \frac{\delta^{2}}{8\|f\|_{L^{2}(\mathbb{R})}^{2}}\leq\frac{\delta^{2}}{4\|f\|_{L^{2}( \mathbb{R})}^{2}}.\] Therefore, given \(\omega\in S^{n-1}\) with \(|\nu-\omega|_{2}<\sqrt{\frac{\delta^{2}}{8C(\psi)\|f\|_{L^{2}(\mathbb{R})}^{2}}}\), the Cauchy-Schwarz inequality implies \[\langle\psi_{\omega},\widehat{f}\rangle=\langle\psi_{\omega}-\psi_{\nu}, \widehat{f}\rangle+\langle\psi_{\nu},\widehat{f}\rangle\leq\|f\|_{L^{2}(\mathbb{R})}\|\psi_ {\omega}-\psi_{\nu}\|_{L^{2}(\mathbb{R})}-\delta\leq\frac{\delta}{2}-\delta=-\frac{\delta}{2}<0.\] It follows that the function \(\psi_{\omega}\) is not positive definite on \(\mathbb{R}\) whenever \(\omega\in S^{n-1}\) is sufficiently close to \(\nu\). Set \[\Omega:=\left\{\omega\in S^{n-1}\colon|\nu-\omega|_{2}<\sqrt{\frac{\delta^{2}} {8C(\psi)\|f\|_{L^{2}(\mathbb{R})}^{2}}}\right\}\quad\text{and}\quad\Gamma:=\Omega\cup(-\Omega).\] Denote by \(\sigma\) the uniform measure on \(S^{n-1}\). Then \(\sigma(\Gamma)>0\) and \(\Gamma\) is symmetric. Moreover, since \(\widehat{\psi^{p-1}}\) is even, we have \(\psi_{-\omega}=\psi_{\omega}\), so \(\psi_{\omega}\) is not positive definite on \(\mathbb{R}\) whenever \(\omega\in\Gamma\). We extend \(\Gamma\) to \(\mathbb{R}^{n}\) by letting \[\widetilde{\Omega}:=\bigcup_{\omega\in\Gamma}\mathbb{R}\omega,\] which is symmetric in \(\mathbb{R}^{n}\).
Since \(\psi_{\omega}\) is not positive definite on the line \(\mathbb{R}\omega\) and \(\psi_{\omega}\in\mathcal{S}(\mathbb{R})\), there exists a non-empty symmetric open set \(\Lambda_{\omega}\subset\mathbb{R}\omega\) on which \(\widehat{\psi_{\omega}}<0\). Finally, we let \[\Lambda:=\bigcup_{\omega\in\Gamma}\Lambda_{\omega}.\] Let \(h\in\mathcal{S}(\mathbb{R}^{n})\) be non-negative, even, and such that \(h>0\) only on the set \(\Lambda\). Consider the function \(\varphi\) defined by \[\varphi(x)=\psi(x)-\eta h(x),\] with \(\eta>0\) sufficiently small so that \(\varphi\geq 0\) on \(\mathbb{R}^{n}\). Notice that, for any \((t,\theta)\in\mathbb{R}\times S^{n-1}\), we obtain \[\mathcal{R}\varphi(t,\theta)=\mathcal{R}\psi(t,\theta)-\eta\mathcal{R}h(t, \theta)\leq\mathcal{R}\psi(t,\theta).\] Now, for each fixed direction \(\theta\in S^{n-1}\), we have \[\int_{\mathbb{R}}\widehat{\psi_{\theta}}(t)\mathcal{R}\varphi(t, \theta)dt =\int_{\mathbb{R}}\widehat{\psi_{\theta}}(t)\mathcal{R}\psi(t, \theta)dt-\eta\int_{\mathbb{R}}\widehat{\psi_{\theta}}(t)\mathcal{R}h(t,\theta)dt\] \[\geq\int_{\mathbb{R}}\widehat{\psi_{\theta}}(t)\mathcal{R}\psi(t,\theta)dt.\] Moreover, if \(\theta\in\Gamma\), then the last inequality is strict, since \(h>0\) only on the set \(\Lambda\). For each fixed \(\theta\in S^{n-1}\), applying the Parseval identity to both sides of the above inequality, we obtain \[\int_{\mathbb{R}}|s|^{n-1}\widehat{\psi^{p-1}}(s\theta)\widehat{\varphi}(s \theta)ds\geq\int_{\mathbb{R}}|s|^{n-1}\widehat{\psi^{p-1}}(s\theta)\widehat{ \psi}(s\theta)ds.\] Integrating this inequality over \(S^{n-1}\), we obtain \[\int_{S^{n-1}}\int_{\mathbb{R}}|s|^{n-1}\widehat{\psi^{p-1}}(s\theta )\widehat{\varphi}(s\theta)dsd\theta\] \[=\int_{S^{n-1}\setminus\Gamma}\int_{\mathbb{R}}|s|^{n-1}\widehat{ \psi^{p-1}}(s\theta)\widehat{\varphi}(s\theta)dsd\theta+\int_{\Gamma}\int_{ \mathbb{R}}|s|^{n-1}\widehat{\psi^{p-1}}(s\theta)\widehat{\varphi}(s\theta)dsd\theta\] \[>\int_{S^{n-1}\setminus\Gamma}\int_{\mathbb{R}}|s|^{n-1}\widehat{ \psi^{p-1}}(s\theta)\widehat{\psi}(s\theta)dsd\theta+\int_{\Gamma}\int_{ \mathbb{R}}|s|^{n-1}\widehat{\psi^{p-1}}(s\theta)\widehat{\psi}(s\theta)dsd\theta\] \[=\int_{S^{n-1}}\int_{\mathbb{R}}|s|^{n-1}\widehat{\psi^{p-1}}(s \theta)\widehat{\psi}(s\theta)dsd\theta,\] where the strict inequality uses that the inequality is strict for every \(\theta\in\Gamma\) and that \(\sigma(\Gamma)>0\). Using the symmetry of the functions \(\psi\) and \(\varphi\) and integrating in polar coordinates, we deduce the inequality \[\int_{\mathbb{R}^{n}}\widehat{\psi^{p-1}}(x)\widehat{\varphi}(x)dx>\int_{ \mathbb{R}^{n}}\widehat{\psi^{p-1}}(x)\widehat{\psi}(x)dx.\] Applying Parseval's identity to this inequality gives \(\int_{\mathbb{R}^{n}}\psi^{p-1}\varphi\,dx>\int_{\mathbb{R}^{n}}\psi^{p}dx=\|\psi\|_{L^{p}(\mathbb{R}^{n})}^{p}\), while Hölder's inequality bounds the left-hand side by \(\|\psi\|_{L^{p}(\mathbb{R}^{n})}^{p-1}\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\); together these force \(\|\psi\|_{L^{p}(\mathbb{R}^{n})}<\|\varphi\|_{L^{p}(\mathbb{R}^{n})}\), which completes the proof. The proof of (b) follows from combining the above proof with the ideas in the proof of Theorem 1.5(b). ## Acknowledgments M.R. would like to thank the Department of Mathematics at the University of Missouri. M.R. and A.Z. would like to thank Sorbonne University and LAMA at Université Gustave Eiffel for wonderful and productive stays during which a significant part of this manuscript was produced. M.R. would also like to thank Effrosyni Chasioti for many helpful discussions about the content of the article. We would like to thank Dmitry Ryabogin for suggesting we consider the case of \(p\in(0,1)\) in Problems 1.1 and 1.2.
2310.05124
Cross-domain Robust Deepfake Bias Expansion Network for Face Forgery Detection
The rapid advancement of deepfake technologies raises significant concerns about the security of face recognition systems. While existing methods leverage the clues left by deepfake techniques for face forgery detection, malicious users may intentionally manipulate forged faces to obscure the traces of deepfake clues and thereby deceive detection tools. Meanwhile, attaining cross-domain robustness for data-based methods poses a challenge due to potential gaps in the training data, which may not encompass samples from all relevant domains. Therefore, in this paper, we introduce a solution - a Cross-Domain Robust Bias Expansion Network (BENet) - designed to enhance face forgery detection. BENet employs an auto-encoder to reconstruct input faces, maintaining the invariance of real faces while selectively enhancing the difference between reconstructed fake faces and their original counterparts. This enhanced bias forms a robust foundation upon which dependable forgery detection can be built. To optimize the reconstruction results in BENet, we employ a bias expansion loss infused with contrastive concepts to attain the aforementioned objective. In addition, to further heighten the amplification of forged clues, BENet incorporates a Latent-Space Attention (LSA) module. This LSA module effectively captures variances in latent features between the auto-encoder's encoder and decoder, placing emphasis on inconsistent forgery-related information. Furthermore, BENet incorporates a cross-domain detector with a threshold to determine whether the sample belongs to a known distribution. The correction of classification results through the cross-domain detector enables BENet to defend against unknown deepfake attacks from cross-domain. Extensive experiments demonstrate the superiority of BENet compared with state-of-the-art methods in intra-database and cross-database evaluations.
Weihua Liu, Lin Li, Chaochao Lin, Said Boumaraf
2023-10-08T11:30:22Z
http://arxiv.org/abs/2310.05124v1
# Cross-domain Robust Deepfake Bias Expansion Network for Face Forgery Detection ###### Abstract The rapid advancement of deepfake technologies raises significant concerns about the security of face recognition systems. While existing methods leverage the clues left by deepfake techniques for face forgery detection, malicious users may intentionally manipulate forged faces to obscure the traces of deepfake clues and thereby deceive detection tools. Meanwhile, attaining cross-domain robustness for data-based methods poses a challenge due to potential gaps in the training data, which may not encompass samples from all relevant domains. Therefore, in this paper, we introduce a solution - a Cross-Domain Robust Bias Expansion Network (BENet) - designed to enhance face forgery detection. BENet employs an auto-encoder to reconstruct input faces, maintaining the invariance of real faces while selectively enhancing the difference between reconstructed fake faces and their original counterparts. This enhanced bias forms a robust foundation upon which dependable forgery detection can be built. To optimize the reconstruction results in BENet, we employ a bias expansion loss infused with contrastive concepts to attain the aforementioned objective. In addition, to further heighten the amplification of forged clues, BENet incorporates a Latent-Space Attention (LSA) module. This LSA module effectively captures variances in latent features between the auto-encoder's encoder and decoder, placing emphasis on inconsistent forgery-related information. Furthermore, BENet incorporates a cross-domain detector with a threshold to determine whether the sample belongs to a known distribution. The correction of classification results through the cross-domain detector enables BENet to defend against unknown deepfake attacks from cross-domain. Extensive experiments demonstrate the superiority of BENet compared with state-of-the-art methods in intra-database and cross-database evaluations. face forgery detection, deepfake, bias expansion, deep learning ## 1 Introduction Deepfake techniques produce perceptually convincing fake face images or videos. However, these techniques also pose a substantial threat to the security of face recognition systems. In order to defend against fake faces, the field of face forgery detection has arisen. It encompasses a discriminative task aimed at identifying forged elements through meticulous scrutiny of visual content. Fortunately, it is exceedingly challenging for deepfake techniques to replicate the statistical distribution of real faces. This is primarily due to the fact that the imaging principles governing cameras dictate a specific statistical distribution for the pixels in real images [39]. Generative models employed in deepfake techniques often result in inherent inconsistencies between the tampered and authentic regions. This inconsistent information is the key basis for discrimination of forged faces. Thus, the existing face forgery detection methods are designed to explore the forged clues left by the generative model, such as manual features [1][2][3], generative adversarial network (GAN) fingerprints [4][5][6][7], and deep visual features [8][9][10][11][12][13]. Nevertheless, malicious users may intentionally manipulate forged faces to obscure the traces of deepfake clues and thereby deceive detection tools. This may dilute the telltale signs of manipulation, substantially heightening the challenge of uncovering deepfake clues. 
It is essential to take proactive measures to adaptively enhance deepfake clues. Additionally, the proliferation of diverse deepfake techniques poses a significant challenge to the cross-domain robustness of face forgery detection models that have been trained on specific deepfake domains. This challenge arises from variations in pixel distribution across different deepfake methods. Consequently, existing approaches often struggle to identify deepfake clues in unknown cross-domains. Although some methods have attempted to expand the training dataset to solve the cross-domain robustness problem of face forgery detection, with some success, this incremental training approach comes with significant resource demands and may also lead to catastrophic forgetting. These data-based methods still carry a significant risk of misjudgment when confronted with entirely unknown deepfake samples, even in the presence of unmistakable forgery clues. To address these challenges, we propose a cross-domain robust deepfake bias expansion network (BENet) for face forgery detection. BENet accomplishes this by reconstructing input faces to unveil the deepfake clues within facial images. Importantly, due to the stable feature distribution of real faces, BENet's reconstruction results on real faces remain almost invariant. The reconstruction results on fake faces, in contrast, exhibit significant differences from the original forged faces; in this way, the reconstruction process carried out by BENet expands the bias against fake faces. This bias amplifies the deepfake clues, forming the cornerstone of face forgery detection. To achieve this objective, a bias expansion loss incorporating the concept of a contrastive loss optimizes the reconstruction process. It works to minimize the distinctions between reconstructed real faces and their originals while maximizing the bias against fake faces. To further enhance the deepfake clues of forged faces, BENet incorporates a latent-space attention (LSA) module, which captures the variation relationship of latent features in the reconstruction process. In addition, a cross-domain detector with a threshold is introduced to determine whether a sample belongs to the known distribution. It corrects the classification results and enables BENet to defend against unknown cross-domain deepfake attacks. Extensive experiments illustrate that the proposed BENet significantly outperforms existing state-of-the-art methods in intra-database and cross-database evaluations. The main contributions of this paper are summarized as follows: (1) We propose BENet, a cross-domain robust deepfake bias expansion network for face forgery detection. BENet utilizes an auto-encoder to reconstruct the input faces, aiming to preserve the authenticity of real faces while accentuating the differences between the reconstructed faces and the original fake faces. To attain this objective, we introduce a bias expansion loss to supervise the learning of the reconstruction. This loss incorporates the concept of contrastive loss and serves as a mechanism through which BENet can adaptively amplify forged clues within the deepfake context. (2) To enhance the deepfake clues in the reconstructed images, we design an LSA module. The LSA module utilizes the variation relationship of latent features in the encoder and decoder to capture forged details, which leads BENet to focus on inconsistent information in the reconstruction process.
(3) A cross-domain detector is also proposed, which treats unknown cross-domain samples as fake faces. This correction of the classification results assists in defending against unknown cross-domain fake faces. The remainder of this paper is organized as follows. Section 2 reviews the related works. Section 3 presents our BENet architecture. Experimental results and discussions are reported in Section 4. Finally, we provide some concluding remarks in Section 5. ## 2 Related works ### Face forgery detection Face forgery detection is a critical task that involves identifying forged faces, which can deceive conventional face recognition systems. Existing methods employ various strategies to detect deepfake clues and can be categorized into three main approaches: handcrafted features-based, GAN fingerprint-based, and deep features-based methods. Handcrafted features-based methods focus on inconsistencies in color spaces, such as HSV and YCbCr, introduced by the synthesis process of deepfake images. Li et al. [1] introduced color statistics-based features to identify forged faces. He et al. [2] incorporated the Lab color space and combined deep representations from different color spaces for face forgery detection. McCloskey et al. [3] differentiated fake faces by analyzing pixel frequency. GAN fingerprint-based methods leverage common traits present in GAN-generated images for forgery detection. Guarnera et al. [4] used expectation maximization to extract convolutional traces left by GANs. Giudice et al. [5] examined the statistics of discrete cosine transform coefficients for detection. Yang et al. [6] employed deep neural networks to capture subtle GAN artifacts, and Huang et al. [7] focused on unique artifacts induced by GAN upsampling. Deep features-based methods utilize deep models to counter the threat of deepfakes. Zhou et al. [8] combined face classification and noise residual recognition to identify fake faces. Gandhi et al. [9] enhanced forgery detectors through Lipschitz regularization and model fusion. Cao et al. [10] emphasized the inconsistencies between real and fake faces in reconstruction and visual content. Dang et al. [11] dynamically emphasized discrepancies in suspect regions via attention. Jeong et al. [12] captured subtle and imperceptible visual artifacts in the frequency domain. Gu et al. [13] employed a discrete Fourier transform to extract deepfake clues from local patches. Although the landscape of face forgery detection is evolving rapidly, cross-domain forged faces still challenge the robustness of these methods. ### Autoencoder Autoencoders find extensive application in uncovering correlated input features and anomalies within datasets. Analogously, the objective of face forgery detection is to identify subtle indicators of deepfake manipulation within face images. Chakraborty et al. [14] employed autoencoders to extract features, followed by an ensemble of probabilistic neural networks for outlier identification, showcasing the improved performance obtained through autoencoder-based feature extraction. Chen et al. [15] presented a sliding-window convolutional variational autoencoder for real-time anomaly detection in multivariate time series data. Dai et al. [16] proposed a multilayer one-class extreme learning machine to detect abnormal data, which leverages stacked autoencoders to enhance feature representation for complex data. Sarvari et al. [17] explored autoencoders to capture anomalies present in frequency information.
Pimentel et al. [18] integrated autoencoders with active learning, enhancing unsupervised anomaly detection models. Akhriev et al. [19] combined deep autoencoding of regular data with dedicated thresholding techniques to detect anomalies. The use of autoencoders as a foundational element in the BENet architecture for cross-domain robust face forgery detection aligns with their demonstrated effectiveness in anomaly identification and data representation. ### Contrastive loss Contrastive loss allows networks to learn meaningful representations by distinguishing between data instances. Wu et al. [40] explored non-parametric instance-level discrimination using contrastive loss, investigating feature representations that capture the apparent similarity among instances. Oord et al. [41] introduced contrastive predictive coding, which leverages a probabilistic contrastive loss to learn useful representations from high-dimensional data. Bachman et al. [42] proposed a contrastive representation learning approach that maximizes mutual information between features from multiple views of the data. Huang et al. [44] presented a contrastive learning method that discovers sample-based neighborhoods to facilitate feature representation, emphasizing the importance of discriminative feature extraction during training. Zhuang et al. [45] introduced a contrastive idea that trains embedding functions using a metric of local aggregation, allowing similar data instances to cluster while separating dissimilar ones. These ideas emphasize the significance of capturing meaningful, discriminative representations from data. ## 3 Proposed method In this section, we introduce our proposed bias expansion network (BENet), which amplifies face forgery information via bias to detect deepfakes, as shown in Fig. 1. Firstly, we provide an overview of the end-to-end BENet architecture. Following that, we delve into the process of deepfake bias expansion, which serves to restore the forgery clues within input images. To facilitate the fusion of latent features, we introduce a Latent-Space Attention (LSA) module. Lastly, we provide a comprehensive explanation of BENet's cross-domain detector. ### 3.1 Overview BENet aims to amplify forgery clues. Specifically, BENet utilizes an auto-encoder to reconstruct the input images \(x\) and thereby expose potential forgery clues. Through the auto-encoder, the input images \(x\) are transformed into reconstructed images \(x_{o}=D(E(x))\). The reconstructed images remain almost invariant when the input is a real face, while there is a pronounced difference when the input is a fake face. Incorporating this bias into face forgery detection improves the reliability of the results. Then, BENet subtracts the reconstructed images \(x_{o}\) from the input images \(x\) to obtain the bias images \(\hat{x}=|x-x_{o}|\), which effectively highlight the forgery clues within the input images. To guide BENet in learning and discerning these biases that distinguish real from fake faces, we introduce the concept of "contrastive loss". It minimizes the disparity between reconstructed real faces and their original counterparts while simultaneously accentuating the divergence between reconstructed fake faces and their originals. To further enhance the bias between real and fake faces, a latent-space attention (LSA) module is designed, which utilizes the variation relationship of latent features to capture forged details in the reconstruction process.
These features are multiplied by the bias images to further expand deepfake bias clues.

Figure 1: Overview of the proposed BENet. The input images \(x\) are reconstructed by an auto-encoder to obtain \(x_{o}=D(E(x))\). The bias images \(\hat{x}\) are obtained by subtracting the reconstructed images \(x_{o}\) from the input images \(x\). The auto-encoder magnifies the forgery clues and expands the bias of face forgery information, which contributes to detecting deepfakes. The latent-space attention (LSA) module fuses the latent features of the auto-encoder. The fused features are multiplied with the bias images, and the results are classified into real or fake by a multi-layer perceptron (MLP). Through contrastive learning, BENet learns concentrated feature distributions for real faces and distinguishes them from fake faces. Through a cross-domain detector, BENet corrects the classification results to defend against unknown attacks.

Bias expansion effectively amplifies the forged clues of fake faces. Based on these clues, BENet can fully exploit the difference between real and fake faces, ultimately leading to a robust face forgery detection mechanism. Due to the distinct and concentrated distribution of real faces, compared to the wider distribution observed for fake faces, BENet incorporates a cross-domain detector. The primary objective of this detector is to assess the conformity of a given sample with the known distribution. Samples from an unknown distribution cannot be real faces and are therefore classified as fake. Through this correction of the classification results, BENet can defend against unknown cross-domain deepfakes.

### 3.2 Deepfake bias expansion

BENet employs an auto-encoder to obtain restored images \(x_{o}\), which amplify the deepfake clues of the input images \(x\). The restored images \(x_{o}\) are defined as: \[x_{o}=AE(x)\] where \(AE(\cdot)\) represents the reconstruction process of the auto-encoder. Then, the bias images \(\hat{x}\) are calculated by subtraction: \[\hat{x}=|x-x_{o}|\] The bias images are the difference between the input images and the reconstructed images, indicating deepfake clues. The purpose of BENet is to expand the deepfake bias while keeping the reconstruction of real faces invariant. This is consistent with the idea of contrastive loss [26]. Therefore, we define the bias expansion loss \(\mathcal{L}_{be}\) as follows: \[\mathcal{L}_{be}=\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{3}\] Here, \[\mathcal{L}_{1}=\frac{1}{N}\sum_{i}^{N}(1-y_{i})|\hat{x}_{i}|_{2}^{2}\] \[\mathcal{L}_{2}=\frac{1}{N}\sum_{i}^{N}y_{i}\max(m-|\hat{x}_{i}|_{2},0)^{2}\] \[\mathcal{L}_{3}=\frac{1}{N}\sum_{i}^{N}\frac{-1}{M}\sum_{j\neq i,y_{i}=y_{j}}^{ M}\log\frac{\exp(\hat{x}_{i}\cdot\hat{x}_{j})}{\sum_{k,k\neq i}^{N}\exp(\hat{x}_{i} \cdot\hat{x}_{k})}\] where \(N\) is the number of samples in a batch, \(y_{i}\) is the label of input image \(x_{i}\) (\(0\) for real and \(1\) for fake faces), \(M\) is the number of samples with \(y_{j}=y_{i}\) in the batch, and \(m\) is a margin parameter imposing that the distance between a reconstructed fake face and its original be larger than \(m\). Through the bias expansion loss, BENet can adaptively enhance the deepfake clues of fake faces. The core aspect of \(\mathcal{L}_{be}\) is to encourage the reconstructed real faces to closely align with their original instances. This is achieved by minimizing the squared bias \(|\hat{x}_{i}|_{2}^{2}\) of real face samples in \(\mathcal{L}_{1}\). The second term \(\mathcal{L}_{2}\) of \(\mathcal{L}_{be}\) enhances the differences between reconstructed fake faces and their original counterparts. This is achieved through the hinge term in \(\mathcal{L}_{2}\), which pushes the bias norm \(|\hat{x}_{i}|_{2}\) of fake face samples beyond the margin \(m\). Furthermore, \(\mathcal{L}_{3}\) increases the similarity between bias images of samples from the same class, further separating real from fake faces. Through this objective, BENet becomes highly sensitive to the slightest inconsistencies introduced by face forgery, enabling it to effectively detect fake faces.
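As a concrete reference for the three loss terms above, the following PyTorch sketch implements one plausible reading of \(\mathcal{L}_{be}\). The paper does not publish reference code, so the tensor shapes, the margin value, the \(\ell_{2}\)-normalization inside the third term, and the batch reductions are assumptions of this illustration.

```python
import torch
import torch.nn.functional as F

def bias_expansion_loss(bias, labels, margin=1.0):
    """One reading of L_be = L1 + L2 + L3. `bias` holds the flattened
    bias images x_hat = |x - x_o| with shape (N, D); `labels` are 0 for
    real and 1 for fake faces."""
    d = bias.norm(dim=1)                            # |x_hat_i|_2 per sample
    real = (labels == 0).float()
    fake = (labels == 1).float()
    l1 = (real * d.pow(2)).mean()                   # keep real faces invariant
    l2 = (fake * F.relu(margin - d).pow(2)).mean()  # push fake bias past m
    # L3: pull together bias images that share a label. The paper writes
    # raw dot products x_hat_i . x_hat_j; l2-normalizing first is our
    # stabilizing assumption.
    z = F.normalize(bias, dim=1)
    sim = z @ z.t()
    n = bias.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=bias.device)
    log_prob = sim.masked_fill(eye, float("-inf"))
    log_prob = log_prob - torch.logsumexp(log_prob, dim=1, keepdim=True)
    same = (labels[:, None] == labels[None, :]) & ~eye
    pos = log_prob.masked_fill(~same, 0.0)          # keep positive pairs only
    l3 = -pos.sum(dim=1) / same.sum(dim=1).clamp(min=1)
    return l1 + l2 + l3.mean()
```

During training this term would be combined with the classification cross-entropy as \(\lambda\mathcal{L}_{c}+(1-\lambda)\mathcal{L}_{be}\) (see Section 3.4).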
### 3.3 Latent-space attention

BENet enhances the bias against fake images at different scales of the reconstruction process of the auto-encoder. This is achieved through a latent-space attention (LSA) module. The reconstruction process of an auto-encoder includes two stages, namely encoding and decoding. Let \(z\) represent the latent-space features in the middle. The calculation of the auto-encoder is rewritten as: \[z=E(x),\qquad x_{o}=D(z)\] where \(E(\cdot)\) and \(D(\cdot)\) represent the encoding and decoding processes of the auto-encoder, respectively. The latent-space features of the encoder at different scales are \(z_{0}\), \(z_{1}\), \(z_{2}\), \(\cdots\), while the corresponding latent-space features of the decoder are \(z_{0}^{\prime}\), \(z_{1}^{\prime}\), \(z_{2}^{\prime}\), \(\cdots\). In the LSA module, the latent-space feature maps of the encoder and decoder at multiple scales are first downsampled to the size of \(z\) through global average pooling (GAP). This application of GAP operators plays a pivotal role in integrating global spatial information across the multi-scale latent-space features. The calculation of latent-space attention maps is denoted \(\text{LSA}(\cdot,\cdot)\), and we compute these maps on each level of feature maps. The final latent-space attention maps, denoted \(s\), are obtained by summing the latent-space attention maps from multiple scales with the middle latent-space features \(z\), as shown in Fig. 2 (a): \[s=\sum_{k=0}^{n}\text{LSA}[\text{GAP}(z_{k}),\text{GAP}(z_{k}^{\prime})]+z= \sum_{k=0}^{n}s_{k}+z\] Finally, the latent-space attention maps \(s\) are multiplied by the bias images \(\hat{x}\) to obtain the feature maps \(\nu\), which are then fed into the classifier for face forgery detection: \[\nu=s\times\hat{x}\]

Figure 2: The LSA module. (a) The operational process of the LSA module. (b) The computation of latent-space attention maps.

The calculation of latent-space attention maps utilizes the variation relationship of latent-space features in the encoder and decoder to capture forged details. To achieve this, we define \(\text{GAP}(z_{k})\) as queries and \(\text{GAP}(z_{k}^{\prime})\) as keys and values, representing the encoded and decoded latent-space features at the \(k\)-th scale. Firstly, as deepfake clues primarily stem from inconsistent information generated by the model, we adopt a strategy that consolidates this inconsistency within local regions: the queries, keys, and values are divided into multiple \(P\times P\) patches, as illustrated in Fig. 2 (b). This approach additionally alleviates the computational complexity of the LSA module. A value \(\beta\in\mathbb{R}\) of the latent-space attention maps \(s_{k}\) is calculated from the value \(\alpha\in\mathbb{R}\) at the corresponding position in the queries and the corresponding patch \(Z\) of \(\text{GAP}(z_{k}^{\prime})\). Firstly, the value \(\alpha\) is multiplied by the patch \(Z\) from the key matrix, resulting in a matrix of size \(P\times P\). Subsequently, the values within this matrix are activated through the softmax function. The softmax results then serve as weights in a weighted sum over the patch \(Z\) originating from the value matrix, and the result of this weighted summation is assigned to \(\beta\), the value within the latent-space attention maps: \[\beta=\text{softmax}(\alpha Z)\cdot Z\] Through the calculation of latent-space attention, BENet attends to the differences in latent-space feature maps between the encoder and decoder. Simultaneously, it captures forged information from the pixels of the patches to further enhance the deepfake clues in the bias images.
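To make the patch-wise rule \(\beta=\text{softmax}(\alpha Z)\cdot Z\) concrete, here is a minimal PyTorch sketch of one LSA level. The tensor layout, the patch size, and the assumption that keys and values share the map \(\text{GAP}(z_{k}^{\prime})\) are illustrative choices of this sketch, not published implementation details.

```python
import torch
import torch.nn.functional as F

def lsa_level(q, kv, P=4):
    """One LSA level: queries q = GAP(z_k) from the encoder, keys and
    values kv = GAP(z_k') from the decoder, both (B, C, H, W) with H, W
    divisible by P. Every query scalar alpha attends over the P x P
    patch Z at the same location: beta = softmax(alpha * Z) . Z."""
    B, C, H, W = q.shape
    h, w = H // P, W // P

    def patches(x):  # (B, C, H, W) -> (B, C, h, w, P*P)
        x = x.reshape(B, C, h, P, w, P).permute(0, 1, 2, 4, 3, 5)
        return x.reshape(B, C, h, w, P * P)

    A, Z = patches(q), patches(kv)
    wgt = F.softmax(A[..., :, None] * Z[..., None, :], dim=-1)
    beta = (wgt * Z[..., None, :]).sum(-1)          # one beta per query scalar
    beta = beta.reshape(B, C, h, w, P, P).permute(0, 1, 2, 4, 3, 5)
    return beta.reshape(B, C, H, W)

# Multi-scale fusion (equation above): downsample each scale to the size
# of the bottleneck z with GAP, then sum and add z, e.g.
# gap = lambda f: F.adaptive_avg_pool2d(f, z.shape[-2:])
# s = sum(lsa_level(gap(zk), gap(zkp)) for zk, zkp in zip(enc, dec)) + z
```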
### 3.4 Total Loss

In the context of face forgery detection, the bias expansion loss \(\mathcal{L}_{be}\) plays a pivotal role in enhancing the ability of BENet to discriminate between real and fake faces: it guides BENet in grasping the inherent distribution of real face features and promotes robustness against forged faces. Its fundamental objective lies in narrowing the gap between reconstructed real faces and their original counterparts, while simultaneously magnifying the distance between reconstructed fake faces and their originals. The output of the classifier is optimized by the standard cross-entropy loss: \[\mathcal{L}_{c}=-\frac{1}{N}\sum_{i=1}^{N}[y_{i}\log p_{i}+(1-y_{i})\log(1-p_{i})]\] where \(N\) is the number of samples in a batch, \(y_{i}\) is the label, and \(p_{i}\) is the predicted probability. Combining the bias expansion loss and the cross-entropy loss, the total loss of BENet is defined as: \[\mathcal{L}=\lambda\mathcal{L}_{c}+(1-\lambda)\mathcal{L}_{be}\] where \(\mathcal{L}_{c}\) denotes the face forgery detection objective and \(\lambda\) is a balancing hyperparameter.

### 3.5 Cross-domain Detector

Data-driven face forgery detection networks do not inherently resist cross-domain attacks, despite the notable distinctions in feature distribution between cross-domain fake faces and real faces. Given the stability of the distribution of real face features and the diversity of deepfake faces, BENet includes a cross-domain detector that flags samples deviating significantly from the known distribution. The cross-domain detector corrects the classification results during prediction to guard against cross-domain fake faces: it categorizes such instances as potentially unknown cross-domain fake faces using a threshold \(\tau\) on the bias. The threshold is obtained by requiring 95% of the training data to be recognized as known. Details of the prediction procedure for face forgery detection are described in Alg. 1.

```
Require: face image \(\mathbf{x}\)
Require: bias threshold \(\tau\)
1: reconstruct image \(\mathbf{x}_{o}=\text{AE}(\mathbf{x})\)
2: obtain bias image \(\hat{\mathbf{x}}=|\mathbf{x}-\mathbf{x}_{o}|\)
3: obtain final latent-space attention maps \(\mathbf{s}\) from the LSA module
4: obtain feature maps \(\mathbf{v}=\mathbf{s}\times\hat{\mathbf{x}}\)
5: face forgery detection result \(\mathbf{c}=\text{Classifier}(\mathbf{v})\)
6: if \(|\hat{\mathbf{x}}|_{1}>\tau\) then
7:   predict input face \(\mathbf{x}\) as fake
8: else
9:   predict input face \(\mathbf{x}\) as a known sample with label \(\mathbf{c}\)
10: endif
```
**Algorithm 1** Prediction procedure for face forgery detection
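Alg. 1 above translates almost line by line into code. In the sketch below, `autoencoder`, `lsa` (returning the fused attention maps \(s\) for the current input), `classifier`, and the calibrated threshold `tau` are stand-ins for the trained BENet components; the flattening and label conventions are assumptions of this illustration.

```python
import torch

@torch.no_grad()
def predict(x, autoencoder, lsa, classifier, tau):
    """Sketch of Alg. 1: classify a face and override the label with
    'fake' when the L1 bias exceeds the calibrated threshold tau."""
    x_o = autoencoder(x)                        # step 1: reconstruct
    x_hat = (x - x_o).abs()                     # step 2: bias image
    v = lsa(x) * x_hat                          # steps 3-4: feature maps
    label = classifier(v).argmax(dim=1)         # step 5: 0 = real, 1 = fake
    # steps 6-10: cross-domain override; a large L1 bias means "unknown",
    # which is treated as fake.
    unknown = x_hat.flatten(1).sum(dim=1) > tau
    return torch.where(unknown, torch.ones_like(label), label)

# tau calibration (Section 3.5): the 95th percentile of |x_hat|_1 over
# the training data, so that 95% of training samples count as known, e.g.
# tau = torch.quantile(train_bias_l1, 0.95)
```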
## 4 Experiments

### 4.1 Experimental setup

**Dataset.** We evaluate our proposed method and existing approaches on Celeb-DF [27], FaceForensics\(++\) (FF\(++\)) [28], the Diverse Fake Face Dataset (DFFD) [29], and the DeepFake Detection Challenge dataset (DFDC) [30]. The Celeb-DF dataset contains 590 real videos and 5,639 deepfake videos created using the same synthesis algorithm. The FF\(++\) dataset has 1,000 real videos from YouTube and 4,000 corresponding deepfake videos that are generated with 4 face manipulation methods: Deepfakes (DF) [31], FaceSwap (FS) [32], Face2Face (F2F) [33], and NeuralTextures (NT) [34]. DFFD adopts images from the FFHQ [35] and CelebA [36] datasets as its real source subset and synthesizes forged images with various deepfake generation methods. DFDC is part of the DeepFake Detection Challenge and has 1,131 original videos and 4,113 deepfake videos.

**Evaluation Metrics.** To evaluate our proposed method, we report the metrics most commonly used in the related state of the art, including accuracy (Acc) and area under the receiver operating characteristic curve (AUC). We also report the attack presentation classification error rate (APCER) and the bona fide presentation classification error rate (BPCER).

**Implementation Details.** During the experiments, we utilize dlib, a toolkit for face recognition, to detect the key points of the face. We then crop and align the face according to the key points. The resulting facial images are resized to dimensions of 224\(\times\)224 pixels, serving as input for BENet. In terms of data augmentation, our methodology primarily incorporates random erasure and horizontal flipping. We train the network with a batch size of 8, using the Adam optimizer with an initial learning rate of 2e-4 and a weight decay of 1e-5. Furthermore, for the objective formulation of BENet, the parameter \(\lambda\) is empirically set to 0.5.

### 4.2 Ablation study

In this subsection, we evaluate the effectiveness and contributions of the proposed components integrated within BENet. Specifically, we explore three different configurations for the auto-encoder component and two alternatives for supervising the reconstruction results. The three configurations for the auto-encoder component are: 1- absence of the auto-encoder for reconstruction (w/o AE); 2- utilization of an auto-encoder without the computation of bias images (AE w/o Bias); 3- incorporation of an auto-encoder along with bias image calculation (AE). For supervising the reconstruction results, we consider two options: 1- sole reliance on the reconstruction loss for real faces (RL); 2- full integration of the bias expansion loss (BE). Notably, CD and LSA denote the cross-domain detector and the LSA module, respectively. Combining these configurations, we generate a total of seven distinct ablated configurations.
The quantitative results on FF\(++\) are listed in Table 1 and Table 2. #### 4.2.1 Effectiveness of bias calculation Compared to the configuration without the auto-encoder, using an auto-encoder to reconstruct input face images yields a notable improvement of 1.95% in Acc and 1.76% in AUC. By further calculating the bias images, Acc and AUC increase by 1.08% and 1.87%, respectively. This indicates that enhancing the deepfake clues through reconstruction of the input face image is reliable, and that the calculation of bias images makes this information more explicit for network optimization. #### 4.2.2 Effectiveness of bias expansion loss As mentioned above, the bias expansion loss plays a pivotal role in guiding BENet's learning process to discern the bias between the reconstructions of real and fake faces. It achieves this by minimizing the disparity between reconstructed real faces and their real counterparts while concurrently accentuating the distinctions between fake faces and their originals. Therefore, our definition of the bias expansion loss includes two parts: an invariant reconstruction term for real faces and a bias expansion term for fake faces. Compared with the configuration that uses only the reconstruction loss for real faces (RL), the complete bias expansion loss (BE), which integrates both terms, leads to a substantial increase in both Acc and AUC, especially on the 4 face manipulation methods from FF\(++\). #### 4.2.3 Effectiveness of latent-space attention When the LSA module is removed, the model exhibits reduced sensitivity to the inconsistencies introduced by forged faces within the latent space. As illustrated in Table 1, adding the LSA module reduces APCER and BPCER by 0.96% and 2.56%, respectively. Due to the amplification of forged clues by the LSA module, the model is more sensitive to forgery traces. The significant decrease in BPCER indicates that far fewer real faces are misclassified as fake. #### 4.2.4 Effectiveness of cross-domain detector We also examine the role of the cross-domain detector in BENet. When it is omitted, there is a significant decrease in the model's ability to handle unknown forgeries, particularly in cross-domain scenarios, as demonstrated in Table 2. This confirms that the cross-domain detector is instrumental in identifying unknown cross-domain fake faces. \begin{table} \begin{tabular}{c c c c c} \hline \hline Methods & Acc & AUC & APCER & BPCER \\ \hline w/o AE & 0.8243 & 0.8667 & 0.3461 & 0.3567 \\ AE w/o Bias & 0.8438 & 0.8843 & 0.3454 & 0.3194 \\ AE & 0.8546 & 0.9030 & 0.3012 & 0.3204 \\ AE+LSA & 0.8734 & 0.9207 & 0.2916 & 0.2948 \\ AE+LSA+RL & 0.8967 & 0.9479 & 0.1623 & 0.2109 \\ AE+LSA+BE & 0.9341 & 0.9633 & 0.1311 & 0.1325 \\ AE+LSA+CD & 0.9225 & 0.9671 & 0.1036 & 0.1664 \\ Full BENet & **0.9683** & **0.9872** & **0.0642** & **0.0626** \\ \hline \hline \end{tabular} \end{table} Table 1: Ablation study on FF\(++\).
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Train} & \multirow{2}{*}{Methods} & \multicolumn{4}{c}{Test AUC} \\ & & DF & FS & F2F & NT \\ \hline \multirow{8}{*}{DF} & w/o AE & 0.8648 & 0.5682 & 0.5474 & 0.5022 \\ & AE w/o Bias & 0.8770 & 0.5738 & 0.5533 & 0.5128 \\ & AE & 0.8854 & 0.5832 & 0.5646 & 0.5249 \\ & AE+LSA & 0.9062 & 0.5944 & 0.5835 & 0.5524 \\ & AE+LSA+RL & 0.9225 & 0.6692 & 0.6528 & 0.6293 \\ & AE+LSA+BE & 0.9643 & 0.7324 & 0.7291 & 0.6836 \\ & AE+LSA+CD & 0.9574 & 0.7528 & 0.7348 & 0.6945 \\ \cline{2-6} & Full BENet & **0.9986** & **0.8075** & **0.7842** & **0.7548** \\ \hline FS & w/o AE & 0.6328 & 0.8579 & 0.5783 & 0.5633 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on the 4 face manipulation methods from FF\(++\).

To balance the contributions of \(\mathcal{L}_{c}\) and \(\mathcal{L}_{be}\) in the total loss function, we conducted experiments with different values of the hyperparameter \(\lambda\), as shown in Table 3. The values of \(\lambda\) ranged from 0.1 to 1.0, in increments of 0.1. We observed that BENet achieves its best performance when \(\lambda\) is set to 0.5. In this configuration, the model effectively balances the cross-entropy loss and the bias expansion loss, allowing it to maintain high Acc and AUC. Indeed, as \(\lambda\) departs from the optimal value of 0.5, we observed a trade-off in model performance. When \(\lambda<0.5\), the model tends to prioritize bias expansion, resulting in more aggressive detection of forgeries but also an increased risk of false positives. Conversely, when \(\lambda>0.5\), BENet leans heavily on the cross-entropy loss, which makes it more conservative in detecting forgeries. Striking the right balance with \(\lambda\) at 0.5 is crucial to achieving the desired level of accuracy and robustness in the face forgery detection task.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \(\lambda\) & Acc & AUC & APCER & BPCER \\ \hline 0.1 & 0.9205 & 0.9564 & 0.1748 & 0.1432 \\ 0.2 & 0.9364 & 0.9633 & 0.1203 & 0.1341 \\ 0.3 & 0.9521 & 0.9670 & 0.0876 & 0.1040 \\ 0.4 & 0.9634 & 0.9746 & 0.0698 & 0.0766 \\ 0.5 & **0.9683** & **0.9872** & **0.0642** & **0.0626** \\ 0.6 & 0.9627 & 0.9821 & 0.0645 & 0.0847 \\ 0.7 & 0.9585 & 0.9801 & 0.0765 & 0.0895 \\ 0.8 & 0.9513 & 0.9752 & 0.1041 & 0.0907 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on the hyperparameter \(\lambda\) of the total loss.

### 4.3 Comparison with state-of-the-art methods

To evaluate the effectiveness and robustness of BENet for face forgery detection, we conduct comprehensive comparison experiments against several state-of-the-art methods, including F\({}^{3}\)-Net [38], MultiAtt [37], PEL [13], and RECCE [10].

#### 4.3.1 Intra-database

Table 4 illustrates the intra-database performance of BENet, in comparison to state-of-the-art methods, across various datasets. BENet achieves Acc/AUC scores of 0.9923/0.9998, 0.9683/0.9872, 0.9896/0.9993, and 0.9043/0.9638 on Celeb-DF, FF\(++\), DFFD, and DFDC, respectively. It maintains low APCER and BPCER, further highlighting its effectiveness.

#### 4.3.2 Cross-database

In this section, we present a comprehensive cross-database evaluation of our proposed BENet, comparing it to existing state-of-the-art methods, as shown in Table 5.
Firstly, we utilize FF\(++\) as the training database and test the performance of BENet on Celeb-DF, DFFD, and DFDC, respectively.

\begin{table} \begin{tabular}{c c c c c c} \hline Dataset & Methods & Acc & AUC & APCER & BPCER \\ \hline \multirow{5}{*}{Celeb-DF} & F\({}^{3}\)-Net [38] & 0.9397 & 0.9570 & 0.1139 & 0.1273 \\ & MultiAtt [37] & 0.9792 & 0.9994 & 0.0462 & 0.0370 \\ & PEL [13] & 0.9852 & 0.9963 & 0.0306 & 0.0286 \\ & RECCE [10] & 0.9859 & 0.9994 & 0.0213 & 0.0351 \\ \cline{2-6} & BENet (ours) & **0.9923** & **0.9998** & **0.0142** & **0.0166** \\ \hline \multirow{5}{*}{FF\(++\)} & F\({}^{3}\)-Net [38] & 0.9595 & 0.9893 & 0.0874 & 0.0746 \\ & MultiAtt [37] & 0.9314 & 0.9484 & 0.1368 & 0.1376 \\ & PEL [13] & 0.9407 & 0.9680 & 0.1173 & 0.1199 \\ & RECCE [10] & 0.9404 & 0.9717 & 0.1166 & 0.1218 \\ \cline{2-6} & BENet (ours) & **0.9683** & **0.9872** & **0.0642** & **0.0626** \\ \hline \multirow{5}{*}{DFFD} & F\({}^{3}\)-Net [38] & 0.9584 & 0.9751 & 0.0810 & 0.0854 \\ & MultiAtt [37] & 0.9726 & 0.9912 & 0.0507 & 0.0589 \\ & PEL [13] & 0.9758 & 0.9926 & 0.0432 & 0.0536 \\ & RECCE [10] & 0.9763 & 0.9986 & 0.0565 & 0.0382 \\ \cline{2-6} & BENet (ours) & **0.9896** & **0.9993** & **0.0195** & **0.0221** \\ \hline \multirow{5}{*}{DFDC} & F\({}^{3}\)-Net [38] & 0.7617 & 0.8839 & 0.4685 & 0.4847 \\ & MultiAtt [37] & 0.7681 & 0.9032 & 0.4874 & 0.4402 \\ & PEL [13] & 0.8037 & 0.9106 & 0.3897 & 0.3955 \\ & RECCE [10] & 0.8120 & 0.9133 & 0.3752 & 0.3768 \\ \cline{2-6} & BENet (ours) & **0.9043** & **0.9638** & **0.1954** & **0.1874** \\ \hline \end{tabular} \end{table} Table 4: Intra-database evaluation on Celeb-DF, FF\(++\), DFFD, and DFDC with other state-of-the-art methods.

BENet demonstrates its robustness in cross-database testing, achieving impressive AUC scores of 0.7786, 0.7659, and 0.7875 on Celeb-DF, DFFD, and DFDC, respectively. These results notably outperform the other methods, highlighting the effectiveness of BENet in handling cross-database scenarios. Table 6 provides valuable insights into the robustness of the face forgery detection methods when trained on one manipulation method and subsequently tested on another. BENet consistently outperforms the other methods across all face manipulation methods. It achieves the highest AUC scores for each manipulation type, indicating its superior ability to detect forgeries, even when the test dataset differs from the training dataset in terms of manipulation method.
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Train} & \multirow{2}{*}{Methods} & \multicolumn{4}{c}{Test AUC} \\ & & DF & FS & F2F & NT \\ \hline \multirow{5}{*}{DF} & F\({}^{3}\)-Net [38] & 0.9974 & 0.7310 & 0.7238 & 0.7039 \\ & MultiAtt [37] & 0.9951 & 0.6733 & 0.6641 & 0.6601 \\ & PEL [13] & 0.9943 & 0.7048 & 0.6832 & 0.6715 \\ & RECCE [10] & 0.9965 & 0.7429 & 0.7066 & 0.6734 \\ \cline{2-6} & BENet (ours) & **0.9986** & **0.8075** & **0.7842** & **0.7548** \\ \hline \multirow{5}{*}{FS} & F\({}^{3}\)-Net [38] & 0.8392 & 0.9897 & 0.6289 & 0.5628 \\ & MultiAtt [37] & 0.8233 & 0.9882 & 0.6165 & 0.5479 \\ & PEL [13] & 0.8201 & 0.9787 & 0.6219 & 0.5027 \\ & RECCE [10] & 0.8239 & 0.9882 & 0.6444 & 0.5670 \\ \cline{2-6} & BENet (ours) & **0.8644** & **0.9923** & **0.7628** & **0.7593** \\ \hline F2F & F\({}^{3}\)-Net [38] & 0.7528 & 0.6839 & 0.9838 & 0.7239 \\ \hline \hline \end{tabular} \end{table} Table 6: Cross-database evaluation on the 4 face manipulation methods from FF\(++\).

\begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & Methods & AUC & APCER & BPCER \\ \hline \multirow{5}{*}{Celeb-DF} & F\({}^{3}\)-Net [38] & 0.6151 & 0.4297 & 0.3864 \\ & MultiAtt [37] & 0.6702 & 0.3753 & 0.3425 \\ & PEL [13] & 0.6918 & 0.3428 & 0.3563 \\ & RECCE [10] & 0.6871 & 0.3622 & 0.3468 \\ \cline{2-5} & BENet (ours) & **0.7786** & **0.2528** & **0.2442** \\ \hline \multirow{5}{*}{DFFD} & F\({}^{3}\)-Net [38] & 0.6320 & 0.4239 & 0.4103 \\ & MultiAtt [37] & 0.6714 & 0.3622 & 0.3654 \\ & PEL [13] & 0.6683 & 0.3608 & 0.3820 \\ & RECCE [10] & 0.6896 & 0.3602 & 0.3455 \\ \cline{2-5} & BENet (ours) & **0.7659** & **0.2471** & **0.2520** \\ \hline \multirow{5}{*}{DFDC} & F\({}^{3}\)-Net [38] & 0.6460 & 0.4043 & 0.3902 \\ & MultiAtt [37] & 0.6801 & 0.3635 & 0.3456 \\ & PEL [13] & 0.6331 & 0.4231 & 0.4166 \\ & RECCE [10] & 0.6906 & 0.3354 & 0.3452 \\ \cline{2-5} & BENet (ours) & **0.7875** & **0.2343** & **0.2476** \\ \hline \hline \end{tabular} \end{table} Table 5: Cross-database evaluation from FF\(++\) to Celeb-DF, DFFD, and DFDC with other state-of-the-art methods.

## 5 Conclusion

In this paper, we proposed BENet, a Cross-Domain Robust Bias Expansion Network for face forgery detection. It leverages an auto-encoder architecture to reconstruct input faces, which amplifies the bias arising from deepfake clues for accurate forgery detection. To achieve this, we utilized a bias expansion loss to minimize the gap between reconstructed real faces and their original counterparts, while simultaneously enhancing the bias between reconstructed fake faces and their originals. Additionally, BENet incorporates an LSA module designed to capture variations in latent features, thereby emphasizing inconsistencies in the information extracted from forged faces. This contributes to the network's ability to discern potential forgeries. Furthermore, to correct detection results for unknown cross-domain deepfakes, BENet integrates a cross-domain detector. Extensive experimental evaluations validate the superior performance of BENet when compared to state-of-the-art methods, underscoring its efficacy in the field of face forgery detection.
2302.12649
Fiber-optic detection of snow avalanches using telecommunication infrastructure
We demonstrate the detectability of snow avalanches using Distributed Acoustic Sensing (DAS) with existing fiber-optic telecommunication cables. For this, during winter 2021/2022, we interrogated a 10 km long cable closely following the avalanche-prone Flüelapass road in the Swiss Alps. In addition to other signals like traffic and earthquakes, the DAS data contain clear recordings of numerous snow avalanches, even though most of them do not reach the cable. Here we present two examples of snow avalanche recordings that could be verified photographically. Our results open new perspectives for cost-effective, near-real-time avalanche monitoring over long distances using pre-installed fiber-optic infrastructure.
Pascal Edme, Patrick Paitz, Fabian Walter, Alec van Herwijnen, Andreas Fichtner
2023-02-23T16:35:24Z
http://arxiv.org/abs/2302.12649v1
# Fiber-optic detection of snow avalanches using telecommunication infrastructure ###### Abstract We demonstrate the detectability of snow avalanches using Distributed Acoustic Sensing (DAS) with existing fiber-optic telecommunication cables. For this, during winter 2021/2022, we interrogated a \(\sim\)10 km long cable closely following the avalanche-prone Flüelapass road in the Swiss Alps. In addition to other signals like traffic and earthquakes, the DAS data contain clear recordings of numerous snow avalanches, even though most of them do not reach the cable. Here we present two examples of snow avalanche recordings that could be verified photographically. Our results open new perspectives for cost-effective, near-real-time avalanche monitoring over long distances using pre-installed fiber-optic infrastructure. ## 1 Introduction As a result of climate change and the related increase of extreme weather events, there is a growing hazard from mass movements to both population and critical infrastructure worldwide. Landslides, avalanches and flash floods frequently pose a significant risk to the population, with thousands of fatalities each year and billions of dollars in financial damage (Dilley, 2005; Petley, 2012; Froude and Petley, 2018; Emberson et al., 2020). Early detection and mitigation require extensive, large-scale monitoring. Operational monitoring systems rely on camera and radar observations, tripwires, as well as infrasound and seismic sensors (Allstadt et al., 2018; Hürlimann et al., 2019). The latter two have been shown to detect and identify mass movements (Schimmel et al., 2018), and recent developments in machine learning have further improved their detection and warning capabilities for debris flows (Chmiel et al., 2021). However, limitations in both resolution and spatial coverage remain for all currently available systems. Emerging fiber-optic sensing techniques may help to overcome these issues. Distributed Acoustic Sensing (DAS), in particular, effectively transforms a standard telecommunication optical fiber into a distributed deformation sensor with a measurement point spacing as small as 25 cm. The DAS system consists of an Interrogation Unit (IU) that emits and receives laser pulses. Hence, a single instrument, together with the fiber, forms a distributed sensing antenna with thousands of measurement points along a fiber of up to several tens of kilometers in length. DAS makes installations and real-time monitoring with such a large number of measurement points logistically feasible and cost-effective (Hartog, 2017; Lindsey and Martin, 2021). The DAS instrument response has been studied over a wide range of frequencies (Lindsey et al., 2020; Paitz et al., 2021), and the robustness of current IUs has opened new opportunities in cryosphere research (Walter et al., 2020; Klaasen et al., 2021; Hudson et al., 2021; Fichtner et al., 2022; Booth et al., 2020; Fichtner et al., 2023). Specifically in the context of snow avalanches, Paitz et al. (2022) demonstrated that optical fibers can be used to measure ground deformation induced by avalanches. In their study, the avalanches propagated along the fiber at a dedicated test site, which is not representative of realistic monitoring scenarios. Here we present results from the interrogation of a telecommunication fiber along a mountain pass in Switzerland during winter 2021/22.
The cable crosses several avalanche-prone sections of the pass road, providing a commonly encountered situation that demands monitoring and warning solutions. ## 2 Experimental setup From 23 December 2021 to 9 May 2022 we interrogated a \(\sim\)10 km long existing fiber-optic cable along the Flüelapass, a high mountain pass road in the Swiss Alps, with a Silixa iDAS\({}^{TM}\) 2.0. Since the location is well known to suffer from abundant snow avalanches, the road is closed during the avalanche season for about three months per year. The elevation of the cable ranges from 1414 to 2181 m above sea level. The geometry and topography of the cable, as well as a photograph of the upper \(\sim\)3 km of the cable, are visualized in Fig. 1. We located the cable with numerous tap tests, leading to an estimated accuracy of the channel mapping of \(\sim\)20 m. The interrogator was located in the basement of a Swisscom fiber distribution hub in the village of Susch, and the first \(\sim\)6 km of the fiber closely follow the mountain pass road. For the upper \(\sim\)4 km, the cable follows the south side of the Susasca River in the valley, with a maximum distance of \(\sim\)350 m from the road. We recorded the raw DAS data with a sampling frequency of 100 Hz and 2 m channel spacing, resulting in \(\sim\)5000 channels. To validate suspected avalanche records with ground truth observations, we performed several field visits and used a drone to collect photographic evidence, as shown, for example, in Fig. 2a. In addition, we installed a camera near the end of the cable, viewing downvalley towards the north-facing slopes in the southern part of the valley (Fig. 3a). ## 3 Examples During the experiment, we recorded a wide range of signals along the cable. They include signals from hikers and animals, earthquakes, and numerous avalanche recordings, two examples of which are shown below. During the beginning and end phases of the experiment, the mountain pass was open, and traffic noise is also among the recorded signals. Figures 2 and 3 show time windows during which avalanches were suspected. The background noise level is around 20 nm/m/s. Spatially coherent signals at the beginning of the cable (distances \(<\) 1.2 km) correspond to vehicle traffic within the village of Susch, with velocities slower than 40 km/h and induced strain-rate fluctuations not exceeding 1 \(\mu\)m/m/s. With drone images we could confirm a large avalanche traversing the pass road at around 8 km distance along the fiber on 19 March 2022. The event is shown in Fig. 2 and can be observed over about 1.5 km distance along the fiber, even though its visible physical extent is \(<\) 200 m. The snow mass stopped at the very bottom of the valley, on the north side of the river, and therefore did not propagate over the cable situated slightly further up on the south side. The avalanche produces a rather complex signal, including seismic arrivals with apparent velocities faster than 1500 m/s. We confirmed another avalanche event on 15 April 2022 by taking the difference of two subsequent images from the camera installed at the end of the fiber, as shown in Fig. 3. Despite happening on the south side of the river, this relatively small avalanche did not traverse the fiber but stopped just before it. The event is still visible over a few hundred meters in the DAS data, as a result of the seismic waves it generated. The induced strain-rate fluctuation reached a level of 3 \(\mu\)m/m/s, slightly lower than that of the larger event in Fig. 2.
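The study demonstrates detectability but does not prescribe an automated detector. As one illustration of how such strain-rate records could be screened for avalanche candidates, the sketch below applies a classical STA/LTA trigger per channel and flags time windows where many channels trigger simultaneously, exploiting the observation above that avalanches excite a long stretch of cable at once while vehicles trigger channels one after another. All window lengths, thresholds, and the synthetic input are assumptions of this example, not part of the study.

```python
import numpy as np

def sta_lta(trace, fs, sta_s=1.0, lta_s=30.0):
    """Causal STA/LTA ratio for one DAS channel (strain rate).
    Returns the ratio for samples where the long window is full."""
    e = trace.astype(float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))
    n_s, n_l = int(sta_s * fs), int(lta_s * fs)
    sta = (c[n_l:] - c[n_l - n_s:len(c) - n_s]) / n_s
    lta = (c[n_l:] - c[:len(c) - n_l]) / n_l
    return sta / (lta + 1e-20)

def detect_events(section, fs, ratio=5.0, min_channels=50):
    """Flag times at which many channels trigger together."""
    ratios = np.stack([sta_lta(tr, fs) for tr in section])
    return (ratios > ratio).sum(axis=0) >= min_channels

# Synthetic example: 200 channels, 60 s at 100 Hz, ~20 nm/m/s noise,
# with a broadband burst on channels 80-160 starting at t = 40 s.
fs, n_ch, n_t = 100, 200, 6000
rng = np.random.default_rng(0)
data = 20e-9 * rng.standard_normal((n_ch, n_t))
data[80:160, 4000:4500] += 1e-6 * rng.standard_normal((80, 500))
print(detect_events(data, fs).any())   # True: the burst is flagged
```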
The cable following the pass road is marked in red. Yellow pins are distance markings in km. Thick orange lines indicate the locations of the avalanche examples shown in Figs. 2 and 3, respectively. Source: Google Earth. b) Topographic elevation along the cable. Source: Google Earth. c) Drone image of the upper \(\sim\)3 km of the fiber-optic cable along the Flüelapass road where most snow avalanches occurred. Picture credit: Lars Gebraad. The event is still visible over a few hundred meters in the DAS data, as a result of the seismic waves it generated. The induced strain rate reached a level of 3 \(\mu\)m/m/s, slightly lower than that of the larger event of Fig. 2. Figure 2: Example of a larger snow avalanche that was clearly recorded along \(\sim\)1.5 km of the fiber-optic cable. a) Photograph of the snow avalanche traversing the Flüelapass road and reaching the Susasca river in the valley (opposite the cable location). b) DAS recording along the complete \(\sim\)10 km long fiber-optic cable. Recordings at \(<\)1 km are mostly cars on the Cantonal road through the Engadin valley, which is not affected by winter closure of the Flüelapass road. The snow avalanche recording is between \(\sim\)7.5 to 9 km distance. c) Zoom into the snow avalanche recording. Figure 3: Example of a smaller snow avalanche from 15 April 2022. a) Photographs taken before and after the occurrence of the snow avalanche (10 min interval). Shown in gray scale is the difference (contrast boosted) between the two pictures. b) DAS recording of the snow avalanche around 9 km distance along the fiber-optic cable. ## 4 Discussion and conclusions We demonstrated that DAS with existing telecommunication cables can be used to detect snow avalanches, even of moderate size and even when they do not reach the cable itself, as confirmed photographically. We believe that avalanche events can be discriminated from other signals such as traffic by considering, for example, their fast apparent velocities and rather complex nature compared to the slow, spatially consistent signals induced by vehicles. These initial results show that alpine natural hazard monitoring with fiber optics is possible, at least for snow avalanches and probably also for other mass movements such as rock falls. This cost-effective, DAS-based approach opens new perspectives for near-real-time and early warning applications, in particular for critical infrastructure monitoring such as roads, railways, dams, and tunnels. In many regions of interest, there is existing fiber-optic infrastructure that can be leveraged, thereby alleviating the need to install a large number of conventional instruments that are difficult to deploy and maintain. Furthermore, thanks to its high spatial and temporal sampling, DAS also provides information on traffic, which may assist in the estimation of potential damage caused by mass movements and can be crucial for first responders and rescue teams. ###### Acknowledgements. We gratefully acknowledge the support by Swisscom in the form of free access to the telecommunication cable along the Flüelapass road. The drone footage was collected by Lars Gebraad.
2308.07579
Connectivity of Markoff mod-p graphs and maximal divisors
Markoff mod-$p$ graphs are conjectured to be connected for all primes $p$. In this paper, we use results of Chen and Bourgain, Gamburd, and Sarnak to confirm the conjecture for all $p > 3.448\cdot10^{392}$. We also provide a method that quickly verifies connectivity for many primes below this bound. In our study of Markoff mod-$p$ graphs we introduce the notion of \emph{maximal divisors} of a number. We prove sharp asymptotic and explicit upper bounds on the number of maximal divisors, which ultimately improves the Markoff graph $p$-bound by roughly 140 orders of magnitude as compared with an approach using all divisors.
Jillian Eddy, Elena Fuchs, Matthew Litman, Daniel Martin, Nico Tripeny
2023-08-15T05:34:46Z
http://arxiv.org/abs/2308.07579v1
# Connectivity of Markoff mod-\(p\) graphs and maximal divisors ###### Abstract. Markoff mod-\(p\) graphs are conjectured to be connected for all primes \(p\). In this paper, we use results of Chen and Bourgain, Gamburd, and Sarnak to confirm the conjecture for all \(p>3.448\cdot 10^{392}\). We also provide a method that quickly verifies connectivity for many primes below this bound. In our study of Markoff mod-\(p\) graphs we introduce the notion of _maximal divisors_ of a number. We prove sharp asymptotic and explicit upper bounds on the number of maximal divisors, which ultimately improves the Markoff graph \(p\)-bound by roughly \(140\) orders of magnitude as compared with an approach using all divisors. ## 1. Introduction The _Markoff equation_ is given by \[x^{2}+y^{2}+z^{2}=xyz, \tag{1}\] and non-negative integer solutions \((a,b,c)\) to this equation are called _Markoff triples_. An integer that is a member of such a triple is called a _Markoff number_. Since their introduction by Andrey Markoff in [10], Markoff triples have arisen in many different contexts across the mathematical landscape. Recently, Bourgain-Gamburd-Sarnak have explored various arithmetic properties of Markoff triples (see [1]), proving that there are infinitely many composite Markoff numbers. A key ingredient in the proof of this fact is a combinatorial property that we describe below. Markoff triples can be realized as vertices of a _Markoff tree_ as follows (note that Markoff triples with negative entries can be realized in a nearly identical way, but we focus on the positive triples here for ease of exposition). Let \(R_{1},R_{2}\), and \(R_{3}\) be involutions acting on triples of numbers defined by \[R_{1}(a,b,c)=(bc-a,b,c),\;R_{2}(a,b,c)=(a,ac-b,c),\;R_{3}(a,b,c)=(a,b,ab-c) \tag{2}\] and note that each of these involutions sends a Markoff triple to another Markoff triple. In fact, all positive Markoff triples can be realized as some word in these involutions applied to the triple \((3,3,3)\). Figure 1. A branch of the Markoff tree generated by applying the involutions \(R_{1},R_{2},R_{3}\) to the fundamental solution \((3,3,3)\). In studying the arithmetic of Markoff numbers, it is natural to consider the solutions to (1) mod \(p\): understanding this set is crucial to sieving on the set of Markoff numbers and is behind Bourgain-Gamburd-Sarnak's result on composite Markoff numbers. More specifically, it is useful to consider the _Markoff mod-\(p\) graph_ \(\mathcal{G}_{p}\), whose vertices are the nonzero solutions to (1) over \(\mathbb{F}_{p}\) and whose edges connect triples that differ by one of the involutions in (2). Bourgain-Gamburd-Sarnak showed that \(\mathcal{G}_{p}\) contains a very large connected component \(\mathcal{C}_{p}\) (Theorem 1.1; see also Proposition 6.1 in the Appendix), and it is conjectured that \(\mathcal{G}_{p}\) is in fact connected for every prime \(p\). Our study of this conjecture leads us to the notion of _maximal divisors_: a divisor \(d\) of \(n\) is maximal with respect to \(x\) if \(d\leq x\) and no other divisor of \(n\) that is at most \(x\) is a multiple of \(d\) (Definition 3.1). Writing \(\mathcal{M}_{x}(n)\) for the set of such divisors, we prove in Theorem 1.3 that for any \(\varepsilon>0\) and \(\alpha\in[\varepsilon,1-\varepsilon]\), \[\log|\mathcal{M}_{n^{\alpha}}(n)|=\log\left(\frac{1}{\alpha^{\alpha}(1-\alpha)^{1-\alpha}}\right)\frac{\log n}{\log\log n}+O\left(\frac{\log n}{(\log\log n)^{2}}\right).\] As an immediate corollary, we also obtain a similar bound on the total number of divisors of \(n\) less than \(x\) (Corollary 3.19). These results can be viewed as generalizations of Wigert's theorem: \(\log\tau(n)=(\log 2+o(1))\log n/\log\log n\), where \(\tau(n)\) is the number of divisors of \(n\)[20]. (The constant \(\log 2\) is recovered by setting \(\alpha=1/2\) in Theorem 1.3.) In Section 4 we use our work on maximal divisors to prove our main result.
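To make the objects above concrete, here is a short Python sketch (ours, not from the paper; the function names are hypothetical) that grows the Markoff tree by applying the involutions in (2) to \((3,3,3)\), and that verifies the connectivity conjecture by brute-force breadth-first search for a few small primes, as was done for all \(p<3000\) in [1].

```python
from itertools import product
from collections import deque

def involutions(t):
    """The three Vieta involutions R1, R2, R3 from (2)."""
    a, b, c = t
    return [(b * c - a, b, c), (a, a * c - b, c), (a, b, a * b - c)]

def markoff_tree(depth):
    """All triples reachable from (3, 3, 3) by at most `depth` involutions."""
    seen = {(3, 3, 3)}
    frontier = set(seen)
    for _ in range(depth):
        frontier = {s for t in frontier for s in involutions(t)} - seen
        seen |= frontier
    return seen

def markoff_graph_connected(p):
    """BFS check that every nonzero solution of (1) mod p is reachable from
    (3, 3, 3) mod p; brute force, so only feasible for very small p >= 5."""
    vertices = {t for t in product(range(p), repeat=3)
                if t != (0, 0, 0)
                and (t[0]**2 + t[1]**2 + t[2]**2 - t[0]*t[1]*t[2]) % p == 0}
    start = (3 % p,) * 3
    seen, queue = {start}, deque([start])
    while queue:
        for s in involutions(queue.popleft()):
            s = tuple(x % p for x in s)
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return seen == vertices

print(len(markoff_tree(4)))                                     # 46 triples
print(all(markoff_graph_connected(p) for p in [5, 7, 11, 13]))  # True
```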
**Theorem 1.4**.: \(\mathcal{G}_{p}\) _is connected for all primes \(p>863\#53\#13\#7\#5\#3^{3}2^{5}\approx 3.448\cdot 10^{392}\), where \(n\#\) denotes the product of primes less than or equal to \(n\)._ The lower bound in Theorem 1.4 was output by a computer using Algorithm 1, which determines the exact point at which our method for proving connectivity via maximal divisors fails. Finally, in Section 5 we provide data on the proportion of smaller primes for which we can also verify connectivity of \(\mathcal{G}_{p}\). As Table 1 shows, our approach begins to work for a significant proportion of primes at around \(10^{8}\), and for \(22\leq n\leq 90\) it proves connectivity for 10,000 out of 10,000 randomly chosen primes between \(10^{n}\) and \(10^{n+1}\). Note that there are still primes for which our connectivity check fails up until the bound from Theorem 1.4. Table 1's success for smaller primes is due to the expected number of divisors of \(p\pm 1\) being much less than the maximum possible number of divisors. This ability to check connectivity for smaller primes would be useful, for example, in a recent application of Markoff triples to a cryptographic hash function in [19], in which one needs to be able to check connectivity of a Markoff mod-\(p\) graph for a specific large (but still manageable using our criterion) prime \(p\) in order to construct the hash. Interestingly, our data reveals that already for primes of size \(10^{31}\), the Erdos-Kac theorem takes over in the sense that the expected value of \(\tau(p\pm 1)\) is small enough so that it becomes extremely rare to need the improvement that comes by considering maximal divisors rather than all divisors. This is one hint that our methods via maximal divisors alone will not prove connectivity of all Markoff graphs, and that this will require new insight. **Acknowledgements:** This project was started at the UC Davis 2021 REU, and we thank Javier Arsuaga and Greg Kuperberg for the REU's creation and organization. We also thank Matthew de Courcy-Ireland for helpful conversations and comments on this work. ## 2. A preliminary bound In this section, we prove a preliminary bound towards Theorem 1.4, which will not only serve to introduce the reader to the key points of our main argument, but will also be necessary in the proof of Theorem 1.4. The Appendix, which serves to make several statements in [1] more precise, will feed into the technical details of the proofs. We use the following parameterization, which matches that of Bourgain, Gamburd, and Sarnak up to a change of variables (equations (15), (16), and (18) in [1]). A triple \((a,b,c)\in\mathbb{F}_{p}^{3}\) with \(a\neq 0,\pm 2\) solves \(x^{2}+y^{2}+z^{2}=xyz\) if and only if it is of the form \[\left(r+r^{-1},\,\frac{(r+r^{-1})(s+s^{-1})}{r-r^{-1}},\,\frac{(r+r^{-1})(rs+r^{-1}s^{-1})}{r-r^{-1}}\right) \tag{3}\] for some \(r,s\in\mathbb{F}_{p^{2}}\). The orbit of this triple under the Vieta involutions that fix the first coordinate, called \(R_{2}\) and \(R_{3}\) in (2), consists precisely of triples of the form \[\left(r+r^{-1},\,\frac{(r+r^{-1})(r^{2n}s+r^{-2n}s^{-1})}{r-r^{-1}},\,\frac{(r+r^{-1})(r^{2n\pm 1}s+r^{-(2n\pm 1)}s^{-1})}{r-r^{-1}}\right) \tag{4}\] for some \(n\in\mathbb{Z}\), and one can similarly describe the orbits that fix the second or third coordinate, as well. So the number of triples in this orbit depends on the multiplicative order of \(r\) in \(\mathbb{F}_{p^{2}}^{*}\).
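The parametrization (3) is easy to test numerically; the underlying identity holds for any \(r,s\neq 0\) with \(r\neq\pm 1\). The sketch below (ours, not from the paper) models \(\mathbb{F}_{p^{2}}\) as \(\mathbb{F}_{p}[x]/(x^{2}-c)\) for a quadratic non-residue \(c\) and checks that triples of the form (3) solve the Markoff equation.

```python
import random

p = 101  # any odd prime; c is chosen below as a quadratic non-residue mod p
c = next(u for u in range(2, p) if pow(u, (p - 1) // 2, p) == p - 1)

# Elements of F_{p^2} = F_p[x]/(x^2 - c) are stored as pairs (u, v) = u + v*x.
def add(a, b): return ((a[0] + b[0]) % p, (a[1] + b[1]) % p)
def sub(a, b): return ((a[0] - b[0]) % p, (a[1] - b[1]) % p)
def mul(a, b): return ((a[0]*b[0] + c*a[1]*b[1]) % p, (a[0]*b[1] + a[1]*b[0]) % p)

def inv(a):
    n = pow((a[0]*a[0] - c*a[1]*a[1]) % p, p - 2, p)  # inverse of the norm
    return (a[0] * n % p, -a[1] * n % p)

def tr(r):  # r + r^{-1}
    return add(r, inv(r))

random.seed(0)
for _ in range(200):
    r = (random.randrange(p), random.randrange(p))
    s = (random.randrange(p), random.randrange(p))
    if r == (0, 0) or s == (0, 0) or sub(r, inv(r)) == (0, 0):
        continue  # need r, s invertible and r - r^{-1} != 0
    a, den = tr(r), inv(sub(r, inv(r)))
    b = mul(mul(a, tr(s)), den)          # second coordinate of (3)
    z = mul(mul(a, tr(mul(r, s))), den)  # third coordinate of (3)
    lhs = add(add(mul(a, a), mul(b, b)), mul(z, z))
    assert lhs == mul(mul(a, b), z)      # x^2 + y^2 + z^2 = xyz holds
print("parametrization (3) verified for p =", p)
```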
Note that in [1], connectivity is proven for a slightly modified Markoff mod-\(p\) graph, where the edges are defined not by the involutions \(R_{i}\) as above, but by so-called rotations, denoted \(\operatorname{rot}(x_{k})\); these are in essence the products \(R_{i}R_{j}\) where \(\{i,j,k\}=\{1,2,3\}\). Our strategy, based on [1], is to assign an order to every triple in \(\mathcal{G}_{p}\) as follows. Given \(a=r+r^{-1}\) as above, let \(\operatorname{ord}_{p}(a)\) be the multiplicative order of \(r\) in \(\mathbb{F}_{p^{2}}^{*}\). This agrees with the notion of order in [1] (see their equations (8) and (9)) unless \(a=\pm 2\), but it is shown in [1] that a triple with \(\pm 2\) in some coordinate is necessarily in the large connected component, so we need not consider this case for our purposes. Define the order of \((a,b,c)\) to be \[\operatorname{Ord}_{p}((a,b,c)):=\max\{\operatorname{ord}_{p}(a),\operatorname{ord}_{p}(b),\operatorname{ord}_{p}(c)\}. \tag{5}\] One of the key ideas in Bourgain-Gamburd-Sarnak's proof of the connectivity of \(\mathcal{G}_{p}\) is that, if a triple \((a,b,c)\in\mathcal{G}_{p}\) has large enough order in the above sense, then there is always a triple of larger order in one of the orbits of \(\langle R_{i},R_{j}\rangle\) acting on \((a,b,c)\). One then walks along these orbits in what Bourgain-Gamburd-Sarnak call the Middle Game of the proof, increasing the order gradually, until one gets to a triple of order roughly \(p^{1/2}\) (see Proposition 6.1 in our Appendix for a precise statement), which is then necessarily connected to the large connected component \(\mathcal{C}_{p}\) in Theorem 1.1. So, all triples of large enough order are connected to each other, and the question is then: how many triples potentially do not have large enough order, and hence may not be in \(\mathcal{C}_{p}\)? According to Chen [10], the number of these bad triples not connected to \(\mathcal{C}_{p}\) must be divisible by \(p\). Hence, if we can show that this number is strictly less than \(p\), we may deduce that there are no bad triples at all and, in fact, \(\mathcal{G}_{p}\) is connected. In fact, we can loosen this a bit, as we explain in Lemma 2.2 below. We recall that a central ingredient in the Middle Game of [1] is an upper bound on the number of triples of order at most \(t\) in the orbit (4) and its analogues in which coordinates other than the first one are fixed. Without loss of generality, assume the maximum in (5) is attained by the first coordinate. Using the parametrization in (4), we have the following lemma, which sharpens the bound used by Bourgain-Gamburd-Sarnak at the start of Section 4 in [1] when they reference a bound by Corvaja-Zannier in [10]. **Lemma 2.1**.: _If \(r\in\mathbb{F}_{p^{2}}^{*}\) has order \(t>2\), then the number of congruence classes \(n\pmod{t}\) for which \(\operatorname{ord}_{p}((r+r^{-1})(sr^{n}+(sr^{n})^{-1})/(r-r^{-1}))\) divides \(d\) is at most \(\frac{3}{2}\max((6td)^{1/3},4td/p)\)._ Proof.: The number of congruence classes in question is bounded by half the number of solutions \((x,y)\in\overline{\mathbb{F}_{p}}^{2}\) to the system of equations \(x^{t}=1\), \(y^{d}=1\), and \[\frac{(r+r^{-1})(sx+(sx)^{-1})}{r-r^{-1}}=y+y^{-1}.\] (We halve the number of solutions because \((x,y)\) and \((x,y^{-1})\) only give one congruence class, yet get counted as distinct solutions unless \(y=\pm 1\).
But as mentioned in the introduction, the case \(y=\pm 1\) is ignored, as any triple with a coordinate \(\pm 2\) is known to be in \(\mathcal{C}_{p}\).) Solutions to the last equation above lie on the projective curve \(C\) defined by \[\frac{s(r+r^{-1})}{r-r^{-1}}X^{2}Y-XY^{2}-XZ^{2}+\frac{r+r^{-1}}{s(r-r^{-1})}YZ^{2}=0. \tag{6}\] Assume \(r+r^{-1}\neq 0\), since otherwise the lemma is trivial to check (and not useful). Along with \(r+r^{-1}\neq\pm(r-r^{-1})\), which is always true, this implies \(C\) is smooth. Therefore we can apply Theorem 2 in [10] to the rational functions \(u([X,Y,Z])=(X/Z)^{t}\) and \(v([X,Y,Z])=(Y/Z)^{d}\). The zeros and poles of \(u\) or \(v\) that lie on \(C\) are \([1,0,0]\), \([0,1,0]\), and \([0,0,1]\). The Euler characteristic of \(C\backslash\{[1,0,0],[0,1,0],[0,0,1]\}\) as defined in [10] is \[\chi=\big{|}\{[1,0,0],[0,1,0],[0,0,1]\}\big{|}+2\binom{\deg C-1}{2}-2=3.\] By [10], the number of points on \(C\) that solve \(u([X,Y,Z])=v([X,Y,Z])=1\) is bounded from above by \(3\max((2\chi\deg u\deg v)^{1/3},4\deg u\deg v/p)\). The claim follows. In Section 2, we mentioned Chen's result from [10] that any connected component in \(\mathcal{G}_{p}\) has size divisible by \(p\). We combine this with a few observations about the Markoff graphs to yield the following. **Lemma 2.2**.: _If \(p>3\), then the number of vertices in \(\mathcal{G}_{p}\backslash\mathcal{C}_{p}\) is divisible by \(4p\)._ Proof.: Chen proved that the number of vertices in any connected component of \(\mathcal{G}_{p}\) is divisible by \(p\)[10]. To prove divisibility by \(4\), it suffices to show that \(\mathcal{G}_{p}\backslash\mathcal{C}_{p}\) is closed under negating any pair of coordinates. Indeed, no triple has a \(0\) in two coordinates, so \((a,b,c)\), \((a,-b,-c)\), \((-a,b,-c)\), and \((-a,-b,c)\) are always distinct. If \(p\equiv 1\bmod 4\), then the triple obtained by negating any two coordinates of a triple of order \(p-1\) again has order \(p-1\). If \(p\equiv 3\bmod 4\), then the triple obtained by negating any two coordinates of a triple of order \(p+1\) again has order \(p+1\). In particular, we can always find some \((a_{0},b_{0},c_{0})\in\mathcal{C}_{p}\) such that \((a_{0},-b_{0},-c_{0})\), \((-a_{0},b_{0},-c_{0})\), and \((-a_{0},-b_{0},c_{0})\) are also in \(\mathcal{C}_{p}\). Since negating any two coordinates in a pair of path-connected triples leaves them path-connected, we see that \(\mathcal{C}_{p}\) is closed under negating any pair of coordinates. This implies the same is true of \(\mathcal{G}_{p}\backslash\mathcal{C}_{p}\). **Remark 2.3**.: The \(4p\) in Lemma 2.2 could be improved to \(12p\) by proving that \((3,3,3)\in\mathcal{C}_{p}\). According to [11], this would be true if \((3,3,3)\) is connected to a triple of order \(p\pm 1\). Our computer experiments for the first \(10,000\) primes show that such a triple can always be found in the orbit of \((3,3,3)\) under the group generated by \(R_{2}R_{3}\), which consists of triples \[(3,3F_{2n-1},3F_{2n+1})\text{ for }n\geq 1,\] modulo \(p\), where \(F_{k}\) denotes the \(k\)-th Fibonacci number. **Proposition 2.4**.: _Let \(\tau_{d}(n)\) denote the number of divisors of \(n\) that are \(\leq d\). For \(d\) dividing \(p-1\) or \(p+1\), let \(T_{d}=\tau_{d}(p-1)+\tau_{d}(p+1)\)._
If no such divisor satisfies either inequality below:_ \[\frac{2\sqrt{2p}}{T_{d}}<d<\frac{81T_{d}^{3}}{4}\qquad\text{or}\qquad\frac{p}{6T_{d}}<d<\frac{8\sqrt{p}(p\pm 1)\tau(p\pm 1)}{\phi(p\pm 1)}\] _(where the \(\pm\) is \(+\) when \(d\mid p+1\) and \(-\) when \(d\mid p-1\)), then \(\mathcal{G}_{p}\) is connected._ Proof.: Suppose \(p\) is such that the Markoff graph mod \(p\) is not connected, and let \(d\) be the maximal order among triples that are not in \(\mathcal{C}_{p}\). Fix some triple not in \(\mathcal{C}_{p}\) that attains \(d\) as the order of its first coordinate (without loss of generality), and write it in the form of (3). By maximality of \(d\) among orders in \(\mathcal{G}_{p}\backslash\mathcal{C}_{p}\), each of the second and third coordinates in the orbit (4) must have order \(d^{\prime}\leq d\), where \(d^{\prime}\,|\,p\pm 1\) as usual. There are exactly \(d\) choices of exponent \(n\bmod d\) in the second and third coordinates of (4), so with \(\mathcal{T}_{d}\) denoting the set of divisors of \(p\pm 1\) that do not exceed \(d\), Lemma 2.1 implies \[d\leq\sum_{d^{\prime}\in\mathcal{T}_{d}}\frac{3}{2}\max\left((6dd^{\prime})^{1/3},\frac{4dd^{\prime}}{p}\right)<\frac{3T_{d}}{2}\max\left((6d^{2})^{1/3},\frac{4d^{2}}{p}\right). \tag{7}\] First consider the case \(\max((6d^{2})^{1/3},4d^{2}/p)=4d^{2}/p\). Substituting this into the right-hand side above and solving for \(d\) gives \(d>p/6T_{d}\). A large divisor like this is amenable to the End Game in [11], so we apply Proposition 6.1 in the Appendix to get \[\frac{p}{6T_{d}}<d<\frac{8\sqrt{p}(p\pm 1)\tau(p\pm 1)}{\phi(p\pm 1)},\] as in the statement of this proposition. Next consider the case \(\max((6d^{2})^{1/3},4d^{2}/p)=(6d^{2})^{1/3}\). Again use this with (7) and solve for \(d\) to get \(d<81T_{d}^{3}/4\); so it remains only to show \(2\sqrt{2p}/T_{d}<d\) to complete the proof. To that end, the number of distinct \(a\in\mathbb{F}_{p}\backslash\{\pm 2\}\) for which \(\operatorname{ord}_{p}(a)\) divides \(d^{\prime}\) is at most \(d^{\prime}/2\) (as \(r\) and \(r^{-1}\) yield the same \(a=r+r^{-1}\) and should only be counted once). So we can bound the number of Markoff triples \((a,b,c)\) of order at most \(d\) by summing over the different possible orders of \(a\) and \(b\) and noting that there are at most two choices for \(c\) that produce a Markoff triple once \(a\) and \(b\) are fixed: \[\sum_{d^{\prime},d^{\prime\prime}\in\mathcal{T}_{d}}2\cdot\frac{d^{\prime}}{2}\cdot\frac{d^{\prime\prime}}{2}<\frac{T_{d}^{2}d^{2}}{2}. \tag{8}\] Our choice of \(d\) means \(|\mathcal{G}_{p}\backslash\mathcal{C}_{p}|\) cannot exceed the number of Markoff triples of order at most \(d\). This allows us to combine (8) and Lemma 2.2, giving \(4p<T_{d}^{2}d^{2}/2\). Thus \(2\sqrt{2p}/T_{d}<d\) as desired. **Corollary 2.5**.: \(\mathcal{G}_{p}\) _is connected for all primes \(p>10^{532}\)._ Proof.: First let us bound \(T_{d}\) from Proposition 2.4 using Nicolas' upper bound on \(\tau(n)\)[11], which is \[\tau(n)<\exp\!\left(\frac{\log 2\log n}{\log\log n}+\frac{1.342\log n}{(\log\log n)^{2}}\right).\] This gives \[T_{d}\leq\tau(p-1)+\tau(p+1)<2\exp\!\left(\frac{\log 2\log p}{\log\log p}+\frac{1.342\log p}{(\log\log p)^{2}}\right), \tag{9}\] where the final inequality uses the concavity of the function bounding \(\tau(n)\) in order to average the inputs \(p-1\) and \(p+1\).
Now, to show that the first inequality in Proposition 2.4 is never satisfied for \(p>10^{532}\), we will check that \(81T_{d}^{3}/4\leq 2\sqrt{2p}/T_{d}\) for all \(d\). Rearranging this inequality slightly, taking the natural logarithm, and replacing \(T_{d}\) with the bound in (9) gives \[2\log(81\sqrt{2})\leq\log p\left(1-\frac{8\log 2}{\log\log p}-\frac{10.736}{(\log\log p)^{2}}\right),\] which is easily verified for \(p>10^{532}\). A similar approach shows that the second inequality in Proposition 2.4 is also never satisfied. Using the same bounds on \(\tau(p\pm 1)\) along with \(\phi(p\pm 1)>p/(2\log\log p)\) (a weaker version of Theorem 8.8.7 in [1]) shows that \(8\sqrt{p}(p\pm 1)\tau(p\pm 1)/\phi(p\pm 1)\leq p/6T_{d}\) when \(p>10^{141}\). ## 3. Maximal Divisors We can improve the bound in Corollary 2.5 by using the notion of what we call _maximal divisors_. The key observation is that the count in Lemma 2.1 comes from counting the number of solutions in a subgroup of \(\mathbb{F}_{p^{2}}^{*}\) of order \(t\) to the equation in (6). So whenever we consider two divisors \(t,t^{\prime}<d\) of \(p\pm 1\) where \(t\,|\,t^{\prime}\), we count the solutions relevant to the divisor \(t\) twice, since the subgroup of order \(t\) is contained in the subgroup of order \(t^{\prime}\). So, instead of summing over all divisors in (7), we can sum over a refined set of divisors that we call maximal. **Definition 3.1**.: Let \(n\) be a positive integer, and let \(x\in\mathbb{R}\). A positive divisor \(d\) of \(n\) is said to be _maximal with respect to \(x\)_ if \(d\leq x\) and there is no other positive divisor \(d^{\prime}\) of \(n\) such that \(d^{\prime}\leq x\) and \(d\,|\,d^{\prime}\). The set of maximal divisors with respect to \(x\) is denoted \(\mathcal{M}_{x}(n)\); a short computational sketch of this definition appears below. Our goal now is to improve on the bound in Corollary 2.5 by replacing the set \(\mathcal{T}_{d}\) with the set \(\mathcal{M}_{d}\) as shown in this simple improvement of Proposition 2.4. **Theorem 3.2**.: _For \(d\) dividing \(p-1\) or \(p+1\), let \(M_{d}=|\mathcal{M}_{d}(p-1)|+|\mathcal{M}_{d}(p+1)|\). If no such divisor satisfies either inequality below:_ \[\frac{2\sqrt{2p}}{M_{d}}<d<\frac{81M_{d}^{3}}{4}\qquad\text{or}\qquad\frac{p}{6M_{d}}<d<\frac{8\sqrt{p}(p\pm 1)\tau(p\pm 1)}{\phi(p\pm 1)}\] _(where the \(\pm\) is determined by whether \(d\) divides \(p-1\) or \(p+1\)), then \(\mathcal{G}_{p}\) is connected._ The proof of this is identical to that of Proposition 2.4, replacing all instances of \(T_{d}\) with \(M_{d}\), and noting that the rotation order \(d^{\prime}\) of the second and third coordinates in the orbit (4) must divide at least one maximal divisor of \(p\pm 1\) with respect to \(d\). In Section 2, we relied on known upper bounds for \(\tau(n)\), and now we hope to obtain helpful bounds on \(M_{d}\). There is very little in the literature on the number of maximal divisors of \(n\) with respect to \(x\). To find asymptotic and explicit bounds for small \(n\), our strategy is to first find those \(n\) for which \(|\mathcal{M}_{x}(n)|\) is maximized, akin to Ramanujan's "superior highly composite numbers." In [14], Ramanujan introduced a simple approach to bounding \(\tau(n)\) in which only a very sparse set of integers \(n\), which he called superior highly composite numbers, needs to be considered. They are those \(n\) that maximize \(\tau(n)/n^{\varepsilon}\) for some \(\varepsilon>0\).
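For concreteness, \(\mathcal{M}_{x}(n)\) can be computed directly from Definition 3.1, as in the following small sketch (ours, not from the paper; naive trial division, intended only for moderate \(n\)).

```python
def divisors(n):
    """All positive divisors of n, by trial division up to sqrt(n)."""
    small = [d for d in range(1, int(n**0.5) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

def maximal_divisors(n, x):
    """M_x(n) from Definition 3.1: divisors d <= x of n such that no other
    divisor of n that is <= x is a proper multiple of d."""
    ds = [d for d in divisors(n) if d <= x]
    return [d for d in ds if not any(e != d and e % d == 0 for e in ds)]

# The divisors of 60 up to 10 are 1, 2, 3, 4, 5, 6, 10, but only 4, 6, 10
# are maximal with respect to x = 10.
print(maximal_divisors(60, 10))  # [4, 6, 10]
```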
The prime factorization of a superior highly composite number was determined by Ramanujan to be \(2^{a_{1}}3^{a_{2}}5^{a_{3}}\cdots\) where \[a_{i}=\left\lfloor\frac{1}{p_{i}^{\varepsilon}-1}\right\rfloor.\] These numbers are convenient for two main reasons: First, they are easy to enumerate because the prime factorizations are known and there are fewer than \(\log x\) superior highly composite numbers less than \(x\) if \(x>10^{9}\). Second, if \(n_{1}\) and \(n_{2}\) are consecutive superior highly composite numbers and \(f\) is a concave function on the interval \((\log n_{1},\log n_{2})\), then \(\log\tau(n)\leq f(\log n)\) holds for all integers \(n\in[n_{1},n_{2}]\) if and only if it holds for \(n_{1}\) and \(n_{2}\). These two facts make it easy to obtain both asymptotic bounds on \(\tau(n)\) and a sharp bound on \(\tau(n)\) in a given interval. Our goal in this section is to recreate this approach for \(|\mathcal{M}_{x}(n)|\) in place of \(\tau(n)\). ### Reducing functions In this section we introduce a tool for narrowing down the list of integers \(n\) for which \(|\mathcal{M}_{x}(n)|\) needs to be computed to obtain upper bounds. Our work culminates in Definition 3.10 and Theorem 3.13. **Notation 3.3**.: For \(n\in\mathbb{N}\) let \(\mathcal{D}(n)\) denote the set of positive divisors of \(n\), and let \(\lambda(n)\) denote the least prime factor of \(n\) if \(n\geq 2\). Set \(\lambda(1)=1\). The function \(\lambda\) is often denoted "\(\operatorname{lpf}\)" or "\(\operatorname{LD}\)" in the literature. **Definition 3.4**.: For \(m,n\in\mathbb{N}\), a function \(f:\mathcal{D}(n)\to\mathcal{D}(m)\) is called _reducing_ if and only if the following hold for all \(d,d^{\prime}\in\mathcal{D}(n)\): (a) \(f(d)\leq d\), (b) \(\frac{m/f(d)}{n/d}\leq\min\left\{1,\,\frac{\lambda(m/f(d))}{\lambda(n/d)}\right\}\), (c) \(f(d)=2^{i}f(d^{\prime})\) for some \(i\in\mathbb{Z}\) implies \(d=2^{j}d^{\prime}\) for some \(j\in\mathbb{Z}\). We say \(n\) _reduces to_ \(m\) when such a function exists. Observe that setting \(d=n\) in requirement (b) results in \(m/f(n)\leq 1\). Since \(f(n)\,|\,m\), this forces \(f(n)=m\), which combines with requirement (a) to give \(m\leq n\). So integers can only reduce to smaller integers. **Theorem 3.5**.: _If \(n\) reduces to \(m\) then \(|\mathcal{M}_{x}(n)|\leq|\mathcal{M}_{x}(2^{a}m)|\) for all \(x\in\mathbb{R}\), where \(a\) is the smallest integer satisfying \(2^{a}m\geq n\)._ Proof.: There is little to check if \(x\geq n\), so assume otherwise. We claim that a reducing function \(f:\mathcal{D}(n)\to\mathcal{D}(m)\) induces an injection \(\hat{f}:\mathcal{M}_{x}(n)\to\mathcal{M}_{x}(2^{a}m)\) defined by \(\hat{f}(d)=2^{i}f(d)\), where \(i\) is the largest integer such that \(2^{i}f(d)\leq x\) and \(2^{i}f(d)\in\mathcal{D}(2^{a}m)\). Note that (a) in Definition 3.4 guarantees \(i\geq 0\). First let us verify that \(\hat{f}(d)\in\mathcal{M}_{x}(2^{a}m)\). Since \(\hat{f}(d)\leq x<n\leq 2^{a}m\), we see that \(\hat{f}(d)\) has proper multiples in \(\mathcal{D}(2^{a}m)\), and it must be verified that they exceed \(x\). That is, we must show \(\hat{f}(d)\lambda(2^{a}m/\hat{f}(d))>x\). This is immediate by maximality of \(i\) if \(\lambda(2^{a}m/\hat{f}(d))\) happens to be \(2\).
Referring to the three inequalities below, the first follows from \(\lambda(2^{a}m/\hat{f}(d))\neq 2\), the second is a slight rearrangement of (b) in Definition 3.4, and the third follows from our choice of \(a\): \[\hat{f}(d)\lambda\left(\frac{2^{a}m}{\hat{f}(d)}\right)=2^{i}f(d)\lambda\left(\frac{2^{a}m}{2^{i}f(d)}\right)\geq 2^{a}f(d)\lambda\left(\frac{m}{f(d)}\right)\geq\frac{2^{a}md}{n}\lambda\left(\frac{n}{d}\right)\geq d\lambda\left(\frac{n}{d}\right).\] Since \(d\in\mathcal{M}_{x}(n)\) and \(d\) properly divides \(d\lambda(n/d)\) (recall that we are assuming \(x<n\), so \(d\neq n\)), we must have \(d\lambda(n/d)>x\) by definition of maximal divisors. Combined with the inequalities above, this completes our argument that \(\hat{f}(d)\in\mathcal{M}_{x}(2^{a}m)\). Next we check that \(\hat{f}\) is an injection. If \(\hat{f}(d)=\hat{f}(d^{\prime})\) then \(2^{i}f(d)=2^{i^{\prime}}f(d^{\prime})\) for some \(i,i^{\prime}\in\mathbb{Z}\). This means \(d=2^{j}d^{\prime}\) for some \(j\in\mathbb{Z}\) by (c) in Definition 3.4, so either \(d\) divides \(d^{\prime}\) or vice versa. But then \(d,d^{\prime}\in\mathcal{M}_{x}(n)\) forces \(d=d^{\prime}\) by definition of maximal divisors. In this last theorem, \(2^{a}m<2n\). So at the expense of less than a factor of \(2\), we can forgo computing \(|\mathcal{M}_{x}(n)|\) in favor of computing \(|\mathcal{M}_{x}(2^{a}m)|\), the hope being that \(m\) has some kind of predictable prime factorization like the superior highly composite numbers. Let us consider a simple example. If \(p\) and \(q\) are primes with \(2\neq p\leq q\), then \(f:\mathcal{D}(q^{a})\to\mathcal{D}(p^{a})\) defined by \(f(q^{i})=p^{i}\) is a reducing function. All three requirements from Definition 3.4 are trivially satisfied. Using \(f\) to "replace" \(q^{a}\) with \(p^{a}\) may not seem useful computationally because \(|\mathcal{M}_{x}(q^{a})|\) just equals \(1\) for any \(x\), but we can actually use \(f\) to swap primes within a prime factorization. That is, if \(n\) is not divisible by \(p\) or \(q\), then \(f\) can be extended to a reducing function \(\mathcal{D}(nq^{a})\to\mathcal{D}(np^{a})\) via the next lemma. **Lemma 3.6**.: _Suppose \(n_{1},n_{2},m_{1},m_{2}\in\mathbb{N}\) are such that \(\gcd(n_{1},n_{2})=\gcd(m_{1},m_{2})=1\). If \(f_{1}:\mathcal{D}(n_{1})\to\mathcal{D}(m_{1})\) and \(f_{2}:\mathcal{D}(n_{2})\to\mathcal{D}(m_{2})\) are reducing then so is \(f_{1}f_{2}:\mathcal{D}(n_{1}n_{2})\to\mathcal{D}(m_{1}m_{2})\)._ Proof.: Let \(n=n_{1}n_{2}\), \(m=m_{1}m_{2}\), and \(f=f_{1}f_{2}\). Let \(d,d^{\prime}\in\mathcal{D}(n)\), and let \(d_{1},d^{\prime}_{1}\in\mathcal{D}(n_{1})\) and \(d_{2},d^{\prime}_{2}\in\mathcal{D}(n_{2})\) be the unique divisors satisfying \(d=d_{1}d_{2}\) and \(d^{\prime}=d^{\prime}_{1}d^{\prime}_{2}\). It is immediate that requirement (a) in Definition 3.4 holds for \(f\) and that the ratio in requirement (b) is indeed bounded by \(1\). So let us turn our attention to the bound in (b) involving the \(\lambda\) function. Suppose without loss of generality that \(\lambda(m_{1}/f_{1}(d_{1}))\leq\lambda(m_{2}/f_{2}(d_{2}))\).
Then \[\frac{\lambda(m/f(d))}{\lambda(n/d)}=\frac{\min(\lambda(m_{1}/f_{1}(d_{1})),\lambda(m_{2}/f_{2}(d_{2})))}{\min(\lambda(n_{1}/d_{1}),\lambda(n_{2}/d_{2}))}=\frac{\lambda(m_{1}/f_{1}(d_{1}))}{\min(\lambda(n_{1}/d_{1}),\lambda(n_{2}/d_{2}))}\geq\frac{\lambda(m_{1}/f_{1}(d_{1}))}{\lambda(n_{1}/d_{1})}\geq\frac{m_{1}/f_{1}(d_{1})}{n_{1}/d_{1}}\cdot\frac{m_{2}/f_{2}(d_{2})}{n_{2}/d_{2}}=\frac{m/f(d)}{n/d}.\] For requirement (c), suppose \(f(d)=2^{i}f(d^{\prime})\) for some \(i\in\mathbb{Z}\). Then \(f_{1}(d_{1})/f_{1}(d^{\prime}_{1})=2^{i}f_{2}(d^{\prime}_{2})/f_{2}(d_{2})\). By assumption, \(\gcd(f_{1}(d_{1}),f_{2}(d^{\prime}_{2}))=\gcd(f_{1}(d^{\prime}_{1}),f_{2}(d_{2}))=1\), so \(f_{1}(d_{1})/f_{1}(d^{\prime}_{1})\) and \(f_{2}(d^{\prime}_{2})/f_{2}(d_{2})\) must be powers of \(2\). Thus \(d_{1}=2^{j_{1}}d^{\prime}_{1}\) for some \(j_{1}\in\mathbb{Z}\) because \(f_{1}\) is reducing, and \(d_{2}=2^{j_{2}}d^{\prime}_{2}\) for some \(j_{2}\in\mathbb{Z}\) because \(f_{2}\) is reducing. This gives \(d=2^{j_{1}+j_{2}}d^{\prime}\). Returning to our example, if \(p\) and \(q\) do not divide some \(n\in\mathbb{N}\), then Lemma 3.6 allows us to combine our reducing function \(\mathcal{D}(q^{a})\to\mathcal{D}(p^{a})\) with the identity \(\mathcal{D}(n)\to\mathcal{D}(n)\) to obtain a reducing function \(\mathcal{D}(nq^{a})\to\mathcal{D}(np^{a})\) in which \(dq^{i}\mapsto dp^{i}\). That is, replacing larger primes with smaller ones in a prime factorization essentially produces no decrease in \(|\mathcal{M}_{x}(n)|\), as with the number of divisors function. The catch is the extra factor of \(2\); in Theorem 3.5, \(2^{a}m\) can be almost twice as large as \(n\). A natural concern is that with each successive maneuver like \(q^{a}\mapsto p^{a}\), we pick up an extra factor of \(2\). Knowing that \(|\mathcal{M}_{x}(n)|\leq|\mathcal{M}_{x}(2^{a}m)|\) from Theorem 3.5 would not be helpful if \(2^{a}m\) was significantly larger than \(n\). The next lemma eliminates that concern. **Lemma 3.7**.: _If \(f:\mathcal{D}(n)\to\mathcal{D}(m)\) and \(g:\mathcal{D}(m)\to\mathcal{D}(\ell)\) are reducing, then so is \(g\circ f\)._ Proof.: To see that \(g\circ f\) satisfies requirement (b) in Definition 3.4, we have \[\frac{\ell/(g\circ f)(d)}{n/d}=\frac{\ell/(g\circ f)(d)}{m/f(d)}\cdot\frac{m/f(d)}{n/d}\leq\min\left\{1,\,\frac{\lambda(\ell/(g\circ f)(d))}{\lambda(m/f(d))}\right\}\cdot\min\left\{1,\,\frac{\lambda(m/f(d))}{\lambda(n/d)}\right\}\leq\min\left\{1\cdot 1,\,\frac{\lambda(\ell/(g\circ f)(d))}{\lambda(m/f(d))}\cdot\frac{\lambda(m/f(d))}{\lambda(n/d)}\right\}=\min\left\{1,\,\frac{\lambda(\ell/(g\circ f)(d))}{\lambda(n/d)}\right\}.\] Requirements (a) and (c) are immediate. When combined, Lemmas 3.6 and 3.7 allow us to manipulate a prime factorization one comprehensible piece at a time. We have already seen through an example how to reduce to those \(n\) whose \(\omega(n)\) distinct prime factors are exactly \(2,3,\ldots,p_{\omega(n)}\). It turns out we can do even better: if \(p\) and \(q\) are primes with \(2\neq p\leq q\) and \(a\) and \(b\) are integers with \(0\leq a\leq b\), then there is a reducing function \(f:\mathcal{D}(p^{a}q^{b})\to\mathcal{D}(p^{b}q^{a})\). It is defined by \(f(p^{i}q^{j})=p^{i+k}q^{j-k}\), where \(k=\max(0,\min(i+j,b)-a)\). This allows us to rearrange prime exponents in decreasing order (except for the exponent of \(2\)).
That is, to obtain bounds on \(|\mathcal{M}_{x}(n)|\), we need only consider those \(n\) that are products of primorials up to a power of \(2\). We will not prove that this function is reducing, because its purpose is subsumed by the next family of reducing functions. These not only rearrange exponents in decreasing order, they also limit the rate at which exponents can decrease. **Lemma 3.8**.: _Let \(p\) and \(q\) be distinct odd primes, let \(a\) and \(b\) be nonnegative integers, and set \(c=\lfloor(a+1)/(b+2)\rfloor\). If \(q<p^{c}\), then \(p^{a}q^{b}\) reduces to \(p^{a-c}q^{b+1}\)._ Proof.: Define \(f:\mathcal{D}(p^{a}q^{b})\to\mathcal{D}(p^{a-c}q^{b+1})\) by \(f(p^{i}q^{j})=p^{i}q^{j}\) if \(i<(b+1-j)c\) and \(f(p^{i}q^{j})=p^{i-c}q^{j+1}\) if \(i\geq(b+1-j)c\). We claim \(f\) is a reducing function. Suppose \(i<(b+1-j)c\). The nontrivial assertion behind \(f(p^{i}q^{j})\in\mathcal{D}(p^{a-c}q^{b+1})\) is that \(i\leq a-c\). Indeed, \(i\leq(b+1-j)c-1\leq(b+1)c-1=(b+2)c-c-1\leq(a+1)-c-1=a-c\). Requirements (a) and (c) are straightforward to check, so let us check (b), still in the case \(f(p^{i}q^{j})=p^{i}q^{j}\). We have \[\frac{p^{a-c}q^{b+1}/p^{i}q^{j}}{p^{a}q^{b}/p^{i}q^{j}}=\frac{q}{p^{c}}\leq\min\left\{1,\frac{q}{p}\right\}\leq\min\left\{1,\,\frac{\lambda(p^{a-c}q^{b+1}/p^{i}q^{j})}{\lambda(p^{a}q^{b}/p^{i}q^{j})}\right\}.\] Next suppose \(i\geq(b+1-j)c\). In this case it is clear that \(f(p^{i}q^{j})\in\mathcal{D}(p^{a-c}q^{b+1})\). For requirement (b), \[\frac{p^{a-c}q^{b+1}/p^{i-c}q^{j+1}}{p^{a}q^{b}/p^{i}q^{j}}=1=\frac{\lambda(p^{a-c}q^{b+1}/p^{i-c}q^{j+1})}{\lambda(p^{a}q^{b}/p^{i}q^{j})}.\] Again, (a) and (c) are immediate in the case \(i\geq(b+1-j)c\). Next is a family of reducing functions devoted to controlling the exponent of \(2\) in a prime factorization. Ultimately, \(2\) will play the role of \(p\) below. Both in the lemma statement and its proof, the empty product is to be interpreted as \(1\). **Lemma 3.9**.: _Let \(p,q_{1},\ldots,q_{k}\) be primes with \(p<q_{1}<\cdots<q_{k}\), and let \(a\in\mathbb{N}\). If \(p^{a-2}>q_{1}\cdots q_{k-1}q_{k}^{2}\) then \(p^{a}\) reduces to \(p^{b}q_{1}\cdots q_{k}\), where_ \[b=\left\lfloor\frac{1}{2}\left(a-\frac{\log(q_{1}\cdots q_{k-1})}{\log p}\right)\right\rfloor.\] Proof.: Let \(c_{k}=a-b\) and \(c_{j}=\lceil\log(q_{1}\cdots q_{j})/\log p\rceil\) for \(0\leq j<k\). We consider the case \(j=k\) at the end of the proof. Define \(f:\mathcal{D}(p^{a})\to\mathcal{D}(p^{b}q_{1}\cdots q_{k})\) by \(f(p^{i})=p^{b+c_{j}+i-a}q_{j+1}\cdots q_{k}\), where \(j\) is the largest index such that \(c_{j}\leq a-i\). We claim \(f\) is a reducing function. The nontrivial assertion behind \(f(p^{i})\in\mathcal{D}(p^{b}q_{1}\cdots q_{k})\) is that \(b+c_{j}+i-a\geq 0\). To verify this inequality, consider first the case \(j<k-1\).
The first inequality below follows from the choice of \(j\), the second inequality uses the definitions of \(c_{j}\) and \(c_{j+1}\) (and assumes \(j<k-1\)), and the last inequality is the hypothesis \(p^{a-2}>q_{1}\cdots q_{k-1}q_{k}^{2}\): \[b+c_{j}+i-a\geq b+c_{j}-c_{j+1}+1 \geq b-\left\lceil\frac{\log q_{j+1}}{\log p}\right\rceil+1\] \[= \left\lfloor\frac{1}{2}\left(a-\frac{\log(q_{1}\cdots q_{k-1})}{ \log p}\right)\right\rfloor-\left\lceil\frac{\log q_{j+1}}{\log p}\right\rceil+1\] \[> \frac{1}{2}\left(a-\frac{\log(q_{1}\cdots q_{k-1})}{\log p} \right)-\frac{\log q_{j+1}}{\log p}-1\] \[> \frac{1}{2}\left(a-\frac{\log(q_{1}\cdots q_{k-1}q_{k}^{2})}{ \log p}\right)-1\] \[> 0.\] In the case \(j=k-1\) we must have \(a-i\leq c_{k}-1=a-b-1\) by choice of \(j\), so \[b+c_{j}+i-a \geq 2b+c_{k-1}+1-a\] \[= 2\left\lfloor\frac{1}{2}\left(a-\frac{\log(q_{1}\cdots q_{k-1})} {\log p}\right)\right\rfloor+\left\lceil\frac{\log(q_{1}\cdots q_{k-1})}{\log p }\right\rceil+1-a\] \[> 2\left(\frac{1}{2}\left(a-\frac{\log(q_{1}\cdots q_{k-1})}{\log p }\right)-1\right)+\frac{\log(q_{1}\cdots q_{k-1})}{\log p}+1-a\] \[= -1.\] And finally, if \(j=k\) then \(b+c_{j}+i-a=i\geq 0\). Now we turn to the bound \(f(p^{i})\leq p^{i}\) from Definition 3.4. If \(j=k\) then \(f(p^{i})=p^{i}\). Otherwise, \[\frac{\log(f(p^{i})/p^{i})}{\log p} = b+c_{j}-a+\frac{\log(q_{j+1}\cdots q_{k})}{\log p}\] \[\leq b-a+1+\frac{\log(q_{1}\cdots q_{k})}{\log p}\] \[\leq -\frac{1}{2}\left(a+\frac{\log(q_{1}\cdots q_{k-1})}{\log p} \right)+1+\frac{\log(q_{1}\cdots q_{k})}{\log p}\] \[< -\frac{1}{2}\left(2+\frac{2\log(q_{1}\cdots q_{k})}{\log p} \right)+1+\frac{\log(q_{1}\cdots q_{k})}{\log p}\] \[\leq 0.\] To verify requirement (b), \[\frac{p^{b}q_{1}\cdots q_{k}/f(p^{i})}{p^{a}/p^{i}}=\frac{q_{1}\cdots q_{j}}{p^{c_ {j}}}\leq 1\leq\frac{\lambda(p^{b}q_{1}\cdots q_{k}/f(p^{i}))}{\lambda(p^{a}/p^ {i})},\] where the final inequality above uses \(p<q_{1},...,q_{k}\). Requirement (c) is trivially satisfied. Let us now identify those numbers that cannot be reduced by Lemma 3.8 or 3.9. These are the numbers \(n\) that we use to determine the maxima of \(|\mathcal{M}_{x}(n)|\), as made precise in Theorem 3.13. Throughout the remainder of this section, \(p_{i}\) denotes the \(i^{\text{th}}\) prime number. **Definition 3.10**.: An integer \(2^{a_{1}}3^{a_{2}}5^{a_{3}}\cdots\) (where \(a_{i}=0\) for sufficiently large \(i\)) is _reduced_ if \[\left\lfloor\frac{a_{i}+1}{a_{j}+2}\right\rfloor<\frac{\log p_{j}}{\log p_{i}} \tag{10}\] whenever \(i,j\neq 1\), and \(2^{a_{1}}<8p_{j}^{2}\) whenever \(a_{j}=0\). As examples, the first odd reduced numbers that are less than 100 are 1, 3, 9, 15, and 45. Up to a power of 2, these numbers are products of primorials. This is always true, as mentioned before Lemma 3.8 and proved below. Also note the restriction on how quickly exponents can decrease. This is exhibited by the fact that 27 is not a reduced number--the exponent decrease from \(3^{3}\) to \(5^{0}\) is too much. **Lemma 3.11**.: _If \(2^{a_{1}}3^{a_{2}}5^{a_{3}}\cdots\) is reduced, then \(a_{2}\geq a_{3}\geq\cdots\)._ Proof.: On the one hand, if \(i>j\) in inequality (10) then the right-hand side is less than 1. On the other hand if \(a_{i}>a_{j}\) then the left-hand side is at least 1. **Lemma 3.12**.: _Let \(p_{k}\) be the largest prime divisor of \(n\in\mathbb{N}\). If \(n\) is reduced, so is \(np_{k+1}\)._ Proof.: Only the exponent \(a_{k+1}\) has changed, so we need only verify (10) when \(i=k+1\) or \(j=k+1\). 
First suppose \(i=k+1\) (so \(a_{i}=1\) for \(np_{k+1}\)). If \(j\leq k+1\) then \(a_{j}\geq 1\) by Lemma 3.11 applied to \(n\). Thus \[\left\lfloor\frac{a_{i}+1}{a_{j}+2}\right\rfloor\leq\left\lfloor\frac{2}{3} \right\rfloor=0<\frac{\log p_{j}}{\log p_{i}}.\] If \(j>k+1\) then \(a_{j}=0\). So \[\left\lfloor\frac{a_{i}+1}{a_{j}+2}\right\rfloor=1<\frac{\log p_{j}}{\log p_{ k+1}}=\frac{\log p_{j}}{\log p_{i}}.\] Now suppose \(j=k+1\) and \(i\neq k+1\). Here the fraction \((a_{i}+1)/(a_{j}+2)\) has decreased by adding the factor of \(p_{k+1}\). So if inequality (10) holds for \(n\), it certainly holds for \(np_{k+1}\) **Theorem 3.13**.: _For any integer \(n\geq 2\) there exists a reduced integer \(m\) such that \(n\leq m\leq 4n-6\) and \(|\mathcal{M}_{x}(n)|\leq|\mathcal{M}_{x}(m)|\) for all \(x\in\mathbb{R}\)._ Proof.: Let \(m^{\prime}\) be the odd part of the smallest positive integer to which \(n\) can be reduced. By Lemma 3.8, the exponents in the prime factorization of \(m^{\prime}\) satisfy (10). Let \(a\) be the smallest integer such that \(2^{a}m^{\prime}\geq n\). Then \(|\mathcal{M}_{x}(n)|\leq|\mathcal{M}_{x}(2^{a}m^{\prime})|\) for all \(x\in\mathbb{R}\) by Theorem 3.5. Note that \(2^{a}m^{\prime}\leq 2n-2\). Let \(p_{k}\) be the largest prime dividing \(2^{a}m^{\prime}\), and if one exists, let \(\ell\) be the largest index satisfying \(p_{k+1}\cdots p_{\ell-1}p_{\ell}^{2}<2^{a-2}\). If no such index exists, let \(\ell=k\). We claim that \(m=2^{a_{1}}m^{\prime}p_{k+1}\cdots p_{\ell}\) meets our theorem's requirements, where \(a_{1}\) is the smallest integer such that \(m\geq 2^{a}m^{\prime}\). From another application of Theorem 3.5, this time applied to the reduction in Lemma 3.9, we have \(|\mathcal{M}_{x}(2^{a}m^{\prime})|\leq|\mathcal{M}_{x}(m)|\) for all \(x\in\mathbb{R}\). Since \[m\leq 2(2^{a}m^{\prime})-2\leq 2(2n-2)-2=4n-6,\] we will be done provided \(m\) is reduced. Apply Lemma 3.12\(\ell-k\) times beginning with the reduced integer \(m^{\prime}\) to see that \(m^{\prime}p_{k+1}\cdots p_{\ell}\) is reduced, meaning (10) holds. Let us check that \(2^{a_{1}}<8p_{\ell+1}^{2}\). We have \[3+\bigg{\lfloor}\frac{2\log p_{\ell+1}}{\log 2}\bigg{\rfloor}+\frac{\log(p_{k+1} \cdots p_{\ell})}{\log 2}>2+\frac{\log(p_{k+1}\cdots p_{\ell}\,p_{\ell+1}^{2})}{ \log 2}\geq a,\] where the last inequality above uses maximality of \(\ell\). Thus \(3+\lfloor 2\log p_{\ell+1}/\log 2\rfloor\) solves the inequality for which \(a_{1}\) is the minimal solution, implying \(a_{1}<3+2\log p_{\ell+1}/\log 2\) as desired. Reduced numbers turn out to be sufficiently rare for our purpose. Data up to \(x\approx 10^{10000}\) suggests that \(12\log x\) is a very good approximation for the number of reduced \(n\leq x\). This density could potentially be diminished further via new reducing functions, though the authors suspect that Definition 3.4 is too restrictive to allow for a notion of reduced numbers with density approaching that of the superior highly composite numbers (less than \(\log x\) for large \(x\)). Definition 3.4 might be loosened, however, to permit functions \(f:\mathcal{D}(n)\to\mathcal{D}(m)\) with ratios \[\alpha\coloneqq\max_{d\in\mathcal{D}(n)}\frac{f(d)}{d}\quad\text{ and }\quad \beta\coloneqq\max_{d\in\mathcal{D}(n)}\frac{(m/f(d))\lambda(n/d)}{(n/d) \lambda(m/f(d))}\] that exceed \(1\). 
Then, as long as \(\alpha\leq\beta\), we could prove a version of Theorem 3.5 that requires \(2^{a}m\geq\beta n\) in order to conclude \(|\mathcal{M}_{x}(n)|\leq|\mathcal{M}_{\alpha x}(2^{a}m)|\) for all \(x\). ### An asymptotic bound Our strategy for bounding \(|\mathcal{M}_{x}(n)|\) asymptotically is as follows: We need only consider reduced \(n\) (that is the purpose of the last section), and reduced integers are not too far from being products of one or two primorials (Lemma 3.14). This makes \(\Omega(n)\) roughly equal to \(\log n/\log\log n\) (Lemma 3.15). If \(x=n^{\alpha}\) then we expect elements of \(\mathcal{M}_{x}(n)\) to be products of roughly \(\alpha\Omega(n)\) primes (Lemma 3.18), so we just apply Stirling's formula to bound how many ways we can choose these primes (Theorem 1.3). **Lemma 3.14**.: _For a reduced integer \(2^{a_{1}}3^{a_{2}}\cdots p_{k}^{a_{k}}\),_ \[\sum_{a_{i}\geq 3}(a_{i}-2)=O\!\left(\frac{k^{2/3}}{(\log k)^{1/3}}\right).\] Proof.: By setting \(j\) in Definition 3.10 equal to \(k+1\), we see that \(a_{i}<2\log p_{k+1}/\log p_{i}\) for any \(i\geq 2\), and that \(a_{1}<3+2\log p_{k+1}/\log 2\). In particular, if \(a_{i}\geq 3\) then \(p_{i}<p_{k+1}^{2/3}\). Let \(x=p_{k+1}^{2/3}\). Our established inequalities followed by partial summation give \[\sum_{a_{i}\geq 3}(a_{i}-2)<3+2\!\sum_{p_{i}<x}\!\left(\frac{\log p_{k+1}}{\log p_{i}}-1\right)=3+2\pi(x)\!\left(\frac{\log p_{k+1}}{\log x}-1\right)+\int_{2}^{x}\!\frac{\pi(t)\log p_{k+1}}{t(\log t)^{2}}dt=O(\pi(x)).\] Replacing \(x\) with \(p_{k+1}^{2/3}\) and applying the prime number theorem up to a constant multiple completes the proof. A small deficiency in our reducing functions from Section 3.1 is that they do nothing to bound the index at which prime exponents of a reduced number must switch from \(2\) to \(1\). In fact, reduced numbers can be perfect squares. This is why the previous lemma can only bound sums of exponents that are at least \(3\) rather than at least \(2\), and thus why the proof of the next lemma must consider products of two primorials instead of a single primorial. **Lemma 3.15**.: _Let \(\Omega(n)\) denote the number of prime factors of \(n\), counted with multiplicity. For a reduced integer \(n\),_ \[\Omega(n)=\frac{\log n}{\log\log n}+O\!\left(\frac{\log n}{(\log\log n)^{2}}\right).\] Proof.: Suppose \(n\) is reduced, and let \(m\) be the largest factor of \(n\) that is cube-free. So \(m=p_{k}\#p_{j}\#\) for some \(j\leq k\), where \(p_{j}\#\) can be deleted if \(m\) happens to be a primorial. We have two initial claims: \[\log n>(k+j)\log(k\log k)-3k\] and (the crude bound) \[\log\log n<2\log(k\log k),\] both when \(n\) and thus \(k\) are large. To prove each of them, we will use standard bounds on Chebyshev's theta function, \[k(\log(k\log k)-1)<\vartheta(p_{k})<k\log(k\log k)\] (and similarly for \(\vartheta(p_{j})\) if \(j\) is not bounded by some absolute constant) [10]. First, we have \[\log n\geq\vartheta(p_{k})+\vartheta(p_{j})>k(\log(k\log k)-1)+j(\log(j\log j)-1)=(k+j)\log(k\log k)-(k+j)+j\log\!\left(\frac{j\log j}{k\log k}\right). \tag{11}\] The smaller terms in the final expression are bounded multiples of \(k\): \[k+j\leq 2k,\quad\text{ and }\quad-j\log\!\left(\frac{j\log j}{k\log k}\right)<\frac{k}{e}\left(1+\frac{1}{\log j}\right)<k. \tag{12}\] Combining (11) and (12) shows that \(\log n>(k+j)\log(k\log k)-3k\) as desired.
For the second claim, we have \(\Omega(n/m)<k^{2/3}\) by Lemma 3.14, so \[\log\log n=\log(\log(n/m)+\log m)\leq\log(k^{2/3}\log p_{k}+\vartheta(p_{k})+\vartheta(p_{j}))<\log(3\vartheta(p_{k}))<2\log(k\log k). \tag{13}\] Now we can combine our two initial claims as follows: \[\frac{\Omega(n)\log\log n}{\log n}<\frac{(k^{2/3}+k+j)\log\log n}{\log n}<\frac{(k^{2/3}+k+j)\log((k+j)\log(k\log k)-3k)}{(k+j)\log(k\log k)-3k}<\frac{(k+j)\log(k\log k)+2k}{(k+j)\log(k\log k)-3k}=1+O\!\left(\frac{1}{\log(k\log k)}\right)=1+O\!\left(\frac{1}{\log\log n}\right).\] Scaling both ends of the inequality above by \(\log n/\log\log n\) completes the proof. The notation below and the lemmas that follow it are purely combinatorial. We phrase them in the language of divisors for convenience. **Notation 3.16**.: For \(n,k\in\mathbb{Z}\) with \(n\geq 1\), let \(C_{k}(n)=|\{d\in\mathcal{D}(n):\Omega(d)=k\}|\). So \(C_{k}(n)\) counts the \(k\)-element multisets of the \(\Omega(n)\)-element multiset consisting of the prime factors of \(n\) with multiplicity. In particular, if \(n\) is square-free then \(C_{k}(n)\) is just a binomial coefficient. **Lemma 3.17**.: _For any \(n\in\mathbb{N}\), if \(k\leq\Omega(n)/2\) then \(C_{k-1}(n)\leq C_{k}(n)\). If \(k\geq\Omega(n)/2\) then \(C_{k}(n)\geq C_{k+1}(n)\)._ Proof.: In [1] it is shown that \(\mathcal{D}(n)\) can be partitioned into "symmetric chains" of the form \(\{d_{1},\ldots,d_{j}\}\), where \(\Omega(d_{1})+\Omega(d_{j})=\Omega(n)\) and \(\Omega(d_{i+1})=\Omega(d_{i})+1\) for all \(i=1,\ldots,j-1\). So the multiset \(\{\Omega(d):d\in\mathcal{D}(n)\}\) is a disjoint union of sequences of consecutive integers, each centered at \(\Omega(n)/2\). **Lemma 3.18**.: _Given \(n\in\mathbb{N}\) and \(x\geq 1\), let \(k\) be an integer that is closest to \(\Omega(n)/2\) in the range_ \[\min\{\Omega(d):d\in\mathcal{M}_{x}(n)\}\leq k\leq\max\{\Omega(d):d\in\mathcal{M}_{x}(n)\}.\] _Then \(|\mathcal{M}_{x}(n)|\leq C_{k}(n)\)._ Proof.: Again we partition \(\mathcal{D}(n)\) into symmetric chains \(\{d_{1},\ldots,d_{j}\}\) as in the proof of Lemma 3.17. Since elements of \(\mathcal{M}_{x}(n)\) cannot divide one another while elements of a particular symmetric chain always divide one another, each symmetric chain contains at most one maximal divisor. This allows us to define an injection from \(\mathcal{M}_{x}(n)\) to \(\{d\in\mathcal{D}(n):\Omega(d)=k\}\), and the latter multiset has cardinality \(C_{k}(n)\). Indeed, to each \(d\in\mathcal{M}_{x}(n)\) we associate the unique divisor \(d^{\prime}\) that belongs to the same symmetric chain as \(d\) and satisfies \(\Omega(d^{\prime})=k\). Such a \(d^{\prime}\) always exists because we chose \(k\) to be at least as close to \(\Omega(n)/2\) as \(\Omega(d)\), and \(\Omega(n)/2\) is the "center" over which symmetric chains are symmetric. We can now prove the asymptotic bound on \(|\mathcal{M}_{x}(n)|\) stated in the introduction. **Theorem 1.3**.: _For any \(\varepsilon>0\), if \(\alpha\in[\varepsilon,1-\varepsilon]\) then_ \[\log|\mathcal{M}_{n^{\alpha}}(n)|=\log\left(\frac{1}{\alpha^{\alpha}(1-\alpha)^{1-\alpha}}\right)\frac{\log n}{\log\log n}+O\left(\frac{\log n}{(\log\log n)^{2}}\right).\] _The implied constant depends only on \(\varepsilon\)._ Proof.: Recall from Theorem 3.13 that an integer \(n\) can be replaced with a reduced integer at most four times its size.
Since the increase from \(\log n/\log\log n\) to \(\log 4n/\log\log 4n\) is absorbed by the error term above, we need only prove this theorem for reduced integers. So let \(n=2^{a_{1}}3^{a_{2}}\cdots p_{k}^{a_{k}}\) be reduced. Suppose first that \(\alpha\geq 1/2\). Let \(d_{0}\) be a divisor of \(n\) such that \(d_{0}\geq n^{\alpha}\) and \(\Omega(d_{0})\) is minimal among all divisors that are at least \(n^{\alpha}\). Note that \(d_{0}\) is composed of the largest primes dividing \(n\), so \(\Omega(d_{0})\leq\alpha\Omega(n)\). This gives \[\Omega(n/d_{0})\geq(1-\alpha)\Omega(n)\geq\varepsilon\Omega(n)\geq\varepsilon(k-1)\] (note that \(a_{1}\) might equal \(0\)). Since \(\varepsilon\) is fixed, if \(n\) is sufficiently large then Lemma 3.14 implies \(n/d_{0}\) must be divisible by more primes than just those whose exponent in the factorization of \(n\) exceeds \(2\). In particular, we see that \(d_{0}\) is not divisible by any perfect cubes. That is, \(d_{0}=p_{k}\#p_{j}\#/(p_{i}\#)^{2}\) for some \(i\leq j\). Since \(\lambda(n/d)d>n^{\alpha}\) for any \(d\in\mathcal{M}_{n^{\alpha}}(n)\), the definition of \(d_{0}\) implies \(\Omega(d)+1\geq\Omega(d_{0})\) for any \(d\in\mathcal{M}_{n^{\alpha}}(n)\). So our goal is to bound \(\Omega(d_{0})-1\) from below. To this end, the exact same argument from inequalities (11) and (12) shows that \[\log n>(k+j)\log(k\log k)-3k\] for large \(n\), and a nearly identical argument shows that \[\log d_{0}\leq(k+j-2i-1)\log(k\log k)+3k\] for large \(n\). These are the first and third inequalities below, while the fourth uses Lemma 3.14: \[\Omega(d_{0})-1=k+j-2i-1>\frac{\log d_{0}-3k}{\log(k\log k)}>\frac{\alpha\log n-3k}{\log(k\log k)}>\alpha(k+j)-\frac{3(1+\alpha)k}{\log(k\log k)}=\alpha(k+j+k^{2/3})\left(1-\frac{3(1+\alpha)k+\alpha k^{2/3}\log(k\log k)}{\alpha(k+j+k^{2/3})\log(k\log k)}\right)>\alpha\Omega(n)\left(1-\frac{10}{\log(k\log k)}\right). \tag{14}\] Note that \(\alpha\geq 1/2\) is used to justify the constant \(10\) in the final error term for large \(k\). Now recall from (13) that \(\log(k\log k)\) can be replaced with \((\log\log n)/2\) above. In particular, if \(\beta\in\mathbb{R}\) is such that \(\beta\Omega(n)\) is the closest integer to \(\Omega(n)/2\) between \(\min\{\Omega(d):d\in\mathcal{M}_{x}(n)\}\) and \(\max\{\Omega(d):d\in\mathcal{M}_{x}(n)\}\), then \[\beta>\alpha\left(1-\frac{20}{\log\log n}\right). \tag{15}\] Lemma 3.18 followed by Stirling's formula tells us \[|\mathcal{M}_{n^{\alpha}}(n)|\leq C_{\beta\Omega(n)}(n)\leq\binom{\Omega(n)}{\beta\Omega(n)}=\Omega(n)^{O(1)}\left(\frac{1}{\beta^{\beta}(1-\beta)^{1-\beta}}\right)^{\Omega(n)}\!\!.\] Now let \(f(x)=(x-1)\log(1-x)-x\log x\), and take logarithms of the inequalities above to get \[\log|\mathcal{M}_{n^{\alpha}}(n)|=O(\log\Omega(n))+f(\beta)\Omega(n)=f(\beta)\left(\frac{\log n}{\log\log n}+O\left(\frac{\log n}{(\log\log n)^{2}}\right)\right)\leq\left(f(\alpha)+\frac{20\alpha|f^{\prime}(\alpha)|}{\log\log n}\right)\left(\frac{\log n}{\log\log n}+O\left(\frac{\log n}{(\log\log n)^{2}}\right)\right)=f(\alpha)\frac{\log n}{\log\log n}+O\left(\frac{\log n}{(\log\log n)^{2}}\right).\] Both the second and last equality above use that \(\alpha\) (and \(\beta\)) are restricted to the interval \([\varepsilon,1-\varepsilon]\). The lone inequality symbol above is justified by (15) and the mean value theorem. We need not repeat these arguments for \(\alpha<1/2\).
Indeed, the only missing piece is an analogous upper bound on \(\Omega(d_{1})\), where \(d_{1}\) is the divisor of \(n\) such that \(d_{1}\leq n^{\alpha}\), and \(\Omega(d_{1})\) is maximal among all divisors not exceeding \(n^{\alpha}\). But this makes \(d_{1}=n/d_{0}\). So by (14), but with \(\alpha\) replaced by \(1-\alpha\), we have \[\Omega(d_{1})=\Omega(n)-\Omega(d_{0})<\Omega(n)\left(1-(1-\alpha)\left(1-\frac{20}{\log\log n}\right)\right)=\alpha\Omega(n)\left(1+O\left(\frac{1}{\log\log n}\right)\right).\] The uses of Stirling's formula and the mean value theorem work again with trivial modification. As a corollary, we get an asymptotic bound on the total number of divisors of \(n\) bounded by \(n^{\alpha}\). **Corollary 3.19**.: _For any \(\varepsilon>0\), if \(\alpha\in[\varepsilon,1/2]\) then_ \[\log\big{|}\{d\in\mathbb{Z}:d\,|\,n,\,d\leq n^{\alpha}\}\big{|}=\log\!\left(\frac{1}{\alpha^{\alpha}(1-\alpha)^{1-\alpha}}\right)\!\frac{\log n}{\log\log n}+O\left(\frac{\log n}{(\log\log n)^{2}}\right).\] _The implied constant depends only on \(\varepsilon\)._ Proof.: Let \(x\in\mathbb{R}\), and suppose \(d\) is a proper divisor of \(n\) in \((x/2,x]\). Since \(\lambda(n/d)\geq 2\), we see that \(d\lambda(n/d)>x\), implying \(d\in\mathcal{M}_{x}(n)\). Therefore to cover the entire set of divisors in the corollary statement, it suffices to union only the sets \(\mathcal{M}_{x}(n)\) for \(x=\lfloor n^{\alpha}\rfloor,\lfloor n^{\alpha}/2\rfloor,\ldots,1\). There are at most \(\lfloor\log_{2}n^{\alpha}\rfloor+1\) such values of \(x\). By Lemmas 3.17 and 3.18, each \(|\mathcal{M}_{x}(n)|\) is bounded by \(C_{\Omega(d_{1})}(n)\), where \(d_{1}\) is as in the previous proof: the divisor of \(n\) such that \(d_{1}\leq n^{\alpha}\), and \(\Omega(d_{1})\) is maximal among all divisors not exceeding \(n^{\alpha}\). We just showed that \[\log C_{\Omega(d_{1})}(n)=\log\left(\frac{1}{\alpha^{\alpha}(1-\alpha)^{1-\alpha}}\right)\frac{\log n}{\log\log n}+O\left(\frac{\log n}{(\log\log n)^{2}}\right),\] and scaling \(C_{\Omega(d_{1})}(n)\) by \(\lfloor\log_{2}n^{\alpha}\rfloor+1\) does not change this. As mentioned in the introduction, when \(\alpha=1/2\) Corollary 3.19 recovers Wigert's theorem that \(\log\tau(n)=(\log 2+o(1))(\log n/\log\log n)\)[20]. ## 4. Proof of Theorem 1.4 The preliminary bound of \(p>10^{532}\) from Corollary 2.5 can now be improved using maximal divisors. We aim to determine more precisely the minimal value of \(p\) needed to guarantee that the first interval in Theorem 3.2 is empty. The second interval in Theorem 3.2 is ignored: as shown in the proof of Corollary 2.5, it is empty for \(p>10^{141}\), which is far below any bound we could hope to obtain for the first interval. Let us give an intuitive outline of how Algorithm 1 works. Recalling Theorem 3.2, the first interval is empty precisely when \(81M_{d}^{4}<8\sqrt{2p}\). To determine when this occurs we need upper bounds on \[M_{d}\coloneqq|\mathcal{M}_{d}(p-1)\cup\mathcal{M}_{d}(p+1)|\] for varying \(d\) and \(p<10^{532}\). There are roughly \(10^{529}\) such primes, so of course we cannot hope to treat them individually. Instead we apply Theorem 3.13, which says we can obtain bounds on \(M_{d}\) by bounding \(|\mathcal{M}_{d}(n)|\) for all reduced \(n\) between \(p\) and \(4p-2\). There are only 16,899 reduced numbers less than \(4\cdot 10^{532}\), which is much more manageable.
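The membership test behind this enumeration is a direct transcription of Definition 3.10, as in the following sketch (ours, not from the paper; the hard-coded prime list is just for small examples).

```python
from math import log

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]  # extend as needed

def is_reduced(exponents):
    """Definition 3.10 for n = 2^a1 * 3^a2 * 5^a3 * ..., given [a1, a2, ...].
    One trailing zero is appended; primes beyond it only weaken (10)."""
    a = list(exponents) + [0]
    ps = PRIMES[:len(a)]
    for i in range(1, len(a)):      # inequality (10) is imposed for i, j != 1
        for j in range(1, len(a)):
            if i != j and (a[i] + 1) // (a[j] + 2) >= log(ps[j]) / log(ps[i]):
                return False
    return all(2 ** a[0] < 8 * ps[j] ** 2 for j in range(1, len(a)) if a[j] == 0)

print([n for n, e in [(1, [0]), (3, [0, 1]), (9, [0, 2]), (15, [0, 1, 1]),
                      (27, [0, 3]), (45, [0, 2, 1])] if is_reduced(e)])
# [1, 3, 9, 15, 45], matching the list of odd reduced numbers below 100
```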
For a reduced number \(n\), Algorithm 1 begins by using Lemma 3.18 to find an upper bound \(C\) on \(2|\mathcal{M}_{x}(n)|\) that applies regardless of \(x\). (The "2" accounts for \(p-1\) and \(p+1\).) But then if \(M_{d}<C\), we realize from the first inequality in Theorem 3.2 that we actually only need a bound on \(M_{d}\) that applies when \(d<81C^{3}/4\). So we use Lemma 3.18 again to compute a potentially smaller \(C\) that need only apply in this reduced range of \(x\). The hope is to reduce \(C\) until the first interval in Theorem 3.2 is empty because \(81C^{3}/4\leq 2\sqrt{2p}/C\).

**Algorithm 1**
```
Input: \(a,b\in\mathbb{N}\) defining the range \((a,b]\) in which primes are tested
Output: \(a,b\) with \(a\) updated so that \(\mathcal{G}_{p}\) is connected if \(a<p\leq b\)
1: for reduced \(n\) from \(a\) to \(4b-2\) do            \(\triangleright\) see Definition 3.10; order doesn't matter
2:     \(k\leftarrow\lfloor\Omega(n)/2\rfloor\)        \(\triangleright\) \(2C_{k}(n)\) bounds \(M_{d}\) from Theorem 3.2
3:     while \(n+2<8(3C_{k}(n))^{8}\) do               \(\triangleright\) Theorem 3.2's first interval not empty...
4:         \(j\leftarrow\max\{\Omega(d):d\,|\,n,\,d<162C_{k}(n)^{3}\}\)
5:         if \(j\geq k\) then                          \(\triangleright\) ...and it never will be
6:             \(a\gets n+1\); break                    \(\triangleright\) connectivity test failed for \(p\leq a\)
7:         \(k\leftarrow j\)
```

We now verify that Algorithm 1 is correct, i.e., that its output \(a\) satisfies the guarantee stated above. Proof.: Suppose \(p\) is a prime for which \(\mathcal{G}_{p}\) is not connected. Assuming \(a<p\leq b\), we must show that \(p\) is at most the output value of \(a\). By Theorem 3.2 there is a divisor, call it \(d_{0}\in\mathcal{D}(p+1)\cup\mathcal{D}(p-1)\), such that \[\frac{2\sqrt{2p}}{M_{d_{0}}}<d_{0}<\frac{81M_{d_{0}}^{3}}{4}, \tag{16}\] where \(M_{d_{0}}=|\mathcal{M}_{d_{0}}(p-1)\cup\mathcal{M}_{d_{0}}(p+1)|\). Let \(n_{\pm}\) be the reduced integers provided by Theorem 3.13 for \(p\pm 1\). According to Theorem 3.13, \[p\pm 1\leq n_{\pm}\leq 4(p\pm 1)-6,\] which in turn gives \(a\leq n_{\pm}\leq 4b-2\) since \(a<p\leq b\). So at some point(s) in Algorithm 1's **for** loop, \(n\) will assume the value of \(n_{-}\) and \(n_{+}\). Assume without loss of generality that \(|\mathcal{M}_{d_{0}}(n_{+})|\geq|\mathcal{M}_{d_{0}}(n_{-})|\). Call \(k\in\mathbb{N}\)_sufficiently large_ if it is at least as close to \(\Omega(n_{+})/2\) as anything between \[\min\{\Omega(d):d\in\mathcal{M}_{d_{0}}(n_{+})\}\] and \[\max\{\Omega(d):d\in\mathcal{M}_{d_{0}}(n_{+})\}.\] Lemmas 3.17 and 3.18 tell us that \(|\mathcal{M}_{d_{0}}(n_{+})|\leq C_{k}(n_{+})\) for such \(k\). Thus \[M_{d_{0}}\leq|\mathcal{M}_{d_{0}}(p-1)|+|\mathcal{M}_{d_{0}}(p+1)|\leq| \mathcal{M}_{d_{0}}(n_{-})|+|\mathcal{M}_{d_{0}}(n_{+})|\leq 2|\mathcal{M}_{d_{0} }(n_{+})|\leq 2C_{k}(n_{+}).\] This combines with (16) to give \[n_{+}+2\leq 4p<(3M_{d_{0}})^{8}/32\leq 8(3C_{k}(n_{+}))^{8}.\] Note that the first inequality uses the upper bound on \(n_{+}\) from Theorem 3.13, which also holds if \(n_{-}\) is used instead. So the **while** loop condition in line 3 is always satisfied if \(k\) is sufficiently large. Now, by induction on the number of **while** loop iterations completed for \(n_{+}\), the value of \(k\) used in line 3 is always sufficiently large. Indeed, the base case holds by line 2. 
And for the induction step, either \(j\) from line 4 is at least \(\lfloor\Omega(n_{+})/2\rfloor\) (in which case the **while** loop terminates by lines 5 and 6, and our proof is complete by line 6), or \(j\) is sufficiently large because \[j = \max\{\Omega(d):d\,|\,n_{+},\,d<162C_{k}(n_{+})^{3}\}\] \[\geq \max\{\Omega(d):d\,|\,n_{+},\,d<81M_{d_{0}}^{3}/4\}\] \[\geq \max\{\Omega(d):d\,|\,n_{+},\,d\leq d_{0}\}\] \[= \max\{\Omega(d):d\,|\,n_{+},\,d\in\mathcal{M}_{d_{0}}(n_{+})\}.\] Thus the **while** loop continues to iterate until the **if** condition in line 5 is met, which happens eventually since \(k\) cannot decrease indefinitely. So by line 6, the output satisfies \(a\geq n_{\pm}+1\geq p\). Finally, we use Algorithm 1 to produce our main result.

**Theorem 1.4**.: \(\mathcal{G}_{p}\) _is connected for all primes \(p>863\#53\#13\#7\#5\#3^{3}2^{5}\approx 3.448\cdot 10^{392}\)._ Proof.: By Corollary 2.5, we need only check connectivity for primes less than \(10^{532}\). When \(a=2\) and \(b=10^{532}\) are input into Algorithm 1, the output is \(a=863\#53\#13\#7\#5\#3^{3}2^{5}+1\). Since this number is not prime, the "+1" has been omitted in the theorem statement. The prime \(p=863\#53\#13\#7\#5\#3^{3}2^{5}-1471\) is the largest for which we do not know whether \(\mathcal{G}_{p}\) is connected.

## 5. Data on Connectivity

Aside from justifying Algorithm 1, Theorem 3.2 also provides a method for verifying connectivity of \(\mathcal{G}_{p}\) for a given prime \(p\). Previously, connectivity of \(\mathcal{G}_{p}\) was proven in [1] for primes less than \(3000\) by computing the adjacency matrix of the graph. Due to the large amount of memory required by this method, it is limited in how large a prime it can handle. Most likely one could not prove connectivity for primes larger than a few thousand using this method. Our algorithm, on the other hand, is specifically tailored to larger primes (and, indeed, is inconclusive for nearly all the primes handled in [1]). In this section, we prove connectivity for many more primes and explore how powerful our method is regarding the size of the primes it can handle. We programmed the two conditions of Theorem 3.2 and performed an exhaustive search for all primes less than \(10^{7}\) satisfying these conditions. We found that Theorem 3.2 proves connectivity for \(p=3,7,101\), and then the next such prime is on the order of \(10^{6}\), given by \[p=1,327,363.\] After finding this first prime with a connected Markoff mod-\(p\) graph that was not handled by [1], we tackled two collections of primes: the first \(10000\) primes greater than \(10^{n}\) and \(10000\) "random" primes between \(10^{n}\) and \(10^{n+1}\) for \(8\leq n\leq 35\). By random primes, we mean that we take \(10000\) numbers between \(10^{n}\) and \(10^{n+1}\) chosen uniformly at random, and then for each number find the first prime greater than it. Beginning at \(n=31\) in Table 1, the value of \(M_{d}\) in Theorem 3.2 can be replaced with \(\tau(p-1)+\tau(p+1)\) (which can be computed quickly for primes up to at least \(10^{90}\)), and there is still no value of \(d\) satisfying either of the inequalities for the \(10{,}000\) random primes we tested between \(10^{n}\) and \(10^{n+1}\). That is, \(10^{31}\) is roughly where the Erdős-Kac theorem takes over: the expected value of \(\tau(p\pm 1)\) is small enough that it becomes extremely rare to need the improvement that comes by considering maximal divisors rather than all divisors. 
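The per-prime test just described is easy to replicate in a weakened form. The sketch below implements the two exclusion criteria of Theorem 3.2 with every \(M_{d}\) replaced by the crude bound \(\tau(p-1)+\tau(p+1)\), the simplification the text notes is already adequate around \(10^{31}\); replacing \(M_{d}\) by this larger value only widens the intervals, so a pass of this weaker test still certifies connectivity. Function names are ours, and `factorint` and `divisors` are standard sympy utilities. All comparisons are squared where needed to stay in exact integer arithmetic.

```python
from math import prod
from sympy import factorint, divisors

def certifies_connectivity(p):
    """True if neither interval of Theorem 3.2 can contain a divisor of p -+ 1,
    using tau(p-1) + tau(p+1) as a crude stand-in for every M_d."""
    fm, fp = factorint(p - 1), factorint(p + 1)
    tau = lambda f: prod(e + 1 for e in f.values())
    M = tau(fm) + tau(fp)
    for n, f in ((p - 1, fm), (p + 1, fp)):
        t, phi = tau(f), n
        for q in f:
            phi = phi // q * (q - 1)   # Euler's totient from the factorization
        for d in divisors(n):
            # first interval: 2*sqrt(2p)/M < d < 81*M^3/4  (left side squared)
            if 8 * p < (d * M) ** 2 and 4 * d < 81 * M ** 3:
                return False
            # second interval: p/(6M) < d < 8*sqrt(p)*n*tau/phi  (right side squared)
            if p < 6 * M * d and (d * phi) ** 2 < 64 * p * (n * t) ** 2:
                return False
    return True
```

A return value of `False` is inconclusive rather than a proof of disconnection, since the crude bound may be far from the true \(M_{d}\).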
Table 1 (columns \(n\), \(q_{1000}(10^{n})\), \(q_{10000}(10^{n})\), \(r_{10000}(10^{n})\); the tabulated entries are missing from this copy). For each value of \(8\leq n\leq 35\), we calculate the two quantities \(q_{m}(10^{n})\) and \(r_{m}(10^{n})\): \(q_{m}(10^{n})\) denotes the percentage of the first \(m\) primes after \(10^{n}\) for which Theorem 3.2 guarantees connectivity of \(\mathcal{G}_{p}\), and \(r_{m}(10^{n})\) denotes the percentage of \(m\) random primes between \(10^{n}\) and \(10^{n+1}\) for which Theorem 3.2 guarantees connectivity of \(\mathcal{G}_{p}\).

**Example of Inconclusiveness**: Theorem 3.2 guarantees connectedness of the Markoff mod \(p\) graph given that no divisor \(d\) of \(p\pm 1\) satisfies \(\frac{2\sqrt{2p}}{M_{d}}<d<\frac{81M_{d}^{3}}{4}\) or \(\frac{p}{6M_{d}}<d<\frac{8\sqrt{p}(p\pm 1)\tau(p\pm 1)}{\phi(p\pm 1)}\). From Table 1, we see that once we are on the order of \(10^{21}\), Theorem 3.2 captures almost all primes \(p\). However, there are still some exceptional cases where this theorem is inconclusive. For the first 10,000 primes greater than \(10^{21}\), there is a single prime \(p^{\prime}\) that does not pass these two criteria, \(p^{\prime}=1,000,000,000,000,000,124,399\). We have \[p^{\prime}-1 =2\cdot 7\cdot 13\cdot 29^{2}\cdot 43\cdot 705,737\cdot 215,288,719\] \[p^{\prime}+1 =2^{4}\cdot 3\cdot 5^{2}\cdot 11^{2}\cdot 17\cdot 19\cdot 23\cdot 97\cdot 757\cdot 1,453\cdot 8,689.\] The number of divisors of \(p^{\prime}\pm 1\) is \(\tau(p^{\prime}-1)+\tau(p^{\prime}+1)-2=192+11,520-2=11,710\), and the number of these divisors which fail either bound of Theorem 3.2 is \(989\). The largest value that \(M_{d}=|\mathcal{M}_{d}(p^{\prime}-1)\cup\mathcal{M}_{d}(p^{\prime}+1)|\) attains as \(d\) varies over the \(989\) divisors of \(p^{\prime}\pm 1\) that fail one of the bounds in Theorem 3.2 is \(438\). An example of a divisor \(d\) with \(M_{d}=438\) is \(d=1,664,125,969\). For this divisor we have \[\frac{2\sqrt{2p^{\prime}}}{438}\approx 2.042\times 10^{8}<d\approx 1.664\times 1 0^{9}<1.702\times 10^{9}\approx\frac{81\cdot 438^{3}}{4}.\] Note that \(\frac{p^{\prime}}{6M_{d}}\approx 3.80518\times 10^{17}\), \(\frac{8\sqrt{p^{\prime}}(p^{\prime}+1)\tau(p^{\prime}+1)}{\phi(p^{\prime}+1)}\approx 1.42 7\times 10^{16}\), and \(\frac{8\sqrt{p^{\prime}}(p^{\prime}-1)\tau(p^{\prime}-1)}{\phi(p^{\prime}-1)}\approx 1.302\times 10^{14}\), so there are no divisors that can ever satisfy the second bound of Theorem 3.2. While examples like this become exceedingly rare, they persist throughout the range in which we are able to execute Theorem 3.2's test. Indeed, we have verified that our test fails for every prime \(p<10^{100}\) such that \(p\pm 1\) is a reduced number as defined in Definition 3.10. There are 591 such primes, and there are certainly many others for which our test also fails, just not enough to be picked up by our random samples of 10,000.

## 6. Appendix

In this section, we make more precise some of the implied constants in the proof of the following proposition in [11]. The point of this is to determine exactly how large an order a triple must have in order to conclude that it is connected to \(\mathcal{C}_{p}\) as in the End Game in [11]. 
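Before doing so, note that the arithmetic in the inconclusive example above is easy to check directly. The following short script reconstructs \(p^{\prime}\) from the two listed factorizations and verifies the quantities quoted there (the value \(M_{d}=438\) itself is taken from the text, since computing it requires the maximal-divisor machinery):

```python
from math import isqrt, prod

fm = {2: 1, 7: 1, 13: 1, 29: 2, 43: 1, 705737: 1, 215288719: 1}        # p' - 1
fp = {2: 4, 3: 1, 5: 2, 11: 2, 17: 1, 19: 1, 23: 1, 97: 1,
      757: 1, 1453: 1, 8689: 1}                                         # p' + 1
n_minus = prod(q**e for q, e in fm.items())
n_plus = prod(q**e for q, e in fp.items())
assert n_plus - n_minus == 2          # they really do straddle a single integer
p = n_minus + 1
print(p)                              # 1000000000000000124399

tau = lambda f: prod(e + 1 for e in f.values())
print(tau(fm), tau(fp))               # 192 11520

d, M = 1664125969, 438                # the problematic divisor and its M_d
assert n_plus % d == 0                # d = 11 * 23 * 757 * 8689 divides p' + 1
print(2 * isqrt(2 * p) / M)           # ~2.042e8, left end of the first interval
print(81 * M**3 / 4)                  # ~1.702e9, right end; d lies in between
```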
**Proposition 6.1** (Explicit version of Proposition 7 in [11]).: _For \(d\) dividing \(p-1\) or \(p+1\), a Markoff triple of order \(d\) belongs to \(\mathcal{C}_{p}\) provided_ \[d>\frac{8\sqrt{p}(p\pm 1)\tau(p\pm 1)}{\phi(p\pm 1)} \tag{17}\] _(where the \(\pm\) is determined by whether \(d\) divides \(p-1\) or \(p+1\))._ Proof.: Without loss of generality, let \(d\) be the first coordinate order of some Markoff triple, and recall notation from (3). In Proposition 7 of [11], Bourgain, Gamburd, and Sarnak show that if \(d\) is sufficiently large (at least \(p^{1/2+\delta}\) for some \(\delta>0\) depending on \(p\)), then either the second or third coordinate in the orbit \[\left(r+r^{-1},\,\frac{(r+r^{-1})(r^{2n}s+r^{-2n}s^{-1})}{r-r^{-1}},\,\frac{(r +r^{-1})(r^{2n\pm 1}s+r^{-(2n\pm 1)}s^{-1})}{r-r^{-1}}\right)\] has order \(p-1\) for some \(n\). We will run through their argument and show that (17) is sufficient for the relevant inequalities to hold. Since every triple of order \(p-1\) is in \(\mathcal{C}_{p}\) (Proposition 6 in [11]), this will complete the proof. First suppose \(d\,|\,p-1\). We seek a solution \((x,y)\in\mathbb{F}_{p}^{*}\times\mathbb{F}_{p}^{*}\) to \[\frac{(r+r^{-1})(sx+s^{-1}x^{-1})}{r-r^{-1}}=y+y^{-1} \tag{18}\] such that \(x\) belongs to the cyclic subgroup of order \(d\) (generated by \(r\) in the notation above), and \(y\) is a primitive root modulo \(p\). We will show such a solution exists with a counting argument. Let \(d^{\prime}=(p-1)/d\), and given some \(e\) dividing \(p-1\), let \(e^{\prime}=(p-1)/e\). Consider the equation \[\frac{(r+r^{-1})(sx^{d^{\prime}}+s^{-1}x^{-d^{\prime}})}{r-r^{-1}}=y^{e^{\prime }}+y^{-e^{\prime}}. \tag{19}\] Assume for the moment that \(d^{\prime}\geq e^{\prime}\) so that the projective completion of the affine curve defined above is given by \[\frac{s(r+r^{-1})}{r-r^{-1}}X^{2d^{\prime}}Y^{e^{\prime}}+\frac{r+r^{-1}}{s(r- r^{-1})}Y^{e^{\prime}}Z^{2d^{\prime}}-X^{d^{\prime}}Y^{2e^{\prime}}Z^{d^{ \prime}-e^{\prime}}-X^{d^{\prime}}Z^{d^{\prime}+e^{\prime}}=0.\] Call this curve \(C\). Bourgain-Gamburd-Sarnak show that \(C\) is irreducible over \(\overline{\mathbb{F}}_{p}\). Furthermore, its geometric genus is bounded from above by \[\binom{\deg C-1}{2}-\sum_{P\in C}\binom{m_{P}}{2},\] where \(m_{P}\) denotes the multiplicity of the point \(P\) in \(C\). (See Corollary 1 in Section 8.3 of [10], for example.) Observe that \(P=[0:1:0]\) has multiplicity \(m_{P}=2d^{\prime}-e^{\prime}\), so the genus is at most \[\binom{2d^{\prime}+e^{\prime}-1}{2}-\binom{2d^{\prime}-e^{\prime}}{2}=4d^{ \prime}e^{\prime}-4d^{\prime}-2e^{\prime}+2.\] Thus we can apply the Weil bound to conclude that the number of points on \(C\) over \(\mathbb{F}_{p}\) differs from \(p+1\) by at most \(2(4d^{\prime}e^{\prime}-4d^{\prime}-2e^{\prime}+2)\sqrt{p}\). Now let us exclude the points \([1:0:0]\), \([0:1:0]\), and \([0:0:1]\), which occur on \(C\) with multiplicities \(e^{\prime}\), \(2d^{\prime}-e^{\prime}\), and \(e^{\prime}\), respectively. Then, via the map \([X:Y:Z]\mapsto((X/Z)^{d^{\prime}},(Y/Z)^{e^{\prime}})\), there is an \(e^{\prime}d^{\prime}\)-to-1 correspondence between the remaining points on \(C\) and solutions to (19) in which \(x\) belongs to the subgroup of order \(d\) and \(y\) to the subgroup of order \(e\) in \(\mathbb{F}_{p}^{*}\). 
In particular, if \(f(e)\) denotes the number of such solutions \((x,y)\), then we have shown \[|d^{\prime}e^{\prime}f(e)+(e^{\prime}+(2d^{\prime}-e^{\prime})+e^{\prime})-(p +1)|<2(4d^{\prime}e^{\prime}-4d^{\prime}-2e^{\prime}+2)\sqrt{p}.\] This simplifies to the following slightly weaker form: \[\left|f(e)-\frac{p+1}{d^{\prime}e^{\prime}}\right|<8\sqrt{p}.\] The exact same bound can be obtained in the case \(e^{\prime}>d^{\prime}\) by swapping \(d^{\prime}\) and \(e^{\prime}\) throughout the argument and using the singular point \([1:0:0]\) instead of \([0:1:0]\) to bound the genus. Let \(\mu\) be the Möbius function and let \(\phi\) be Euler's totient function. By inclusion-exclusion, the number of solutions to (19) in which \(x\) belongs to the cyclic group of order \(d\) and \(y\) is a primitive root is \[\sum_{e\,|\,p-1}\mu\bigg{(}\frac{p-1}{e}\bigg{)}\,f(e) \geq \sum_{e^{\prime}\,|\,p-1}\bigg{(}\mu(e^{\prime})\frac{p+1}{d^{ \prime}e^{\prime}}-8\sqrt{p}\bigg{)}\] \[= \frac{p+1}{d^{\prime}}\sum_{e^{\prime}\,|\,p-1}\frac{\mu(e^{ \prime})}{e^{\prime}}-8\sqrt{p}\,\tau(p-1)\] \[= \frac{(p+1)\phi(p-1)}{d^{\prime}(p-1)}-8\sqrt{p}\,\tau(p-1)\] \[> \frac{d\phi(p-1)}{p-1}-8\sqrt{p}\,\tau(p-1).\] The last expression above is positive precisely when \(d\) satisfies (17). A very similar argument works when \(d\,|\,p+1\). But now \(r\not\in\mathbb{F}_{p}\), so a modification is needed in order to reapply the Weil bound over \(\mathbb{F}_{p}\). Let \(d^{\prime}=(p+1)/d\). Instead of (18), we now count points on the curve \[\sum_{i=0}^{\lfloor d^{\prime}/2\rfloor}\binom{d^{\prime}}{2i}x^{d^{\prime}-2i}(1-x^{2} )^{i}=y^{e^{\prime}}+y^{-e^{\prime}},\] where \(e^{\prime}\) is still some divisor of \(p-1\) (see equation (42) in [1]). The same singular points, \([0:1:0]\) when \(d^{\prime}\geq e^{\prime}\) and \([1:0:0]\) when \(e^{\prime}\geq d^{\prime}\), can be used to bound the genus of the curve above, and in fact we get an even smaller bound of \(2d^{\prime}e^{\prime}\). The remainder of the proof is unchanged.
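The threshold (17) is also straightforward to evaluate for a given prime. Below is a short sketch of that computation, with function names of our own choosing; the commented values are the ones quoted for the inconclusive prime \(p^{\prime}\) in Section 5.

```python
from math import prod, sqrt
from sympy import factorint

def order_threshold(p, sign):
    """Right-hand side of (17) for d dividing p + sign (sign = +1 or -1):
    a Markoff triple of order d > this bound belongs to C_p by Prop. 6.1."""
    n = p + sign
    f = factorint(n)
    tau = prod(e + 1 for e in f.values())
    phi = n
    for q in f:
        phi = phi // q * (q - 1)
    return 8 * sqrt(p) * n * tau / phi   # float is fine at this scale

# For p' = 10**21 + 124399 this gives roughly 1.427e16 with sign = +1
# and roughly 1.302e14 with sign = -1, matching the values in Section 5.
```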
2308.10005
Partition-and-Debias: Agnostic Biases Mitigation via A Mixture of Biases-Specific Experts
Bias mitigation in image classification has been widely researched, and existing methods have yielded notable results. However, most of these methods implicitly assume that a given image contains only one type of known or unknown bias, failing to consider the complexities of real-world biases. We introduce a more challenging scenario, agnostic biases mitigation, aiming at bias removal even when the type of bias and the number of bias types are unknown in the datasets. To address this difficult task, we present the Partition-and-Debias (PnD) method that uses a mixture of biases-specific experts to implicitly divide the bias space into multiple subspaces and a gating module to find a consensus among experts to achieve debiased classification. Experiments on both public and constructed benchmarks demonstrated the efficacy of the PnD. Code is available at: https://github.com/Jiaxuan-Li/PnD.
Jiaxuan Li, Duc Minh Vo, Hideki Nakayama
2023-08-19T13:11:40Z
http://arxiv.org/abs/2308.10005v1
# Partition-and-Debias: Agnostic Biases Mitigation via A Mixture of Biases-Specific Experts

###### Abstract

Bias mitigation in image classification has been widely researched, and existing methods have yielded notable results. However, most of these methods implicitly assume that a given image contains only one type of known or unknown bias, failing to consider the complexities of real-world biases. We introduce a more challenging scenario, **agnostic biases mitigation**, aiming at bias removal even when the type of bias and the number of bias types are unknown in the datasets. To address this difficult task, we present the Partition-and-Debias (PnD) method that uses a mixture of biases-specific experts to implicitly divide the bias space into multiple subspaces and a gating module to find a consensus among experts to achieve debiased classification. Experiments on both public and constructed benchmarks demonstrated the efficacy of the PnD. Code is available at: [https://github.com/Jiaxuan-Li/PnD](https://github.com/Jiaxuan-Li/PnD).

## 1 Introduction

One of the reasons for poor generalization in image classification is the presence of biased features in training data [21, 33, 15], which distracts the model from learning the target features associated with the classification objects. Thus, accurately capturing the target features while reducing the influence of these biases1 has become a critical issue, resulting in increased bias mitigation research [28]. Footnote 1: Similar to [27], “_bias_” refers to the attribute spuriously correlated with the target attribute. Unlike most previous studies that implicitly assumed that only one type of known/unknown bias exists in a given image, we investigate the coexistence of multiple unknown biases in an image. For instance, most _young_ samples in CelebA [19] are associated with the _female_, _attractive_, and _lipstick_ categories (Fig. 1(a)), whereas the _old_ samples have corresponding yet reversed ones. Consequently, for the (_young/old_) classification in CelebA, these three biases, including gender, attractiveness, and wearing lipstick, degrade the prediction performance. Overall, we discovered that 43.75% of the _young_ samples had three biases, which means they were all annotated with _female_, _attractive_, and _lipstick_, whereas 58.28% of the samples had at least two of them (Fig. 1(b)). The _old_ samples show similar patterns. These observations imply that multiple biases are inevitable in a given image. At the same time, we cannot determine all types of bias that may appear in the image. Dealing with multiple unknown biases is thus urgent, and the problem cannot be fully solved using prior methods (Fig. 1(c)) because (i) they fail to capture the biases of different types and (ii) removing a single bias does not always eliminate the effects of all biases. Therefore, we introduce a more challenging scenario, **agnostic biases**, in which the unknown biases include not only the type of bias, but also the number of types. Here, we use "agnostic biases" to bring attention to biases in real-world scenarios, where the bias type and number of types are unknown. We do not use "unknown biases" proposed in [10], because it ignores multiple unknown biases. Our scenario removes the existing constraints on bias assumptions, better reflecting real-world applications. We hypothesized and empirically found that the features of agnostic biases scatter at different depths of the network depending on the biases' nature. 
Even if multiple biases are entangled at the same depth, they can be regarded as one type of bias. Thus, agnostic biases can be grouped by their feature levels and processed individually at different network depths. As a result, we propose a Partition-and-Debias (PnD) approach based on the divide-and-conquer strategy to capture and remove agnostic biases at different levels for debiased classification. Thus, the entire agnostic bias scenario space is divided into multiple subscenario spaces that can be handled by multiple biases-specific experts. The final prediction is obtained based on the consensus of all the experts using a gating module. Our contributions are: * We point out the existence of multiple biases in the real world, proposing a new scenario with agnostic biases that fills in the gaps of previous works' bias assumptions. * We present a Partition-and-Debias approach to solve the new scenario via a mixture of biases-specific experts. * On both public and our constructed challenging bias datasets, experimental results show that the proposed method achieves cutting-edge performance.

## 2 Related Work

### Bias mitigation

Bias mitigation learns the target features without being influenced by spurious correlations when training data is biased. **Known bias mitigation** assumes the annotation of bias or the type of bias is accessible. The previous methods can be classified as supervised or unsupervised. The former includes reweighting samples with higher uncertainty [16], regularization [24, 27], data augmentation [22], and supervised bias estimation [2, 12, 1]. The latter often uses mixup [8], a two-branch network [20, 15], prioritizing simple target features while ignoring complex biased features [26], and MaskTune [3]. These methods make strong assumptions about the type of bias. For instance, bias can be easily learned [20, 15, 8], target features are simpler than bias features [26], and the bias is editable [3]. Furthermore, most studies considered only a single type of bias appearing in an image. Although [26] used a multiple-bias dataset in their experiments, they still adhered to the limitations of the assumptions on the type of bias. Our method belongs to the unsupervised approach, yet we relax the strong bias assumptions and use a partition-and-debias strategy. **Unknown bias mitigation** does not require a pre-definition for bias in the dataset. Jeon _et al_. [10] proposed obtaining unbiased target features from the shallow layers of the classification network. However, their definition of unknown bias overlooks the possibility of multiple unknown biases in the data. For real-world datasets, spurious correlations are complex and cannot be defined simply as a result of a specific attribute. By contrast, our agnostic biases assumption emphasizes that both the type and number of bias types are unknown. Also, Li _et al_. [18] proposed an Equal Opportunity Violation loss to discover the most salient bias from unknown biases and then mitigate it by reweighting. Although they considered two biases in their experiments, their method theoretically could only eliminate one dominating bias, which was binary rather than multi-class. In contrast to them, our model overcomes these limitations.

### Mixture of experts

The mixture of experts (MoE) technique was originally proposed by Jacobs _et al_. [9] to mitigate the effects of different types of samples on the training data. It divides data into different domains using a gating network and assigns multiple experts to handle each domain. 
Recently, Zuo _et al_. [34] used MoE in language models by breaking a pre-trained model into multiple experts to speed up the inference process. Zhang _et al_. [31] combined MoE with fine-grained categorization by training each subsequent expert using prior information obtained from the previous expert. Unlike these methods, we employ the MoE strategy for debiasing and specifically design it to remove agnostic biases by inserting experts at different depths of the network.

## 3 Features of Different Levels Matters

**Our hypothesis.** When training a neural network with target categories, agnostic biases manifest as scattered features at different network depths. **Experimental setup.** We use the Biased MNIST [26] dataset (see Sec. 5.1) in this exploratory experiment. The biases arise from the co-occurrence of each digit category with specific categories from all other attributes, such as digit color and digit position. Unlike an image with a single bias, one image in this dataset may have up to seven biases. The bias ratio, which denotes the probability of co-occurrence, is 0.95. We selected ResNet-18 [7], consisting of four residual blocks, as the classification network. First, we trained a classification network from scratch using the digit target categories _0 - 9_ and obtained an average classification accuracy of 33.73% for all categories. The learned features obtained from the trained model can then be visualized to investigate how bias features are distributed across the network when training the network with target categories. However, since many attributes such as digit color, digit position, and digit scale are interdependent, their features overlap in feature maps, making it difficult to distinguish their differences by simply looking at them. We used the classification accuracy for each attribute separately in each block to determine their distributions. Specifically, we froze the trained network weights and added a binary classifier after each trained block. We trained the additional four classifiers to obtain the corresponding classification accuracies for all eight attributes. **Features of biases with different levels are distributed at different depths of the network.** We obtained \(4\times 8\) accuracy results after retraining, as shown in Fig. 2. (i) From the perspective of different attributes, the classification accuracy of all attributes except texture color was notably higher than that of the digit in the last block (block 4), which is usually used to determine the final prediction. Furthermore, the other blocks followed the same pattern as the last block. This phenomenon implies that the features learned during target attribute classification (here, digits) are more easily separable by the bias attributes than by the target attribute itself. We concluded that many spuriously correlated features exist at all depths of the network, degrading the target attribute predictions. (ii) From the perspective of different blocks, although most bias attributes can be classified in each layer, the classification performance for some bias attributes varies depending on the block. Texture-relevant attribute classifiers performed well in the first block, while those with position- and scale-relevant attributes performed better in the last block; the remaining attributes achieved the best results in the third block. 
These findings are consistent with our intuition regarding the distribution of image features, which holds that texture features are more abundant in the shallower parts of the network and that spatial and scale information are more prevalent in the deeper parts of the network. We conclude that each bias attribute feature exists at all network depths, yet these features are clustered at different network depths.

## 4 Proposed Partition-and-Debias

The above experiment suggests that an ideal strategy for resolving our problem should be able to remove as many biases from the network depths as possible. Thus, we adopt a partition-and-debias strategy in our method, namely PnD, which divides the entire agnostic bias scenario space into different subscenario spaces across the classification network depths. Multiple biases of the same level are allowed in each subscenario space because they can be viewed naturally as a single type of bias. This simple concept overcomes the limitations of previous studies and allows the model to simultaneously capture and remove multiple biases at one time. Our PnD consists of a debiased encoder \(\mathcal{D}\), bias encoder \(\mathcal{B}\), biases-specific experts \(\mathcal{E}\), and gating module (Fig. 3).

Figure 2: Classification accuracy scores (%) for the 8 attributes when retraining on features learned from target class classification at blocks of different depths in ResNet-18. We find that the classification performance for different attributes trends differently across the depths.

Both \(\mathcal{D}\) and \(\mathcal{B}\) contain several blocks of convolution layers to generate the target and bias features from an image (Sec. 4.1). \(\mathcal{E}\) are responsible for purifying the target and bias features under agnostic bias scenarios (Sec. 4.2). Finally, a gating module adaptively gathers all the expert predictions before making a final decision (Sec. 4.3).

### Target and bias features extraction

Given an image \(\mathbf{x}\) with the target label \(\mathbf{y}\) (vector-like is used to represent class \(y\) for simplicity), we use a debiased encoder \(\mathcal{D}=\{D^{(i)}\}_{i=1}^{M}\) and bias encoder \(\mathcal{B}=\{B^{(i)}\}_{i=1}^{M}\) to extract target and bias features separately. Note that \(D^{(i)}\) and \(B^{(i)}\) are residual blocks in ResNet [7] (although any network architecture could be used), and \(M=4\). The image is fed into \(\mathcal{D}\) to obtain the target features \(\mathbf{z}_{\mathrm{d}}^{(i)}\) in the \(i^{\mathrm{th}}\) block. Simultaneously, we obtain the bias features \(\mathbf{z}_{\mathrm{b}}^{(i)}\) for each block in \(\mathcal{B}\). The size of \(\mathbf{z}_{\mathrm{d}}^{(i)}\) is identical to that of \(\mathbf{z}_{\mathrm{b}}^{(i)}\). We omit the feature size when referencing the extracted features to simplify notation. Next, the biases-specific experts process these features.

### Biases-specific experts

The biases-specific experts \(\mathcal{E}\) consist of four experts \(E^{(i)}\). Each of them contains two classifiers: a debiased classifier \(C_{\mathrm{d}}^{(i)}\) and bias classifier \(C_{\mathrm{b}}^{(i)}\). The inputs of each \(E^{(i)}\) are created from the features \(\mathbf{z}_{\mathrm{d}}^{(i)}\) and \(\mathbf{z}_{\mathrm{b}}^{(i)}\) obtained from the corresponding \(D^{(i)}\) and \(B^{(i)}\). We combined \(\mathbf{z}_{\mathrm{d}}^{(i)}\) and \(\mathbf{z}_{\mathrm{b}}^{(i)}\) features in two ways, creating the original and counterfactual features used in our two-stage training (initial and counterfactual training). 
In both training stages, debiased classifier \(C_{\mathrm{d}}^{(i)}\) and bias classifier \(C_{\mathrm{b}}^{(i)}\) are used for debiased classification and bias detection, respectively.

#### 4.2.1 Initial training

We combine the features \(\mathbf{z}_{\mathrm{d}}^{(i)}\) and \(\mathbf{z}_{\mathrm{b}}^{(i)}\) to create the original features \(\mathbf{z}^{(i)}=[\mathbf{z}_{\mathrm{d}}^{(i)};\mathbf{z}_{\mathrm{b}}^{(i) }]\) (\([\cdot\,;\cdot]\) denotes concatenation) (Fig. 3b, left). The \(i^{th}\) expert \(E^{(i)}\) takes \(\mathbf{z}^{(i)}\) as the input, and outputs a bias detection result \(\mathbf{\hat{y}}_{\mathrm{b}}^{(i)}\) and a debiased classification result \(\mathbf{\hat{y}}_{\mathrm{d}}^{(i)}\) made by \(C_{\mathrm{b}}^{(i)}\) and \(C_{\mathrm{d}}^{(i)}\), respectively.

Figure 3: (a) Schematic of our proposed PnD, including a debiased encoder, a bias encoder, multiple biases-specific experts, and a gating module. The debiased encoder and bias encoder extract target and bias features from input images, which are fed into the biases-specific expert after each block for debiased classification and bias detection. The gating module adaptively mixes all the debiased classification results for the final output. (b) Biases-specific expert. It processes target features and bias features by combining them in the order of the original input batch for initial training, and recombining them to generate positive and negative samples during counterfactual training.

**Bias detection.** This encourages the bias encoder to learn the bias features. Because the bias features are easier to learn during training with target categories, we can concentrate our bias encoder on the more easily learned features by employing GCE loss [32] as discussed in [20, 15, 13], although the bias information in the dataset is unavailable: \[\mathcal{L}_{\text{bias}}=\sum_{i=1}^{M}\text{GCE}\left(\mathbf{\hat{y}}_{ \text{b}}^{(i)},\mathbf{y}\right). \tag{1}\] **Debiased classification.** To optimize the debiased and bias encoders separately, as opposed to bias detection, debiased classification should prioritize unbiased samples, which do not contain bias features and are difficult to fit using the bias encoder. Consequently, when bias detection is used, these samples are misclassified or classified with lower confidence, whereas debiased classification classifies them correctly or with higher confidence. Considering this, we follow [20] and add a weight \(\text{w}^{(i)}\) to each sample in debiased classification. \(\text{w}^{(i)}\) is defined as: \(\text{w}^{(i)}=\frac{\text{CE}\left(\mathbf{\hat{y}}_{\text{b}}^{(i)}, \mathbf{y}\right)}{\text{CE}\left(\mathbf{\hat{y}}_{\text{d}}^{(i)},\mathbf{y }\right)+\text{CE}\left(\mathbf{\hat{y}}_{\text{b}}^{(i)},\mathbf{y}\right)}\), where we use \(\text{CE}(\mathbf{\hat{y}}_{\text{d}}^{(i)},\mathbf{y})\) and \(\text{CE}(\mathbf{\hat{y}}_{\text{b}}^{(i)},\mathbf{y})\) to measure the relative difficulty between debiased classification and bias detection, and \(\text{CE}(\cdot,\cdot)\) denotes the cross-entropy loss function. The loss for debiased classification is expressed as: \[\mathcal{L}_{\text{debias}}=\sum_{i=1}^{M}\text{w}^{(i)}\times\text{CE}\left( \mathbf{\hat{y}}_{\text{d}}^{(i)},\mathbf{y}\right). \tag{2}\] Combining Eq. 1 and Eq. 
2, we obtain the total classification loss \(\mathcal{L}_{\text{cls}}\) for debiased classification and bias detection: \(\mathcal{L}_{\text{cls}}=\alpha\times\mathcal{L}_{\text{debias}}+\mathcal{L}_ {\text{bias}}\), where \(\alpha\) is a hyperparameter that balances \(\mathcal{L}_{\text{debias}}\) and \(\mathcal{L}_{\text{bias}}\); \(\mathcal{L}_{\text{debias}}\) forces the debiased classification to focus more on unbiased samples with weight \(\text{w}^{(i)}\) added to the CE loss, whereas \(\mathcal{L}_{\text{bias}}\) focuses on bias features owing to the GCE loss favoring more easily learned features. **Diversity penalty for biases-specific experts.** To achieve diversified biases-specific experts, we introduce a Kullback-Leibler (KL) divergence-based loss function [30] to penalize the bias detection of each expert. The diversity loss for experts can be formulated as: \[\mathcal{L}_{\text{div}}=\sum_{i=2}^{M}\text{exp}\left(-\text{KL}\left( \mathbf{\hat{y}}_{\text{b}}^{(i)},\mathbf{\hat{y}}_{\text{b}}^{(i-1)}\right) \right). \tag{3}\] Thus, using Eq. 3, we can regularize the diversity of the bias detection by each expert, allowing them to capture as many biases as possible. In this way, each expert can focus on features of different levels, and thus different biases.

#### 4.2.2 Counterfactual training

We obtain relatively accurate bias and target features after warming the model during the initial training. Counterfactual training is used to further separate target features from bias features. This approach is based on two counterfactual procedures. (i) When we change the sample's target features while keeping its bias features unchanged, the model's decision should be changed; (ii) When we keep its target features unchanged while changing the sample's bias features, the model should make the same decision for the changed features as for the original features. To leverage these two procedures, we first synthesize counterfactual features before conducting counterfactual inference using contrastive loss. **Synthesizing counterfactual features.** We randomly sample a mini-batch of \(K\) samples to construct the counterfactual features. For the \(j^{th}\) sample in the mini-batch, we first randomly select one bias feature and \(P\) target features from the other samples as follows: \(\tilde{\mathcal{Z}}_{\text{b}}^{(i)}=\{\mathbf{z}_{\text{b}_{\text{q}}}^{(i)}\}\), and \(\tilde{\mathcal{Z}}_{\text{d}}^{(i)}=\{\mathbf{z}_{\text{d}_{\text{l}}}^{(i) }\}_{l=1}^{P}\), where \(q\neq j\), \(l\neq j\), and \(0<q\leq K\). Subsequently, the target feature \(\mathbf{z}_{\text{d}_{j}}^{(i)}\) is paired with the selected bias feature to construct positive features \(\mathcal{Z}_{\text{pos}}^{(i)}=\{[\mathbf{z}_{\text{d}_{j}}^{(i)};\mathbf{z}_ {\text{b}_{\text{q}}}^{(i)}]\}\) (Fig. 3b, right). Similarly, the bias feature \(\mathbf{z}_{\text{b}_{j}}^{(i)}\) is paired with the other \(P\) target features to construct its negative features \(\mathcal{Z}_{\text{neg}}^{(i)}=\{[\mathbf{z}_{\text{d}_{\text{l}}}^{(i)}; \mathbf{z}_{\text{b}_{j}}^{(i)}]\}_{l=1}^{P}\). **Counterfactual inference.** Positive and negative features were fed into the debiased classifier \(C_{\text{d}}^{(i)}\) and bias classifier \(C_{\text{b}}^{(i)}\), respectively. We obtain a positive prediction \(\mathcal{Y}_{\text{pos}}^{(i)}=\{\mathbf{\hat{y}}_{\text{pos}}^{(i)}\}\) and a set of negative predictions \(\mathcal{Y}_{\text{neg}}^{(i)}=\{\mathbf{\hat{y}}_{\text{neg}}^{(i)}\}_{l=1}^ {P}\) for the original result \(\mathbf{\hat{y}}_{\text{d}}^{(i)}\). 
We then use the contrastive loss \(\mathcal{L}_{\text{con}}\) for this counterfactual inference: \[\mathcal{L}_{\text{con}}=\sum_{i=1}^{M}-\text{log}\frac{\text{exp}\left(- \text{dist}\left(\mathbf{\hat{y}}_{\text{d}}^{(i)},\mathbf{\hat{y}}_{\text{ pos}}^{(i)}\right)\right)}{\sum_{\mathbf{y}^{\prime}\in\mathcal{Y}_{\text{neg}}^{(i)} \cup\{\mathbf{\hat{y}}_{\text{pos}}^{(i)}\}}\text{exp}\left(-\text{dist}\left( \mathbf{\hat{y}}_{\text{d}}^{(i)},\mathbf{y}^{\prime}\right)\right)},\] where \(\text{dist}(\cdot,\cdot)\) denotes Euclidean distance. This encourages the model to group samples with identical target features into the same category, regardless of their bias features. Conversely, even if samples have the same bias features, they can be classified into different categories if they have different target features.

### Mixture of biases-specific experts using adaptive gating

The final output \(\mathbf{\hat{y}}_{\text{d}}\) of the model is obtained by combining the debiased classification results \(\mathbf{\hat{y}}_{\text{d}}^{(i)}\) from each biases-specific expert through a gating module. The gating loss for this operation can be presented as: \(\mathcal{L}_{\text{gate}}=\text{CE}\left(\mathbf{\hat{y}}_{\text{d}},\mathbf{y }\right)\), with \(\mathbf{\hat{y}}_{\text{d}}=\sum_{i=1}^{M}\text{p}^{(i)}\times\mathbf{\hat{y}}_{ \text{d}}^{(i)}\), where \(\text{p}^{(i)}\) denotes the probability value assigned to the debiased classification result \(\mathbf{\hat{y}}_{\text{d}}^{(i)}\) of \(E^{(i)}\); it is the softmax result obtained from the gating module by taking all experts' debiased classification results as the input. We call this module "gating" after the "gating" in MoE [9], where it refers to the weighted inputs from the gating network followed by a softmax function. The complete loss for updating the entire model is: \[\mathcal{L}=\left\{\begin{array}{ll}\mathcal{L}_{\text{cls}}+\mathcal{L}_{ \text{gate}}+\mathcal{L}_{\text{div}}&\text{initial training}\\ \mathcal{L}_{\text{cls}}+\mathcal{L}_{\text{gate}}+\mathcal{L}_{\text{div}}+ \beta\times\mathcal{L}_{\text{con}}&\text{counterfactual training}\end{array}\right. \tag{4}\] where \(\beta\) balances \(\mathcal{L}_{\text{con}}\) with the other terms.

## 5 Experiments

### Datasets

**Biased MNIST**[26] contains ten digits (\(0-9\)) as its target categories and seven biases: digit color, digit scale, digit position, type of background texture, background texture color, co-occurring letter, and letter color. There are 50000, 10000, and 10000 images for training, validation, and testing. **BAR**[20] consists of typical action-place pairs, like _climbing_ and _rockwall_, in the training set, and unseen samples beyond these pairs in the test set. There are six target actions in 1941 training and 654 test images. **Modified IMDB** is our dataset constructed from IMDB face images [23], containing 20000 training, 1617 validation, and 1617 test images. The targets are _young_ and _old_, and the biases are gender and wearing glasses (Fig. 4). **MIMIC-CXR + NIH** was constructed by simulating the biases brought about by different data sources when collecting the datasets. We mixed the MIMIC-CXR [11] and NIH [29] datasets into a MIMIC-CXR + NIH dataset. The target categories are _no finding_ and _pneumonia_, and the biases come from two data sources where the correlation between the target and biases is not tangible. It contains 8500 training, 500 validation, and 500 test images. 
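Before moving on to implementation details, it may help to collect the loss terms of Section 4 in code. The following is a minimal PyTorch-style sketch of Eqs. 1-3, the contrastive term \(\mathcal{L}_{\text{con}}\), and the gating mixture; the tensor shapes, the GCE exponent \(q=0.7\) (the common default from the GCE paper), the KL direction, the detached weights, and all function names are our assumptions, not the authors' released implementation (the official code is linked in the abstract).

```python
import torch
import torch.nn.functional as F

def gce_loss(logits, y, q=0.7):
    # Eq. 1 (generalized cross entropy): (1 - p_y^q) / q, averaged over the batch.
    p_y = F.softmax(logits, dim=1).gather(1, y[:, None]).squeeze(1)
    return ((1.0 - p_y.clamp_min(1e-8) ** q) / q).mean()

def weighted_debias_loss(logits_d, logits_b, y):
    # Eq. 2: per-sample weight w = CE_b / (CE_d + CE_b), so samples the bias
    # branch fails to fit (the relatively bias-free ones) get emphasized.
    ce_d = F.cross_entropy(logits_d, y, reduction="none")
    ce_b = F.cross_entropy(logits_b, y, reduction="none")
    w = (ce_b / (ce_d + ce_b + 1e-8)).detach()   # weights treated as constants (assumed)
    return (w * ce_d).mean()

def diversity_loss(bias_logits_per_expert):
    # Eq. 3: penalize consecutive experts whose bias predictions agree,
    # i.e. sum over i of exp(-KL(yb_i || yb_{i-1})).
    loss = 0.0
    for prev, cur in zip(bias_logits_per_expert, bias_logits_per_expert[1:]):
        p = F.softmax(cur, dim=1)
        kl = (p * (F.log_softmax(cur, dim=1) - F.log_softmax(prev, dim=1))).sum(1)
        loss = loss + torch.exp(-kl).mean()
    return loss

def contrastive_loss(y_d, y_pos, y_negs):
    # L_con: pull the original prediction toward its positive (same target
    # features, swapped bias features) and away from the negatives, using
    # exp(-Euclidean distance) as the similarity.
    d_pos = (y_d - y_pos).norm(dim=1)
    d_neg = torch.stack([(y_d - yn).norm(dim=1) for yn in y_negs], dim=1)
    logits = -torch.cat([d_pos[:, None], d_neg], dim=1)   # positive is class 0
    target = y_d.new_zeros(y_d.size(0), dtype=torch.long)
    return F.cross_entropy(logits, target)

def gated_mixture(expert_preds, gate_logits):
    # Sec. 4.3: softmax gate weights, one per expert, mixing the experts' outputs.
    p = F.softmax(gate_logits, dim=1)
    return sum(p[:, i:i + 1] * yd for i, yd in enumerate(expert_preds))
```

Summing these terms with the stage-dependent weights of Eq. 4 reproduces the overall training objective.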
### Implementation details

**Model architecture.** We employed feature extraction layers of ResNet-18 [7] as the backbone of the debiased encoder and bias encoder. Two convolutional and two linear layers were used to design classifiers for biases-specific experts, and one linear layer was used to construct the gating module. **Training procedure.** Our PnD was built using PyTorch, and all the experiments were conducted on an NVIDIA RTX A4000 GPU. For input to the PnD model, all images were resized to \(160\times 160\times 3\) except for BAR, where they were randomly cropped to \(224\times 224\times 3\) and horizontally flipped following [20].

## 6 Results and Analysis

We compared our model to ResNet-18 [7], LfF [20], DFA [15], OccamNet [26], DebiAN [18], and UBNet [10]. ResNet-18 was pretrained using ImageNet [4] and simply used cross-entropy as its loss function without any debiasing strategy. In all the experiments, we calculated the means and standard deviations of accuracies of the test set across three runs for all datasets. Unless otherwise specified, the bias ratio is 0.95 for all cases in the following subsections.

### Comparisons against state-of-the-art

**Overall comparisons.** We report the accuracy scores of all compared methods in Tab. 1. We used the results with two different bias ratios for each dataset except for BAR, because almost all of its training images were biased, and no bias labels were provided. We selected a relatively large bias ratio and a small bias ratio for this set of experiments. Nevertheless, due to the limitations of the original dataset, we could not set a smaller bias ratio for the Modified IMDB. We can see that PnD outperforms all methods on Biased MNIST and MIMIC-CXR + NIH. Meanwhile, for BAR, our method achieved the second-best performance, which is comparable to the results of DebiAN [18]. BAR has only one type of bias for each target category, and the images in the training set are purely biased. In addition, the proposed framework requires unbiased data. Therefore, our accuracy score is slightly lower than that of DebiAN [18]. For the Modified IMDB, PnD achieves the best or second-best scores. Because there are only two biases in this dataset and this classification task is relatively simpler than that of the Biased MNIST, ResNet-18 also works well on it, whereas the SOTAs are inferior to PnD and ResNet-18 on this dataset. We conclude that PnD performs best in agnostic biases mitigation owing to the mixture of biases-specific experts, especially in the presence of multiple biases. Even when the number of biases was small, its performance was comparable to that of the others. **Robustness to different numbers of bias types.** To evaluate the performance of all methods under different numbers of biases, we synthesized multiple biased MNISTs with varying numbers of biases (ranging from 1 to 7) by gradually adding digit color, digit scale, digit position, texture, texture color, letter, and letter color following the data synthesis operation in [26] as shown in the supplement.

Figure 4: Examples from the Modified IMDB dataset: the left two are annotated with _young_, but also with _female_ and _wearing glasses_. The right two are annotated with _old_, but also with _male_ and _not wearing glasses_.

Regardless of the number of biases, PnD always achieved the best performance (Fig. 5). When the number of biases was one (digit color), all methods achieved high scores. 
This is because, at this time, the digit only occupies a small area in the center of the images, making the digit color features difficult to learn. Owing to the partition-and-debias strategy, our method's performance does not degrade as quickly as that of the other methods after adding the second bias. Although our method suffers from a performance drop after the fourth bias, it still outperforms the other methods and remains nearly stable when additional biases are added.

### Ablation study

**Ablation study for different loss terms.** The ablation study results on the Biased MNIST and MIMIC-CXR + NIH datasets are shown in Tab. 2. We evaluated the impact of using multiple loss terms in Eq. 4 by dropping each loss term individually (3rd - 6th rows). Note that we drop each loss term in both training phases. For clarity, we only discuss the impact of the loss on the overall framework, not involving the analysis of the two training phases. The model with only \(\mathcal{L}_{\text{cls}}\) (3rd row) (i.e., the model with only two encoders and two classifiers following the ends of the blocks) performed the worst. This is because it attempted to remove agnostic biases only once from the end of the network, as in previous studies, ignoring the fact that the number and type of agnostic biases are unknown. When other loss terms are gradually added, the performance improves. Particularly, the model with \(\mathcal{L}_{\text{gate}}\) (4th row) boosts the performance significantly because we begin to process the agnostic biases according to the network depth. Moreover, the model with either \(\mathcal{L}_{\text{div}}\) or \(\mathcal{L}_{\text{con}}\) (5th and 6th rows) slightly improves the performance (less than 1%). When we used both \(\mathcal{L}_{\text{div}}\) and \(\mathcal{L}_{\text{con}}\) (11th row), the performance increased by 1.53% compared to the model with \(\mathcal{L}_{\text{gate}}\).

\begin{table} \begin{tabular}{c c c c|c c} \hline \hline \multicolumn{4}{c|}{Method} & \multicolumn{2}{c}{Dataset} \\ \hline \(\mathcal{L}_{\text{cls}}\) & \(\mathcal{L}_{\text{gate}}\) & \(\mathcal{L}_{\text{div}}\) & \(\mathcal{L}_{\text{con}}\) & Biased MNIST & MIMIC-CXR + NIH \\ \hline ✓ & & & & 44.11 \(\pm\) 0.76 & 54.30 \(\pm\) 0.46 \\ ✓ & ✓ & & & 68.90 \(\pm\) 0.38 & 57.87 \(\pm\) 0.71 \\ ✓ & ✓ & ✓ & & 69.88 \(\pm\) 1.43 & 58.10 \(\pm\) 0.99 \\ ✓ & ✓ & & ✓ & 69.48 \(\pm\) 1.78 & 59.20 \(\pm\) 0.26 \\ \hline \multicolumn{4}{l|}{w/o concatenation} & 67.11 \(\pm\) 0.90 & 58.57 \(\pm\) 0.49 \\ \multicolumn{4}{l|}{w/o adaptive gating} & 67.84 \(\pm\) 0.38 & 58.13 \(\pm\) 0.64 \\ \multicolumn{4}{l|}{initial training only} & 69.03 \(\pm\) 0.53 & 58.07 \(\pm\) 1.27 \\ \multicolumn{4}{l|}{counterfactual training only} & 69.61 \(\pm\) 1.22 & 59.00 \(\pm\) 0.56 \\ \hline \multicolumn{4}{l|}{PnD (full model)} & 70.43 \(\pm\) 0.74 & 60.73 \(\pm\) 0.87 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on Biased MNIST and MIMIC-CXR + NIH, including ablating the loss terms in PnD (3rd – 6th rows), the implementation strategies (7th – 8th rows), and training procedure (9th – 10th rows). The results reveal that each component in PnD is effective. 
\begin{table} \begin{tabular}{l|c c|c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Biased MNIST (7 biases)} & \multicolumn{1}{c|}{BAR (1 bias)} & \multicolumn{2}{c|}{Modified IMDB (2 biases)} & \multicolumn{2}{c}{MIMIC-CXR + NIH (1 bias)} \\ & 0.75 & 0.95 & & 0.95 & 0.99 & 0.80 & 0.95 \\ \hline ResNet-18 [7] & 94.49 \(\pm\) 0.15 & 53.86 \(\pm\) 1.90 & 51.85 \(\pm\) 5.92 & **74.31**\(\pm\) 0.44 & **67.04**\(\pm\) 2.07 & 64.53 \(\pm\) 2.07 & 56.83 \(\pm\) 2.07 \\ LfF [20] & 84.58 \(\pm\) 2.46 & 35.16 \(\pm\) 9.80 & 62.98 \(\pm\) 2.76 & 63.22 \(\pm\) 1.53 & 62.01 \(\pm\) 1.58 & 55.40 \(\pm\) 0.00 & 56.23 \(\pm\) 0.67 \\ DFA [15] & 90.79 \(\pm\) 0.14 & 44.52 \(\pm\) 2.51 & 58.97 \(\pm\) 1.28 & 64.19 \(\pm\) 2.92 & 62.46 \(\pm\) 0.71 & 52.67 \(\pm\) 2.70 & 50.93 \(\pm\) 0.58 \\ OccamNet [26] & **96.06**\(\pm\) 0.33 & **66.85**\(\pm\) 0.55 & 52.60 \(\pm\) 1.90 & 68.17 \(\pm\) 0.99 & 61.60 \(\pm\) 1.07 & 61.93 \(\pm\) 0.40 & 52.15 \(\pm\) 0.35 \\ DebiAN [18] & 90.90 \(\pm\) 1.36 & 46.52 \(\pm\) 2.65 & **69.88**\(\pm\) 2.92 & 72.42 \(\pm\) 0.33 & 65.99 \(\pm\) 0.80 & **67.40**\(\pm\) 0.96 & **60.00**\(\pm\) 1.40 \\ UBNet [10] & 90.40 \(\pm\) 0.05 & 54.31 \(\pm\) 1.13 & 61.93 \(\pm\) 0.46 & 70.62 \(\pm\) 0.25 & 63.02 \(\pm\) 0.21 & 66.00 \(\pm\) 0.46 & 55.00 \(\pm\) 0.17 \\ \hline PnD & **96.60**\(\pm\) 0.22 & **70.43**\(\pm\) 0.74 & **69.83**\(\pm\) 2.09 & **74.34**\(\pm\) 0.22 & **66.58**\(\pm\) 0.26 & **67.87**\(\pm\) 0.91 & **60.73**\(\pm\) 0.87 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy scores (%) on Biased MNIST, Modified IMDB, MIMIC-CXR + NIH, and BAR datasets with different bias ratios. We compare our proposed method with ResNet-18 and other SOTA methods. Our method is clearly far superior or close to other methods. The best results are highlighted in **blue**, and the second-best results are in **red**.

Figure 5: The accuracy scores (%) when changing the number of bias types from 1 to 7 in Biased MNIST. Regardless of the number of biases, PnD always achieves the best. 
The first training provides relatively purer target and bias features for the second stage, whereas the second stage further disentangles these two features via counterfactual inference. Consequently, performance improves when the two stages work together. **Ablation study for multiple biases-specific experts.** We individually removed the expert modules inserted into the shallowest block of ResNet-18, to obtain the classification results when the number of experts ranged from 1 to 4 (Fig. 6). When multiple biases exist, we can see that the greater the number of blocks covered by the expert, the better the debiased classification effect. When the number of biases is two or one, the performance remains almost stable. This also confirmed the conclusion from the exploratory experiment and the sufficiency of our strategy. We further evaluate the performance of multiple experts ensemble in Tab. 3. We give the results of ensemble multiple biases-specific experts only in the last block (2nd row). When a single bias (MIMIC-CXR + NIH) exists, its performance drops by 1.9% compared to the PnD. However, when multiple biases (Biased MNIST), the performance decreases significantly by 18.37%. It fully illustrates the effectiveness of our idea that we should remove multiple biases from different depths in the network. In order to distinguish the effect of MoE and other debiasing strategies, we added an additional set of experiments, where we only keep the classification loss without weight for samples and the gating loss. The results show that the MoE strategy can also achieve relatively great results (3rd row). This is because we consider the features from the network in a depth-by-depth manner. The network focuses on more diverse regions, thus outputting a prediction result that is not limited to a single feature, which may be bias. ### Detailed analysis **More results on real-world dataset.** We additionally evaluate the performance on CelebA [19], a real-world dataset, in Tab. 4. For wearing lipstick or not classification, we show the accuracy scores of worst group and all groups in four bias attributes (2nd - 5th cols). From this table, we can see that the performance of PnD outperforms other methods in worst groups and all groups of almost all bias attributes. It demonstrates the advantage of our method in removing agnostic biases for real-world dataset. **Visualization of learned target and bias features.** We visualized the region of interest of each expert using GradCAM [25] (Fig. 7) to qualitatively verify the debiased classification (upper) and bias detection (lower) performances of each block in PnD. In the debiased classification, all ex \begin{table} \begin{tabular}{l l l} \hline \hline \multicolumn{1}{c}{Method} & Biased MNIST & MIMIC-CXR + NIH \\ \hline MoE on the last block & 52.06 \(\pm\)1.10 & 58.83 \(\pm\) 0.42 \\ MoE on all blocks w/o other debiasing strategies & 67.49 \(\pm\)1.41 & 58.47 \(\pm\) 0.45 \\ \hline PaD & 70.43 \(\pm\) 0.74 & 60.73 \(\pm\) 0.87 \\ \hline \hline \end{tabular} \end{table} Table 3: Further ablation study for multiple biases-specific experts on Biased MNIST and MIMIC-CXR + NIH. The accuracy (%) results reveal that inserting multiple biases-specific experts at different depths of PnD is effective. Figure 6: The accuracy scores (%) when changing the number of experts from 1 to 4 on Biased MNIST, Modified IMDB, MIMIC-CXR + NIH, and BAR datasets. 
In the case of multiple biases (Biased MNIST), the debiased classification accuracy rises as the number of experts increases. But for two biases (Modified IMDB) or a single bias (BAR and MIMIC-CXR + NIH), the number of experts does not affect the performance much.

Meanwhile, in bias detection, each expert could handle bias features differently; for instance, the first expert focuses on background texture, and the third expert concentrates on snowy slopes. We conclude that the two encoders can capture target and bias features properly and independently. **Analysis on the mixed debiased classification of experts.** To analyze the effectiveness of mixing different expert results using adaptive gating, we checked the output from each expert and the probability assigned by the gating module. Tab. 5 shows that, across these four datasets, the highest debiased classification accuracy scores were located at the 3rd or 4th expert due to the different complexities of biases. Additionally, the best single-expert results slightly exceeded the final results coordinated by multiple experts. This is reasonable because shallow blocks may contain few target features, resulting in poor performance of the corresponding experts, and thus degrading the final results. This occurs particularly when the probability \(\mathrm{p}^{(i)}\) assigned to the best expert is relatively low. However, we cannot remove shallow experts directly (see the results of reducing the number of experts in Fig. 6). We may be able to enhance the performance by selecting the output of the expert that has the highest \(\mathrm{p}^{(i)}\) during testing. **Limitations.** We employed multiple expert modules, which inevitably increases the number of network parameters. One potential solution is to increase the sparsity of expert networks [5]. Furthermore, as discussed in [17], removing multiple unknown biases without an inductive bias is difficult. Due to the requirement of unbiased training data, PnD does not perform as well on fully biased datasets such as BAR. We will examine these issues further in the future. **Societal impacts.** This study introduces a more realistic bias scenario and provides a simple yet effective approach to ensure that deep learning-based decision processes are not biased toward agnostic attributes in the data. We believe that the proposed approach will encourage the development of more trustworthy AI applications. For example, it can increase racial and gender equity in face recognition systems by protecting minority populations from systemic biases.

## 7 Conclusion

Existing bias mitigation methods struggle to deal with multiple unknown biases in real-world scenarios. To address these limitations, we presented a novel bias scenario, namely, agnostic biases mitigation. First, we investigated our hypothesis that different bias features would cluster at different depths in a network. We then proposed a PnD method to address the new scenario by dividing the bias space into multiple subspaces across network depths and removing them using a mixture of biases-specific experts. Extensive experiments on both public and our constructed datasets demonstrated PnD's excellent performance. **Acknowledgement.** This work was supported by JST SPRING Grant Number JPMJSP2108, Institute for AI and Beyond of the University of Tokyo, JSPS KAKENHI Grant Numbers JP23H03449, JP23KJ0404, and JP22K17947. 
\begin{table} \begin{tabular}{l|c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Attractive or not} & \multicolumn{2}{c|}{Heavy makeup or not} & \multicolumn{2}{c|}{High cheekbones or not} & \multicolumn{2}{c|}{Gender} & \multicolumn{2}{c}{Average} \\ & Worst group & All groups & Worst group & All groups & Worst group & All groups & Worst group & All groups & Worst group & All groups \\ \hline ResNet-18 [7] & **85.92**\(\pm\) 0.19 & **91.53**\(\pm\) 0.29 & **26.23**\(\pm\) 1.73 & **74.77**\(\pm\) 0.07 & **91.54**\(\pm\) 0.48 & **93.32**\(\pm\) 0.05 & 25.96 \(\pm\) 4.08 & 71.54 \(\pm\) 0.99 & 57.41 \(\pm\) 1.62 & 82.79 \(\pm\) 0.35 \\ DebiAN [18] & 85.82 \(\pm\) 1.56 & 91.49 \(\pm\) 0.14 & 24.51 \(\pm\) 6.24 & 74.37 \(\pm\) 0.55 & 90.27 \(\pm\) 2.14 & 93.03 \(\pm\) 0.26 & **32.05**\(\pm\) 1.81 & **73.41**\(\pm\) 3.33 & **58.16**\(\pm\) 2.94 & **83.07**\(\pm\) 1.07 \\ \hline PnD & **87.33**\(\pm\) 1.94 & **91.77**\(\pm\) 0.02 & **27.94**\(\pm\) 6.32 & **75.08**\(\pm\) 1.17 & **92.00**\(\pm\) 1.29 & **93.37**\(\pm\) 0.08 & **32.69**\(\pm\) 5.44 & **73.39**\(\pm\) 0.51 & **60.00**\(\pm\) 1.53 & **83.40**\(\pm\) 0.44 \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy scores on worst groups and all groups for wearing lipstick or not classification on CelebA, where bias attributes are attractive or not, heavy makeup or not, high cheekbones or not, and gender. In the last column, we average the results over the four bias attributes. The best and the second-best results are highlighted in bold. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \(E^{(1)}\) & \(E^{(2)}\) & \(E^{(3)}\) & \(E^{(4)}\) & Final \\ \hline Biased MNIST & 48.47 (0.07) & 68.75 (0.15) & 70.66 (0.46) & 68.19 (0.32) & 70.43 \\ BAR & 33.44 (0.10) & 94.54 (0.19) & 65.71 (0.18) & 69.83 (0.53) & 69.83 \\ Modified IMDB & 68.13 (0.04) & 72.89 (0.35) & 74.71 (0.38) & 74.25 (0.22) & 74.34 \\ MIMIC-CXR+NIH & 51.77 (0.11) & 57.90 (0.30) & 60.70 (0.23) & 61.00 (0.36) & 60.73 \\ \hline \hline \end{tabular} \end{table} Table 5: Accuracy scores (%) for debiased classification results \(\widehat{\mathbf{y}}_{\mathbf{q}}^{(i)}\) of each expert (from \(E^{(1)}\) to \(E^{(4)}\)), and the probability \(\mathrm{p}^{(i)}\) (in parentheses) assigned by the gating module on Biased MNIST, Modified IMDB, MIMIC-CXR + NIH, and BAR datasets. We can see that each dataset has different trends in classification accuracy across experts. Figure 7: Regions of interest (ROIs) for biases-specific experts of our PnD in the debiased (upper) and bias (lower) encoders, when conducting action classification in the test set of BAR. (a) shows the original images; (b)-(e) are their saliency maps generated using Grad-CAM, from the first expert to the fourth expert. The ROIs for debiased classification and bias detection change as the network gets deeper, and there are also significant differences between the two tasks.
2307.01955
Algorithme EM régularisé
The Expectation-Maximization (EM) algorithm is a widely used iterative algorithm for computing the maximum likelihood estimate when dealing with Gaussian Mixture Models (GMM). When the sample size is smaller than the data dimension, this can lead to a singular or poorly conditioned covariance matrix and, thus, to reduced performance. This paper presents a regularized version of the EM algorithm that efficiently uses prior knowledge to cope with a small sample size. This method aims to maximize a penalized GMM likelihood, where regularized estimation ensures positive definiteness of the covariance matrix updates by shrinking the estimators towards some structured target covariance matrices. Finally, experiments on real data highlight the good performance of the proposed algorithm for clustering purposes.
Pierre Houdouin, Matthieu Jonkcheere, Frederic Pascal
2023-07-04T23:19:25Z
http://arxiv.org/abs/2307.01955v1
# Algorithme EM régularisé ###### Abstract The Expectation-Maximization (EM) algorithm is a widely used iterative algorithm for computing the maximum likelihood estimate when dealing with Gaussian Mixture Models (GMM). When the sample size is smaller than the data dimension, this can lead to a singular or poorly conditioned covariance matrix and, thus, to reduced performance. This paper presents a regularized version of the EM algorithm that efficiently uses prior knowledge to cope with a small sample size. This method aims to maximize a penalized GMM likelihood, where regularized estimation ensures positive definiteness of the covariance matrix updates by shrinking the estimators towards some structured target covariance matrices. Finally, experiments on real data highlight the good performance of the proposed algorithm for clustering purposes. ## 1 Introduction The EM algorithm [1] is an algorithm frequently used in unsupervised learning and statistical modeling to find a local maximum of the likelihood of unlabeled data and to estimate the associated labels. Proceeding iteratively, this algorithm estimates the unknown model parameters that increase the expectation of the complete-data likelihood given the parameters of the previous iteration. Historically developed for Gaussian mixture models (GMM) [2], the algorithm was extended to Student-t distributions by [3] to better cope with outliers and heavy-tailed data. More recently, a generalization to elliptically symmetric distributions has been developed ([4] for clustering and [5] for classification). In signal processing, the dimension \(m\) of the data is often high compared to their number \(n\): \(n\sim m\). Under such conditions, convergence problems arise when estimating the covariance matrices, which are no longer necessarily well conditioned, or even invertible, at each iteration. Regularized covariance matrix estimation is a technique commonly used to overcome this difficulty in clustering models [7, 8, 9]. In 2022, [10] introduced a new regularized version of the EM algorithm, RG-EM, which uses a new penalization of the likelihood that takes advantage of the assumed underlying structure of the covariance matrices. [10] shows that better-conditioned estimators are obtained, with better clustering performance in regimes where the dimension is high compared to the number of data points. Here, we propose to evaluate the performance of RG-EM on real data.
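As a quick illustration of the conditioning issue mentioned above (our own toy example, not from the paper): with \(n\leq m\), the sample covariance matrix has rank at most \(n-1\) and is therefore singular.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 50                      # fewer samples than dimensions
X = rng.normal(size=(n, m))
S = np.cov(X, rowvar=False)        # m x m sample covariance
print(np.linalg.matrix_rank(S))    # at most n - 1 = 29 < m, hence singular
```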
The paper is organized as follows: Section 2 recalls the theoretical elements of the algorithm, Sections 3 and 4 contain the experiments on simulated and real data, and conclusions, remarks, and perspectives are drawn in Section 5. ## 2 Regularized EM algorithm We assume that each observation \(\mathbf{x}_{i}\in\mathbb{R}^{m}\) is drawn from a GMM where each cluster \(\mathcal{C}_{k}\) has its own mean vector \(\boldsymbol{\mu}_{k}\in\mathbb{R}^{m}\), its own symmetric positive definite covariance matrix \(\boldsymbol{\Sigma}_{k}\in\mathbb{R}^{m\times m}\), and its membership probability \(\pi_{k}\in[0,1]\) with \(\sum_{k}\pi_{k}=1\). The probability density of \(\mathbf{x}_{i}\) then reads: \[f(\mathbf{x}_{i}|\boldsymbol{\theta})=(2\pi)^{-\frac{m}{2}}\sum_{k=1}^{K}\pi_{k} \left|\mathbf{\Sigma}_{k}\right|^{-\frac{1}{2}}e^{-\frac{1}{2}(\mathbf{x}_{i}- \boldsymbol{\mu}_{k})^{\top}\mathbf{\Sigma}_{k}^{-1}(\mathbf{x}_{i}- \boldsymbol{\mu}_{k})}\] with \(\boldsymbol{\theta}=(\pi_{1},...,\pi_{K},\boldsymbol{\mu}_{1},...,\boldsymbol {\mu}_{K},\mathbf{\Sigma}_{1},...,\mathbf{\Sigma}_{K})\) the vector of all unknown parameters. Assume also that prior information on the structure of the covariance matrix of each cluster is available: e.g., they are close to target matrices \(\mathbf{T}_{k}\), \(k=1,\ldots,K\). This structure is exploited by penalizing the likelihood with the Kullback-Leibler divergence (defined in [7]) between each \(\mathbf{\Sigma}_{k}\) and \(\mathbf{T}_{k}\): \[\Pi_{\mathrm{KL}}(\mathbf{\Sigma}_{k},\mathbf{T}_{k})=\frac{1}{2}\big{(} \mathrm{tr}(\mathbf{\Sigma}_{k}^{-1}\mathbf{T}_{k})-\log\left|\mathbf{\Sigma} _{k}^{-1}\mathbf{T}_{k}\right|-m\big{)}.\] Let \(\mathbf{X}=(\mathbf{x}_{1}\;\;\cdots\;\mathbf{x}_{n})\) be the matrix of data drawn from our GMM; the penalized likelihood is then: \[\ell_{\boldsymbol{\eta}}(\boldsymbol{\theta}|\mathbf{X})=\ell(\mathbf{X}| \boldsymbol{\theta})-\sum_{k=1}^{K}\eta_{k}\Pi_{\mathrm{KL}}(\mathbf{\Sigma}_ {k},\mathbf{T}_{k})\] where \(\eta_{1},...,\eta_{K}\geq 0\) are automatically tuned parameters. **Proposition 2.1**.: _The E-step of the regularized EM algorithm is unchanged; for all \(i\in[1,n]\) and \(k\in[1,K]\):_ \[p_{ik}^{(t)}=\frac{\hat{\pi}_{k}^{(t)}|\mathbf{\hat{\Sigma}}_{k}^{(t)}|^{- \frac{1}{2}}e^{-\frac{1}{2}(\mathbf{x}_{i}-\hat{\boldsymbol{\mu}}_{k}^{(t)})^ {\top}(\mathbf{\hat{\Sigma}}_{k}^{(t)})^{-1}(\mathbf{x}_{i}-\hat{\boldsymbol{\mu}}_{k }^{(t)})}}{\sum_{j=1}^{K}\hat{\pi}_{j}^{(t)}|\mathbf{\hat{\Sigma}}_{j}^{(t)}|^{- \frac{1}{2}}e^{-\frac{1}{2}(\mathbf{x}_{i}-\hat{\boldsymbol{\mu}}_{j}^{(t)})^ {\top}(\mathbf{\hat{\Sigma}}_{j}^{(t)})^{-1}(\mathbf{x}_{i}-\hat{\boldsymbol{\mu}}_{j }^{(t)})}} \tag{1}\] **Proof.**_See [10]._ **Proposition 2.2**.: _The updates of the M-step are the following:_ \[\pi_{k}^{(t+1)}=\frac{1}{n}\sum_{i=1}^{n}p_{ik}^{(t)}\,,\qquad\mathbf{\hat{ \mu}}_{k}^{(t+1)}=\sum_{i=1}^{n}w_{ik}^{(t)}\mathbf{x}_{i}\] \[\mathbf{\hat{\Sigma}}_{k}^{(t+1)}=\beta_{k}^{(t+1)}\sum_{i=1}^{n}w_{ik}^{(t)} (\mathbf{x}_{i}-\mathbf{\hat{\mu}}_{k}^{(t)})(\mathbf{x}_{i}-\mathbf{\hat{ \mu}}_{k}^{(t)})^{\top}+(1-\beta_{k}^{(t+1)})\mathbf{T}_{k},\] _where \(\beta_{k}^{(t+1)}=\frac{n\pi_{k}^{(t+1)}}{\eta_{k}+n\pi_{k}^{(t+1)}}\) and \(w_{ik}^{(t)}=\frac{p_{ik}^{(t)}}{\sum_{i=1}^{n}p_{ik}^{(t)}}\)_ **Proof.**_See [10]._ The target matrix \(\mathbf{T}_{k}\) thus makes it possible to inject prior knowledge about \(\mathbf{\Sigma}_{k}\) into the estimation.
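The following is a minimal NumPy sketch of these updates (our own illustration; the variable names, the crude initialization, and the use of fixed \(\eta_{k}\) values instead of the automatic tuning of Algorithm 1 are assumptions):

```python
import numpy as np
from scipy.stats import multivariate_normal

def regularized_em(X, K, T, eta, n_iter=40, seed=0):
    """Sketch of the penalized EM updates of Propositions 2.1 and 2.2.

    X: (n, m) data; T: list of K (m, m) target matrices;
    eta: length-K array of regularization parameters.
    """
    n, m = X.shape
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(n, K, replace=False)].copy()   # crude initialization
    Sigma = [np.cov(X, rowvar=False) + 1e-6 * np.eye(m) for _ in range(K)]
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior probabilities p_ik, Eq. (1)
        p = np.column_stack([
            pi[k] * multivariate_normal.pdf(X, mu[k], Sigma[k])
            for k in range(K)])
        p /= p.sum(axis=1, keepdims=True)
        # M-step with shrinkage towards the targets T_k
        for k in range(K):
            w = p[:, k] / p[:, k].sum()
            pi[k] = p[:, k].mean()
            mu[k] = w @ X
            D = X - mu[k]
            S = (w[:, None] * D).T @ D               # weighted scatter matrix
            beta = n * pi[k] / (eta[k] + n * pi[k])
            Sigma[k] = beta * S + (1 - beta) * T[k]  # stays positive definite
    return pi, mu, Sigma, p
```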
If no prior information is available, one can choose \(\mathbf{T}_{k}=\hat{\theta}_{k}^{0}\mathbf{I}_{m}\), which simply ensures that the estimators are well conditioned. We then use the classical estimator of the scale parameter \(\theta_{k}=\mathrm{tr}(\mathbf{\Sigma}_{k})/m\). In our experiments, we use \(\hat{\theta}_{k}^{0}=\mathrm{tr}(\hat{\mathbf{\Sigma}}_{k}^{0})/m\), where \(\hat{\mathbf{\Sigma}}_{k}^{0}\) is the initial value of the covariance matrix estimate, obtained from a first clustering with the K-means algorithm. In the EM algorithm, the value of the scale parameter is periodically updated with the new value of \(\hat{\mathbf{\Sigma}}_{k}\). The choice of the regularization parameter is also essential. We use a cross-validation selection that maximizes the Gaussian log-likelihood [9]. Each \(\eta_{k}\) is estimated independently among a set of candidates \(\{\eta_{1},\ldots,\eta_{J}\}\) by the procedure described in Algorithm 1. ## 3 Experiments on simulated data The regularized EM algorithm, RG-EM, is compared to the classical EM, denoted G-EM, as well as to the K-means algorithm. The two EM versions are implemented by us, and the Scikit-learn version of K-means is used. So that the classical EM converges even when the dimension is high and there are few data points, a classical regularization with the matrix \(\epsilon\,\mathbf{I}_{m}\) is added at each iteration. We use for **K-means** \(n_{init}=10\) and \(max_{iter}=200\), for **G-EM** \(\epsilon=10^{-4}\) and \(max_{iter}=40\), and for **RG-EM** \(L=5\) (Algorithm 1) and \(max_{iter}=40\). As indicated in Section 2, we use the target matrices \(\mathbf{T}_{k}=\mathrm{tr}(\hat{\mathbf{\Sigma}}_{k}^{0})/m\,\mathbf{I}_{m}\). For **RG-EM**, the optimal \(\eta_{k}\) are recomputed every 10 iterations. The data generated from Gaussian distributions are split into \(K=3\) clusters with priors \(\pi_{k}=\frac{1}{3}\). The mean vector is drawn at random on the centered sphere of radius 2, while an autoregressive structure is used for the covariances. We choose \(\left(\mathbf{\Sigma}_{k}\right)_{i,j}=\rho_{k}^{|i-j|}\) with coefficients 0.8, 0.5 and 0.2. This reflects an autoregressive structure in the data. We test two configurations with, respectively, \(n=1000\) and \(n=500\). We evaluate the performance of the models by computing their accuracy. To compute the clustering accuracy, we first compute the confusion matrix, then permute its columns so as to maximize the sum of the diagonal elements (a small sketch of this computation is given at the end of this section). The results are presented in Figure 1. In both configurations, there is a dimension beyond which the performance of the classical EM drops, and it corresponds to the ratio \(\frac{n}{m}\approx 14\). Conversely, the regularized EM manages to maintain similar performance between dimension 10 and dimension 100. Indeed, the covariance matrices of the clusters have a structure close to the identity, especially when \(\rho\) is close to 0. The chosen target matrix therefore proves particularly relevant here.
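The clustering accuracy described above (maximizing the diagonal of the permuted confusion matrix) is a linear assignment problem; here is a small sketch of how it can be computed, as our own illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred, K):
    """Accuracy after the best one-to-one matching of cluster labels."""
    C = np.zeros((K, K), dtype=int)          # confusion matrix
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    rows, cols = linear_sum_assignment(-C)   # permutation maximizing the trace
    return C[rows, cols].sum() / len(y_true)

# toy check: labels identical up to a permutation give accuracy 1.0
y = np.array([0, 0, 1, 1, 2, 2])
print(clustering_accuracy(y, (y + 1) % 3, K=3))  # -> 1.0
```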
## 4 Experiments on real data

We test each method on real datasets from the UCI machine learning repository [11]. Two datasets are used: * **Ionosphere**: \(n=351\), \(p=34\) and \(K=2\) * **Breast cancer**: \(n=699\), \(p=9\) and \(K=2\) We use 70% of the data for training and 30% for evaluating the performance. The results are averaged over 100 simulations, and the datasets are reshuffled every 10 simulations. Using a circular target matrix is not suitable if some eigenvalues of the covariance matrices are close to 0. We therefore perform a principal component analysis to reduce the dimension, choosing the new dimension as the smallest one that retains 95% of the information (variance). This corresponds to \(m=8\) for Breast cancer and \(m=26\) for Ionosphere. Since the new matrices are close to diagonal, the choice \(\mathbf{T}_{k}=\hat{\theta}_{k}^{0}\cdot\mathbf{I}_{m}\) seems relevant. The results are presented in Figure 2. On both datasets, K-means obtains noticeably lower performance than the EM methods, with a gap of about 10% in accuracy. On both datasets, the regularized version of EM leads to better results than the classical GMM algorithm, the dimension reduction having made the use of a target matrix proportional to the identity relevant. We can now look at how the performance of each method evolves as the ratio \(\frac{n}{m}\) becomes smaller and smaller. To observe this, we progressively remove data from the training dataset to reduce its size from 100% to 10%, which decreases the ratio \(\frac{n}{m}\). The results are presented in Figure 3. On both datasets, K-means is not strongly affected by the decrease in the number of data points. Indeed, removing data does not change the geometric structure of the clusters, and K-means builds a similar boundary with few data points. Conversely, the estimators of the EM algorithms are affected by the decrease in the number of data points, which causes a drop in performance. On the breast cancer wisconsin dataset, the two methods maintain similar performance until the number of data points is reduced by 80%; the performance then drops quickly to join that of the other methods. On the ionosphere dataset, the two EM algorithms decline progressively, but once again, the regularized version drops more slowly and maintains better performance. Figure 1: Evolution of the accuracy as a function of the dimension Figure 2: Median accuracy ## 5 Conclusion We have presented in this paper a regularized version of the EM-GMM algorithm that outperforms classical clustering methods in regimes where the number of data points is small compared to the dimension. In this new approach, the estimation of the covariance matrix is regularized with a penalty term that steers the estimate towards a target matrix. The optimal regularization coefficients \(\eta_{k}\) are selected by a cross-validation algorithm and regularly updated during the iterations. The performance obtained with this new algorithm is better than that obtained with classical algorithms. Moreover, the proposed method, which can be seen as an improvement of the classical EM, is relatively stable as a function of the ratio \(m/n\).
Future work will focus on learning the target matrices, as well as on the fully unsupervised version of RG-EM.
2304.09681
Spectral flow, twisted modules and MLDE of quasi-lisse vertex algebras
We calculate the fusion rules among $\mathbb{Z}_2$-twisted modules of $L_{\mathfrak{sl}_2}(\ell,0)$ at admissible levels. We derive a series of MLDEs for normalized characters of ordinary twisted modules of quasi-lisse vertex algebras. Examples include affine VOAs of type $A_1^{(1)}$ at boundary admissible levels and at the admissible level $k=-1/2$, $A^{(1)}_{2}$ at the boundary admissible level $k=-3/2$, and the $\mathrm{BP}^{k}$-algebra with the special value $k=-9/4$. We also derive characters of some non-vacuum modules for the affine VOA of type $D_4$ at the non-admissible level $-2$ from the spectral flow automorphism.
Bohan Li, Hao Li, Wenbin Yan
2023-04-19T14:19:25Z
http://arxiv.org/abs/2304.09681v2
# Spectral flow, twisted modules and MLDE of quasi-lisse vertex algebras ###### Abstract. We calculate the fusion rules among \(\mathbb{Z}_{2}\)-twisted modules of \(L_{\mathfrak{sl}_{2}}(\ell,0)\) at admissible levels. We derive a series of MLDEs for normalized characters of ordinary twisted modules of quasi-lisse vertex algebras. Examples include affine VOAs of type \(A_{1}^{(1)}\) at boundary admissible levels and at the admissible level \(k=-1/2\), \(A_{2}^{(1)}\) at the boundary admissible level \(k=-3/2\), and the BP\({}^{k}\)-algebra with the special value \(k=-9/4\). We also derive characters of some non-vacuum modules for the affine VOA of type \(D_{4}\) at the non-admissible level \(-2\) from the spectral flow automorphism. ###### Contents * 1 Introduction * 2 Character formulae at admissible level and their modularity * 2.1 Twisted modular transformation * 2.2 Twisted characters * 2.3 Twisted modules * 2.4 Spectral Flow Automorphism * 2.5 Li's delta operator, spectral flow and twisted modules * 2.6 Half-integer spectral flow for \(A_{1}^{(1)}\) at boundary admissible level * 2.7 Modular transformation * 3 Twisted Zhu's Bimodule of highest weight modules * 3.1 Examples * 3.2 Relation between Dong-Li-Mason's Zhu's bimodules and twisted Zhu's bimodules * 3.3 Fusion rules among twisted modules * 3.4 Verlinde formula for \(L_{-\frac{4}{3}}(\mathfrak{sl}_{2})\) * 4 Twisted Zhu's bimodules of contragredient modules of highest weight modules * 4.1 Motivation * 4.2 Untwisted Zhu's bimodules * 4.3 The contragredient modules of the highest weight modules * 4.4 Fusion rules * 4.5 Fusion rules among twisted modules * 5 Twisted modules from spectral flow and their MLDEs * 5.1 \(A_{1}^{(1)}\) at boundary admissible levels \(k=-2+\frac{2}{u}\) * 5.2 \(A_{1}^{(1)}\) at admissible level \(k=-\frac{1}{2}\) * 5.3 \(A_{2}^{(1)}\) at boundary admissible level \(k=-\frac{3}{2}\) * 5.4 Bershadsky-Polyakov Algebra \(\text{BP}^{k}\) with \(k=-\frac{9}{4}\) * 6 \(\mathfrak{d}_{4}\) with non-admissible level \(k=-2\) * A Theta functions * B Proof in Proposition 4.4 ## 1. Introduction Four-dimensional \(\mathcal{N}=2\) superconformal field theories (SCFTs) in physics have rich mathematical structures. In [1], the authors propose a correspondence between the Schur sectors of \(4d\)\(\mathcal{N}=2\) SCFTs and \(2d\) vertex operator algebras (VOAs). This correspondence has fueled a lot of work in the past years, including some conjectures about the chiral algebra in the context of theories of Class \(\mathcal{S}\) [12, 13, 14, 15]. For the genus zero case, the conjecture has been proved in terms of a functorial construction [11]. Class \(\mathcal{S}\) theories have Coulomb branch operators with integral scaling dimension, yet there is another class of \(\mathcal{N}=2\) SCFTs called Argyres-Douglas (AD) theories [1, 21] which usually have fractional scaling dimensions for the Coulomb branch operators. These AD theories can be constructed by compactifying the \(6d\)\((2,0)\) theory on a Riemann surface with irregular singularities. The corresponding VOAs of a class of AD theories are identified with certain affine Kac-Moody algebras \(L_{k}(\mathfrak{g})\) at admissible level \(k\), or affine \(\mathcal{W}_{k}(\mathfrak{g},f)\)-algebras [16, 17, 18, 19, 20]. Dualities of \(4d\) theories imply nontrivial isomorphisms and collapsing levels of VOAs [19, 20, 21], some of which were proved rigorously [1, 2].
One consequence of this SCFT/VOA correspondence is that the Schur index of the 4d SCFT is equal to the normalized vacuum character of the corresponding VOA; hence the character formula provides a valuable tool to study the spectrum of 4d SCFTs. The character here means the trace \(\chi_{\lambda}(\tau,z)=\operatorname{tr}_{L(\ell,\lambda)}e^{2\pi i\tau(L(0)- \frac{1}{2}zh(0)-\frac{3}{4}c\ell)}\) over \(L(\ell,\lambda)\). In [21], the authors derived character formulas for admissible representations of an affine Kac-Moody Lie algebra \(\hat{\mathfrak{g}}\) at a rational level \(\ell\), i.e., \(L(\ell,\lambda)\), and also investigated the modular properties of these characters. In particular, the transformed character \(\chi_{\lambda}(-\frac{1}{\tau},\frac{z}{\tau})\) can be written as a linear combination of characters of admissible representations with a shifted conformal vector, while \(\chi_{\lambda}(-\frac{1}{\tau},z)\) is a linear combination of the characters of some \(\mathbb{Z}_{2}\)-twisted modules. The character formulas were used to derive the Schur index of a large class of AD theories [19], while the modular properties also have applications in physics [10]. Another conjecture of the SCFT/VOA correspondence is the identification between the Higgs branch of vacua of an \(\mathcal{N}=2\) SCFT and the associated variety of the corresponding VOA [1, 20], which was also used to propose lisse VOAs from \(4d\) SCFTs [19]. The VOA corresponding to a 4d SCFT is often of the quasi-lisse type, whose associated variety has finitely many symplectic leaves. The normalized character of an ordinary representation of a quasi-lisse VOA was shown to satisfy a modular linear differential equation (MLDE), and solving the MLDE gives explicit expressions for the characters of the affine Lie algebras of the Deligne-Cvitanovic (DC) series [1]. The MLDEs for VOAs corresponding to several families of AD theories and \(\mathcal{N}=4\) super Yang-Mills with \(\mathfrak{su}(n)\) gauge group were also discussed in [1]. Recently in [23] the authors constructed flavored MLDEs for the Schur index of \(\mathfrak{a}_{1}\) Class \(\mathcal{S}\) theories based on a compact formula for the index they found earlier [24]. Both works used the Higgs branch structure to probe the singular vector of the corresponding VOA and then derive the MLDE. The SCFT/VOA correspondence also goes beyond the Schur index and the vacuum module. One generalization is to consider the lens space index [22] instead of the usual index. In [21], the lens space index was identified with the characters of twisted modules. Given an automorphism \(g\) of a VOA \(V\) of finite order, the basic properties of \(g\)-twisted modules for \(V\) were systematically studied by Haisheng Li in [13] using twisted local systems. In particular, he showed that the \(\mathbb{Z}_{2}\)-twisted modules of the affine VOAs mentioned above can be obtained from their untwisted modules via Li's \(\Delta\)-operators. Later on, in the important work [16], the authors showed that the trace functions of the \(g\)-twisted modules for a \(C_{2}\)-cofinite rational VOA satisfy certain twisted modular linear differential equations (MLDEs) and possess modular invariance properties. In [18], the author generalized some of the results obtained by Dong, Li, and Mason to the quasi-lisse vertex operator (super)algebra case, as well as proved that characters of twisted modules of quasi-lisse vertex algebras satisfy certain twisted MLDEs.
Another generalization is to consider the index in the presence of defects, which were identified with twisted modules using spectral flow [16, 17, 18]. The spectral flow used in these works has a long history in conformal field theory (see e.g. [20, 1, 13, 14, 15, 16, 17]). As reviewed above, there is increasing interest in understanding twisted modules and spectral flowed modules of VOAs from the perspective of the SCFT/VOA correspondence, as it provides knowledge of lens space indices and defects of the corresponding SCFTs. In particular, if one can show that characters of these modules are solutions of certain MLDEs, closed-form expressions might also be within reach, as in the ordinary module case. Since conventional methods in physics usually give only power series, such closed-form expressions are valuable and may reveal more interesting properties. Mathematically, one introduces the twisted modules of \(V\) to study the module category of the \(G\)-invariants \(V^{G}\), where \(G\) is a finite automorphism group of \(V\). In the present work we study the \(\mathbb{Z}_{2}\)-twisted modules, denoted by \(\sigma^{-\frac{1}{2}}(L(\ell,\lambda))\), for the affine VOA \(L_{\mathfrak{sl}_{2}}(\ell,0)\) associated with the Lie algebra \(\mathfrak{sl}_{2}\) at admissible level, and the fusion rules among them, by using the twisted Zhu's bimodule recently introduced by [14] and a conjectural twisted version of Frenkel-Zhu's bimodule theorem. One of our main results is the fusion rules among admissible modules and twisted modules. Firstly, we showed: **Theorem 1.1**.: _Let \(\ell=-2+\frac{p}{q}\) be an admissible level, where \(p\) and \(q\) are coprime positive integers with \(p\geq 2\). Then the \(\mathbb{Z}_{2}\)-twisted Zhu's algebra for \(L_{\ell}(\mathfrak{sl}_{2})\) is \(\mathbb{C}[x]/(\prod_{r=0}^{p-2}\prod_{s=0}^{q-1}(x+\frac{1}{2}\ell-r+st))\), where \(0\leq r\leq p-2\), \(0\leq s\leq q-1\). In particular, the Dynkin labels (eigenvalues of \(h_{(0)}\) on the highest weight vector) of all \(\mathbb{Z}_{2}\)-twisted modules in category \(\mathcal{O}\) are \(\{r-st-\frac{1}{2}\ell|0\leq r\leq p-2,0\leq s\leq q-1\}\)._ **Theorem 1.2**.: _All irreducible \(\mathbb{Z}_{2}\)-twisted modules of \(L_{k}(\mathfrak{sl}_{2})\) at admissible level in category \(\mathcal{O}\) can be obtained by using the \(\ell=-\frac{1}{2}\) spectral flow on the untwisted modules in category \(\mathcal{O}\). In particular, all of those irreducible twisted modules are ordinary modules at boundary admissible level._ Then we obtain the fusion rules among these \(\mathbb{Z}_{2}\)-twisted modules. **Theorem 1.3**.: _For admissible weights \(j_{i}=n_{i}-(k_{i}-1)t\)\((i=1,2)\) of the affine vertex algebra \(L_{k}(\mathfrak{sl}_{2})\), there are the following fusion rules between one admissible module and one twisted module:_ \[L(k,j_{1})\times\sigma^{-\frac{1}{2}}(L(k,j_{2}))=\sum_{i=\max\{0,n_{1}+n_{2}- p\}}^{\min\{n_{1}-1,n_{2}-1\}}\sigma^{-\frac{1}{2}}(L(k,j_{1}+j_{2}-2i)) \tag{1}\] _if \(0\leq k_{2}-1\leq q-k_{1}\), and \(L(k,j_{1})\times\sigma^{-\frac{1}{2}}(L(k,j_{2}))=0\) otherwise._ **Remark 1.4**.: _According to the results in [20], one can generalize the symmetries of fusion rules in [11] to the twisted case:_ \[N_{jk}^{i}=N_{kj}^{i},\qquad N_{jk}^{i}=N_{ji}^{k}.\] _Therefore, using the above results one can obtain the fusion rules for \(\sigma(L)\times\sigma(L)\)._ We also prove the following theorem.
**Theorem 1.5**.: _The vertex operator algebra \(L_{\ell}(\mathfrak{sl}_{2})\) at the boundary admissible level \(\ell=-2+\frac{2}{q}\) \((\gcd(q,2)=1)\) is \(\mathbb{Z}_{2}\)-rational._ Following the idea in [10] we compute the fusion rules among admissible modules of \(L_{k}(\mathfrak{sl}_{2})\) and their contragredient modules. **Theorem 1.6**.: _For admissible weights \(j_{i}=n_{i}-1-(k_{i}-1)t\)\((i=1,2)\) of the affine vertex algebra \(L_{k}(\mathfrak{sl}_{2})\), there are the following fusion rules:_ \[L(k,j_{1})\times L(k,j_{2})=\sum_{i=\max\{0,n_{1}+n_{2}-p\}}^{\min \{n_{1}-1,n_{2}-1\}}L(k,j_{1}+j_{2}-2i), \tag{2}\] \[(L(k,j_{1}))^{*}\times L(k,j_{2})=L(k,j_{2})\times(L(k,j_{1}))^{* }=\left\{\begin{array}{ll}L(k,-j_{1}+j_{2}),&\mbox{if $n_{2}-n_{1}\geq 0$;}\\ (L(k,j_{1}-j_{2}))^{*},&\mbox{if $n_{2}-n_{1}<0$.}\end{array}\right. \tag{3}\] \[(L(k,j_{1}))^{*}\times(L(k,j_{2}))^{*}=\sum_{i=\max\{0,n_{1}+n_{ 2}-p\}}^{\min\{n_{1}-1,n_{2}-1\}}(L(k,j_{1}+j_{2}-2i))^{*}, \tag{4}\] _where \((L(k,j))^{*}\) denotes the contragredient module of the irreducible highest weight module \(L(k,j)\)._ We further use this result to obtain the fusion rules among \(\mathbb{Z}_{2}\)-twisted modules and their contragredient modules: **Theorem 1.7**.: \[(L(\ell,j_{1}))^{*}\times\sigma^{\frac{1}{2}}((L(\ell,j_{2}))^{*})=\sum_{i= \max\{0,n_{1}+n_{2}-p\}}^{\min\{n_{1}-1,n_{2}-1\}}\sigma^{\frac{1}{2}}((L(\ell, j_{1}+j_{2}-2i))^{*}). \tag{5}\] \[(L(\ell,j_{1}))^{*}\times\sigma^{\frac{1}{2}}(L(\ell,j_{2}))= \left\{\begin{array}{ll}\sigma^{\frac{1}{2}}(L(\ell,-j_{1}+j_{2})),&\mbox{ if $n_{2}-n_{1}\geq 0$;}\\ \sigma^{\frac{1}{2}}((L(\ell,j_{1}-j_{2}))^{*}),&\mbox{if $n_{2}-n_{1}<0$.}\end{array}\right. \tag{6}\] \[L(\ell,j_{2})\times\sigma^{-\frac{1}{2}}((L(\ell,j_{1}))^{*})= \left\{\begin{array}{ll}\sigma^{-\frac{1}{2}}(L(\ell,-j_{1}+j_{2})),&\mbox{ if $n_{2}-n_{1}\geq 0$;}\\ \sigma^{-\frac{1}{2}}((L(\ell,j_{1}-j_{2}))^{*}),&\mbox{if $n_{2}-n_{1}<0$.}\end{array}\right. \tag{7}\] We show that the characters of ordinary twisted modules satisfy twisted MLDEs. For \(A_{1}^{(1)}\) at boundary admissible level, if we only consider \(q\)-series, the normalized characters of those modules form the complete set of solutions of a \((u+1)/2\)-order \(\Gamma^{0}(2)\) MLDE. For \(A_{2}^{(1)}\) at boundary admissible level, we construct some ordinary \(e^{2\pi iv_{(0)}}\)-twisted modules by the spectral flow of untwisted modules along different directions in the weight lattice (see Section 2.5), whose normalized characters satisfy a second-order \(\Gamma^{0}(2)\) MLDE. For the \(\text{BP}^{k}\)-algebra with \(k=-9/4\), we show that characters of twisted modules are solutions of a third-order MLDE under the full \(SL(2,\mathbb{Z})\) group. Finally we study characters of \(\mathbb{Z}_{2}\)-twisted modules and spectral flowed modules of the affine vertex algebra \(\mathcal{L}_{\mathfrak{d}_{4}}(-2,0)\). Since the level is non-admissible, one cannot use the Kac-Wakimoto formula to write down the characters of simple modules directly. Fortunately, we can use results from physics to say something about the relations between simple modules of \(\mathcal{L}_{\mathfrak{d}_{4}}(-2,0)\) and spectral flowed modules. We also get one ordinary \(\mathbb{Z}_{2}\)-twisted module, whose character satisfies a second-order \(\Gamma^{0}(2)\)-MLDE. In general, for the simple affine vertex algebra \(L_{k}(\mathfrak{g})\) with \(\mathfrak{g}\) a Lie algebra of the DC series and \(k=-h^{\vee}/6-1\), we make the following conjecture on the MLDE of characters of its \(\mathbb{Z}_{2}\)-twisted modules.
**Conjecture 1.8**.: _Let \(V\) be a simple affine vertex algebra associated with the DC series,_ \[A_{1}\subset A_{2}\subset G_{2}\subset D_{4}\subset F_{4}\subset E_{6}\subset E _{7}\subset E_{8}\] _at level \(-h^{\vee}/6-1\). The normalized \(\mathbb{Z}_{2}\)-twisted characters \(\chi^{\text{twi}}(q)\) are solutions of a second-order \(\Gamma^{0}(2)\)-MLDE with suitable coefficients \(a_{1}\) and \(a_{2}\) but without a \(D_{q}^{(1)}\) term,_ \[\left(D_{q}^{(2)}+a_{1}\Theta_{0,2}+a_{2}\Theta_{1,1}\right)\chi^{\text{twi}} (q)=0 \tag{8}\] We show that this conjecture is true when \(\mathfrak{g}\) is a classical Lie algebra. **Remark 1.9**.: _For the affine VOAs of type \(A_{1}\) and \(A_{2}\) at boundary admissible level, one can get an ordinary \(\mathbb{Z}_{2}\)-twisted module \(\sigma^{-\frac{1}{2}\Lambda}(M_{\text{vac}})\) by taking a spectral flow on the vacuum module \(M_{\text{vac}}\) along a special direction \(-\frac{1}{2}\Lambda\) in the weight lattice. We believe that this phenomenon is universal for all affine VOAs associated with a semisimple Lie algebra \(\mathfrak{g}\) at admissible level \(k\). So far, for a general VOA \(V\), we do not have a systematic approach to get the ordinary \(\mathbb{Z}_{2}\)-twisted modules from irreducible non-twisted \(V\)-modules in the category \(\mathcal{O}\)._ In this work, the usage of the spectral flow is crucial, as we use it to obtain twisted modules and new modules. In particular, we hope to prove Conjecture 1.8 and address some of the questions raised in Remark 1.9 above in future work. It is also interesting to investigate the tensor category structures for some subcategory of the relaxed highest weight category for affine VOAs at fractional level. We summarize the main contents of this paper. In Section 2, we recall some basic notions of admissible representations, their (twisted) characters, the modularity of their characters under the transformation \(\tau\to-\frac{1}{\tau}\), and the \(\mathbb{Z}_{2}\)-rationality of \(L_{k}(\mathfrak{sl}_{2})\) at boundary admissible level. In Section 3, we study the twisted Zhu's bimodules of the \(\mathbb{Z}_{2}\)-twisted modules of \(L_{k}(\mathfrak{sl}_{2})\) at admissible level, then use them to classify twisted modules and compute their fusion rules. In Section 4, we further calculate fusion rules among the highest weight modules and their contragredient modules for \(L_{k}(\mathfrak{sl}_{2})\) at admissible level. In Section 5, we study the \(e^{2\pi iv_{(0)}}\)-twisted modules of affine VOAs at admissible level, then derive the MLDEs their characters satisfy. In Section 6, we discuss the generalization of the result in the previous section to \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\). ## Acknowledgement The authors wish to express their gratitude to Peng Shan for numerous fruitful discussions. HL would like to thank Antun Milas for some motivating questions and suggestions back in 2021. BL, HL and WY are supported by the national key research and development program of China (NO. 2020YFA0713000) and the Dushi program of Tsinghua. WY is also supported by NNSF of China with Grant NO: 11847301 and 12047502. ## 2. Character formulae at admissible level and their modularity Let \(\hat{\Delta}\), \(\hat{\Delta}_{+}\), \(\hat{\Delta}_{+}^{re}\) be all roots, positive roots, positive real roots of an affine Lie algebra \(\hat{\mathfrak{g}}\). Let \(\ell=\operatorname{rank}(\mathfrak{g})\).
Given \(\lambda\in\mathfrak{h}^{*}\), the set of \(\lambda\)-integral roots is \(\hat{\Delta}^{\lambda}:=\{\alpha\in\hat{\Delta}^{re}|\lambda(\alpha^{\vee}) \in\mathbb{Z}\}\). \(\lambda\) is an _admissible weight_ if the following two properties hold * \((\lambda+\rho)(\alpha^{\vee})\notin\mathbb{Z}_{\leq 0}\) for all \(\alpha\in\hat{\Delta}_{+}^{re}\), * \(\mathbb{Q}\hat{\Delta}^{\lambda}=\mathbb{Q}\hat{\Delta}\). Roughly speaking, an admissible weight is integrable with respect to \(\hat{\mathfrak{g}}_{\hat{\Delta}^{\lambda}}\) after a shift by the Weyl vector \(\rho\) of \(\hat{\mathfrak{g}}\). If there exists a further isometry \(\phi\) of \(\mathfrak{h}^{*}\) such that \(\phi(\hat{\Delta}^{\lambda})=\hat{\Delta}\), \(\lambda\) is called a _principal_ admissible weight, i.e., a principal admissible weight is integrable with respect to \(\hat{\mathfrak{g}}\) after a shift by a Weyl vector. **Theorem 2.1**.: _[_11_]_ _Let \(\hat{\mathfrak{g}}\) be a Kac-Moody Lie algebra with a symmetrizable generalized Cartan matrix, and \(\lambda\in\mathfrak{h}^{*}\) be an admissible weight. Then the character of \(L(\lambda)\) is given by the following formula:_ \[\mathrm{ch}L(\lambda)=\frac{1}{R}\cdot\sum_{w\in W^{\lambda}}\varepsilon(w)e^{ w(\lambda+\rho)}, \tag{9}\] _where \(R:=e^{\rho}\prod_{n=1}^{\infty}(1-q^{n})^{\ell}\prod_{\alpha\in\hat{\Delta}_{+} }(1-e^{\alpha}q^{n})(1-e^{-\alpha}q^{n-1})\) is the Kac-Weyl denominator, and \(W^{\lambda}\) is the subgroup of the Weyl group generated by the reflections \(r_{\alpha}\) with \(\alpha\in\hat{\Delta}^{\lambda}\)._ From now on we focus on the case of \(A_{1}^{(1)}\) until stated otherwise. It is known that all admissible weights of \(A_{1}^{(1)}\) are principal admissible. The level \(\lambda(c)=k=\frac{t}{u}\) of an admissible weight satisfies the following condition: \[k+h^{\vee}\geq\frac{h^{\vee}}{u}\quad\text{and}\quad\gcd(u,h^{\vee})=\gcd(u, r^{\vee})=1, \tag{10}\] with the dual Coxeter number \(h^{\vee}=2\) and the lacing number \(r^{\vee}=1\). All admissible weights at level \(m=\frac{t}{u}\) are given by [11]: \[P^{m} =\{\lambda_{m,k,n}:=(m-n+k(m+2))\Lambda_{0}+(n-k(m+2))\Lambda_{1 }|n,k\in\mathbb{N},n\leq 2u+t-2,k\leq u-1\}\] \[=\{t_{-\frac{k\alpha}{2}}(\widetilde{\Lambda^{0}}-(u-1)(m+2) \Lambda_{0})\},\] where \(\widetilde{\Lambda^{0}}=(u(m+2)-2-n)\Lambda_{0}+n\Lambda_{1}\). One can write down the normalized character \(\chi_{\lambda}(\tau,z,t)\) for \(L(\lambda_{m,k,n})\)[11]: \[\begin{split}\chi_{\lambda}(\tau,z,t)&=\frac{A_{ \lambda+\rho}(h)}{A_{\rho}(h)}\\ &=\frac{(\Theta_{a^{+},b}-\Theta_{a^{-},b})(\tau,\frac{z}{u}, \frac{t}{u^{2}})}{(\Theta_{1,2}-\Theta_{-1,2})(\tau,z,t)}\\ &=\frac{(\Theta_{a^{+},b}-\Theta_{a^{-},b})(\tau,\frac{z}{u}, \frac{t}{u^{2}})}{-ie^{-4\pi it}\phi_{11}(\tau,z)}\end{split} \tag{11}\] where the theta functions \(\Theta_{m,n}\) are defined in the Appendix and \(A_{\lambda}\) is defined as \[A_{\lambda}(h):=\sum_{w\in W^{\lambda}}\varepsilon(w)\Theta_{w(\lambda)}(h). \tag{12}\]
Here \(W^{\lambda}\) is the group generated by the reflections \(r_{\alpha}\) with \(\alpha\in\hat{\Delta}^{\lambda}\), and \[a^{+}:=u((n+1)-k(m+2)),\quad a^{-}:=u(-(n-1)-k(m+2)),\quad b:=u^{2}(m+2).\] The modular \(S\)-transformation property of \(\chi_{\lambda}\) is \[\chi_{\lambda_{m,k,n}}(-\frac{1}{\tau},\frac{z}{\tau},t-\frac{|z|^{2}}{2\tau} )=\sum_{\begin{subarray}{c}0\leq k^{\prime}\leq u-1\\ 0\leq n^{\prime}\leq u(m+2)-2\end{subarray}}a^{(m)}_{(k,n),(k^{\prime},n^{ \prime})}\chi_{\lambda_{m,k^{\prime},n^{\prime}}}(\tau,z,t), \tag{13}\] where \[a^{(m)}_{(k,n),(k^{\prime},n^{\prime})}:=\sqrt{\frac{2}{u^{2}(m+2)}}e^{i\pi( k^{\prime}(n+1)+k(n^{\prime}+1))}e^{-i\pi k^{\prime}(m+2)}\times\sin\frac{(n+1)(n^{ \prime}+1)\pi}{m+2}. \tag{14}\] ### Twisted modular transformation Let \(N:=\overline{\mathfrak{h}}_{\mathbb{R}}\times\overline{\mathfrak{h}}_{\mathbb{ R}}\times i\mathbb{R}\) be the Heisenberg group with multiplication \[(\alpha,\beta,u)\cdot(\alpha^{\prime},\beta^{\prime},u^{\prime}):=(\alpha+ \alpha^{\prime},\beta+\beta^{\prime},u+u^{\prime}+\pi i(\langle\alpha,\beta^ {\prime}\rangle-\langle\alpha^{\prime},\beta\rangle)). \tag{15}\] One defines the action of the modular group \(SL_{2}(\mathbb{Z})\) and the Heisenberg group \(N\) on the space \(\mathfrak{h}^{*}\) as follows: \[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot(\tau,z,t):=(\frac{a\tau+b}{c\tau+d},\frac{z}{c\tau+d},t- \frac{c|z|^{2}}{2(c\tau+d)}), \tag{16}\] and \[(\alpha,\beta,u)\cdot h:=t_{\beta}h+2\pi i\alpha+(u-\pi i\langle\alpha,\beta \rangle)\delta, \tag{17}\] where \(t_{\beta}\) is the translation operator. One can check that the action of the Heisenberg group \(N\) on \(\mathfrak{h}^{*}\) is a group action, i.e., \[((\alpha,\beta,u)(\alpha^{\prime},\beta^{\prime},u^{\prime}))\cdot h=(\alpha, \beta,u)\cdot((\alpha^{\prime},\beta^{\prime},u^{\prime})\cdot h). \tag{18}\] **Lemma 2.2**.: _[_11_]_ _For \((\alpha,\beta,u)\in N\) and \(h=(\tau,z,t)=2\pi i(-\tau\Lambda_{0}+z+t\delta)\in\mathfrak{h}^{*},\) the following holds:_ \[(\alpha,\beta,u)\cdot(\tau,z,t)=(\tau,z+\alpha-\tau\beta,t+\frac{u}{2\pi i}- \frac{\langle\alpha,\beta\rangle}{2}+\frac{\tau}{2}|\beta|^{2}-\langle\beta,z \rangle). \tag{19}\] Proof.: By direct calculation, one has \[t_{\beta}(\Lambda_{0})=\Lambda_{0}+\beta-\frac{|\beta|^{2}}{2}\delta,\qquad t_{\beta}(\delta)=\delta,\qquad t_{\beta}(z)=z-\langle z,\beta\rangle\delta.\] Thus, \[t_{\beta}(h) =t_{\beta}(2\pi i(-\tau\Lambda_{0}+z+t\delta))\] \[=-2\pi i\tau(\Lambda_{0}+\beta-\frac{|\beta|^{2}}{2}\delta)+2 \pi i(z-\langle z,\beta\rangle\delta)+2\pi it\delta\] \[=-2\pi i\tau\Lambda_{0}+2\pi i(z-\tau\beta)+2\pi i\delta(t- \langle z,\beta\rangle+\tau\frac{|\beta|^{2}}{2}).\] Furthermore, \[(\alpha,\beta,u)\cdot h=-2\pi i\tau\Lambda_{0}+2\pi i(z+\alpha-\tau\beta)+2 \pi i\delta(t+\frac{u}{2\pi i}-\frac{\langle\alpha,\beta\rangle}{2}-\langle z,\beta\rangle+\tau\frac{|\beta|^{2}}{2}).\] We are done. The actions of the groups \(SL_{2}(\mathbb{Z})\) and \(N\) on \(\mathfrak{h}^{*}\) are compatible: **Lemma 2.3**.: _For \((\alpha,\beta,u)\in N\), \(h\in\mathfrak{h}^{*}\) and \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL_{2}(\mathbb{Z})\), the following holds:_ \[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot(\alpha,\beta,u)\cdot h=(a\alpha+b\beta,c\alpha+d\beta,u) \cdot\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot h. \tag{20}\]
The metaplectic group is defined as \[Mp_{2}(\mathbb{Z}):=\left\{(A,j)\left|\begin{array}{cc}A\in SL_{2}(\mathbb{Z }),\\ j&\text{is a holomorphic function in }\tau\in\mathbb{H}\\ &\text{such that }j(\tau)^{2}=c\tau+d\end{array}\right.\right\}.\] Given a holomorphic function \(F\) on \(Y=\mathbb{H}\times\mathbb{C}\times\mathbb{C}\), one has the right actions of \(Mp_{2}(\mathbb{Z})\) and \(N\) on \(F\): \[F|_{(A,j)}(\tau,z,t):=\frac{1}{j(\tau)^{\ell}}\cdot F(A\cdot( \tau,z,t)),\] \[F|_{(\alpha,\beta,u)}(\tau,z,t):=F((\alpha,\beta,u)\cdot(\tau,z,t )).\] Then one defines the following important functions for \(\alpha,\beta\in\overline{\mathfrak{h}}^{*}\): \[F^{\alpha,\beta}(\tau,z,t):=F((\alpha,\beta,0)\cdot(\tau,z,t)), \tag{21}\] namely \[F^{\alpha,\beta}(\tau,z,t)=F(\tau,z+\alpha-\tau\beta,t-\frac{\langle\alpha, \beta\rangle}{2}-\langle\beta,z\rangle+\frac{\tau}{2}|\beta|^{2}).\] The modular transformation of these functions is given as follows: **Lemma 2.4**.: _[_11_]_ _Under the action of \((A,j)\in Mp_{2}(\mathbb{Z})\),_ * \((F|_{(A,j)})^{\alpha,\beta}=F^{a\alpha+b\beta,c\alpha+d\beta}|_{(A,j)},\)__ * \(F^{\alpha,\beta}|_{(A,j)}=(F|_{(A,j)})^{a^{\prime}\alpha+b^{\prime}\beta,c^{ \prime}\alpha+d^{\prime}\beta},\)__ _where \(A^{-1}=\begin{pmatrix}a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{pmatrix}\)._ ### Twisted characters Now we consider the normalized character of \(L(\lambda)\) at level \(k\), \(\chi_{\lambda}(h)=q^{m_{\lambda}}\operatorname{tr}_{L(\Lambda)}e^{2\pi ih}\), evaluated at \(h=2\pi i(-\tau d+z+tc)\), i.e., \(\chi_{\lambda}(\tau,z,t)\). Then given \(\alpha,\beta\in\overline{\mathfrak{h}}^{*}\), one has \[\begin{split}\chi_{\lambda}^{\alpha,\beta}(\tau,z,t)& =q^{m_{\lambda}}\operatorname{tr}_{L(\lambda)}e^{-2\pi i\tau d+2 \pi i(z+\alpha-\tau\beta)+2\pi ik(t-\frac{\langle\alpha,\beta\rangle}{2}- \langle z,\beta\rangle+\tau\frac{|\beta|^{2}}{2})}\\ &=e^{2\pi ikt}\operatorname{tr}_{L(\lambda)}e^{2\pi i((z+\alpha)+ k(-\frac{\langle\alpha,\beta\rangle}{2}-\langle z,\beta\rangle))}q^{-\beta+k\frac{| \beta|^{2}}{2}}q^{L_{(0)}-\frac{c}{24}}\\ &=e^{2\pi ikt}\operatorname{tr}_{L(\lambda)}e^{2\pi i(z+\alpha-k \frac{\langle\alpha,\beta\rangle}{2}-k\langle z,\beta\rangle)}q^{L_{(0)}-\beta+k \frac{|\beta|^{2}}{2}-\frac{c}{24}}\end{split} \tag{22}\] The twisted character (22) is very important. When \(z=0\), multiplying (22) with \(\eta(\tau)^{k}\) produces precisely the theta function defined on vertex algebras by Miyamoto [10]. When \(\alpha=0\), (22) can be written as \[\chi_{\lambda}^{0,\beta}(\tau,z,t)=(\mathbf{y}e^{-2\pi i\langle z,\beta \rangle}q^{\frac{|\beta|^{2}}{2}})^{k}\operatorname{tr}_{L(\Lambda)}e^{2\pi iz}q^{-\beta}q^{L_{( 0)}-\frac{c}{24}} \tag{23}\] where \(\mathbf{y}:=e^{2\pi it}\). ### Twisted modules Let \(g\) be an automorphism of \(V\) of finite order \(T\).
We first recall the definition of a \(g\)-twisted module: **Definition 2.5**.: _An (ordinary) \(g\)-twisted \(V\)-module is a \(\mathbb{C}\)-linear space \(M\) equipped with a linear map_ \[V \to\operatorname{End}(M)[[z^{\frac{1}{T}},z^{-\frac{1}{T}}]],\] \[v \mapsto Y_{M}(v,z)=\sum_{n\in\mathbb{Q}}v_{(n)}z^{-n-1}\] _satisfying_ * _For_ \(v\in V\) _and_ \(w\in M\)_,_ \(v_{(m)}w=0\) _if_ \(m\) _is large enough._ * \(Y_{M}(\mathbf{1},z)=id_{M}\)_._ * _For_ \(v\in V^{r}=\Big{\{}v\in V|gv=e^{\frac{2\pi ir}{T}}v\Big{\}}\)_, and_ \(0\leq r\leq T-1\)__ \[Y_{M}(v,z)=\sum_{n\in\frac{r}{T}+\mathbb{Z}}v_{(n)}z^{-n-1}.\] * _(Jacobi identity) For_ \(u\in V^{r}\)__ \[z_{0}^{-1}\delta(\frac{z_{1}-z_{2}}{z_{0}})Y_{M}(u,z_{1})Y_{M}(v,z_{2})-(-1 )^{|u||v|}z_{0}^{-1}\delta(\frac{z_{2}-z_{1}}{-z_{0}})Y_{M}(v,z_{2})Y_{M}(u,z_ {1})\] \[=z_{2}^{-1}(\frac{z_{1}-z_{0}}{z_{2}})^{-\frac{r}{T}}\delta(\frac{z_ {1}-z_{0}}{z_{2}})Y_{M}(Y(u,z_{0})v,z_{2}).\] * \(M=\oplus_{\lambda\in\mathbb{C}}M_{\lambda}\)_, where_ \(M_{\lambda}=\big{\{}w\in M|L_{(0)}w=\lambda w\big{\}}\)_,_ * \(M_{\lambda}\) _is finite dimensional,_ * _for a fixed_ \(\lambda\)_,_ \(M_{\frac{n}{T}+\lambda}=0\) _for all sufficiently small integers_ \(n\)_._ **Remark 2.6**.: _If the grading condition is dropped, one calls it a weak \(g\)-twisted module._ Now we review some basic facts about twisted modules in the vertex algebra setting. Let \((V,\omega)\) be a \(\mathbb{Z}\)-graded conformal vertex algebra. Let \(v\in V_{1}\) be an even vector satisfying the Heisenberg \(\lambda\)-bracket relation \[[v_{\lambda}v]=k\lambda,\quad[\omega_{\lambda}v]=(T+\lambda)v,\] where \(T\) here denotes the translation operator. Suppose further that \(v_{(0)}\) acts semisimply on \(V\) such that the eigenvalues of \(v_{(0)}\) belong to \(\frac{1}{T}\mathbb{Z}\). Li's \(\Delta\)-operator is defined as \[\Delta(z):=z^{v_{(0)}}\exp(\sum_{n=1}^{\infty}\frac{v_{(n)}}{-n}(-z)^{-n}). \tag{25}\] When \(g(v)=v\), one can obtain a \(ge^{2\pi iv_{(0)}}\)-twisted module from a \(g\)-twisted \(V\)-module \(M\) by using Li's \(\Delta\)-operator as follows. **Proposition 2.7**.: _[_10_]_ \((M,Y_{M}(\Delta(z)\cdot,z))\) _is a weak \(ge^{2\pi iv_{(0)}}\)-twisted module._ When \(g=\operatorname{id}\), \(e^{2\pi iv_{(0)}}\) is an automorphism of \(V\) of order \(T\). Then \((M,Y_{M}(\Delta(z)\cdot,z))\) is an \(e^{2\pi iv_{(0)}}\)-twisted module. Let \(v^{\prime}\in V_{1}\) satisfy \(v_{(0)}v^{\prime}=0\). Define \[\hat{L}_{(0)} :=\operatorname{Res}_{z}zY_{M}(\Delta(z)\omega,z),\] \[\hat{v^{\prime}}_{(0)} :=\operatorname{Res}_{z}Y_{M}(\Delta(z)v^{\prime},z). \tag{26}\] By direct calculation, one has \[\hat{L}_{(0)}=L_{(0)}+v_{(0)}+\frac{k}{2}\langle v,v\rangle,\quad\hat{v^{ \prime}}_{(0)}=v^{\prime}_{(0)}+k\langle v,v^{\prime}\rangle.\] The normalized character of a weak \(V\)-module \(M\) is defined as \[\operatorname{ch}(M)(q,z)=\operatorname{tr}_{M}e^{2\pi izv^{\prime}_{(0)}}q^{ L_{(0)}-\frac{c}{24}}. \tag{27}\] Then the normalized character of the weak \(e^{2\pi iv_{(0)}}\)-twisted module \(\hat{M}=(M,Y_{M}(\Delta(z)\cdot,z))\) is \[\operatorname{ch}(\hat{M})(q,z)=\operatorname{tr}_{M}e^{2\pi iz(v^{\prime}_{(0 )}+k\langle v,v^{\prime}\rangle)}q^{L_{(0)}+v_{(0)}+\frac{1}{2}k\langle v,v \rangle-\frac{c}{24}}. \tag{28}\]
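For concreteness, here is a short verification of the first identity (our own computation, assuming the standard expansion \(\Delta(z)\omega=\omega+z^{-1}v+\frac{k}{2}\langle v,v\rangle z^{-2}\mathbf{1}\), which follows from \(v_{(1)}v=k\langle v,v\rangle\mathbf{1}\) and \(v_{(n)}v=0\) for \(n\geq 2\)):

\[\hat{L}_{(0)}=\operatorname{Res}_{z}\,zY_{M}(\Delta(z)\omega,z)=\operatorname{Res}_{z}\,z\Big{(}Y_{M}(\omega,z)+z^{-1}Y_{M}(v,z)+\frac{k}{2}\langle v,v\rangle z^{-2}\Big{)}=L_{(0)}+v_{(0)}+\frac{k}{2}\langle v,v\rangle,\]

since \(\operatorname{Res}_{z}zY_{M}(\omega,z)=L_{(0)}\) and \(\operatorname{Res}_{z}Y_{M}(v,z)=v_{(0)}\).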
**Remark 2.8**.: _In the affine vertex algebra case, let \(M=L(\Lambda)\), \(v^{\prime}=z\) and \(v=-\beta\). One has_ \[\operatorname{ch}(\hat{M})(q,z)=\chi_{\Lambda}^{0,-\beta}(\tau,z,0),\] _where \(q=e^{2\pi i\tau}\). It is also worth noting that the central charge of the conformal element \(\omega+L_{(-1)}v\) is \(c-12k\langle v,v\rangle\). Also \((\omega+L_{(-1)}v)_{(0)}=L_{(0)}+v_{(0)}\). Thus,_ \[\operatorname{tr}_{M}e^{2\pi izv^{\prime}_{(0)}}q^{L_{(0)}+v_{(0)}+\frac{1}{2 }k\langle v,v\rangle-\frac{c}{24}} \tag{29}\] _can be understood as the trace function on \(M\) with respect to the new conformal vector \(\omega+L_{(-1)}v\). Careful readers may notice the difference between (28) and (29); this is because the first one changes the module structure and the second one changes the conformal structure._ ### Spectral Flow Automorphism Let us give a brief introduction to the spectral flow automorphisms (see [12, 13, 14] for details). Let \(\hat{\mathfrak{g}}\) be the (non-twisted) affine Lie algebra associated with the semisimple complex finite-dimensional Lie algebra \((\mathfrak{g},(\,|\,))\). Let \(\Delta\) be the set of roots of \(\mathfrak{g}\). Let \(\alpha\in\Delta\) denote a root of \(\mathfrak{g}\) with root vector \(e^{\alpha}\) and coroot \(\alpha^{\vee}\), and let \(W\) be the Weyl group of the finite Lie algebra \(\mathfrak{g}\). Each element \(w\in W\) permutes the roots and induces an automorphism of \(\mathfrak{g}\) via \(w(e^{\alpha})=e^{w(\alpha)}\); this action can be generalized to the affine Lie algebra \(\hat{\mathfrak{g}}\) as follows. The root vectors corresponding to real roots are \(e^{\alpha}_{n}\), and the root vectors corresponding to imaginary roots are denoted by \(h^{i}_{n}\); one can associate \(h^{i}\) with the simple coroot \(\alpha^{\vee}_{i}\) of \(\mathfrak{g}\). Let \(W\subset GL(\mathfrak{h}^{*})\) be the Weyl group of \(\mathfrak{g}\) generated by the reflections \(s_{\alpha}\) with \(\alpha\in\hat{\Delta}^{\mathrm{re}}\), where \(s_{\alpha}(\lambda)=\lambda-\langle\lambda,\alpha^{\vee}\rangle\alpha\) for \(\lambda\in\mathfrak{h}^{*}\). Let the affine Weyl group be \(\widehat{W}=W\ltimes\bar{Q}^{\vee}\). The coroot lattice acts on the roots of \(\hat{\mathfrak{g}}\) by translation in the imaginary direction. Then each simple coroot \(\alpha^{\vee}_{i}\) of \(\mathfrak{g}\) defines an independent transformation \(\tau_{i}\) on the root vectors of the affine Lie algebra \(\hat{\mathfrak{g}}\) via \[\tau_{i}(e^{\alpha}_{n})=e^{\alpha}_{n-\langle\alpha,\alpha^{\vee}_{i}\rangle} \quad(n\in\mathbb{Z}),\qquad\tau_{i}(h^{j}_{n})=h^{j}_{n}\quad(n\neq 0) \tag{30}\] We finally obtain the action of the spectral flow automorphisms \(\tau_{i}\) on the generators of the affine Lie algebra \(\hat{\mathfrak{g}}\) and the Virasoro element \(L_{0}\) as follows, \[\tau_{i}(e^{\alpha}_{n})=e^{\alpha}_{n-\langle\alpha,\alpha^{ \vee}_{i}\rangle},\quad\tau_{i}(h^{j}_{n})=h^{j}_{n}-(\alpha^{\vee}_{i}|\alpha^ {\vee}_{j})\delta_{n,0}K \tag{31}\] \[\tau_{i}(K)=K,\quad\tau_{i}(L_{0})=L_{0}-h^{i}_{0}+\frac{2}{( \alpha_{i}|\alpha_{i})}K \tag{32}\] For \(\hat{\mathfrak{g}}=A^{(1)}_{1}\), the spectral flow automorphism \(\sigma\) may be regarded as a square root of the affine Weyl translation by the simple coroot \(\alpha^{\vee}_{i}\) of the finite Lie algebra \(\mathfrak{g}\); the powers of \(\sigma\) act as follows, \[\sigma^{\ell}(e_{n})=e_{n-\ell},\quad\sigma^{\ell}(h_{n})=h_{n}-\ell\delta_{n,0 }K,\quad\sigma^{\ell}(f_{n})=f_{n+\ell} \tag{33}\] \[\sigma^{\ell}(K)=K,\quad\sigma^{\ell}(L_{0})=L_{0}-\frac{1}{2}\ell h _{0}+\frac{1}{4}\ell^{2}K \tag{34}\] One can use the spectral flow automorphism to modify the action of \(A^{(1)}_{1}\) on any module \(M\), thereby obtaining new modules \(\sigma^{*}(M)\).
Explicitly, the modified algebra action defining these new modules is given by \[X\cdot\sigma^{*}|v\rangle=\sigma^{*}(\sigma^{-1}(X)|v\rangle),\quad(X\in \hat{\mathfrak{sl}}_{2}). \tag{35}\] For example, if \(|\lambda,\Delta\rangle\) is a vector of weight \(\lambda\) and conformal dimension \(\Delta\), then for the vector \((\sigma^{\ell})^{*}|\lambda,\Delta\rangle\in(\sigma^{\ell})^{*}(M)\), the weight and conformal dimension become \[h_{0}(\sigma^{\ell})^{*}|\lambda,\Delta\rangle=(\lambda+\ell K)(\sigma^{\ell})^ {*}|\lambda,\Delta\rangle \tag{36}\] \[L_{0}(\sigma^{\ell})^{*}|\lambda,\Delta\rangle=(L_{0}+\frac{1}{2}\ell h_{0}+\frac {1}{4}\ell^{2}K)(\sigma^{\ell})^{*}|\lambda,\Delta\rangle \tag{37}\] (From now on, we denote the new module by \(\sigma^{\ell}(M)\).) In order to obtain the character of the new module \(\sigma^{\ell}(M)\), we need the character of the module \(M\). The normalized character of an irreducible highest weight \(\hat{\mathfrak{g}}\)-module \(M\) at level \(k\) is defined as \[\operatorname{ch}[M](\mathbf{y},\mathbf{z},q)=\operatorname{tr}_{M}\mathbf{y }^{k}\mathbf{z}^{h_{0}}q^{L_{0}-c/24} \tag{38}\] where \(\mathbf{y}=e^{2\pi it}\), \(\mathbf{z}=e^{2\pi iz}\) and \(q=e^{2\pi i\tau}\). The character of the new module \(\sigma^{\ell}(M)\) can be written in terms of the character of the module \(M\) as follows, \[\operatorname{ch}[\sigma^{\ell}(M)](\mathbf{y},\mathbf{z},q)=\operatorname{ ch}[M]\left(\mathbf{y}\mathbf{z}^{\ell}q^{\ell^{2}/4},\mathbf{z}q^{\ell/2},q \right), \tag{39}\] One can check that the character of the module \(M\) satisfies the following relation, \[\operatorname{ch}[\sigma^{\ell+\ell^{\prime}}(M)](\mathbf{y},\mathbf{z},q)= \operatorname{ch}[\sigma^{\ell}\circ\sigma^{\ell^{\prime}}(M)](\mathbf{y}, \mathbf{z},q)=\operatorname{ch}[M]\left(\mathbf{y}\mathbf{z}^{\ell+\ell^{ \prime}}q^{(\ell+\ell^{\prime})^{2}/4},\mathbf{z}q^{(\ell+\ell^{\prime})/2},q \right). \tag{40}\] ### Li's delta operator, spectral flow and twisted modules Let \(M\) be a highest weight \(\hat{\mathfrak{g}}\)-module or its contragredient module. Let \(v\in\mathfrak{h}_{\mathbb{R}}^{*}\). We have \[\Delta(z)e_{-1}^{\alpha}\mathbf{1} =z^{\langle v,\alpha\rangle}e_{-1}^{\alpha}\mathbf{1},\] \[\Delta(z)h_{-1}^{j}\mathbf{1} =h_{-1}^{j}\mathbf{1}+\langle v,h^{j}\rangle kz^{-1}\mathbf{1}\] Thus, it induces the spectral flow \(\tau\): \[\tau(e_{n}^{\alpha}) =e_{n+\langle v,\alpha\rangle}^{\alpha}\] \[\tau(h_{n}^{j}) =h_{n}^{j}+\langle v,h^{j}\rangle k\delta_{n,0}.\] If \(\langle v,\alpha\rangle\in\frac{1}{T}\mathbb{Z}\) for all roots \(\alpha\), according to Proposition 2.7, \((M,Y_{M}(\Delta(z)\cdot,z))\) is a weak \(e^{2\pi iv_{(0)}}\)-twisted (\(\mathbb{Z}_{T}\)-twisted) module. But \(\mathbb{Z}_{T}\)-twisted modules do not necessarily come from spectral flow. In what follows, we shall call \((M,Y_{M}(\Delta(z)\cdot,z))\) the spectral flowed module or the \(e^{2\pi iv_{(0)}}\)-twisted module. **Proposition 2.9**.: _For \(A_{1}^{(1)}\) at admissible level \(k=-2+\frac{v}{u}\), let \(\lambda(i,j,v,u)\) be the Dynkin label for the finite Lie algebra \(\mathfrak{sl}_{2}\) of an irreducible highest weight module and \(\Delta(i,j,v,u)\) its conformal weight determined by the Sugawara construction._
_They become \(\sigma^{-\frac{1}{2}}(\lambda(i,j,v,u))\) and \(\sigma^{-\frac{1}{2}}(\Delta(i,j,v,u))\) after applying the \(\ell=-\frac{1}{2}\) spectral flow action to \(\lambda(i,j,v,u)\) and \(\Delta(i,j,v,u)\) as follows,_ \[\sigma^{-\frac{1}{2}}(\lambda(i,j,v,u)) =i-1-(k+2)s-\frac{k}{2},\quad(i=1,\cdots,v-1,j=0,\cdots,u-1)\] \[\sigma^{-\frac{1}{2}}(\Delta(i,j,v,u)) =\frac{1}{16}\left(4+k-4i+4(2+k)j+\frac{4(-1+(i-(2+k)j)^{2})}{2+k }\right),\quad(i=1,\cdots,v-1,j=0,\cdots,u-1)\] Proof.: The statement follows from (36), (37) and Theorem (3.5.3) in [1]. One can compute the \(\mathbb{Z}_{2}\)-twisted Zhu's algebra \(A_{\sigma}(L_{k}(\mathfrak{sl}_{2}))\) at admissible levels \(k=-2+\frac{v}{u}\) (the proof will be given in Proposition 3.4), and find that all irreducible \(\mathbb{Z}_{2}\)-twisted modules of \(L_{k}(\mathfrak{sl}_{2})\) come from the \(\ell=-\frac{1}{2}\) spectral flow on the untwisted modules in category \(\mathcal{O}\). ### Half-integer spectral flow for \(A_{1}^{(1)}\) at boundary admissible level For \(A_{1}^{(1)}\), \(h^{\vee}=2\), the boundary admissible levels are \(k=-2+\frac{2}{u}\), where \(u=2n+1\) is a positive odd integer. All admissible weights are \[\Lambda_{k,j}:=t_{-\frac{j}{2}}\cdot(k\Lambda_{0})=\left(k+\frac{2j}{u} \right)\Lambda_{0}-\frac{2j}{u}\Lambda_{1},\quad j=0,1,\cdots,u-1, \tag{41}\] where \(\Lambda_{0}\) and \(\Lambda_{1}\) are the fundamental weights of the affine Lie algebra \(A_{1}^{(1)}\). All characters of irreducible modules can be written in terms of the Jacobi theta function \(\theta_{1}(\mathbf{z};q)\) as follows, \[\operatorname{ch}[L(\Lambda_{k,j})](\mathbf{y},\mathbf{z},q)=\mathbf{y}^{k} \mathbf{z}^{-\frac{2j}{u}}q^{\frac{2}{u}}\frac{\theta_{1}(\mathbf{z}^{2}q^{-j} ;q^{u})}{\theta_{1}(\mathbf{z}^{2};q)},\quad(j=0,1,\ldots,u-1) \tag{42}\] We should emphasize that the vacuum characters \((j=0)\) of these VOAs, denoted by \(L_{\frac{-4n}{2n+1}}(\mathfrak{sl}_{2})\), coincide with the superconformal indices of the \(4d\) supersymmetric gauge theories called \((A_{1},D_{2n+1})\) theories. Especially, when \(u=3\), level \(k=-4/3\), \(L_{-\frac{4}{3}}(\mathfrak{sl}_{2})\) is the \(\mathfrak{a}_{1}\) member of the DC series of simple Lie algebras. One can generalize the integer flow parameter \(\ell\) to half-integers; (39) stays the same. For \(A_{1}^{(1)}\) at boundary admissible levels \(k=-2+\frac{2}{u}\), using (39), all characters of these twisted modules take the following form, \[\operatorname{ch}[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,j}))](\mathbf{y},\mathbf{ z},q)=(\mathbf{y}\mathbf{z}^{-\frac{1}{2}}q^{\frac{1}{16}})^{k}(\mathbf{z}q^{- \frac{1}{4}})^{-\frac{2j}{u}}q^{\frac{2}{u}}\frac{\theta_{1}(\mathbf{z}^{2}q^ {-j-\frac{1}{2}};q^{u})}{\theta_{1}(\mathbf{z}^{2}q^{-\frac{1}{2}};q)},\quad(j =0,1,\ldots,u-1) \tag{43}\] Now, we use \(\lambda(j,u)\) instead of \(\lambda(1,j,2,u)\) (and similarly \(\Delta(j,u)\)), and get \[\sigma^{-\frac{1}{2}}(\lambda(j,u))=\frac{u-2j-1}{u},\quad\sigma^{-\frac{1}{2 }}(\Delta(j,u))=\frac{1+4j(1+j-u)-u}{8u},\quad(j=0,1,\ldots u-1)\] In particular, we observe that \(\sigma^{-\frac{1}{2}}(\lambda(j,u))\) and \(\sigma^{-\frac{1}{2}}(\Delta(j,u))\) satisfy the following relations, respectively, \[\sigma^{-\frac{1}{2}}(\lambda(j,u))=-\sigma^{-\frac{1}{2}}(\lambda(u-j-1,u)), \quad\sigma^{-\frac{1}{2}}(\Delta(j,u))=\sigma^{-\frac{1}{2}}(\Delta(u-1-j,u)) \tag{44}\] Fixing a boundary admissible level \(k\), all \(\sigma^{-\frac{1}{2}}(L(\Lambda_{k,j}))\) are ordinary modules.
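As a quick sanity check (our own worked example, not taken from the references), take the smallest boundary admissible case \(u=3\), i.e. \(k=-\frac{4}{3}\). The formulas above give

\[\sigma^{-\frac{1}{2}}(\lambda(j,3))=\frac{2-2j}{3},\qquad\sigma^{-\frac{1}{2}}(\Delta(j,3))=\frac{4j(j-2)-2}{24},\qquad j=0,1,2,\]

so the pairs \((\lambda,\Delta)\) are \((\frac{2}{3},-\frac{1}{12})\), \((0,-\frac{1}{4})\) and \((-\frac{2}{3},-\frac{1}{12})\). The symmetries (44) are visible here: \(\sigma^{-\frac{1}{2}}(\lambda(0,3))=-\sigma^{-\frac{1}{2}}(\lambda(2,3))\) and \(\sigma^{-\frac{1}{2}}(\Delta(0,3))=\sigma^{-\frac{1}{2}}(\Delta(2,3))\).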
Then the normalized characters satisfy a modular linear differential equation (Section 5). We shall use the fact that \(A_{\hat{\sigma}}(L_{k}(\mathfrak{sl}_{2}))\) is semi-simple to prove \(\mathbb{Z}_{2}\)-rationality of \(L_{k}(\mathfrak{sl}_{2})\). **Lemma 2.10**.: _Let \(\sigma^{-\frac{1}{2}}(L(\Delta_{1},\lambda_{1}))\) and \(\sigma^{-\frac{1}{2}}(L(\Delta_{2},\lambda_{2}))\) be irreducible \(\mathbb{Z}_{2}\)-twisted modules of \(L_{k}(\mathfrak{sl}_{2})\). Suppose that there is a nontrivial extension of_ \[0\longrightarrow\sigma^{-\frac{1}{2}}(L(\Delta_{1},\lambda_{1}))\overset{ \iota}{\longrightarrow}M\overset{\pi}{\longrightarrow}\sigma^{-\frac{1}{2}} (L(\Delta_{2},\lambda_{2}))\longrightarrow 0\] _Then \(L_{0}\) acts locally finitely on \(M\)._ Proof.: Since the Zhu's algebra \(A_{\hat{\sigma}}(L_{k}(\mathfrak{sl}_{2}))\cong\mathbb{C}[h]/\langle I\rangle\) is a commutative algebra and semisimple. According to characters \(\operatorname{ch}[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,j}))]\) of all twisted modules. We have \(\sigma^{-\frac{1}{2}}(L(\Delta_{1},\lambda_{1}))\) and \(\sigma^{-\frac{1}{2}}(L(\Delta_{2},\lambda_{2}))\) are \(L_{0}\)-diagonalizable and the \(L_{0}\)-finite dimensional. The proof is similar to [10] (Lemma 7.3). Finally, we have that, \(m\in M\) belongs to some \(L_{0}\)-stable finite dimensional vector subspace of \[\bigoplus_{i=1}^{r}\ker(L_{0}-\nu_{i}Id)\oplus\bigoplus_{j=1}^{s}\ker(L_{0}- \mu_{j}Id).\] where \(\nu_{1},\cdots,\nu_{r}\) and \(\mu_{1},\cdots,\mu_{s}\) are eigenvalues of some eigenvectors \(v_{1},\cdots,v_{r}\in\sigma^{-\frac{1}{2}}(L(\Delta_{1},\lambda_{1}))\) and \(w_{1},\cdots,w_{s}\in\sigma^{-\frac{1}{2}}(L(\Delta_{2},\lambda_{2}))\) respectively, then the assertion follows. Then we can prove the following theorem which is Theorem 1.5 in the introduction. **Theorem 2.11**.: _The vertex operator algebra \(L_{k}(\mathfrak{sl}_{2})\) at boundary level \(k\) is \(\mathbb{Z}_{2}\)-rational._ Proof.: For any distinct conformal dimension \(\sigma^{-\frac{1}{2}}(\Delta_{m}),\sigma^{-\frac{1}{2}}(\Delta_{n})\in\sigma^{ -\frac{1}{2}}(\Delta(j,u))\) which correspond to two distinct simple modules. We have \(\sigma^{-\frac{1}{2}}(\Delta_{m})\neq\sigma^{-\frac{1}{2}}(\Delta_{n}) \operatorname{(mod}\mathbb{Z})\). It is known [10] (Lemma 7.4) that if there exists a nontrivial extension of two distinct simple \(\mathbb{Z}_{2}\)-twisted \(L_{k}(\mathfrak{sl}_{2})\)-modules \[0\longrightarrow\sigma^{-\frac{1}{2}}(L(\Delta_{m},\lambda_{m}))\overset{ \iota}{\longrightarrow}M\overset{\pi}{\longrightarrow}\sigma^{-\frac{1}{2}} (L(\Delta_{n},\lambda_{n}))\longrightarrow 0\] then \(\sigma^{-\frac{1}{2}}(\Delta_{m})\) and \(\sigma^{-\frac{1}{2}}(\Delta_{n})\) coincide modulo \(\mathbb{Z}\). Then, we have \(\operatorname{Ext}^{1}[\sigma^{-\frac{1}{2}}(L(\Delta_{m},\lambda_{m})), \sigma^{-\frac{1}{2}}(L(\Delta_{n},\lambda_{n}))]=0\) for \(\sigma^{-\frac{1}{2}}(\Delta_{m})\neq\sigma^{-\frac{1}{2}}(\Delta_{n})\). For the simple modules which have the same conformal dimension \(\sigma^{-\frac{1}{2}}(\Delta_{m})=\sigma^{-\frac{1}{2}}(\Delta_{n})\). 
One can consider the top space of these modules, the exact sequence \[0\longrightarrow\sigma^{-\frac{1}{2}}(L(\Delta_{m},\lambda_{m}))_{\operatorname {top}}\overset{\iota}{\longrightarrow}M_{\operatorname{top}}\overset{\pi}{ \longrightarrow}\sigma^{-\frac{1}{2}}(L(\Delta_{n},\lambda_{n}))_{\operatorname {top}}\longrightarrow 0\] has non-trivial extension of \(A_{\hat{\sigma}}(L_{k}(\mathfrak{sl}_{2}))\) modules, This contradicts the fact that Zhu's algebra \(A_{\hat{\sigma}}(L_{k}(\mathfrak{sl}_{2})\) is semisimple. So, there is no nontrivial extension between two distinct simple modules and nontrivial self-extension. This completes the proof. ### Modular transformation Applying (2.4) to (22), one has the following \(S\)-transformation for twisted characters: \[\chi_{\Lambda}^{\alpha,\beta}(\tau,z,t)|_{S}=(\chi_{\Lambda}(\tau,z,t)|_{S})^{ \beta,-\alpha}, \tag{45}\] where \(S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\). **Example 2.12**.: _(Boundary admissible level) We follow the same notations in Section 2.6. Let \(\text{Spec}(T)\) be the eigenvalues of \(T\). First, note that \(\text{Spec}((\epsilon h_{1})_{(0)})\in 2\epsilon\mathbb{Z}\), \((\epsilon\in\mathbb{Q})\). The twisted character (23) evaluated at \(h=2\pi i(-\tau d+\alpha_{1}z+ct)\) is_ \[\begin{split}\chi_{\Lambda}^{0,-\frac{\ell}{2}\alpha_{1}}(\tau,z, t)&=(\mathbf{y}e^{-2\pi i(\alpha_{1}z,-\frac{\ell}{2}\alpha_{1})}q^{ \frac{|-\ell}{2}\alpha_{1}|^{2}})^{m}\operatorname{tr}_{L(\Lambda)}e^{2\pi i \alpha_{1}z}q^{\frac{\ell}{2}\alpha_{1}}q^{L_{(0)}-\frac{m}{2\ell}}\\ &=(\mathbf{y}\mathbf{z}^{\ell}q^{\frac{\ell^{2}}{4}})^{m} \operatorname{tr}_{L(\Lambda)}(\mathbf{z}q^{\frac{\ell}{2}})^{h_{0}}q^{L_{( 0)}-\frac{m}{2\ell}},\end{split} \tag{46}\] _where we identity \((h_{1})_{(0)}\) with \(\alpha_{1}\). (46) is the same as (39)._ _We split the discussion into two cases:_ * \(\text{Spec}((\epsilon h_{1})_{0})\in\mathbb{Z}\)_, i.e.,_ \(\epsilon=\frac{\ell}{2}\)_,_ \(\ell\in\mathbb{Z}\)_. Let_ \(\sigma\) _be spectral flow automorphism on_ \(M\) _(see_ _[_13_]_ _for more details). In this case, (_46_) is the character of_ \(\sigma^{\ell}(M)\)_._ * \(\text{Spec}((\epsilon h_{1})_{0})\in\frac{1}{T}\mathbb{Z}\)_,_ \(T\in\mathbb{Z}_{>0}\)_, i.e.,_ \(\epsilon=\frac{\ell}{2}\)_,_ \(\ell\in\frac{1}{T}\mathbb{Z}\)_,_ \(T\in\mathbb{Z}_{>0}\)_. (_46_) now becomes the character of the_ \(e^{2\pi i(\epsilon h_{1})_{(0)}}\)_-twisted module_ \(\hat{M}\) _introduced in Section_ 2.3_, where_ \(v=\epsilon h_{1}\)_. The_ \(S\)_-transformation of_ \(\chi_{\lambda_{m,k}}^{0,-\frac{\ell}{2}\alpha_{1}}(\tau,z,t)\) _is_ \[\begin{split}\chi_{\lambda_{m,k}}^{0,-\frac{\ell}{2}\alpha_{1}}( \tau,z,t)|_{S}&=(\chi_{\lambda_{m,k}}(\tau,z,t)|_{S})^{-\frac{ \ell}{2}\alpha_{1},0}\\ &=\sum_{\lambda_{m,k^{\prime}}\in P_{m}}a_{\lambda_{m,k},\lambda_ {m,k^{\prime}}}(\chi_{\lambda_{m,k^{\prime}}}^{-\frac{\ell}{2}\alpha_{1},0}( \tau,z,t))\end{split}\] (47) _where_ \[a_{\lambda_{m,k},\lambda_{m,k^{\prime}}}=(-1)^{k+k^{\prime}}e^{-\frac{2\pi i k \lambda^{\prime}}{u}}\frac{1}{\sqrt{u}}\sin\frac{u\pi}{2},\] _and_ (48) \[\chi_{\lambda_{m,k^{\prime}}}^{-\frac{\ell}{2}\alpha_{1},0}(\tau,z,t)=\chi_{ \lambda_{m,k^{\prime}}}(\tau,z-\frac{\ell}{2},t).\] _As one can see, the \(S\) transformation of the character of \(e^{2\pi i(\sigma h_{1})_{(0)}}\)-twisted module \(L(\lambda_{m,k})\) can be written as linear combination of the characters of untwisted modules with the same \(S\)-matrix as in untwisted case._ ## 3. 
Twisted Zhu's Bimodule of highest weight modules The fusion rules among admissible representations in category \(\mathcal{O}\) were studied by many authors ([14, 15, 16, 17, 18, 19], and etc.). They found that the Verlinde formula is no longer true in the case of \(L_{k}(\mathfrak{g})\), where \(k\) is the admissible level and \(\mathfrak{g}\) is a simple Lie algebra, since the negative coefficient would appear. The method used by Dong-Li-Mason is to apply Frenkel-Zhu's bimodule theorem [10], while the other methods involve the use of characters and some machinery from physics [1, 16, 18]. In this section, we shall compute the twisted Zhu's bimodules recently introduced in [14]; in particular, the twisted Zhu's algebra is a special case, then we use the twisted Zhu's algebra and bimodules to classify the twisted modules and also calculate the fusion rules among the twisted modules. Let \(M^{1},M^{2},M^{3}\) be \(g_{1},g_{2},g_{3}\)-twisted modules, respectively. We consider the fusion rules \(N_{1\,2}^{3}\), where \(1,2,3\) represent modules \(M^{1},M^{2},M^{3}\). Recall the following definition [14] of \(A_{g_{2}}\)-bimodule \(A_{g_{1}g_{2},g_{2}}(M^{1})\). Denote the remainder of \(r\in\mathbb{N}\) divided by \(T\) by \([r]\). For homogeneous element \(u\in V\) and \(w_{1}\in M^{1}\), one defines \[u\circ_{g_{1}g_{2},g_{2}}w_{1}=\text{Res}_{z}\frac{(1+z)^{\text{wt}\,u-1+\delta( j_{2})+\frac{j_{2}}{T}}}{z^{1+\delta(j_{1},j_{2})-\frac{j_{1}}{T}}}\] where \(u\in V^{(j_{1},j_{2})}\) and \[\delta(j_{1},j_{2})=\begin{cases}1,&j_{2}=0\\ 1,&j_{2}\neq 0,\;j_{1}+j_{2}\geq T\\ 0,&j_{2}\neq 0,\;j_{1}+j_{2}<T.\end{cases} \tag{49}\] Let \(O^{\prime}_{g_{1}g_{2},g_{2}}(M^{1})\) be the subspace of \(M^{1}\) spanned by all \(u\circ_{g_{1}g_{2},g_{2}}w_{1}\). One defines \(A_{g_{1}g_{2},g_{2}}(M^{1})=M^{1}/O^{\prime}_{g_{1}g_{2},g_{2}}(M^{1})\). Now we recall [14] the \(A_{g_{1}g_{2}}(V)\)-\(A_{g_{2}}(V)\)-bimodule on \(A_{g_{1}g_{2},g_{2}}(M)\). For homogeneous \(u\in V\) and \(w\in M\), the left and right bimodule actions are defined as \[u*_{g_{1}g_{2},g_{2}}w=\begin{cases}\text{Res}_{z}Y_{M}(u,z)w^{\frac{(1+z)^{ \text{wt}\,u-1+\delta(j_{2})+\frac{j_{2}}{T}}}{z^{1-\frac{j_{1}}{T}}}}&j_{1}+j_{2} \equiv 0\;(\text{mod}\;\;T)\\ 0&\text{otherwise},\end{cases} \tag{50}\] and \[w*_{g_{2},g_{1}g_{2}}u=\begin{cases}\operatorname{Res}_{z}Y_{M}(u,z)w\frac{(1+z)^ {n*u-1}}{z^{1-\frac{2}{2}}}&j_{2}=0\\ 0,&\text{otherwise}.\end{cases} \tag{51}\] In particular, when \(g_{1}=id\), \(g_{2}=g\), the \(A_{g,g}(V)\) with the multiplication given by (51) is the same as the \(g\)-twisted Zhu's algebra, \(A_{g}(V)\) defined in [10]. **Conjecture 3.1**.: _[_15_]_ _One has:_ \[\dim\hom_{A_{g_{1}g_{2}}(V)}(A_{g_{1}g_{2},g_{2}}(M^{1})\otimes M^{2}(0),M^{3 }(0))=N_{1\,2}^{3}.\] **Remark 3.2**.: _When \(g_{1}=g_{2}=id\), this Conjecture was proved by Frenkel and Zhu in [11]._ Let \(e,f,h\) be the basis of \(\mathfrak{sl}_{2}\). Define \(\hat{\sigma}=e^{\frac{\pi ih_{(0)}}{2}}\). 
One can obtain irreducible \(\hat{\sigma}\)-twisted (\(\mathbb{Z}_{2}\)-twisted) modules of \(L_{\mathfrak{sl}_{2}}(\ell,0)\) in category \(\mathcal{O}\) by using Li's Delta operator: \[\Delta_{-\frac{1}{2}}(z)=z^{\frac{1}{4}h_{(0)}}\exp\left(\sum_{n=1}^{\infty} \frac{\frac{1}{4}h_{(n)}}{-n}(-z)^{-n}\right),\] i.e., \(\sigma^{-\frac{1}{2}}(L_{\mathfrak{sl}_{2}}(\ell,j))\), where we define \[(\sigma^{-\frac{1}{2}}(M),Y_{M}^{-\frac{1}{2}}(\cdot,z)):=(M,Y_{M}(\Delta_{- \frac{1}{2}}(z)\cdot,z)).\] Now we compute the twisted Zhu's bimodule in the case where \(M^{1}\) is untwisted and \(M^{2},M^{3}\) are \(\hat{\sigma}\)-twisted modules. We have \[u\circ_{\hat{\sigma},\hat{\sigma}}w=\operatorname{Res}_{z}\frac{(1+z)^{\text{ wt}\,u-1+\delta(j_{2})+\frac{j_{2}}{2}}}{z^{1+\delta(j_{2})}}Y_{M}(u,z)w, \tag{52}\] where \(u\in V^{(0,j_{2})}\). If \(u\in V^{(0,1)}_{1}\), one has \[\begin{split}& u\circ_{\hat{\sigma},\hat{\sigma}}w\\ &=\operatorname{Res}_{z}\frac{(1+z)^{\frac{1}{2}}}{z^{m+1}}Y_{M}(u _{(-1)}\mathbf{1},z)w\\ &=u(-m-1)w+\frac{1}{2}u(-m)w-\frac{1}{8}u(-m+1)w+\frac{1}{16}u(- m+2)w+\cdots\in O_{\hat{\sigma},\hat{\sigma}}(M),\end{split} \tag{53}\] where \(m\geq 0\). In our case, we have the following bimodule action [10] **Proposition 3.3**.: _The \(A_{\hat{\sigma}}(V)\)-bimodule \(A_{\hat{\sigma},\hat{\sigma}}(\sigma^{-\frac{1}{2}}(M))\) is isomorphic to \(\mathbb{C}[x,y]\) with bimodule action as follows:_ \[x*f(x,y)=(x+j-2y\frac{\partial}{\partial y})f(x,y),\quad f(x,y)*x=xf(x,y) \tag{54}\] _for any \(f(x,y)\in\mathbb{C}[x,y]\), where \(h_{(0)}v=jv\)._ Proof.: By Definition, we have \[h_{(-1)}*(h_{(-1)}^{m}f_{(0)}^{n}v) =(h_{(-1)}+h_{(0)})h_{(-1)}^{m}f_{(0)}^{n}v\] \[=(h_{(-1)}+j-2n)h_{(-1)}^{m}f_{(0)}^{n}v\] and \[(h_{(-1)}^{m}f_{(0)}^{n})*h_{(-1)}=h_{(-1)}(h_{(-1)}^{m}f_{(0)}^{n})=h_{(-1)}^{ m+1}f_{(0)}^{n}v.\] The Proposition follows immediately if we set \(x=h_{(-1)}+O(M(\ell,j)),y=f(0)+O(M(\ell,j))\) ### Examples Let \(g\) be an automorphism of the universal affine vertex algebra \(V^{k}(\mathfrak{g})\) and denote by \(\mathfrak{g}^{0}\) the fixed point subalgebra of \(\mathfrak{g}\) under the action of \(g\). The \(g\)-twisted Zhu's algebra of \(V^{k}(\mathfrak{g})\) is the universal enveloping algebra of \(\mathfrak{g}^{0}\), \(U(\mathfrak{g}^{0})\), via the map [17] \[F:A_{g}(V^{k}(\mathfrak{g}))\mapsto U(\mathfrak{g}^{0})\] \[F([x^{1}_{(-n_{1}-n_{1})}x^{2}_{(-n_{2}-1)}\cdots x^{m}_{(-n_{m} -1)}\mathbf{1}])=(-1)^{n_{1}+n_{2}+\cdots+n_{m}}x^{m}x^{m-1}\cdots x^{1},\] where \(x^{1},x^{2},...,x^{m}\in\mathfrak{g}^{0}\). Moreover, \(g\)-twisted Zhu's algebra of a simple affine vertex algebra \(L_{k}(\mathfrak{g})\) is \(U(\mathfrak{g}^{0})/\langle[U(\mathfrak{g})v_{\mathrm{sing}}]\rangle\), where \([v_{\mathrm{sing}}]\) means the equivalence class of singular vector \(v_{sing}\) generating the maximal submodule of \(V^{k}(\mathfrak{g})\) in \(A_{g}(L_{k}(\mathfrak{g}))\). Now we compute the \(\hat{\sigma}\)-twisted Zhu's algebra \(A_{\hat{\sigma}}(L_{-\frac{4}{3}}(\mathfrak{sl}_{2}))\). We denote by \(L_{\mathfrak{sl}_{2}}(\ell,0)\) or \(L(\ell,0)\) the vertex algebra \(L_{\ell}(\mathfrak{sl}_{2})\) at level \(\ell\), and denote the admissible irreducible \(L_{\ell}(\mathfrak{sl}_{2})\)-module by \(L_{\mathfrak{sl}_{2}}(\ell,j)\) or \(L(\ell,j)\), where \(j\) is the Dynkin label of finite part of the admissible weight of \(\widehat{\mathfrak{sl}_{2}}\). 
Note that \(A_{\hat{\sigma}}(L_{-\frac{4}{3}}(\mathfrak{sl}_{2}))\) coincides with \(A_{\hat{\sigma},\hat{\sigma}}(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},0))\). By above general argument, we have \[A_{\hat{\sigma}}(L_{-\frac{4}{3}}(\mathfrak{sl}_{2}))\cong U(h)/\langle[U( \mathfrak{g})v_{\mathrm{sing}}]\rangle.\] The weight zero singular vector is [13] \[9h_{(-1)}^{3}+18h_{(-2)}h_{(-1)}-16h_{(-3)}-36f_{(-1)}h_{(-1)}e_{(-1)}-24e_{(- 2)}f_{(-1)}+96f_{(-2)}e_{(-1)}\mathbf{1}. \tag{55}\] Using (53), the equivalence class of (55) in \(A_{\hat{\sigma}}(L_{-\frac{4}{3}}(\mathfrak{sl}_{2}))\) can be written in terms of \(h\). To that end, for \([f_{(-1)}h_{(-1)}e_{(-1)}]\) we have: \[[f_{(-1)}h_{(-1)}e_{(-1)}] =[(-\frac{1}{2}f_{(0)}+\frac{1}{8}f_{(1)}-\frac{1}{16}f_{(2)}+ \cdots)h_{(-1)}e_{(-1)}]\] \[=[\frac{1}{2}h_{(-1)}^{2}-\frac{1}{2}h_{(-1)}+\frac{1}{6}-\frac{5 }{12}h_{(-1)}+\frac{1}{6}]\] \[=[\frac{1}{2}h_{(-1)}^{2}-\frac{11}{12}h_{(-1)}+\frac{1}{3}];\] for \([e_{(-2)}f_{(-1)}]\) we have: \[[e_{(-2)}f_{(-1)}] =[(-\frac{1}{2}e_{(-1)}+\frac{1}{8}e_{(0)}-\frac{1}{16}e_{(1)}+ \cdots)f_{(-1)}]\] \[=[\frac{3}{8}h_{(-1)}+\frac{1}{6}];\] for \([f_{(-2)}e_{(-1)}]\) we have: \[[f_{(-2)}e_{(-1)}]=[-\frac{3}{8}h_{(-1)}+\frac{1}{6}].\] Combining everything together, we get (55) equals \[[9h_{(-1)}^{3}+18h_{(-2)}h_{(-1)}-16h_{(-3)}+18h_{(-1)}^{2}+12h_{(-1)}]. \tag{56}\] The image of (56) under \(F\) is \[h(h+\frac{2}{3})(h-\frac{2}{3}).\] Thus, \(A_{\hat{\sigma}}(L_{-\frac{4}{3}}(\mathfrak{sl}_{2}))\cong\mathbb{C}[h]/\langle h (h+\frac{2}{3})(h-\frac{2}{3})\rangle\). Consider the \(\hat{\sigma}\)-invariant subspace of \(M^{1}\) and \(M^{2}\). One has usual Zhu's bimodule \(A((M^{i})^{\hat{\sigma}})=(M^{i})^{\hat{\sigma}}/O((M^{i})^{\hat{\sigma}})\), where \(i=2,3\). From above definition, we have \[A_{\hat{\sigma},\hat{\sigma}}(M^{2}) =M^{2}/O\left((M^{2})^{\hat{\sigma}}\right),\] \[A_{\hat{\sigma},\hat{\sigma}}(M^{3}) =M^{3}/O\left((M^{3})^{\hat{\sigma}}\right).\] ### Relation between Dong-Li-Mason's Zhu's bimodules and twisted Zhu's bimodules Let \(\omega\) be the original Sugawara Virasoro vector of \(L(\ell,0)\). Set \(\omega_{z}=\omega+\frac{1}{2}zh(-2)\mathbf{1}\in L(\ell,0)\), where \(z\) is a complex number. Then \(\omega_{z}\) is a Virasoro vector of \(L(\ell,0)\) with the central charge \(e_{\ell,z}=c_{\ell}-6\ell z^{2}\). Let \(z\) be a positive rational number less than \(1\). Note that the vertex operator algebra \((L(\ell,0),Y,\mathbf{1},\omega_{z})\) is \(\mathbb{Q}\)-graded instead of \(\mathbb{Z}\)-graded. In [11] authors extend the definition of Zhu's \(A(V)\)-theory of one-to-one correspondence between the set of equivalence classes of irreducible admissible \(V\)-modules and the set of equivalence classes of irreducible \(A(V)\)-modules and Zhu-Frenkel's \(A(M)\)-theory for fusion rules to any \(\mathbb{Q}\)-graded vertex operator algebra \(V\). Denote the Zhu's algebra and bimodule of \(V\) by \(A^{\mathrm{dim}}(V)=V/V\circ_{\mathrm{dim}}V\) and \(A^{\mathrm{dim}}(M)=M/V\circ_{\mathrm{dim}}M\). Assume \[\exp\left(\sum_{n\geq 1}\frac{\frac{1}{4}h_{(n)}}{-n}(-z)^{-n}\right)=\sum_{n \geq 0}u_{(n)}z^{-n}.\] For brevity, we suppress the subscript of \(\Delta_{-\frac{1}{2}}\) and denote it by \(\Delta\). Let \(\epsilon(\epsilon)=\epsilon(f)=0\) and \(\epsilon(h)=1\). 
We have \[\Delta(1)(a\circ_{\mathrm{dlm}}m) =\Delta(1)\mathrm{Res}_{z=0}\bigg{(}Y(a,z)\frac{(1+z)^{[\mathrm{ wt}(a)]}}{z^{1+\epsilon(a)}}m\bigg{)}\] \[=\mathrm{Res}_{z=0}\bigg{(}\Delta(1)Y(a,z)\frac{(1+z)^{[\mathrm{ wt}(a)]}}{z^{1+\epsilon(a)}}m\bigg{)}\] \[=\mathrm{Res}_{z=0}Y(\Delta(z+1)a,z)\frac{(1+z)^{[\mathrm{wt}(a)] }}{z^{1+\epsilon(a)}}\Delta(1)m\] \[=\sum_{n\geq 0}\mathrm{Res}_{z=0}Y(u_{(n)}a,z)\frac{(1+z)^{[ \mathrm{wt}(a)]+\lambda-n}}{z^{1+\epsilon(a)}}\Delta(1)m\] \[=\sum_{n\geq 0}\mathrm{Res}_{z=0}Y(u_{(n)}a,z)\frac{(1+z)^{[ \mathrm{wt}(u_{(n)}a)]+\lambda}}{z^{1+\epsilon(a)}}\Delta(1)m\] \[=(\Delta(1)a)\circ_{\hat{\sigma},\hat{\sigma}}(\Delta(1)m).\] where \(\frac{h_{(0)}}{4}a=\lambda a.\) Using similar arguments, we get \[\Delta(1)(a*_{\mathrm{dlm}}m) =(\Delta(1)a)*_{\hat{\sigma},\hat{\sigma}}(\Delta(1)m),\] \[\Delta(1)(m*_{\mathrm{dlm}}a) =(\Delta(1)m)*_{\hat{\sigma},\hat{\sigma}}(\Delta(1)a),\] where \(*_{\mathrm{dlm}}\) is the bimodule action defined by [13]. Thus we have the following result. **Proposition 3.4**.: _The map \(V\to V,a\mapsto\Delta(1)a,\) induces an algebra isomorphism_ \[A^{\mathit{dlm}}(V)\rightharpoonup A_{\hat{\sigma}}(V).\] _The map \(M^{1}\to M^{1},m\mapsto\Delta(1)m\) induces an \(A_{\hat{\sigma},\hat{\sigma}}(V)(\cong A^{\mathit{dlm}}(V))\)-bimodule isomorphism_ \[A^{\mathit{dlm}}(M^{1})\rightharpoonup A_{\hat{\sigma},\hat{\sigma}}(M^{1})\] **Example 3.5**.: _Note \(\Delta(1)h(-1)v=(h-\frac{2}{3})v\), and \(\Delta(1)f(0)v=f(0)v\). If \(M^{1}=L_{\mathrm{sl}_{2}}(-\frac{4}{3},-\frac{2}{3})\), we have_ \[A_{\hat{\sigma},\hat{\sigma}}(M^{1})\cong\mathbb{C}[x,y]/\langle y,(x-\frac{2 }{3})x\rangle.\] _If \(M^{1}=L_{\mathrm{sl}_{2}}(-\frac{4}{3},-\frac{4}{3})\), we have_ \[A_{\hat{\sigma},\hat{\sigma}}(M^{1})\cong\mathbb{C}[x,y]/\langle y,(x-\frac{2 }{3})\rangle.\] _Here, we identify \(f(0)\) and \(h(-1)\) with \(y\) and \(x\)._ ### Fusion rules among twisted modules According to [13], \[A^{\mathrm{dlm}}(L(\ell,0))\cong\mathbb{C}[x]/(\prod_{r=0}^{p-2}\prod_{s=0}^{q -1}(x-r+st)).\] Since \(\Delta(1)h(-1)v=(h+\frac{1}{2}\ell)v\), by Proposition 3.4, we have \[A_{\hat{\sigma}}(L(\ell,0))\cong\mathbb{C}[x]/(\prod_{r=0}^{p-2}\prod_{s=0}^{q -1}(x+\frac{1}{2}\ell-r+st)). \tag{57}\] Similarly, since \(\Delta(1)f(0)v=f(0)v\), \(A_{\hat{\sigma},\hat{\sigma}}(L_{\mathrm{sl}_{2}}(\ell,j))\) is isomorphic to the quotient space of \(\mathbb{C}[x,y]\) modulo the subspace \[\mathbb{C}[x,y]y^{n}+\mathbb{C}[x]g_{j,0}(x,y)+\mathbb{C}[x]g_{j,1}(x,y)+ \cdots+\mathbb{C}[x]g_{j,n-1}(x,y)\] where \(g_{j,i}=y^{i}\prod_{r=0}^{p-n-1}\prod_{s=0}^{q-k}(x+\frac{1}{2}\ell-r-i+st).\) The left and right actions of \(A_{\hat{\sigma}}(L(\ell,0))\) is given by (54). 
**Theorem 3.6**.: _For admissible weights \(j_{i}=n_{i}-(k_{i}-1)t\)\((i=1,2)\), the fusion rules are given as follows:_ \[L(\ell,j_{1})\times\sigma^{-\frac{1}{2}}(L(\ell,j_{2}))=\sum_{i=\max\{0,n_{1}+n _{2}-p\}}^{\min\{n_{1}-1,n_{2}-1\}}\sigma^{-\frac{1}{2}}(L(\ell,j_{1}+j_{2}-2i)) \tag{58}\] _if \(0\leq k_{2}-1\leq q-k_{1}\), and \(L(\ell,j_{1})\times\sigma^{-\frac{1}{2}}(L(\ell,j_{2}))=0\) otherwise._ Proof.: For any admissible weight \(j\), let \(\mathbb{C}v_{j}^{\prime}\) be the one-dimensional module for Lie algebra \(\mathbb{C}h\) such that \(hv_{j}^{\prime}=(j-\frac{1}{2}\ell)v_{j}^{\prime}.\) We then calculate the \(A(L(\ell,0))\)-module \(A_{\vartheta,\vartheta}(L(\ell,j_{1}))\otimes_{A_{\vartheta}(L(\ell,0))} \mathbb{C}v_{j_{2}}^{\prime}.\) Using the result above this Theorem, we have \[A_{\vartheta,\vartheta}(L(\ell,j_{1}))\otimes_{A_{\vartheta}(L(\ell,0))} \mathbb{C}v_{j_{2}}^{\prime}\cong\mathbb{C}[x,y]/J\] where \(J\) is the subspace of \(\mathbb{C}[x,y]\) spanned by \[\{x-j_{2}+\frac{1}{2}\ell,\mathbb{C}[x,y]y^{n_{1}},g_{j_{1},i}(j_{2},1) \mathbb{C}[x]y^{i},i=0,1,...,n_{1}-1\}.\] Then the result follows from the similar argument of [1, Theorem 4.7]. **Example 3.7**.: _let \(j_{i}=-k_{i}\frac{2}{3}\)\((i=1,2)\) where \(k_{i}=0,1,2\), we have_ \[L_{\mathfrak{sl}_{2}}(-\frac{4}{3},j_{1})\times\sigma^{-\frac{1}{2}}(L_{ \mathfrak{sl}_{2}}(-\frac{4}{3},j_{2}))=\sigma^{-\frac{1}{2}}(L_{\mathfrak{sl} _{2}}(-\frac{4}{3},j_{1}+j_{2})) \tag{59}\] _for \(0\leq k_{2}\leq 2-k_{1}\)._ ### Verlinde formula for \(L_{-\frac{4}{3}}(\mathfrak{sl}_{2})\) Let \(\{\lambda_{1}:=-\frac{4}{3}\Lambda_{0},\lambda_{2}:=-\frac{2}{3}\Lambda_{0}- \frac{2}{3}\Lambda_{1},\lambda_{3}:=-\frac{4}{3}\Lambda_{1}\}\). The \(S\)-transformation is given by \[\chi_{\lambda_{i}}(-\frac{1}{\tau})=\sum_{j=1}^{3}a_{ij}\sigma^{-\frac{1}{2}}( \chi_{\lambda_{i}}(\tau))\] \[(a_{ij})=\frac{\sqrt{3}}{3}\left(\begin{array}{ccc}-1&1&-1\\ 1&-e^{-\frac{2}{3}\pi i}&e^{\frac{2}{3}\pi i}\\ -1&e^{\frac{2}{3}\pi i}&-e^{-\frac{2}{3}\pi i}\end{array}\right).\] The fusion rules are \[N_{0i}^{j}=\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right),\ \ N_{1i}^{j}=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ 0&0&0\end{array}\right),\ \ N_{2i}^{j}=\left(\begin{array}{ccc}0&0&1\\ 0&0&0\\ 0&0&0\end{array}\right),\] while the Verlinde formula and \(S\)-matrix would give us the following fusion rules: \[N_{0i}^{j}=\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right),\ \ N_{1i}^{j}=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ -1&0&0\end{array}\right),\ \ N_{2i}^{j}=\left(\begin{array}{ccc}0&0&1\\ -1&0&0\\ 0&-1&0\end{array}\right).\] Intuitively, in order to obtain the Verlinde formula one need drop the condition for (59) i.e., \(0\leq k_{2}\leq 2-k_{1}\), and identify \(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-2),L_{\mathfrak{sl}_{2}}(-\frac{4}{3},- \frac{8}{3})\) with \(-L_{\mathfrak{sl}_{2}}(-\frac{4}{3},0)\) and \(-L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3})\). Similarly, in order to obtain the Verlinde formula for general \(L_{\mathfrak{sl}_{2}}(\ell,0)\), we need drop the condition for (58), and identify \(L_{\mathfrak{sl}_{2}}(\ell,j)\) with \(-L_{\mathfrak{sl}_{2}}(\ell,j+p)\). ## 4. 
Twisted Zhu's bimodules of contragredient modules of highest weight modules ### Motivation Given an admissible irreducible highest weight \(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},0)\)-module \(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{4}{3})\), one can obtain the ordinary twisted modules and contragredient modules by taking integer and half-integer spectral flow respectively (Figure 1). In general, let \(\{M_{i}\}\) be the collection of all highest weight modules for \(L_{\mathfrak{sl}_{2}}(\ell,0)\) at the admissible level, one can obtain all the \(\mathbb{Z}_{2}\)-twisted modules of \(L_{\mathfrak{sl}_{2}}(\ell,0)\) at the admissible level either from \(\{\sigma^{-\frac{1}{2}}(M_{i})\}\) or \(\{\sigma^{\frac{1}{2}}(M_{i}^{*})\}\). In this Section, we will calculate the twisted Zhu's bimodules of the contragredient modules of highest weight modules and the fusion rules among them. The idea is to use the similar isomorphism in Proposition 3.4. To that end, we calculate the untwisted Zhu's bimodule first and fusion rules among untwisted modules. ### Untwisted Zhu's bimodules Let \(M\) be a \(V\)-module. Let \(O(M)\) be the linear span of elements of type \[\operatorname{Res}_{z}\biggl{(}Y(a,z)\frac{(z+1)^{\operatorname{wt}a}}{z^{2}}v \biggr{)}.\] In particular, for \(V=L_{\mathfrak{sl}_{2}}(-\frac{4}{3},0)\). The \(O(M)\) is spanned by \[\operatorname{Res}_{z}\biggl{(}Y(e,z)\frac{(z+1)}{z^{m+1}}\biggr{)}v =(e(-m-1)+e(-m))v,\] \[\operatorname{Res}_{z}\biggl{(}Y(f,z)\frac{(z+1)}{z^{m+1}}\biggr{)}v =(f(-m-1)+f(-m))v,\] \[\operatorname{Res}_{z}\biggl{(}Y(h,z)\frac{(z+1)}{z^{m+1}}\biggr{)}v =(h(-m-1)+h(-m))v,\] for any positive integer \(m\) and for \(v\in M\). By [13], one has the following isomorphism \(F\) \[F:L(j)\otimes U(\mathfrak{g}) \to A(V_{\mathfrak{g}}(\ell,j)),\] \[F:v\otimes a_{1}\cdots a_{n} \mapsto[a_{n}(-1)\cdots a_{1}(-1)v].\] whose inverse is given by \[F^{-1}:A(V_{\mathfrak{g}}(\ell,j)) \to L(j)\otimes U(\mathfrak{g}),\] \[F^{-1}:[a_{1}(-1-i_{1})\cdots a_{n}(-1-i_{n})v] \to(-1)^{i_{1}+\cdots+i_{n}}v\otimes a_{n}\cdots a_{1},\] where the tensor products are over \(U(\mathfrak{g})\). Consider the example of \(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},0)\). It has three irreducible modules in category \(\mathcal{O}\), i.e., \(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},0),L_{\mathfrak{sl}_{2}}(-\frac{4}{3},- \frac{2}{3})\), \(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{4}{3})\). By [14] the maximal submodule of \(V_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3})\) is generated by singular vectors \(v_{1}\) and \(v_{2}\), \[v_{1} =\frac{2}{9}e_{(-2)}-\frac{1}{3}e_{(-1)}h_{(-1)}+e_{(-1)}e_{(-1)} f_{(0)} \tag{61}\] \[v_{2} =-\frac{10}{9}f_{(-1)}-\frac{5}{3}h_{(-1)}f_{(0)}+e_{(-1)}f_{(0)}^ {2}, \tag{60}\] and the maximal submodule of \(V_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{4}{3})\) is generated by singular vectors \(e_{(-1)}\) and \[\frac{280}{81}f_{(-2)}+\frac{70}{27}h_{(-2)}f_{(0)}-\frac{10}{9}e_ {(-2)}f_{(0)}^{2}\] \[+\frac{140}{27}h_{(-1)}f_{(-1)}+\frac{35}{9}h_{(-1)}^{2}f_{(0)}- \frac{5}{3}h_{(-1)}e_{(-1)}f_{(0)}^{2}\] \[-\frac{70}{9}e_{(-1)}f_{(-1)}f_{(0)}-\frac{10}{3}e_{(-1)}h_{(-1)} f_{(0)}^{2}+e_{(-1)}^{2}f_{(0)}^{3} \tag{62}\] Figure 1. The relation among admissible irreducible highest weight \(L(-4/3,0)\)-modules, contragredient modules and ordinary \(\mathbb{Z}_{2}\)-twisted modules, where each state labelled by \(|\lambda,\Delta\rangle\), its \(\mathfrak{sl}_{2}\)-weight \(\lambda\) and conformal dimension \(\Delta\). 
We next use the singular vectors to compute \(A(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3})),A(L_{\mathfrak{sl}_{2}}(-\frac {4}{3},-\frac{4}{3})).\) For \(A(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3})),\) the preimage of equivalence classes of \(v_{1}\) and \(v_{2}\) are \[-\frac{2}{9}v\otimes e-\frac{1}{3}v\otimes he+fv\otimes e^{2}, \tag{64}\] \[-\frac{10}{9}v\otimes f-\frac{5}{3}fv\otimes h+f^{2}v\otimes e. \tag{63}\] Thus \[A(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3}))=(L(-\frac{2}{3})\otimes U (\mathfrak{sl}_{2}))/I_{1},\] where \(I_{1}\) is generated by (63) and (64), and the tensor products are over \(U(\mathfrak{g})\). For \(A(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{4}{3}))\), the preimage of equivalence classes of (62) is \[-\frac{280}{81}v\otimes f-\frac{70}{27}fv\otimes h+\frac{10}{9}f^ {2}v\otimes e+\frac{140}{27}v\otimes hf+\frac{35}{9}fv\otimes h^{2}\] \[-\frac{5}{3}f^{2}v\otimes eh-\frac{70}{9}fv\otimes fe-\frac{10}{3 }f^{2}v\otimes he+f^{3}v\otimes e^{2}.\] Denote by \(L(-\frac{2}{3})^{*}\) and \(L(-\frac{4}{3})^{*}\) the dual of highest \(\mathfrak{sl}_{2}\)-modules with weights \(-\frac{2}{3}\) and \(-\frac{4}{3}\). We now consider the \(A(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},0))\)-module \(L(-\frac{2}{3})^{*}\otimes A(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3}))\). Let \(v^{\prime}\) be the lowest weight vector of \(L(-\frac{2}{3})^{*}\). Then \(I_{1}\otimes v^{\prime}\cong\langle-\frac{10}{9}v\otimes ev^{\prime}+fv\otimes e ^{2}v^{\prime},-\frac{10}{9}fv\otimes v^{\prime}+f^{2}v\otimes ev^{\prime} \rangle.\) It is isomorphic to \[A(L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3}))\otimes L(-\frac{2}{3})^{*} \cong\mathbb{C}(v\otimes v^{\prime}).\] Thus we have \[L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3})\times\left(L_{\mathfrak{sl}_{ 2}}(-\frac{4}{3},-\frac{2}{3})\right)^{*}=L_{\mathfrak{sl}_{2}}(-\frac{4}{3},0).\] Similarly, we also have \[L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{4}{3})\times\left(L_{ \mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{4}{3})\right)^{*}=L_{\mathfrak{sl}_{2} }(-\frac{4}{3},0), \tag{66}\] \[L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3})\times\left(L_{ \mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{4}{3})\right)^{*}=\left(L_{\mathfrak{sl} _{2}}(-\frac{4}{3},-\frac{2}{3})\right)^{*},\] (67) \[L_{\mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{4}{3})\times\left(L_{ \mathfrak{sl}_{2}}(-\frac{4}{3},-\frac{2}{3})\right)^{*}=L_{\mathfrak{sl}_{2}}(- \frac{4}{3},-\frac{2}{3}), \tag{65}\] ### The contragredient modules of the highest weight modules As one can see from above example, directly computing Zhu's bimodules depends on the explicit form of singular vectors. In practice, it is extremely tedious to convert the singular vectors given in [10] into their normal forms. It was noted in [13] that the fusion rules among the admissible modules remain the same after a shift of the conformal vector. By a proper shift of the conformal vector, there are nice and compact projection formulas for singular vectors ([12], [13]) which can help us to compute fusion rules avoiding finding explicit form of singular vectors. Since \[[f(0),e(0)^{\gamma}] =-(\gamma e(0)^{\gamma-1}+\gamma(\gamma-1)e(0)^{\gamma-1}), \tag{69}\] \[[h(0),e(0)^{\gamma}] =2\gamma e(0)^{\gamma}, \tag{68}\] then by using the similar argument as in [10] we have **Proposition 4.1**.: _Let \(j=n-1-(k-1)\) where \(n\) and \(k\) are positive integers satisfying \(1\leq n\leq p-1\), \(1\leq k\leq q\) and let \(v\) be a highest weight vector of the Verma module \((M(\ell,j))^{*}\). 
Set_ \[E_{1}(n,k)=e(0)^{n+(k-1)t}f(-1)^{n+(k-2)t}e(0)^{n+(k-3)t}f(-1)^{n +(k-4)t}\cdots f(-1)^{n-(k-2)t}e(0)^{n-(k-1)t},\] \[E_{2}(n,k)=f(-1)^{p-n+(q-k)t}e(0)^{p-n+(q-k-1)t}f(-1)^{p-n+(q-k-2) t}e(0)^{p-n+(q-k-3)t}\] \[\cdots e(0)^{p-n-(q-k+1)t}f(-1)^{p-n-(q-k)t}.\] _Then \(v_{-j,1}=E_{1}(n,k)v,v_{-j,2}=E_{2}(n,k)v\) are singular vectors of \((M(\ell,j))^{*}\)._ Basically, we just interchange \(e\) and \(f\) based on the corresponding result in the case of highest weight modules. Next, we consider the projection formula. Let \(P_{1}\) be the projection \(\hat{\mathfrak{g}}\) onto \(\mathfrak{g}\) such that \(P_{1}(a\otimes t^{n})=a\) for any \(a\in\mathfrak{g}\) and \(P_{1}(c)=0\). Let \(H_{\alpha}=fe-\alpha h-\alpha(\alpha+1)\). **Proposition 4.2**.: _[_11_]_ _The following projection formulas hold:_ \[P_{1}(E_{1}(n,k)) =\left(\prod_{r=1}^{n}\prod_{s=1}^{k-1}H_{-r-st}\right)e^{n}\] \[P_{1}(E_{2}(n,k)) =\left(\prod_{r=0}^{p-n-1}\prod_{s=1}^{q-k}H_{r+st}\right)f^{p-n}.\] Define subalgebra \[T_{-}=\mathbb{C}e+t^{-1}\mathbb{C}[x^{-1}]\otimes\mathfrak{g}.\] Let \(B_{0}=\mathbb{C}(t^{-1}+1)\otimes e+(x^{-2}+x^{-1})\mathbb{C}[x^{-1}]\otimes \mathfrak{g}.\) Since \(B_{0}\) is an ideal of \(N_{-}\), \(U(N_{-})B_{0}=B_{0}U(N_{-})\) is an ideal of \(U(N_{-})\). Set \(L_{0}=N_{-}/B_{0}\). Define \[T_{+}=e(0)+B_{0},T_{-}=f(-1)+B_{0},T_{0}=h(-1)+B_{0}.\] They obey the following \(\mathfrak{sl}_{2}\)-relationships \[[T_{0},T_{+}]=-2T_{+},\ \ [T_{0},T_{-}]=2T_{-},\ \ [T_{+},T_{-}]=T_{0}.\] Define \(G_{\alpha}=T_{-}T_{+}-\alpha T_{0}+\alpha(\alpha+1)\). They satisfy the following relationships \[G_{\alpha}G_{\beta}=G_{\beta}G_{\alpha},\ \ T_{+}^{m}G_{\alpha}=G_{ \alpha-m}T_{+}^{m},\ \ T_{-}^{m}G_{\alpha}=G_{\alpha+m}T_{-}^{m},\] \[T_{-}^{m}T_{+}^{m}=G_{0}G_{1}\cdots G_{m-1},\ \ T_{+}^{m}T_{-}^{m}=G_{-1}G_{-2}\cdots G_{-m},\] for any complex numbers \(\alpha,\beta\) and for any positive integer \(m\). Let \(P\) be the natural quotient map from \(U(N_{-})\) onto \(U(L_{0})\). Using the similar method as suggested in [11] we obtain **Proposition 4.3**.: _The following formulas hold:_ \[P(E_{1}(n,k)) =\left(\prod_{r=1}^{n}\prod_{s=1}^{k-1}G_{-r-st}\right)T_{+}^{n}\] \[P(E_{2}(n,k)) =\left(\prod_{r=0}^{p-n-1}\prod_{s=1}^{q-k}G_{r+st}\right)T_{-}^{ p-n}.\] ### Fusion rules For the contragredient modules of the highest weight modules, we choose the new conformal vector \(\omega_{z}=\omega-\frac{1}{2}zh(-2)\mathbf{1}\), where \(0<z<1\). Let \(M\) be any weak \(V(\ell,\mathbb{C})\)-module. Since \[\operatorname{wt}h(-1)=1,\ \ \operatorname{wt}e(-1)=1+z,\ \ \operatorname{wt}f(-1)=1-z,\] we have \[\operatorname{Res}_{z}\frac{(1+z)^{[\operatorname{wt}f]}}{z^{m}}Y( f,z)u =f(-m)u\] \[\operatorname{Res}_{z}\frac{(1+z)^{[\operatorname{wt}e]}}{z^{m}}Y( e,z)u =(e(-m)+e(1-m))u,\] \[\operatorname{Res}_{z}\frac{(1+z)^{\operatorname{wt}h}}{z^{m+1}}Y (h,z)u =(h(-m-1)+h(-m))u\] for any positive integer \(m\) and for \(u\in M\). **Proposition 4.4**.: _Let \(j=n-1-(k-1)t\) be an admissible weight. Then the \(A(L(\ell,0))\)-bimodule \(A((L(\ell,j))^{*})\) is isomorphic to the quotient space of \(\mathbb{C}[x,z]\) modulo the subspace_ \[\mathbb{C}[x,z]^{n}+\mathbb{C}[x]f^{\prime}_{j,0}(x,z)+\mathbb{C}[x]f^{\prime }_{j,1}(x,z)+\cdots+\mathbb{C}[x]f^{\prime}_{j,n-1}(x,z)\] _where \(f^{\prime}_{j,i}(x,z)=z^{i}\prod_{r=0}^{p-n-1}\prod_{s=0}^{q-k}(x+r+i-st).\) The left and right actions of \(A(L(\ell,0))\) on \(A(L(\ell,j)^{*})\) are given by (54)._ Proof.: First, \((M(\ell,j))^{*}\cong U(N_{-})\) as a vector space. 
We have \[O((M(\ell,j))^{*})\cong f(-1)U(N_{-})+B_{0}U(N_{-}).\] Recall \(v_{-j,1},v_{-j,2}\) are two singular vectors of \((M(\ell,j))^{*}\). Then we have \[A((L(\ell,j))^{*})\cong(M(\ell,j))^{*}/(O((M(\ell,j))^{*}+U(N_{-})v_{-j,1}+U(N _{-})v_{-j,2})\] \[\cong U(N_{-})/(B_{0}U(N_{-})+f(-1)U(N_{-})+U(N_{-})E_{1}(n,k)+U(N_{-})E_{2}(n,k))\] as \(A((L(\ell,0))^{*})\)-bimodules. Note that \(U(N_{-})/B_{0}U(N_{-})\cong U(L_{0})\). Thus \[A((L(\ell,j))^{*})\cong U(L_{0})/(U(L_{0})P(E_{1}(n,k))+U(L_{0})P(E_{2}(n,k))+ T_{-}U(L_{0})).\] For any nonnegative integers \(a,b,d\), using above relationships, we have (See Appendix B for the detail) \[\begin{split}& T_{-}^{a}T_{0}^{b}T_{+}^{d}P(E_{1}(n,k))\\ &=T_{-}^{a}\left(\prod_{r=1}^{m}\prod_{s=1}^{k-1}(r+st+d)(T_{0}+r +st+d-1)\right)T_{0}^{b}T_{+}^{n+d}\qquad\text{mod }T_{-}U(L_{0}).\end{split} \tag{70}\] Noticing that \(r+st+d\neq 0\) for any \(1\leq r\leq n\), \(1\leq s\leq k-1\), \(d\in\mathbb{Z}_{+}\) we obtain \[\begin{split}& U(L_{0})P(E_{1}(n,k))+T_{-}U(L_{0})\\ &=T_{-}U(L_{0})+\sum_{d=0}^{\infty}\mathbb{C}[T_{0}]\left(\prod_ {r=0}^{n-1}\prod_{s=1}^{k-1}(T_{0}+r+st+d)\right)T_{+}^{n+d}.\end{split}\] Similarly, let \(a,b,d\) be any nonnegative integers. If \(d<p-n\), we have (see Appendix B for details) \[\begin{split}& T_{-}^{a}T_{0}^{b}T_{+}^{d}P(E_{2}(n,k))\\ &=T_{-}^{a+p-n-d}(T_{0}+2(p-n-d))^{b}\prod_{r=0}^{p-n-1}\prod_{s =1}^{q-k}\prod_{i=1}^{d}G_{r+st-p+n}G_{-i-p+n+d}\qquad\text{mod }T_{-}U(L_{0}).\end{split} \tag{71}\] If \(d=m+p-n\) for some \(m\in\mathbb{Z}_{+}\), we have \[\begin{split}& T_{-}^{a}T_{0}^{b}T_{+}^{d}P(E_{2}(n,k))\\ &=T_{-}^{a}\prod_{r=1}^{p-n}\prod_{s=0}^{q-k}(st-m-r)(-T_{0}+st-m -r+1)T_{0}^{b}T_{+}^{m}\qquad\text{mod }T_{-}U(L_{0}).\end{split}\] Since \(-st+m+r-1\neq 0\) for any \(1\leq r\leq p-n\), \(0\leq s\leq q-k\), we obtain \[\begin{split}& U(L_{0})P(E_{2}(n,k))+T_{+}U(L_{0})\\ &=T_{-}U(L_{0})+\sum_{m=0}^{\infty}\mathbb{C}[T_{0}]\left(\prod_{r =0}^{p-n-1}\prod_{s=0}^{q-k}(T_{0}-st+m+r)\right)T_{+}^{m}.\end{split}\] Thus \[\begin{split}& U(L_{0})P(E_{1}(n,k))+U(L_{0})P(E_{2}(n,k))+T_{-}U(L _{0})\\ &\subset T_{+}U(L_{0})+U(L_{0})T_{+}^{n}+\sum_{i=0}^{n-1} \mathbb{C}[T_{0}]\left(\prod_{r=0}^{p-n-1}\prod_{s=0}^{q-k}(T_{0}-st+m+r) \right)T_{+}^{m}.\end{split}\] On the other hand, since \(-r-st-d\neq s^{\prime}t^{\prime}-m-r^{\prime}\) for any \(0\leq r\leq n-1,1\leq s\leq k-1\), \(0\leq r\leq p-n-1,0\leq s\leq q-k\), \(d,m\in\mathbb{Z}_{+}\), \(\prod_{r=0}^{p-n-1}\prod_{s=0}^{q-k}(x-st+m+r)\) and \(\prod_{r=0}^{p-n-1}\prod_{s=0}^{q-k}(x-st+m+r)\) are relatively prime. Then we obtain \[\mathbb{C}[T_{0}]T_{+}^{n+i}\subset U(L_{0})P(E_{1}(n,k))+U(L_{0})P(E_{2}(n,k)) +T_{-}U(L_{0})\] for any \(i\in\mathbb{Z}_{+}\). This shows that \[\begin{split}& U(L_{0})P(E_{1}(n,k))+U(L_{0})P(E_{2}(n,k))+T_{-}U(L _{0})\\ &\supset T_{-}U(L_{0})+U(L_{0})T_{+}^{n}+\sum_{i=0}^{n-1} \mathbb{C}[T_{0}]\left(\prod_{r=0}^{p-n-1}\prod_{s=0}^{q-k}(T_{0}-st+m+r) \right)T_{+}^{i}.\end{split}\] Set \(x=T_{0}\), \(y=T_{-}\). Then the Proposition follows from the similar argument in Proposition 4.3, Lemma 4.5 ([1]). 
We have the fusion rules: **Theorem 4.5**.: _For admissible weight \(j_{i}=n_{i}-1-(k_{i}-1)t\)\((i=1,2)\), the fusion rules are given as follows:_ \[L(\ell,j_{1})\times L(\ell,j_{2})=\sum_{i=\max\{0,n_{1}+n_{2}-p\}}^ {\min\{n_{1}-1,n_{2}-1\}}L(\ell,j_{1}+j_{2}-2i), \tag{73}\] \[(L(\ell,j_{1}))^{*}\times L(\ell,j_{2})=L(\ell,j_{2})\times(L( \ell,j_{1}))^{*}=\left\{\begin{array}{ll}L(\ell,-j_{1}+j_{2}),&\mbox{ if }n_{2}-n_{1}\geq 0\mbox{;}\\ (L(\ell,j_{1}-j_{2}))^{*},&\mbox{ if }n_{2}-n_{1}<0.\end{array}\right.\] (74) \[(L(\ell,j_{1}))^{*}\times(L(\ell,j_{2}))^{*}=\sum_{i=\max\{0,n_{1 }+n_{2}-p\}}^{\min\{n_{1}-1,n_{2}-1\}}(L(\ell,j_{1}+j_{2}-2i))^{*}. \tag{72}\] Proof.: (72) was proved in [10]. We use the similar method to prove (73) and (74). We prove the (73). For any admissible weight \(-j\), let \(\mathbb{C}v_{-j}\) be the one-dimensional module for Lie algebra \(\mathbb{C}h\) such that \(hv_{-j}=-jv_{-j}\). Then \(\mathbb{C}v_{-j}\) is the lowest weight space of \(L(\ell,-j)\). By Frenkel-Zhu's Theorem, we need calculate the \(A(L(\ell,0))\)-module \(A(L(\ell,-j_{1}))\otimes_{A(L(\ell,0))}\mathbb{C}v_{j_{2}}.\) Note \(ev_{j_{2}}=0\). We get \[A(L(\ell,-j_{1}))\otimes_{A(L(\ell,0))}\mathbb{C}v_{j_{2}}\cong\mathbb{C}[x,z ]/J\] where \(J\) is the subspace of \(\mathbb{C}[x,z]\) spanned by \[\{x-j_{2},z\}.\] Thus, \(\mathbb{C}[x,z]/J\cong v_{-j_{1}}\otimes v^{\prime}_{j_{2}}.\) And \(x*(v_{-j_{1}}\otimes v^{\prime}_{j_{2}})=j_{2}-j_{1}\), as required. For (74), let \(\mathbb{C}v_{-j}\) be the one-dimensional module for Lie algebra \(\mathbb{C}h\) such that \(hv_{-j}=-jv_{-j}\). Using Proposition 4.4 we get \[A(L(\ell,j_{1}))\otimes_{A(L(\ell,0))}\mathbb{C}v_{-j_{2}}\cong\mathbb{C}[x,z ]/J\] where \(J\) is the subspace of \(\mathbb{C}[x,z]\) spanned by \[\{x+j_{2},\mathbb{C}[x,z]z^{n_{1}},f^{\prime}_{j_{1},i}(-j_{2},1)\mathbb{C}[x] z^{i},i=0,1,...,n_{1}-1\}.\] If \(j_{2}\) does not satisfy the relation \(0\leq k_{2}-1\leq q-k_{1}\), then \[f^{\prime}_{j_{1},i}(-j_{2},1)=\prod_{r=0}^{p-n_{1}-1}\prod_{s=0}^{q-k_{1}}(-j _{2}+r+i-st)\neq 0\] for \(0\leq i\leq n_{1}-1.\) Thus \(A(L(\ell,-j_{1}))\otimes_{A(L(\ell,0))}\mathbb{C}v_{j_{2}}=0\) so that all the corresponding fusion rules are zero. Suppose \(0\leq k_{2}-1\leq q-k_{1}.\) As before \(\mathbb{C}[x]z^{i}=0\) in \(\mathbb{C}[x,z]/J\) if \(f^{\prime}_{j_{1},i}(j_{2},1)\neq 0.\) Noticing \(f^{\prime}_{j_{1},i}(j_{2},1)=0\) if and only if \(-j_{2}+r+i-st=0\) for some \(0\leq r\leq p-n_{1}-1\) and \(0\leq s\leq q-k_{1}\). This implies that \(r+i=n_{2}-1\). Thus \(n_{1}+n_{2}-p\leq i\leq n_{2}-1.\) Therefore \[\max\{0,n_{1}+n_{2}-p\}\leq i\leq\min\{n_{1}-1,n_{2}-1\}.\] If \(n_{1}+n_{2}-p\leq i\leq n_{2}-1\), then \(\mathbb{C}[x]z^{i}\) is not zero in \(\mathbb{C}[x,z]/J\). Thus \[\mathbb{C}[x,z]/J\cong\oplus_{\max\{0,n_{1}+n_{2}-p\}\leq i\leq\min\{n_{1}-1, n_{2}-1\}}\mathbb{C}y^{i}.\] From (54) we get \(x*z^{i}=(-j_{2}-j_{1}+2i)z^{i}\), as required. ### Fusion rules among twisted modules We follow the same notations as in previous subsection. Let \(\hat{\sigma}^{\prime}=e^{-\frac{\pi ih(0)}{2}}\). 
By using the similar arguments as in Section 3, one has **Theorem 4.6**.: * _We have the following isomorphism_ \[A(L(\ell,j)^{*})\cong A_{\hat{\sigma}^{\prime},\hat{\sigma}^{\prime}}(L(\ell,j)^ {*}),\] _via_ \(\Delta_{\frac{1}{2}}(1)\)_._ * _We also have_ (75) \[(L(\ell,j_{1}))^{*}\times\sigma^{\frac{1}{2}}((L(\ell,j_{2}))^{*})=\sum_{i=\max \{0,n_{1}+n_{2}-p\}}^{\min\{n_{1}-1,n_{2}-1\}}\sigma^{\frac{1}{2}}((L(\ell,j_{1 }+j_{2}-2i))^{*}).\] (76) \[(L(\ell,j_{1}))^{*}\times\sigma^{\frac{1}{2}}(L(\ell,j_{2}))=\left\{ \begin{array}{ll}\sigma^{\frac{1}{2}}(L(\ell,-j_{1}+j_{2})),&\mbox{ if }n_{2}-n_{1}\geq 0\mbox{;}\\ \sigma^{\frac{1}{2}}((L(\ell,j_{1}-j_{2}))^{*}),&\mbox{ if }n_{2}-n_{1}<0.\end{array}\right.\] (77) \[L(\ell,j_{2})\times\sigma^{-\frac{1}{2}}((L(\ell,j_{1}))^{*})= \left\{\begin{array}{ll}\sigma^{-\frac{1}{2}}(L(\ell,-j_{1}+j_{2})),&\mbox{ if }n_{2}-n_{1}\geq 0\mbox{;}\\ \sigma^{-\frac{1}{2}}((L(\ell,j_{1}-j_{2}))^{*}),&\mbox{ if }n_{2}-n_{1}<0.\end{array}\right.\] ## 5. Twisted modules from spectral flow and their MLDEs In [11], we have the following result **Theorem 5.1**.: _If \(V\) is a quasi-lisse vertex superalgebra and \(g\) is an automorphism of \(V\) of finite order, then the supercharacter of its simple \(g\)-twisted module satisfies the twisted modular linear differential equation._ In this section and the next, we shall provide examples for this Theorem, i.e., twisted modules coming from spectral flow. We also discuss their applications in physics. Firstly, let us review some useful facts about modular forms and modular differential operators. See any standard reference, or [10] for further details. The ordinary Eisenstein series are modular forms for the full modular group \(\Gamma\) of weight \(2k\) with \(k\geq 2\). We define our Eisenstein series, following the notation of [1], \[\mathbb{E}_{k}(\tau)=-\frac{B_{2k}}{2k!}+\frac{2}{(2k-1)!}\sum_{n=1}^{\infty} \frac{n^{2k-1}q^{n}}{1-q^{n}} \tag{78}\] where \(B_{2k}\) is the \(2k\)'th Bernoulli number. The ring of modular forms for the full modular group \(\Gamma\) is freely generated by \(\mathbb{E}_{4}(\tau)\) and \(\mathbb{E}_{6}(\tau)\), so we have, \[\bigoplus_{k=0}^{\infty}M_{k}(\Gamma,\mathbb{C})=\mathbb{C}[\mathbb{E}_{4}( \tau),\mathbb{E}_{6}(\tau)] \tag{79}\] We also make use of a class of twisted Eisenstein series that are modular forms for certain congruence subgroups of \(\Gamma\), \[\mathbb{E}_{k}\begin{bmatrix}\varphi\\ \vartheta\end{bmatrix}(\tau)\equiv-\frac{B_{k}(\lambda)}{k!}+\frac{1}{(k-1)!} \sum_{r\geq 0}{}^{\prime}\frac{(r+\lambda)^{k-1}\vartheta^{-1}q^{r+\lambda}}{1- \vartheta^{-1}q^{r+\lambda}}+\frac{(-1)^{k}}{(k-1)!}\sum_{r\geq 1}\frac{(r- \lambda)^{k-1}\vartheta q^{r-\lambda}}{1-\vartheta q^{r-\lambda}} \tag{80}\] where \(\varphi=e^{2\pi i\lambda}\) with \(\lambda\in[0,1)\) and now \(B_{k}(x)\) is the \(k\)'th Bernoulli polynomial. The prime in the first summation means that the \(r=0\) term should be omitted when \(\varphi=\vartheta=1\). The spaces of modular forms for \(\Gamma(2)\), \(\Gamma^{0}(2)\) all admit a simple description in terms of theta functions. For example, \[M_{2k}(\Gamma^{0}(2))=\operatorname{span}_{\mathbb{C}}\big{\{}\bar{\Theta}_{ r,s}(\tau)|r+s=k\big{\}} \tag{81}\] where the \(\bar{\Theta}_{r,s}\) takes the following form, \[\bar{\Theta}_{r,s}(\tau):=\theta_{2}(\tau)^{4r}\theta_{3}(\tau)^{4s}+\theta_ {2}(\tau)^{4s}\theta_{3}(\tau)^{4r},\qquad r\leq s. 
\tag{82}\] We define \(k\)'th order modular differential operators \(D_{q}^{(k)}\) as \[D_{q}^{(k)}\chi(q):=\partial_{(2k-2)}\circ\cdots\circ\partial_{(2)}\circ \partial_{(0)}\chi(q), \tag{83}\] then modular linear differential operators that are holomorphic and monic have the following generic form, \[\mathcal{D}_{q}^{(k)}\equiv D_{q}^{(k)}+\sum_{r=1}^{k}f_{r}(q)D_{q}^{(k-r)}, \quad f_{r}(q)\in M_{2k}(\tilde{\Gamma},\mathbb{C}) \tag{84}\] where \(\tilde{\Gamma}\) denote any congruence subgroup of \(\Gamma\). ### \(A_{1}^{(1)}\) at boundary admissible levels \(k=-2+\frac{2}{u}\) In this section, we give some specific MLDEs of irreducible \(\mathbb{Z}_{2}\)-twisted modules for \(A_{1}^{(1)}\) at boundary admissible level \(k=-2+\frac{2}{u}\). Following from Proposition 2.9 and (57), we have: **Theorem 5.2**.: _All irreducible \(\mathbb{Z}_{2}\)-twisted modules of \(L_{k}(\mathfrak{sl}_{2})\) at admissible level in category \(\mathcal{O}\) can be obtained by using \(\ell=-\frac{1}{2}\) spectral flow on the untwisted modules in category \(\mathcal{O}\). In particular, for boundary admissible level, all of those irreducible twisted modules are ordinary modules. we find that the \(q\)-series characters satisfy the following relation,_ \[\operatorname{ch}[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,j}))](q)=\operatorname{ ch}[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,u-1-j}))](q) \tag{85}\] _Furthermore, the number of independence \(q\) series characters is \(\frac{u+1}{2}\)._ Now let us give some concrete examples for small values of \(u\). **Example 5.3**.: _Let us consider \(\hat{\mathfrak{g}}=A_{1}^{(1)}\) at level \(k=-\frac{4}{3}\), the number of independence \(q\)-series characters is two, we denote these two characters of twisted modules as \(\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,0}))\right]\) and \(\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,1}))\right]\). They are annihilated by a second-order \(\Gamma^{0}(2)\)-MLDE which we display here:_ \[\left(D_{q}^{(2)}-\frac{1}{96}\bar{\Theta}_{1,1}(\tau)\right) \operatorname{ch}\left[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,i}))\right](q)=0, \quad i=0,1. \tag{86}\] _Since the modular form \(M_{2k}(\Gamma^{0}(2))\) spanned by the \(\Theta_{r,s}\) which can be rewritten in terms of twisted Eisenstein series, we can rewrite above MLDE (86) as_ \[\left(D_{q}^{(2)}+\frac{4}{3}\mathbb{E}_{4}\begin{bmatrix}-1\\ 1\end{bmatrix}+\frac{28}{3}\mathbb{E}_{4}\begin{bmatrix}1\\ -1\end{bmatrix}+\frac{28}{3}\mathbb{E}_{4}\begin{bmatrix}-1\\ -1\end{bmatrix}\right)\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(L(\Lambda_{ k,i}))\right](q)=0,\quad i=0,1. \tag{87}\] Actually, the twisted module character \(\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,1}))\right]\big{|}_ {y=1}\) has two different physical interpretations. In [10], the authors computed the defect Schur indices of the \((A_{1},A_{3})\) Argyres-Douglas theory, \[\mathcal{I}_{\mathbb{S}}(q,x)=(q)_{\infty}^{2}\sum_{\begin{subarray}{c}\ell_ {1},\ldots,\ell_{3},\\ k_{1},\ldots,k_{3}=0\end{subarray}}^{\infty}\frac{(-1)^{\sum_{i=1}^{3}(k_{i}+ \ell_{i})}q^{\frac{1}{2}}\sum_{i=1}^{3}(k_{i}+\ell_{i})+\ell_{2}(\ell_{1}+\ell _{3})}{\prod_{i=1}^{3}(q)_{k_{i}}(q)_{\ell_{i}}}(x)^{\ell_{1}-k_{1}}\left(q^{ \frac{\ell_{1}-k_{1}}{2}}+q^{\frac{k_{1}-\ell_{1}}{2}}-q^{\frac{\ell_{1}+k_{1} }{2}}\right)\delta_{k_{2},\ell_{2}}\delta_{k_{1}+k_{3},\ell_{1}+\ell_{3}}. \tag{88}\] The corresponding VOA of the \((A_{1},A_{3})\) AD theory is just \(L_{-\frac{4}{3}}(A_{1}^{(1)})\). 
One can check that the above surface defect index agrees with the twisted module character \(\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,1}))\right]\big{|}_ {y=1}\) with \(x\) replaced by \(\mathbf{z}^{2}\), \[\mathcal{I}_{\mathbb{S}}(q,\mathbf{z}^{2})=\operatorname{ch}\left[\sigma^{- \frac{1}{2}}(L(\Lambda_{k,1}))\right]\big{|}_{y=1}. \tag{89}\] In [11], the authors compute the lens space index of the \((A_{1},A_{3})\) AD theory. For example, we have checked that \(\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,1}))\right]\big{|} _{y=1}\) agrees with the lens space index \(\mathcal{I}_{(A_{1},D_{3})}^{\operatorname{Mac}}\big{|}_{t=1}\) up to an overall factor by identify their "twisting parameter" with the spectral flow parameter. One advantage of our expression is that operator spectrum of the 4d theory is much easier to read off, and modular properties are also apparent. **Example 5.4**.: _Let us consider \(\hat{\mathfrak{g}}=A_{1}^{(1)}\) at level \(k=-\frac{8}{5}\), the number of independent \(q\)-series characters is three, we denote these three characters of twisted modules as \(\operatorname{ch}[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,j}))](q),j=0,1,2\). They satisfy a third-order \(\Gamma^{0}(2)\)-MLDE._ \[\left[D_{q}^{(3)}-\left(\frac{7}{450}\bar{\Theta}_{0,2}(\tau)+ \frac{31}{1800}\bar{\Theta}_{1,1}(\tau)\right)D_{q}^{(1)}-\frac{1}{400}\bar{ \Theta}_{1,2}(\tau)\right]\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(L( \lambda_{k,j}))\right](q)=0,\quad j=0,1,2. \tag{90}\] **Example 5.5**.: _Let us consider \(\hat{\mathfrak{g}}=A_{1}^{(1)}\) at level \(k=-\frac{12}{7}\), the number of independent \(q\)-series characters is four, we denote these three characters of twisted modules as \(\operatorname{ch}[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,j}))],j=0,1,\cdots,3\). They satisfy a fourth-order \(\Gamma^{0}(2)\)-MLDE._ \[\left[D_{q}^{(4)}-\left(\frac{1}{18}\bar{\Theta}_{0,2}(\tau)+ \frac{17}{1008}\bar{\Theta}_{1,1}(\tau)\right)D_{q}^{(2)}\right.\] \[\left.+\left(\frac{50}{9261}\bar{\Theta}_{0,3}(\tau)-\frac{883}{49 392}\bar{\Theta}_{1,2}(\tau)\right)D_{q}^{(1)}+\left(\frac{9}{10976}\bar{ \Theta}_{1,3}(\tau)-\frac{225}{175616}\bar{\Theta}_{2,2}(\tau)\right)] \mathrm{ch}\left[\sigma^{-\frac{1}{2}}(L(\Lambda_{k,j}))\right](q)=0. 
\tag{91}\] ### \(A_{1}^{(1)}\) at admissible level \(k=-\frac{1}{2}\) For \(A_{1}^{(1)}\) at level \(k=-\frac{1}{2}\), we can write down characters of admissible highest weight modules following [12], \[\operatorname{ch}[\mathcal{L}_{0}] =\frac{\mathbf{y}^{-\frac{1}{2}}}{2}\left[\frac{\eta(\tau)}{ \theta_{\mathbf{z}}(\mathbf{z};q)}+\frac{\eta(\tau)}{\theta_{3}(\mathbf{z};q)} \right],\quad\operatorname{ch}[\mathcal{D}_{-1/2}^{+}]=\frac{\mathbf{y}^{- \frac{1}{2}}}{2}\left[\frac{-i\eta(\tau)}{\theta_{1}(\mathbf{z};q)}+\frac{ \eta(\tau)}{\theta_{2}(\mathbf{z};q)}\right] \tag{93}\] \[\operatorname{ch}[\mathcal{L}_{1}] =\frac{\mathbf{y}^{-\frac{1}{2}}}{2}\left[\frac{\eta(\tau)}{ \theta_{4}(\mathbf{z};q)}-\frac{\eta(\tau)}{\theta_{3}(\mathbf{z};q)}\right], \quad\operatorname{ch}[\mathcal{D}_{-3/2}^{+}]=\frac{\mathbf{y}^{-\frac{1}{2} }}{2}\left[\frac{-i\eta(\tau)}{\theta_{1}(\mathbf{z};q)}-\frac{\eta(\tau)}{ \theta_{2}(\mathbf{z};q)}\right] \tag{92}\] The characters \(\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(M)\right]\) of associated \(\mathbb{Z}_{2}\)-twisted modules from \(\ell=-\frac{1}{2}\) spectral flow are, \[\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(\mathcal{L}_{0}) \right](\mathbf{y},\mathbf{z},q) =\frac{\mathbf{y}^{-\frac{1}{2}}\mathbf{z}^{\frac{1}{4}}q^{- \frac{1}{32}}}{2}\left[\frac{\eta(q\tau)}{\theta_{4}(\mathbf{z}q^{-\frac{1}{4 }};q)}+\frac{\eta(\tau)}{\theta_{3}(\mathbf{z}q^{-\frac{1}{4}};q)}\right]\] \[\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(\mathcal{L}_{1}) \right](\mathbf{y},\mathbf{z},q) =\frac{\mathbf{y}^{-\frac{1}{2}}\mathbf{z}^{\frac{1}{4}}q^{- \frac{1}{32}}}{2}\left[\frac{\eta(\tau)}{\theta_{4}(\mathbf{z}q^{-\frac{1}{4 }};q)}-\frac{\eta(\tau)}{\theta_{3}(\mathbf{z}q^{-\frac{1}{4}};q)}\right]\] \[\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(\mathcal{D}^{+}_{-1/ 2})\right](\mathbf{y},\mathbf{z},q) =\frac{\mathbf{y}^{-\frac{1}{2}}\mathbf{z}^{\frac{1}{4}}q^{- \frac{1}{32}}}{2}\left[\frac{-i\eta(\tau)}{\theta_{1}(\mathbf{z}q^{-\frac{1}{4 }};q)}+\frac{\eta(\tau)}{\theta_{2}(\mathbf{z}q^{-\frac{1}{4}};q)}\right]\] \[\operatorname{ch}\left[\sigma^{-\frac{1}{2}}(\mathcal{D}^{+}_{-3/ 2})\right](\mathbf{y},\mathbf{z},q) =\frac{\mathbf{y}^{-\frac{1}{2}}\mathbf{z}^{\frac{1}{4}}q^{- \frac{1}{32}}}{2}\left[\frac{-i\eta(\tau)}{\theta_{1}(\mathbf{z}q^{-\frac{1}{4 }};q)}-\frac{\eta(\tau)}{\theta_{2}(\mathbf{z}q^{-\frac{1}{4}};q)}\right] \tag{94}\] For the character \(\operatorname{ch}\left[\mathcal{L}_{0}\right]\) of vacuum module of \(L_{-1/2}(\mathfrak{sl}_{2})\), it is a solution of a third-order MLDE under full \(SL(2,\mathbb{Z})\) group, \[\left[D^{(3)}_{q}-\frac{235}{4}\mathbb{E}_{4}(\tau)D^{(1)}_{q}-\frac{455}{8} \mathbb{E}_{6}(\tau)\right]\operatorname{ch}[\mathcal{L}_{0}](q)=0 \tag{95}\] There are two independent well-defined \(q\)-series characters of \(\mathbb{Z}_{2}\)-twisted modules denoted by \(\operatorname{ch}[\sigma^{-\frac{1}{2}}(\mathcal{L}_{0})](q)\) and \(\operatorname{ch}[\sigma^{-\frac{1}{2}}(\mathcal{L}_{1})](q)\). They satisfy a second-order \(\Gamma^{0}(2)\) MLDE. \[\left[D^{(2)}_{q}-\frac{5}{48}\bar{\Theta}_{0,1}(\tau)D^{(1)}_{q}+\left(\frac {25}{9216}\bar{\Theta}_{0,2}(\tau)-\frac{41}{9216}\bar{\Theta}_{1,1}(\tau) \right)\right]\operatorname{ch}[\sigma^{-\frac{1}{2}}(\mathcal{L}_{i})](q)=0, \quad i=0,1. \tag{96}\] ### \(A^{(1)}_{2}\) at boundary admissible level \(k=-\frac{3}{2}\) Consider the boundary principal admissible weight modules of \(A^{(1)}_{2}\) at the boundary admissible level \(k=-\frac{3}{2}\). 
There are four irreducible admissible highest weight modules of affine Lie algebra \(A^{(1)}_{2}\), they are exactly the complete list of irreducible weak \(L_{-3/2}(\mathfrak{sl}_{3})\) modules from category \(\mathcal{O}\)[1, 2]. Their characters can be written in terms of Jacobi theta function [13, 14]. \[\operatorname{ch}\left[\mathcal{L}\left(-\frac{3}{2}\Lambda_{0} \right)\right](\mathbf{y},\mathbf{z}_{1},\mathbf{z}_{2},q) =\mathbf{y}^{-\frac{3}{2}}\left(\frac{\eta(2\tau)}{\eta(\tau)} \right)^{-1}\frac{\theta_{1}\left(\mathbf{z}_{1};q^{2}\right)\theta_{1}\left( \mathbf{z}_{1}\mathbf{z}_{2};q^{2}\right)\theta_{1}\left(\mathbf{z}_{1} \mathbf{z}_{2};q^{2}\right)}{\theta_{1}\left(\mathbf{z}_{1};q\right)\theta_{1} \left(\mathbf{z}_{2};q\right)\theta_{1}\left(\mathbf{z}_{1}\mathbf{z}_{2};q \right)\theta_{1}\left(\mathbf{z}_{1}\mathbf{z}_{2};q\right)}\] \[\operatorname{ch}\left[\mathcal{L}\left(-\frac{3}{2}\Lambda_{1} \right)\right](\mathbf{y},\mathbf{z}_{1},\mathbf{z}_{2},q) =-\mathbf{y}^{-\frac{3}{2}}\left(\frac{\eta(2\tau)}{\eta(\tau)} \right)^{-1}\frac{\theta_{1}(\mathbf{z}_{2};q^{2})\theta_{4}(\mathbf{z}_{1} \mathbf{z}_{2};q^{2})}{\theta_{1}(\mathbf{z}_{1};q)\theta_{1}(\mathbf{z}_{2};q \right)\theta_{1}(\mathbf{z}_{1}\mathbf{z}_{2};q)}\] \[\operatorname{ch}\left[\mathcal{L}\left(-\frac{\rho}{2}\right) \right](\mathbf{y},\mathbf{z}_{1},\mathbf{z}_{2},q) =\mathbf{y}^{-\frac{3}{2}}(\mathbf{z}_{1}\mathbf{z}_{2})^{\frac{3}{2}}q^{ \frac{3}{2}}\left(\frac{\eta(2\tau)}{\eta(\tau)}\right)^{-1}\frac{\theta_{1}( \mathbf{z}_{2}^{-1}q^{-1};q^{2})\theta_{1}(\mathbf{z}_{1}^{-1}q^{-1};q^{2}) \theta_{1}(\mathbf{z}_{1}^{-1}\mathbf{z}_{2}^{-1}q^{-2};q^{2})}{\theta_{1}( \mathbf{z}_{1};q)\theta_{1}(\mathbf{z}_{2};q)\theta_{1}(\mathbf{z}_{1}\mathbf{z} _{2};q)} \tag{97}\] where \(\Lambda_{i}\) denote the fundamental weights of affine Lie algebra \(A^{(1)}_{2}\) and \(\rho\) is the affine Weyl vector. Letting \(z=\sum_{i=1}^{2}\mathfrak{z}_{i}\bar{\Lambda}_{i}\), where \(\bar{\Lambda}_{i}\) are the fundamental weights of finite Lie algebra \(\mathfrak{g}=\mathfrak{sl}_{3}\), we define \(\mathbf{z}_{i}=e^{2\pi i\mathfrak{z}_{i}}\) which appeared in Jacobi theta function. Now, we consider the action of spectral flow on these irreducible highest weight modules. Firstly, we consider the spectral flow along \(\frac{1}{2}\bar{\Lambda}_{1}^{\vee}\) direction, and the character becomes, \[\operatorname{ch}\left[\sigma^{\frac{1}{2}\Lambda_{1}^{\vee}}\left(\mathcal{L} \left(-\frac{3}{2}\Lambda_{0}\right)\right)\right](\mathbf{y},\mathbf{z}_{1}, \mathbf{z}_{2},q)=(\mathbf{y}\mathbf{z}_{1}^{\frac{1}{2}}\mathbf{z}_{2}^{ \frac{1}{2}}q^{\frac{1}{2}})^{-\frac{3}{2}}\left(\frac{\eta(2\tau)}{\eta(\tau)} \right)^{-1}\frac{\theta_{1}\left(\mathbf{z}_{1}q^{\frac{1}{2}};q^{2}\right) \theta_{1}\left(\mathbf{z}_{2};q^{2}\right)\theta_{1}\left(\mathbf{z}_{1} \mathbf{z}_{2}q^{\frac{1}{2}};q^{2}\right)}{\theta_{1}\left(\mathbf{z}_{1} \mathbf{z}_{1}q^{\frac{1}{2}};q\right)\theta_{1}\left(\mathbf{z}_{2};q\right) \theta_{1}\left(\mathbf{z}_{1}\mathbf{z}_{2}q^{\frac{1}{2}};q\right)}. \tag{98}\] This spectral flowed module is an ordinary module. It satisfies a second-order \(\Gamma^{0}(2)\)-modular linear differential equation, \[\left(D^{(2)}_{q}-\frac{5}{576}\bar{\Theta}_{0,2}(\tau)-\frac{11}{576}\bar{ \Theta}_{1,1}(\tau)\right)\operatorname{ch}\left[\sigma^{\frac{1}{2}\Lambda_{1}^{ \vee}}\left(\mathcal{L}\left(-\frac{3}{2}\Lambda_{0}\right)\right)\right](q)=0. 
\tag{99}\] Secondly, consider the spectral flow of the character \(\operatorname{ch}\left[\mathcal{L}\left(-\frac{\rho}{2}\right)\right]\) along \(\frac{1}{3}(\bar{\Lambda}_{1}^{\vee}+\bar{\Lambda}_{2}^{\vee})\) direction, \[\begin{split}\operatorname{ch}\left[\sigma^{\frac{1}{3}\overline{ \Lambda}_{1}^{\vee}+\frac{1}{3}\overline{\Lambda}_{2}^{\vee}}\left(\mathcal{L }\left(-\frac{\rho}{2}\right)\right)\right](y,\mathbf{z}_{1},\mathbf{z}_{2},q )&=(y\mathbf{z}_{1}^{\frac{1}{3}}\mathbf{z}_{2}^{\frac{1}{3}} \mathbf{z}_{2}^{\frac{1}{3}}\mathbf{z}_{1}^{\frac{1}{3}})^{-\frac{3}{2}}( \mathbf{z}_{1}\mathbf{z}_{2}q^{\frac{3}{2}})^{\frac{3}{2}}q^{\frac{3}{2}}\left( \frac{\eta(2\tau)}{\eta(\tau)}\right)^{-1}\\ &\times\frac{\theta_{1}((\mathbf{z}_{2}q^{\frac{1}{3}})^{-1}q^{-1} ;q^{2})\theta_{1}((\mathbf{z}_{1}q^{\frac{1}{3}})^{-1}q^{-1};q^{2})\theta_{1 }((\mathbf{z}_{1}\mathbf{z}_{2}q^{-\frac{2}{3}})^{-1}q^{-2};q^{2})}{\theta_{1 }(\mathbf{z}_{1}q^{\frac{1}{3}};q)\theta_{1}(\mathbf{z}_{2}q^{\frac{1}{3}};q) \theta_{1}(\mathbf{z}_{1}\mathbf{z}_{2}q^{\frac{3}{2}};q)}\end{split} \tag{100}\] This spectral flowed character matches with the lens space index of \((A_{1},D_{4})\) AD theory [15] with suitable change of variables. ### Bershadsky-Polyakov Algebra \(\operatorname{BP}^{k}\) with \(k=-\frac{9}{4}\) In previous examples, we consider the spectral flowed modules of affine Lie algebra \(\hat{\mathfrak{g}}\). Now we consider an example of the affine \(W\)-algebra [15, 16, 17, 18], the \(W^{k}(\mathfrak{sl}_{3},f_{\min})\) which agrees with the \(\operatorname{BP}^{k}\)-algebra defined in [1]. First, let us review the definition of \(\operatorname{BP}^{k}\)-algebra. **Definition 5.6**.: _[_16_]_ _Given \(k\in\mathbb{C}\), \(k\neq-3\), the level-\(k\) universal Bershadsky-Polyakov algebra \(\operatorname{BP}^{k}\) is the vertex operator algebra with vacuum \(\mathbf{1}\) that is strongly and freely generated by fields \(J(z)\), \(G^{+}(z)\), \(G^{-}(z)\) and \(L(z)\) satisfying the complicated operator product expansions. The conformal weights of the generating fields \(J(z)\), \(G^{+}(z)\), \(G^{-}(z)\) and \(L(z)\) are \(1\), \(\frac{3}{2}\), \(\frac{3}{2}\) and \(2\) respectively, the central charge is,_ \[c_{w,v}^{\operatorname{BP}}=-\frac{(2k+3)(3k+1)}{k+3} \tag{101}\] The action of the spectral flow automorphism \(\sigma^{\ell}\), \(\ell\in\mathbb{Z}\) of the vertex algebra \(\operatorname{BP}^{k}\) on the modes of the generating field \(J(z)\), \(G^{+}(z)\), \(G^{-}(z)\) and \(L(z)\) is \[\begin{split}\sigma^{l}(J_{n})&=J_{n}-\frac{2k+3}{ 3}l\delta_{n,0}\mathbf{1},\\ \sigma^{l}(G_{r}^{+})&=G_{r-l}^{+},\\ \sigma^{l}(G_{r}^{-})&=G_{r+l}^{-},\\ \sigma^{l}(L_{n})&=L_{n}-lJ_{n}+\frac{2k+3}{6}l^{2 }\delta_{n,0}\mathbf{1}.\end{split} \tag{102}\] When \(\ell\) is a half-integer, \(\sigma^{l}\) exchanges twisted and untwisted mode algebras [16]. 
If \(|\lambda,\Delta\rangle\in M\) is a state of weight \(\lambda\) and conformal dimension \(\Delta\) in a module \(M\), then the state \((\sigma^{\ell})^{*}|\lambda,\Delta\rangle\in(\sigma^{\ell})^{*}(M)\) satisfies

\[\begin{split}J_{0}(\sigma^{\ell})^{*}|\lambda,\Delta\rangle&=\left(\lambda+\ell\frac{2k+3}{3}\right)(\sigma^{\ell})^{*}|\lambda,\Delta\rangle,\\ L_{0}(\sigma^{\ell})^{*}|\lambda,\Delta\rangle&=\left(\Delta+\ell\lambda+\frac{2k+3}{6}\ell^{2}\right)(\sigma^{\ell})^{*}|\lambda,\Delta\rangle.\end{split} \tag{103}\]

For a \(\operatorname{BP}^{k}\)-algebra module \(M\), the character is defined in [16] as

\[\operatorname{ch}[M](\theta|\zeta|\tau)=\mathbf{y}^{\kappa}\mathrm{tr}_{M}\left(\mathbf{z}^{J_{0}}q^{L_{0}-c_{u,v}^{\operatorname{BP}}/24}\right),\quad\kappa=\frac{2k+3}{6} \tag{104}\]

where \(\mathbf{y}=e^{2\pi i\theta}\), \(\mathbf{z}=e^{2\pi i\zeta}\) and \(q=e^{2\pi i\tau}\). The character of the spectral flowed module \(\sigma^{\ell}(M)\), for \(\ell\in\frac{1}{2}\mathbb{Z}\), is given by Lemma 4.3 in [16]:

\[\operatorname{ch}[\sigma^{\ell}(M)]\left(\theta|\zeta|\tau\right)=\operatorname{ch}[M]\left(\theta+2\ell\zeta+\ell^{2}\tau|\zeta+\ell\tau|\tau\right) \tag{105}\]

We consider the special case in which the level is \(k=-\frac{9}{4}\), so that \(c_{u,v}^{\operatorname{BP}}=-\frac{23}{2}\), the highest weight vector is \(|\lambda,\Delta\rangle=|\frac{1}{4},-\frac{3}{8}\rangle\), and the spectral flow parameter is \(\ell=\frac{1}{2}\). After spectral flow, the weight and conformal dimension become

\[\lambda^{\prime}=\frac{1}{4}+\frac{1}{2}\times\left(-\frac{1}{2}\right)=0,\qquad\Delta^{\prime}=-\frac{3}{8}+\frac{1}{2}\times\frac{1}{4}-\frac{1}{4}\times\frac{1}{4}=-\frac{5}{16} \tag{106}\]

According to [14], the character of the spectral flowed module can be written as follows:

\[\operatorname{ch}[\sigma^{\frac{1}{2}}(M_{|\frac{1}{4},-\frac{3}{8}\rangle})](q)=q^{\frac{1}{6}}\left(1+4q+10q^{2}+24q^{3}+51q^{4}+100q^{5}+\mathcal{O}(q^{6})\right) \tag{107}\]

This character satisfies a third-order MLDE under the full \(SL(2,\mathbb{Z})\) group,

\[\left[D_{q}^{(3)}-25\mathbb{E}_{4}(\tau)D_{q}^{(1)}-175\mathbb{E}_{6}(\tau)\right]\operatorname{ch}[\sigma^{\frac{1}{2}}(M_{|\frac{1}{4},-\frac{3}{8}\rangle})](q)=0 \tag{108}\]

## 6. \(\mathfrak{d}_{4}\) with non-admissible level \(k=-2\)

In this section, we propose a relation between simple modules and spectral flowed modules of the non-admissible affine vertex algebra \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\). We also find an ordinary module \(\sigma^{\frac{1}{2}\bar{\Lambda}_{2}}(\mathcal{L}_{\mathfrak{d}_{4}}(-\Lambda_{2}))\), whose character satisfies a second-order \(\Gamma^{0}(2)\) MLDE. First, we recall the classification of the simple modules of \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\).

**Theorem 6.1**.: _[_10_]_ _The set \(\{\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0}),\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{1}),\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{3}),\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{4}),\mathcal{L}_{\mathfrak{d}_{4}}(-\Lambda_{2})\}\) provides a complete list of irreducible weak \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\)-modules from the category \(\mathcal{O}\). Moreover, \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\) is the unique irreducible ordinary \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\)-module, and every ordinary \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\)-module is completely reducible._

We now consider spectral flow automorphisms acting on the vacuum module.
The fundamental weights \(\bar{\Lambda}_{i}\) of the finite Lie algebra \(\mathfrak{d}_{4}\) can be written as linear combinations of the simple roots \(\alpha_{i}\):

\[\bar{\Lambda}_{1}=\alpha_{1}+\alpha_{2}+\frac{1}{2}\alpha_{3}+\frac{1}{2}\alpha_{4},\qquad\bar{\Lambda}_{2}=\alpha_{1}+2\alpha_{2}+\alpha_{3}+\alpha_{4},\]
\[\bar{\Lambda}_{3}=\frac{1}{2}\alpha_{1}+\alpha_{2}+\alpha_{3}+\frac{1}{2}\alpha_{4},\qquad\bar{\Lambda}_{4}=\frac{1}{2}\alpha_{1}+\alpha_{2}+\frac{1}{2}\alpha_{3}+\alpha_{4}. \tag{109}\]

Since \(\mathfrak{d}_{4}\) is simply laced, we have \(\alpha_{i}=\alpha_{i}^{\vee}\) and \(\bar{\Lambda}_{i}=\bar{\Lambda}_{i}^{\vee}\), so we will use roots (weights) and coroots (coweights) interchangeably. The highest root of the finite Lie algebra \(\mathfrak{d}_{4}\) is \(\theta=\bar{\Lambda}_{2}\); therefore the marks and comarks of the affine Lie algebra \(\mathfrak{d}_{4}^{(1)}\) are \((a_{i})=(a_{i}^{\vee})=(1,1,2,1,1)\), and the level of an affine weight \(\Lambda=\sum_{i=0}^{4}\lambda_{i}\Lambda_{i}\) is given by

\[k=\lambda_{0}+\lambda_{1}+2\lambda_{2}+\lambda_{3}+\lambda_{4} \tag{110}\]

**Proposition 6.2**.: _One can check that the vacuum module \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\), with highest weight \(\lambda=-2\Lambda_{0}\), acquires the highest weights \(-2\Lambda_{1},-2\Lambda_{3},-2\Lambda_{4}\) after the spectral flow automorphisms along the \(\bar{\Lambda}_{1},\bar{\Lambda}_{3}\) and \(\bar{\Lambda}_{4}\) directions, respectively._

Proof.: The powers of \(\tau_{i}\) act as follows:

\[\tau_{i}^{\ell}(h_{n}^{j})=h_{n}^{j}-\ell(\alpha_{i},\alpha_{j})\delta_{n,0}K \tag{111}\]

We can compute the action of the automorphism along \(\bar{\Lambda}_{1}\), namely \(\tau_{1}\tau_{2}\tau_{3}^{\frac{1}{2}}\tau_{4}^{\frac{1}{2}}\), on the \(h_{0}^{i}\):

\[\tau_{1}\tau_{2}\tau_{3}^{\frac{1}{2}}\tau_{4}^{\frac{1}{2}}(h_{0}^{1})=h_{0}^{1}-K,\quad\tau_{1}\tau_{2}\tau_{3}^{\frac{1}{2}}\tau_{4}^{\frac{1}{2}}(h_{0}^{2})=h_{0}^{2} \tag{112}\]

\[\tau_{1}\tau_{2}\tau_{3}^{\frac{1}{2}}\tau_{4}^{\frac{1}{2}}(h_{0}^{3})=h_{0}^{3},\quad\tau_{1}\tau_{2}\tau_{3}^{\frac{1}{2}}\tau_{4}^{\frac{1}{2}}(h_{0}^{4})=h_{0}^{4} \tag{113}\]

since \((\bar{\Lambda}_{1},\alpha_{j})=\delta_{1,j}\). Because \(K\) acts by the level \(k=-2\), the eigenvalue of \(h_{0}^{1}\) on the flowed module is shifted by \(-2\) while the other eigenvalues are unchanged; that means the spectral flow automorphism turns the highest weight \(-2\Lambda_{0}\) into the highest weight \(-2\Lambda_{1}\). One obtains the highest weights \(-2\Lambda_{3}\) and \(-2\Lambda_{4}\) in the same way. This completes the proof.

From the above Proposition and some results from physics, we conjecture the following relation between simple modules and spectral flowed modules of \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\).

**Conjecture 6.3**.: _For \(i=1,3,4\), the module \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{i})\) coincides with the spectral flowed module \(\sigma^{\bar{\Lambda}_{i}}(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0}))\)._

**Remark 6.4**.: _In view of Proposition 6.2, if one can show that the spectral flow along the \(\bar{\Lambda}_{i}\) direction preserves the irreducibility of the module \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\), the Conjecture follows._

There are two pieces of evidence for this conjecture; before describing them, we record a mechanical check of the weight combinatorics entering Proposition 6.2 (see the sketch below).
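The following is a minimal sketch in Python with SymPy, using the standard Cartan matrix of \(\mathfrak{d}_{4}\) with node \(2\) the trivalent node (our own illustrative code): the rows of the inverse Cartan matrix reproduce the coefficients in (109), and the pairings \((\bar{\Lambda}_{i},\alpha_{j})\) used in (112)–(113) come out as \(\delta_{ij}\).

```python
import sympy as sp

# Cartan matrix of d4; node 2 is the trivalent node, as in (109)
A = sp.Matrix([[ 2, -1,  0,  0],
               [-1,  2, -1, -1],
               [ 0, -1,  2,  0],
               [ 0, -1,  0,  2]])

# For a simply laced algebra, bar-Lambda_i = sum_j (A^{-1})_{ij} alpha_j,
# so the rows of A^{-1} are exactly the coefficients in (109).
print(A.inv())

# Since (alpha_k, alpha_j) = A_{kj}, the pairings (bar-Lambda_i, alpha_j)
# form the matrix A^{-1} * A = Id, which is the input to (112) and (113).
print(A.inv() * A)  # identity matrix
```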
The first piece of evidence comes from closed form expressions for the characters: the characters of all simple modules of \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\) are conjectured from the SCFT/VOA correspondence to be [11, 12]

\[\begin{split}\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})]&=\mathcal{I}_{0,4}\\ \mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{1})]&=\mathcal{I}_{0,4}-2R_{1}\\ \mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-\Lambda_{2})]&=-2\mathcal{I}_{0,4}+2R_{1}+2R_{2}\\ \mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{3})]&=\mathcal{I}_{0,4}-R_{1}-R_{2}-R_{3}-R_{4}\\ \mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{4})]&=\mathcal{I}_{0,4}-R_{1}-R_{2}-R_{3}+R_{4}\end{split} \tag{114}\]

where \(\mathcal{I}_{0,4}\) is the Schur index of an \(SU(2)\) gauge theory with four hypermultiplets, and the \(R_{j}\) functions are

\[R_{j}(\widetilde{\mathbf{m}}_{i},\tau)=\frac{i}{2}\frac{\theta_{1}(2\widetilde{\mathbf{m}}_{j})}{\eta(\tau)}\prod_{l\neq j}\frac{\eta(\tau)}{\theta_{1}(\widetilde{\mathbf{m}}_{j}+\widetilde{\mathbf{m}}_{l})}\frac{\eta(\tau)}{\theta_{1}(\widetilde{\mathbf{m}}_{j}-\widetilde{\mathbf{m}}_{l})},\quad j=1,2,3,4. \tag{115}\]

Here \(\widetilde{m}_{i}=e^{2\pi i\widetilde{\mathbf{m}}_{i}}\) are related to the Cartan elements \(\widetilde{z}_{i}=e^{2\pi i\widetilde{\mathbf{z}}_{i}}\) of \(\mathfrak{d}_{4}\) as follows:

\[\widetilde{z}_{1}=\frac{\widetilde{m}_{1}}{\widetilde{m}_{2}},\qquad\widetilde{z}_{2}=\frac{\widetilde{m}_{2}}{\widetilde{m}_{3}},\qquad\widetilde{z}_{3}=\widetilde{m}_{3}\widetilde{m}_{4},\qquad\widetilde{z}_{4}=\frac{\widetilde{m}_{3}}{\widetilde{m}_{4}} \tag{116}\]

For our purposes, we write \(\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})]\), \(\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{1})]\), \(\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{3})]\) and \(\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{4})]\) explicitly:

\[\begin{split}\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})](q;\widetilde{z}_{1},\widetilde{z}_{2},\widetilde{z}_{3},\widetilde{z}_{4})&=\frac{1}{2}\frac{\eta(\tau)^{2}}{\theta_{1}(\widetilde{z}_{1}\widetilde{z}_{2}^{2}\widetilde{z}_{3}\widetilde{z}_{4};q)\theta_{1}(\widetilde{z}_{1};q)\theta_{1}(\widetilde{z}_{4};q)\theta_{1}(\widetilde{z}_{3};q)}\\ &\times\sum_{\vec{\alpha}=\pm}\left(\prod_{i=1}^{4}\alpha_{i}\right)E_{2}\left[\left(\widetilde{z}_{1}^{\frac{1}{4}}\widetilde{z}_{2}^{\frac{1}{2}}\widetilde{z}_{3}^{\frac{1}{4}}\widetilde{z}_{4}^{\frac{1}{4}}\right)^{\alpha_{1}}\left(\widetilde{z}_{1}^{\frac{1}{4}}\right)^{\alpha_{2}}\left(\widetilde{z}_{4}^{\frac{1}{4}}\right)^{\alpha_{3}}\left(\widetilde{z}_{3}^{\frac{1}{4}}\right)^{\alpha_{4}}\right]\end{split}\]

\[\begin{split}\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{1})](q;\widetilde{z}_{1},\widetilde{z}_{2},\widetilde{z}_{3},\widetilde{z}_{4})&=\frac{1}{2}\left(y\widetilde{z}_{1}\widetilde{z}_{2}\widetilde{z}_{3}^{\frac{1}{4}}\widetilde{z}_{4}^{\frac{1}{4}}q^{\frac{1}{4}}\right)^{-2}\frac{\eta(\tau)^{2}}{\theta_{1}(\widetilde{z}_{1}q\widetilde{z}_{2}^{2}\widetilde{z}_{3}\widetilde{z}_{4};q)\theta_{1}(\widetilde{z}_{1}q;q)\theta_{1}(\widetilde{z}_{4};q)\theta_{1}(\widetilde{z}_{3};q)}\\ &\times\sum_{\vec{\alpha}=\pm}\left(\prod_{i=1}^{4}\alpha_{i}\right)E_{2}\left[\left(\widetilde{z}_{1}^{\frac{1}{4}}q^{\frac{1}{4}}\widetilde{z}_{2}^{\frac{1}{2}}\widetilde{z}_{3}^{\frac{1}{4}}\widetilde{z}_{4}^{\frac{1}{4}}\right)^{\alpha_{1}}\left(\widetilde{z}_{1}^{\frac{1}{4}}q^{\frac{1}{4}}\right)^{\alpha_{2}}\left(\widetilde{z}_{4}^{\frac{1}{4}}\right)^{\alpha_{3}}\left(\widetilde{z}_{3}^{\frac{1}{4}}\right)^{\alpha_{4}}\right]\end{split}\]
\[\begin{split}\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{3})](q;\widetilde{z}_{1},\widetilde{z}_{2},\widetilde{z}_{3},\widetilde{z}_{4})&=\frac{1}{2}\left(y\widetilde{z}_{1}^{\frac{1}{4}}\widetilde{z}_{2}\widetilde{z}_{3}\widetilde{z}_{4}^{\frac{1}{4}}q^{\frac{1}{4}}\right)^{-2}\frac{\eta(\tau)^{2}}{\theta_{1}(\widetilde{z}_{1}\widetilde{z}_{2}^{2}\widetilde{z}_{3}q\widetilde{z}_{4};q)\theta_{1}(\widetilde{z}_{1};q)\theta_{1}(\widetilde{z}_{4};q)\theta_{1}(\widetilde{z}_{3}q;q)}\\ &\times\sum_{\vec{\alpha}=\pm}\left(\prod_{i=1}^{4}\alpha_{i}\right)E_{2}\left[\left(\widetilde{z}_{1}^{\frac{1}{4}}\widetilde{z}_{2}^{\frac{1}{2}}\widetilde{z}_{3}^{\frac{1}{4}}q^{\frac{1}{4}}\widetilde{z}_{4}^{\frac{1}{4}}\right)^{\alpha_{1}}\left(\widetilde{z}_{1}^{\frac{1}{4}}\right)^{\alpha_{2}}\left(\widetilde{z}_{4}^{\frac{1}{4}}\right)^{\alpha_{3}}\left(\widetilde{z}_{3}^{\frac{1}{4}}q^{\frac{1}{4}}\right)^{\alpha_{4}}\right]\end{split}\]

\[\begin{split}\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{4})](q;\widetilde{z}_{1},\widetilde{z}_{2},\widetilde{z}_{3},\widetilde{z}_{4})&=\frac{1}{2}\left(y\widetilde{z}_{1}^{\frac{1}{4}}\widetilde{z}_{2}\widetilde{z}_{3}^{\frac{1}{4}}\widetilde{z}_{4}q^{\frac{1}{4}}\right)^{-2}\frac{\eta(\tau)^{2}}{\theta_{1}(\widetilde{z}_{1}\widetilde{z}_{2}^{2}\widetilde{z}_{3}\widetilde{z}_{4}q;q)\theta_{1}(\widetilde{z}_{1};q)\theta_{1}(\widetilde{z}_{4}q;q)\theta_{1}(\widetilde{z}_{3};q)}\\ &\times\sum_{\vec{\alpha}=\pm}\left(\prod_{i=1}^{4}\alpha_{i}\right)E_{2}\left[\left(\widetilde{z}_{1}^{\frac{1}{4}}\widetilde{z}_{2}^{\frac{1}{2}}\widetilde{z}_{3}^{\frac{1}{4}}\widetilde{z}_{4}^{\frac{1}{4}}q^{\frac{1}{4}}\right)^{\alpha_{1}}\left(\widetilde{z}_{1}^{\frac{1}{4}}\right)^{\alpha_{2}}\left(\widetilde{z}_{4}^{\frac{1}{4}}q^{\frac{1}{4}}\right)^{\alpha_{3}}\left(\widetilde{z}_{3}^{\frac{1}{4}}\right)^{\alpha_{4}}\right]\end{split} \tag{117}\]

One can then show that these expressions satisfy the relation

\[\mathrm{ch}[\sigma^{\bar{\Lambda}_{i}}(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0}))]=\mathrm{ch}\left[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{i})\right],\quad i=1,3,4. \tag{118}\]

The second piece of evidence uses the relation between characters of \(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})\)-modules and the partition function of the curved \(\beta\gamma\) system [10]. The partition function of the curved \(\beta\gamma\) system on the complex Grassmannian \(\mathrm{Gr}(2,4)\) is given by

\[Z_{\mathfrak{d}_{4}}(t,\mathbf{m}^{\mathfrak{a}_{3}},\tau)=\frac{i\eta(\tau)\theta_{1}(2\sigma,\tau)}{\prod_{\omega\in\rho}\theta_{1}(\sigma+(\mathbf{m}^{\mathfrak{a}_{3}},\omega),\tau)}, \tag{119}\]

where \(t=e^{2\pi i\sigma}\) and \(q=e^{2\pi i\tau}\) as usual. The product in the denominator runs over the weights of the representation \(\rho\) of \(\mathfrak{a}_{3}\) with highest weight \(\bar{\Lambda}_{\mathfrak{a}_{3}}\).
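All of the character identities above can be checked order by order in \(q\). The following is a minimal sketch in Python with SymPy of the basic building blocks, using the eta function and the product identities collected in Appendix A; the truncation order `nmax` is an illustrative choice of ours.

```python
import sympy as sp

q, z = sp.symbols('q z')

def eta(nmax=12):
    # eta(q) = q^{1/24} * prod_{n>=1} (1 - q^n), truncated at nmax
    return q**sp.Rational(1, 24) * sp.prod([1 - q**n for n in range(1, nmax + 1)])

def theta1(x, nmax=12):
    # theta_1(z;q) = -vartheta_{11}(tau,z); by the product identity (127),
    # theta_1(z;q) = -i q^{1/12} z^{1/2} eta(q) prod_{n>=1} (1 - z q^n)(1 - z^{-1} q^{n-1})
    prod = sp.prod([(1 - x*q**n)*(1 - q**(n - 1)/x) for n in range(1, nmax + 1)])
    return -sp.I * q**sp.Rational(1, 12) * sp.sqrt(x) * eta(nmax) * prod

# Example: eta(2 tau)/eta(tau) up to its q^{1/24} prefactor, the ratio
# entering the prefactor of the characters in (97)
num = sp.prod([1 - q**(2*n) for n in range(1, 13)])
den = sp.prod([1 - q**n for n in range(1, 13)])
print(sp.series(num/den, q, 0, 6))  # 1 + q + q**2 + 2*q**3 + 2*q**4 + 3*q**5 + O(q**6)
```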
The authors of [10] found that this partition function is also given by

\[Z_{\mathfrak{d}_{4}}(t,\mathbf{m}^{\mathfrak{a}_{3}},\tau)=\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})](\mathbf{m}^{\mathfrak{d}_{4}},\tau)-\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{4})](\mathbf{m}^{\mathfrak{d}_{4}},\tau) \tag{120}\]

with the following identifications of parameters:

\[\mathbf{m}_{i}^{\mathfrak{d}_{4}}=\mathbf{m}_{i}^{\mathfrak{a}_{3}},\quad\text{for }i=1,2,3,\qquad\mathbf{m}_{4}^{\mathfrak{d}_{4}}=\sigma-\frac{\mathbf{m}_{1}^{\mathfrak{a}_{3}}}{2}-\mathbf{m}_{2}^{\mathfrak{a}_{3}}-\frac{\mathbf{m}_{3}^{\mathfrak{a}_{3}}}{2}. \tag{121}\]

Using the expression for \(\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})]\), one also sees that

\[Z_{\mathfrak{d}_{4}}(t,\mathbf{m}^{\mathfrak{a}_{3}},\tau)=\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0})](\mathbf{m}^{\mathfrak{d}_{4}},\tau)-\mathrm{ch}[\sigma^{\bar{\Lambda}_{4}}(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0}))](\mathbf{m}^{\mathfrak{d}_{4}},\tau). \tag{122}\]

Therefore we have

\[\mathrm{ch}[\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{4})](\mathbf{m}^{\mathfrak{d}_{4}},\tau)=\mathrm{ch}[\sigma^{\bar{\Lambda}_{4}}(\mathcal{L}_{\mathfrak{d}_{4}}(-2\Lambda_{0}))](\mathbf{m}^{\mathfrak{d}_{4}},\tau). \tag{123}\]

## Appendix A Theta functions

We first summarize some basic facts about the affine Lie algebra of type \(A_{1}^{(1)}\).

\begin{tabular}{|l|l|} \hline Cartan matrix & \(\begin{pmatrix}2&-2\\ -2&2\end{pmatrix}\) \\ \hline Simple roots & \(\{\alpha_{0},\alpha_{1}\}\) \\ \hline \(\mathfrak{h}^{*}\) & \(\mathrm{Span}_{\mathbb{C}}\{\alpha_{0},\alpha_{1},\Lambda_{0}\}\) \\ \hline Bilinear form on \(\mathfrak{h}^{*}\) & \(\langle\alpha_{0},\alpha_{0}\rangle=2\) \\ & \(\langle\alpha_{1},\alpha_{1}\rangle=2\) \\ & \(\langle\alpha_{1},\alpha_{0}\rangle=\langle\alpha_{0},\alpha_{1}\rangle=-2\) \\ & \(\langle\alpha_{0},\Lambda_{0}\rangle=1\) \\ & \(\langle\Lambda_{0},\Lambda_{0}\rangle=\langle\alpha_{1},\Lambda_{0}\rangle=0\) \\ \hline Basic imaginary root & \(\delta=\alpha_{0}+\alpha_{1}\) \\ \hline \(\mathfrak{h}\) & \(\mathrm{Span}_{\mathbb{C}}\{\alpha_{0}^{\vee}=h_{0},\alpha_{1}^{\vee}=h_{1},d\}\) \\ \hline Central element & \(c=\alpha_{0}^{\vee}+\alpha_{1}^{\vee}\) \\ \hline Fundamental weights & \(\{\Lambda_{0},\Lambda_{1}\}\) \\ & \(\langle\Lambda_{i},\alpha_{j}^{\vee}\rangle=\delta_{i,j}\), \(\Lambda_{i}(d)=0\), \(i=0,1\) \\ \hline Lattice \(M\) & \(\mathbb{Z}h_{1}\) \\ \hline Lattice \(M^{*}\) & \(\frac{1}{2}\mathbb{Z}\alpha_{1}\) \\ \hline Integral forms \(P\) & \(\{\lambda\in\mathfrak{h}^{*}\,|\,\langle\lambda,\alpha_{i}^{\vee}\rangle\in\mathbb{Z},\ i=0,1\}\) \\ \hline Positive integral forms \(P_{+}\) & \(\{\lambda\in\mathfrak{h}^{*}\,|\,\langle\lambda,\alpha_{i}^{\vee}\rangle\in\mathbb{Z}_{\geq 0},\ i=0,1\}\) \\ \hline Weyl group & \(t(M)\rtimes\overline{W}\), \(t(M)=\{t_{m}|m\in M\}\), \(\overline{W}=\langle s_{1}\rangle\) \\ \hline Lacing number & \(1\) \\ \hline Dual Coxeter number & \(2\) \\ \hline \end{tabular}

One can use classical theta functions to define _Jacobi theta functions_ of degree two:

\[\begin{split}\theta_{3}(\mathbf{z};q)&\equiv\vartheta_{00}(\tau,z)=\Theta_{2,2}(\tau,z)+\Theta_{0,2}(\tau,z)=\sum_{n\in\mathbb{Z}}\mathbf{z}^{2(n+\frac{1}{2})}q^{2(n+\frac{1}{2})^{2}}+\mathbf{z}^{2n}q^{2n^{2}}=\sum_{n\in\mathbb{Z}}\mathbf{z}^{n}q^{\frac{n^{2}}{2}},\\
\theta_{4}(\mathbf{z};q)&\equiv\vartheta_{01}(\tau,z)=-\Theta_{2,2}(\tau,z)+\Theta_{0,2}(\tau,z)=\sum_{n\in\mathbb{Z}}-\mathbf{z}^{2(n+\frac{1}{2})}q^{2(n+\frac{1}{2})^{2}}+\mathbf{z}^{2n}q^{2n^{2}}=\sum_{n\in\mathbb{Z}}(-1)^{n}\mathbf{z}^{n}q^{\frac{n^{2}}{2}},\\ \theta_{2}(\mathbf{z};q)&\equiv\vartheta_{10}(\tau,z)=\Theta_{1,2}(\tau,z)+\Theta_{-1,2}(\tau,z)=\sum_{n\in\mathbb{Z}}\mathbf{z}^{2(n+\frac{1}{4})}q^{2(n+\frac{1}{4})^{2}}+\mathbf{z}^{2(n-\frac{1}{4})}q^{2(n-\frac{1}{4})^{2}}=\sum_{n\in\mathbb{Z}}\mathbf{z}^{n+\frac{1}{2}}q^{\frac{(n+\frac{1}{2})^{2}}{2}},\\ \theta_{1}(\mathbf{z};q)&\equiv-\vartheta_{11}(\tau,z)=i\Theta_{1,2}(\tau,z)-i\Theta_{-1,2}(\tau,z)=\sum_{n\in\mathbb{Z}}i\mathbf{z}^{2(n+\frac{1}{4})}q^{2(n+\frac{1}{4})^{2}}-i\mathbf{z}^{2(n-\frac{1}{4})}q^{2(n-\frac{1}{4})^{2}}\\ &=\sum_{n\in\mathbb{Z}}e^{\pi i(n+\frac{1}{2})}\mathbf{z}^{n+\frac{1}{2}}q^{\frac{(n+\frac{1}{2})^{2}}{2}}.\end{split} \tag{126}\]

Using \(\eta(q)=q^{\frac{1}{24}}\prod_{n=1}^{\infty}(1-q^{n})\), one obtains the following infinite product identities:

\[\begin{split}\prod_{n=1}^{\infty}(1+\mathbf{z}q^{n-\frac{1}{2}})(1+\mathbf{z}^{-1}q^{n-\frac{1}{2}})&=q^{\frac{1}{24}}\frac{\vartheta_{00}(\tau,z)}{\eta(q)},\\ \prod_{n=1}^{\infty}(1-\mathbf{z}q^{n-\frac{1}{2}})(1-\mathbf{z}^{-1}q^{n-\frac{1}{2}})&=q^{\frac{1}{24}}\frac{\vartheta_{01}(\tau,z)}{\eta(q)},\\ \prod_{n=1}^{\infty}(1+\mathbf{z}q^{n})(1+\mathbf{z}^{-1}q^{n-1})&=q^{-\frac{1}{12}}\mathbf{z}^{-\frac{1}{2}}\frac{\vartheta_{10}(\tau,z)}{\eta(q)},\\ \prod_{n=1}^{\infty}(1-\mathbf{z}q^{n})(1-\mathbf{z}^{-1}q^{n-1})&=-iq^{-\frac{1}{12}}\mathbf{z}^{-\frac{1}{2}}\frac{\vartheta_{11}(\tau,z)}{\eta(q)},\end{split} \tag{127}\]

where, as always, \(q=e^{2\pi i\tau}\).

## Appendix B Proof of Proposition 4.4

The calculation of (70):

\[\begin{split}T_{-}^{a}T_{0}^{b}T_{+}^{d}P(E_{1}(n,k))&=T_{-}^{a}T_{0}^{b}T_{+}^{d}\left(\prod_{r=1}^{n}\prod_{s=1}^{k-1}G_{-r-st}\right)T_{+}^{n}\\ &=T_{-}^{a}\left(\prod_{r=1}^{n}\prod_{s=1}^{k-1}G_{-r-st-d}\right)T_{0}^{b}T_{+}^{d+n}\\ &=T_{-}^{a}\left(\prod_{r=1}^{n}\prod_{s=1}^{k-1}\left(T_{-}T_{+}-(-r-st-d)T_{0}+(-r-st-d)(-r-st-d+1)\right)\right)T_{0}^{b}T_{+}^{d+n}\\ &=T_{-}^{a}\left(\prod_{r=1}^{n}\prod_{s=1}^{k-1}(r+st+d)(T_{0}+r+st+d-1)\right)T_{0}^{b}T_{+}^{n+d}\mod T_{-}U(L_{0}).\end{split}\]

The calculation of (71):

\[\begin{split}T_{-}^{a}T_{0}^{b}T_{+}^{m}P(E_{2}(n,k))&=T_{-}^{a}T_{0}^{b}T_{+}^{m}\left(\prod_{i=1}^{p-n}G_{-i}\right)\prod_{r=0}^{p-n-1}\prod_{s=1}^{q-k}G_{r+st-p+n}\\ &=T_{-}^{a}T_{0}^{b}T_{+}^{m}\prod_{r=0}^{p-n-1}\prod_{s=0}^{q-k}G_{r+st-p+n}\\ &=T_{-}^{a}\prod_{r=0}^{p-n-1}\prod_{s=0}^{q-k}G_{r+st-p+n-m}T_{0}^{b}T_{+}^{m}\\ &=T_{-}^{a}\prod_{r=1}^{p-n}\prod_{s=0}^{q-k}G_{st-m-r}T_{0}^{b}T_{+}^{m}\\ &=T_{-}^{a}\prod_{r=1}^{p-n}\prod_{s=0}^{q-k}(st-m-r)(-T_{0}+st-m-r+1)T_{0}^{b}T_{+}^{m}\mod T_{-}U(L_{0}).\end{split}\]
2303.16706
A Lie theoretic approach to the twisting procedure and Maurer-Cartan simplicial sets over arbitrary rings
The Deligne-Getzler-Hinich--$\infty$-groupoid or Maurer-Cartan simplicial set of an $L_\infty$-algebra plays an important role in deformation theory and many other areas of mathematics. Unfortunately, this construction only works over a field of characteristic $0$. The goal of this paper is to show that the notions of Maurer-Cartan equation and Maurer-Cartan simplicial set can be defined for a much larger number of operads than just the $L_\infty$-operad. More precisely, we show that the Koszul dual of every unital Hopf cooperad (a cooperad in the category of unital associative algebras) with an arity $0$ operation admits a twisting procedure, a natural notion of Maurer-Cartan equation and under some mild additional assumptions can also be integrated to a Maurer-Cartan simplicial set. In particular, we show that the Koszul dual of the Barratt-Eccles operad and its $E_n$-suboperads admit Maurer-Cartan simplicial sets. In this paper, we will work over arbitrary rings.
Niek de Kleijn, Felix Wierstra
2023-03-29T13:55:58Z
http://arxiv.org/abs/2303.16706v1
A Lie theoretic approach to the twisting procedure and Maurer-Cartan simplicial sets over arbitrary rings

###### Abstract

The Deligne-Getzler-Hinich-\(\infty\)-groupoid or Maurer-Cartan simplicial set of an \(L_{\infty}\)-algebra plays an important role in deformation theory and many other areas of mathematics. Unfortunately, this construction only works over a field of characteristic \(0\). The goal of this paper is to show that the notions of Maurer-Cartan equation and Maurer-Cartan simplicial set can be defined for a much larger number of operads than just the \(L_{\infty}\)-operad. More precisely, we show that the Koszul dual of every unital Hopf cooperad (a cooperad in the category of unital associative algebras) with an arity \(0\) operation admits a twisting procedure, a natural notion of Maurer-Cartan equation and under some mild additional assumptions can also be integrated to a Maurer-Cartan simplicial set. In particular, we show that the Koszul dual of the Barratt-Eccles operad and its \(\mathcal{E}_{n}\)-suboperads admit Maurer-Cartan simplicial sets. In this paper, we will work over arbitrary rings.

## 1 Introduction

Lie algebras up to homotopy, better known as \(L_{\infty}\)-algebras, play an important role in many areas of mathematics, like deformation quantization and deformation theory, mathematical physics, symplectic geometry, rational homotopy theory and many others. It is a well known philosophy that, over a field of characteristic \(0\), all deformation problems are controlled by the Maurer-Cartan elements in an \(L_{\infty}\)-algebra. This philosophy goes back to Deligne, Drinfeld, Feigin, Hinich, Kontsevich-Soibelman, Manetti, and many others, and was made precise by Lurie in [21] and Pridham in [26]. The information about such a deformation problem can be conveniently organized in a simplicial set, which is known as the Deligne-Getzler-Hinich-\(\infty\)-groupoid, Maurer-Cartan simplicial set or nerve. This simplicial set was first defined by Getzler in the case of \(L_{\infty}\)-algebras in [13].

When working over fields of general characteristic, the situation is a lot more complicated. Deformation problems or formal moduli problems are no longer controlled by \(L_{\infty}\)-algebras but by \(E_{n}\)-algebras. More explicitly, Lurie showed that for \(n<\infty\), there is an equivalence between the \(\infty\)-category of formal \(E_{n}\)-moduli problems and the \(\infty\)-category of augmented \(E_{n}\)-algebras. This was later extended to \(n=\infty\) by Brantner and Mathew [5] and Brantner, Campos and Nuiten [4]. Although all these approaches are great from a theoretical perspective, there has so far been no analog of the Deligne-Getzler-Hinich-\(\infty\)-groupoid. By this we mean a simplicial set which encodes the homotopical information of the deformation problem, in which the simplices (Maurer-Cartan elements) are defined by explicit equations and operations like the twist are defined by explicit formulas. In [6], we made a first step towards this by showing that \(A_{\infty}\)-algebras (also known as \(E_{1}\)-algebras) admit a Maurer-Cartan simplicial set. In this paper, we show that a much larger class of operads admits a natural notion of Maurer-Cartan equation, twisting procedure and Maurer-Cartan simplicial set.

To construct our Maurer-Cartan simplicial set over a general ring \(R\), we use Kontsevich's perspective of formal pointed manifolds from [18] and generalize it to what could be seen as "formal pointed Lie groups".
Over a field of characteristic \(0\), Kontsevich defined an \(L_{\infty}\)-algebra on a graded vector space \(V\) as a degree \(-1\) coderivation \(Q:C(V)\to C(V)\) which squares to zero, where \(C(V)\) denotes the cofree conilpotent cocommutative coalgebra cogenerated by \(V\). The coderivation \(Q\) is seen as a "vector field" on \(C(V)\). In this paper, we generalize this idea by taking more general cooperads \(\mathcal{C}\) instead of the cocommutative cooperad. If we furthermore assume that \(\mathcal{C}\) is a unital Hopf cooperad (a cooperad in the category of unital associative algebras) which admits an arity zero operation encoding the counit, then we define an \(\Omega\mathcal{C}\)-algebra on a graded \(R\)-module \(V\) as a square-zero coderivation \(Q:\mathcal{C}(V)\to\mathcal{C}(V)\) on the cofree conilpotent \(\mathcal{C}\)-coalgebra cogenerated by \(V\). Here, \(\Omega\mathcal{C}\) denotes the operadic cobar construction on \(\mathcal{C}\), and the correspondence between \(\Omega\mathcal{C}\)-algebras and coderivations follows from the Rosetta Stone Theorem (see [20], Theorem 10.1.13). To ensure that certain infinite sums converge, we need to introduce filtrations and restrict ourselves to complete or pro-nilpotent \(\Omega\mathcal{C}\)-algebras. To ensure that the theory works over arbitrary rings, we further need to assume that in each arity \(r\), \(\mathcal{C}(r)\) is free as an \(R[\mathbf{S}_{r}]\)-module, where \(\mathbf{S}_{r}\) denotes the symmetric group on \(r\) elements.

The class of cooperads that satisfy these assumptions is quite large. For example, it contains the normalized cochains on any operad \(\mathcal{P}\) in simplicial sets such that \(\mathcal{P}(0)=*\), \(\mathcal{P}\) has a free symmetric group action in each arity, and \(\mathcal{P}\) has finitely many non-degenerate simplices in each arity. This is because the cochains on such an operad carry the cochain-level cup product, which makes them a unital Hopf cooperad. Some particular examples of such operads are the commutative operad, the associative operad, the Barratt-Eccles operad and the \(\mathcal{E}_{n}\)-suboperads of the Barratt-Eccles operad.

By a theorem of Moerdijk [24], we can equip \(\mathcal{C}(V)\) with an associative product

\[\star:\mathcal{C}(V)\otimes\mathcal{C}(V)\to\mathcal{C}(V)\]

which is a generalization of the shuffle product on the cofree conilpotent cocommutative coalgebra and is therefore called the generalized shuffle product. This product turns \(\mathcal{C}(V)\) into a \(\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebra, which can be interpreted as the analog of a Lie group, where \(\star\) corresponds to the group structure and the differential \(Q:\mathcal{C}(V)\to\mathcal{C}(V)\) corresponds to the manifold structure. The chain complex \(V\) corresponds in this case to the tangent space or "Lie algebra". This "Lie group" has many similarities with classical Lie groups, and we generalize certain concepts from classical Lie theory to this new setting. We start by defining the analog of the exponential map, which is a map

\[\exp:V_{0}\to\mathcal{C}(V)\]

from the "Lie algebra" \(V\) to the "Lie group" \(\mathcal{C}(V)\) which shares many properties with the ordinary exponential map. This exponential map allows us to define a twisting procedure which twists the differential \(Q:\mathcal{C}(V)\to\mathcal{C}(V)\) by a degree \(0\) element \(v\in V_{0}\), in a way that is analogous to the adjoint representation of a classical Lie group.
More precisely, if \(x\in\mathcal{C}(V)\), then the twisted differential \(Q^{v}:\mathcal{C}(V)\to\mathcal{C}(V)\) is defined as

\[Q^{v}(x):=\exp(-v)\star Q(\exp(v)\star x).\]

The Maurer-Cartan equation then naturally follows as a flatness condition for this twist. This further gives an alternative to the twisting procedure from [8], one which works for differential graded operads and over arbitrary rings instead of fields of characteristic \(0\).

If the cooperad \(\mathcal{C}\) comes equipped with a map \(\mathcal{E}_{\infty}\to\mathcal{C}\), where \(\mathcal{E}_{\infty}\) denotes the cochain cooperad of the Barratt-Eccles operad, we can integrate every \(\Omega\mathcal{C}\)-algebra to a simplicial set which we call the Maurer-Cartan simplicial set. The main examples which satisfy this requirement are the Barratt-Eccles cooperad and its \(E_{n}\)-subcooperads. We finish by showing that this simplicial set is a Kan complex and therefore an \(\infty\)-groupoid.

### Acknowledgements

The authors would like to thank Daniel Robert-Nicoud, Martin Markl, Igor Khavkine, Dion Leijnse, Noah Olander, Sergey Shadrin and Jose Moreno-Fernandez for useful conversations and comments. The second author is supported by Dutch Research Organisation (NWO) grant number VI.Veni.202.046.

## 2 Preliminaries and conventions

In this section, we recall the necessary preliminaries and establish our notation and conventions.

### 2.1 Conventions and notation

In this paper, we work in the category of chain complexes over a general ring \(R\) with unit. We use a homological grading, so the differentials are of degree \(-1\). Unless stated otherwise, all tensor products are taken over \(R\) and we implicitly assume the Koszul sign rule, i.e. if \(V\) and \(W\) are two chain complexes, then the switch map \(V\otimes W\to W\otimes V\) introduces the sign \((-1)^{|v||w|}\) on homogeneous elements \(v\otimes w\). The chain complex of \(R\)-linear maps between two chain complexes \(V\) and \(W\) is denoted by \(\hom_{R}(V,W)\); we often drop the \(R\) and write just \(\hom(V,W)\). Let \(G\) be a finite group and \(V\) a chain complex with a linear \(G\)-action; then we denote the coinvariants with respect to the \(G\)-action by \(V_{G}\) and the invariants by \(V^{G}\). The symmetric group on \(r\) elements is denoted by \(\mathbf{S}_{r}\); by convention, both \(\mathbf{S}_{1}\) and \(\mathbf{S}_{0}\) are the trivial group.

### 2.2 Conventions on operads and cooperads

We assume that the reader is familiar with the basic theory of algebraic operads and cooperads, and otherwise refer the reader to [9], [10] and [20]. Although the book [20] is written in characteristic \(0\), most of its results hold over arbitrary rings with only small modifications, which are explained later in this paper. Unless stated otherwise, all operads are in chain complexes over a ring \(R\) (except for Section 6, where we also consider operads in simplicial sets). Since we work over a general ring \(R\), and not over a field of characteristic \(0\), there are a few conventions we need in this paper that are not entirely standard, and we recall those here. For more details, we mainly refer to [10], which is one of the few references that works in this generality.

The unit operad and the unit cooperad are both denoted by \(I\) and defined as \(I(r)=0\) if \(r\neq 1\) and \(I(1)=R\), with the appropriate (co)operad structure. From the context it should be clear whether we mean the operad or the cooperad.
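Since the Koszul sign rule from Section 2.1 governs all of the signs appearing below, it may help to make it completely explicit. The following is a minimal illustrative sketch (our own toy code, not taken from any of the cited references): it computes the sign produced when homogeneous tensor factors are permuted, with each crossing of two factors of degrees \(p\) and \(q\) contributing \((-1)^{pq}\).

```python
def koszul_sign(degrees, perm):
    """Sign for rearranging x_0 (x) ... (x) x_{n-1} (with the given degrees)
    so that position i of the output holds the original factor perm[i];
    each crossing of factors of degrees p, q contributes (-1)**(p*q)."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:  # these two factors crossed each other
                sign *= (-1) ** (degrees[perm[i]] * degrees[perm[j]])
    return sign

# The switch map V (x) W -> W (x) V on two odd-degree elements gives a minus sign:
assert koszul_sign([1, 1], [1, 0]) == -1
# ... but not when one of the factors has even degree:
assert koszul_sign([2, 1], [1, 0]) == 1
```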
An operad \(\mathcal{P}\) is called augmented if there is an additional map \(\epsilon:\mathcal{P}\to I\) which is a splitting for the operadic unit. A cooperad is coaugmented if there is a splitting \(\eta:I\to\mathcal{C}\) for the cooperadic counit. An operad \(\mathcal{P}\) (resp. cooperad \(\mathcal{C}\)) is called unitary if \(\mathcal{P}(0)=R\) (resp. \(\mathcal{C}(0)=R\)); the unitary operation will be denoted by \(\mathbb{1}\). We denote unitary operads by \(u\mathcal{P}\) (resp. unitary cooperads by \(u\mathcal{C}\)) to indicate that they are unitary. Every unitary operad (resp. cooperad) has a non-unitary suboperad (resp. non-unitary quotient cooperad), which is denoted by \(\mathcal{P}\) (resp. \(\mathcal{C}\)) and is defined as \(\mathcal{P}(0):=0\) and \(\mathcal{P}(r):=u\mathcal{P}(r)\) for \(r\geq 1\) (resp. \(\mathcal{C}(0):=0\) and \(\mathcal{C}(r):=u\mathcal{C}(r)\) for \(r\geq 1\); the coproduct is called the reduced coproduct). A unitary operad \(u\mathcal{P}\) (resp. unitary cooperad \(u\mathcal{C}\)) is called reduced if \(u\mathcal{P}(1)=R\) is spanned by the operadic unit (resp. \(u\mathcal{C}(1)=R\) is spanned by the cooperadic counit). A non-unitary operad \(\mathcal{P}\) (resp. cooperad \(\mathcal{C}\)) is called reduced if \(\mathcal{P}(0)=\mathcal{P}(1)=0\) (resp. \(\mathcal{C}(0)=\mathcal{C}(1)=0\)). Note that every reduced operad (resp. cooperad) is canonically augmented (resp. coaugmented). The non-unitary part of every reduced unitary operad has a natural weight grading which coincides with the arity grading.

We further assume for simplicity that all operads and cooperads we consider are of finite \(R\)-type, i.e. they are finitely generated \(R\)-modules in each degree and in each arity. All operads and cooperads are assumed to be projective as \(R\)-modules in each arity. Finally, we also assume that the non-unitary part of all cooperads is conilpotent. With these assumptions we can freely dualize between operads and cooperads. Note that the dual of a unitary operad is usually not conilpotent.

### 2.3 (Co)Algebras over (co)operads

In this section, we recall the details about (co)algebras over (co)operads and their up-to-homotopy versions.

#### 2.3.1 Invariants vs coinvariants

Since we are working over general rings instead of a field of characteristic \(0\), there are a few subtleties in the definition of algebras over symmetric operads. The main issue is that, when working over an arbitrary ring, there are two ways to handle the symmetric group actions: we can take either invariants or coinvariants with respect to these actions. As is explained in [9], over a field of characteristic \(0\) these choices are isomorphic, but over an arbitrary ring they can be very different. In this section, we recall the necessary definitions and fix our conventions about algebras and coalgebras over operads and cooperads; for more details see [9].

Given two symmetric sequences \(M\) and \(N\), the tensor product of \(M\) and \(N\) is defined as

\[\left(M\otimes N\right)(r)=\bigoplus_{i+j=r}\operatorname{Ind}_{\mathbf{S}_{i}\times\mathbf{S}_{j}}^{\mathbf{S}_{r}}M(i)\otimes N(j),\]

where \(\operatorname{Ind}\) denotes the induced representation. Besides the tensor product, there are also two versions of the composition product.
The first version, which is just called the composition product, is defined as

\[M\circ N=\bigoplus_{r\geq 0}\left(M(r)\otimes N^{\otimes r}\right)_{\mathbf{S}_{r}}.\]

The composition product with divided symmetries is defined as

\[M\bar{\circ}N=\bigoplus_{r\geq 0}\left(M(r)\otimes N^{\otimes r}\right)^{\mathbf{S}_{r}},\]

so instead of using coinvariants we are using invariants with respect to the symmetric groups \(\mathbf{S}_{r}\).

Using these two versions of the composition product, we can define algebras (resp. coalgebras) over operads (resp. cooperads); again, there are two different versions here, one using invariants and one using coinvariants. Let \(V\) be a chain complex and \(\mathcal{P}\) an operad. Then we call \(V\) a \(\mathcal{P}\)-algebra if there exists a map

\[\gamma_{V}:\mathcal{P}\circ V\to V\]

satisfying the usual axioms. The chain complex \(V\) is called a \(\mathcal{P}\)-algebra with divided symmetries if there exists a map

\[\tilde{\gamma_{V}}:\mathcal{P}\bar{\circ}V\to V,\]

again satisfying the usual axioms.

Let \(\mathcal{C}\) be a (non-unitary) reduced cooperad; the cofree conilpotent \(\mathcal{C}\)-coalgebra on a chain complex \(V\), denoted \(\mathcal{C}(V)\), is defined as

\[\mathcal{C}(V):=\bigoplus_{r\geq 1}\left(\mathcal{C}(r)\otimes V^{\otimes r}\right)_{\mathbf{S}_{r}}.\]

Note that the direct sum starts at \(1\) because we assumed that \(\mathcal{C}\) is reduced. The coproduct is induced by the cooperad structure of \(\mathcal{C}\) (see [20] for more details).

Let \(M\) and \(N\) be two symmetric sequences and assume that \(M(r)\) is free as an \(R[\mathbf{S}_{r}]\)-module in each arity \(r\geq 0\); then there is an isomorphism between the composition product and the composition product with divided symmetries. This isomorphism is induced by the norm map. If \(X\) is a chain complex with an \(\mathbf{S}_{r}\)-action, then there is a natural map from the coinvariants to the invariants, called the norm map, given by

\[Tr:X_{\mathbf{S}_{r}}\to X^{\mathbf{S}_{r}},\qquad Tr(x)=\sum_{\sigma\in\mathbf{S}_{r}}\sigma x,\]

with \(x\in X_{\mathbf{S}_{r}}\). The norm map is in general not an isomorphism, but if \(X\) is free as an \(R[\mathbf{S}_{r}]\)-module, then it is. When \(\mathbb{Q}\subseteq R\), there is a variation of the norm map, given by \(Tr(x)=\sum_{\sigma\in\mathbf{S}_{r}}\frac{\sigma x}{r!}\), which is always an isomorphism. Since we assumed that \(M\) is free as an \(R[\mathbf{S}_{r}]\)-module, the norm map induces an isomorphism

\[Tr_{M,N}:M\circ N\to M\bar{\circ}N. \tag{1}\]

So in case \(\mathcal{P}\) (resp. \(\mathcal{C}\)) is an operad (resp. cooperad) which is free as an \(R[\mathbf{S}_{r}]\)-module in each arity, there is no difference between algebras (resp. coalgebras) and algebras with divided symmetries (resp. coalgebras with divided symmetries).
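To illustrate the difference between invariants and coinvariants, here is a small toy implementation of the norm map (our own illustrative code; basis tensors are tuples, formal sums are dictionaries, and all factors are placed in even degree so that no Koszul signs appear):

```python
from itertools import permutations

def act(sigma, tensor):
    # permute the factors of a basis tensor by sigma in S_r
    return tuple(tensor[sigma[i]] for i in range(len(tensor)))

def norm(tensor, r):
    # Tr(x) = sum over sigma in S_r of sigma.x, as a formal sum {basis: coeff}
    result = {}
    for sigma in permutations(range(r)):
        key = act(sigma, tensor)
        result[key] = result.get(key, 0) + 1
    return result

# On a free orbit the norm map hits each orbit element once:
print(norm(('v', 'w'), 2))  # {('v', 'w'): 1, ('w', 'v'): 1}
# On a non-free orbit it produces a factor 2, so e.g. over F_2 it is not
# an isomorphism -- this is why we assume freeness of the S_r-actions:
print(norm(('v', 'v'), 2))  # {('v', 'v'): 2}
```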
#### 2.3.2 Homotopy algebras and the bar and cobar construction

In this section, we discuss algebras up to homotopy and set up the situation in which things can be twisted. Let \(\mathcal{P}\) be a reduced operad (without unitary operation); if \(\mathbb{Q}\nsubseteq R\), we further assume that the action of \(\mathbf{S}_{r}\) is free in each arity \(r\). Let \(\mathcal{C}\) be a reduced cooperad (also non-unitary) together with a Koszul twisting morphism \(\tau:\mathcal{C}\to\mathcal{P}\). Then we define a \(\mathcal{P}_{\infty}\)-algebra as an algebra over \(\Omega\mathcal{C}\), the operadic cobar construction of \(\mathcal{C}\).

**Remark 2.1**.: _This definition varies a little bit from the definition of a \(\mathcal{P}_{\infty}\)-algebra in [20]: in their definition, the operad \(\mathcal{P}\) would be assumed to be quadratic and Koszul, and \(\mathcal{C}\) would be the quadratic dual. For preciseness, the cooperad \(\mathcal{C}\) could have been included in the notation of a \(\mathcal{P}_{\infty}\)-algebra, but in this paper it will always be clear which cooperad \(\mathcal{C}\) we refer to._

Alternatively, we can also define a \(\mathcal{P}_{\infty}\)-algebra on a graded \(R\)-module \(V\) as a square-zero coderivation on the cofree conilpotent \(\mathcal{C}\)-coalgebra with divided symmetries \(\mathcal{C}(V)\). This follows from the Rosetta Stone Theorem (Theorem 10.3.1 in [20]); because we assumed that all the symmetric group actions are free, it extends to arbitrary rings.

**Proposition 2.1**.: _Let \(V\) be a chain complex and \(\tau:\mathcal{C}\to\mathcal{P}\) a Koszul twisting morphism between a reduced cooperad \(\mathcal{C}\) and a reduced operad \(\mathcal{P}\); if \(\mathbb{Q}\nsubseteq R\), further assume that \(\mathcal{C}\) and \(\mathcal{P}\) are free as \(R[\mathbf{S}_{r}]\)-modules in each arity. Then the notion of a \(\mathcal{P}_{\infty}\)-algebra, i.e. a morphism of operads \(\gamma:\mathcal{P}_{\infty}=\Omega\mathcal{C}\to\operatorname{End}(V)\), is equivalent to a square-zero coderivation \(Q:\mathcal{C}(V)\to\mathcal{C}(V)\) of degree \(-1\), where \(\mathcal{C}(V)\) is the cofree conilpotent \(\mathcal{C}\)-coalgebra._

Since \(\mathcal{C}(V)\) is cofree, it further turns out that the coderivation \(Q\) is determined by its image on the cogenerators.

**Lemma 2.1**.: _Under the assumptions of Proposition 2.1, the coderivation \(Q:\mathcal{C}(V)\to\mathcal{C}(V)\) is determined by \(\tilde{Q}:\mathcal{C}(V)\to V\), the projection onto the cogenerators of \(\mathcal{C}(V)\)._

The proof is completely analogous to the proof of Proposition 6.3.8 of [20] and is therefore omitted. The converse of the lemma is not true: not every map \(f:\mathcal{C}(V)\to V\) determines a square-zero coderivation. Every degree \(-1\) map \(f:\mathcal{C}(V)\to V\) does determine a coderivation, but this coderivation might not square to zero.

The map \(\tilde{Q}:\mathcal{C}(V)\to V\) can be decomposed to define the set of multiplications on \(V\). We denote the arity-\(r\) component by \(\tilde{Q}_{r}:\mathcal{C}(r)\otimes V^{\otimes r}\to V\). Every element \(\delta\in\mathcal{C}(r)\) defines a multiplication \(Q_{\delta}:V^{\otimes r}\to V\), obtained by first taking the inclusion \(\mathcal{C}(r)\otimes V^{\otimes r}\hookrightarrow\bigoplus_{r\geq 1}\left(\mathcal{C}(r)\otimes V^{\otimes r}\right)_{\mathbf{S}_{r}}\) and then applying \(\tilde{Q}\).

#### 2.3.3 Filtrations and completions

In the rest of this paper, we need to consider certain infinite sums; we therefore need to introduce certain filtrations. We mainly use the conventions from [6] (except that in this paper we use a homological grading instead of the cohomological one in [6]). Let \(V\) be a chain complex; from now on, we assume that \(V\) is equipped with a descending filtration

\[V=F^{1}V\supset F^{2}V\supset F^{3}V\supset\ldots,\]

which is respected by the differential and satisfies \(\cap_{k}F^{k}V=\{0\}\). We further assume that the filtration is complete, i.e. \(V=\varprojlim V/F^{i}V\).
The tensor product of two filtered chain complexes is defined by first taking the tensor product of the filtrations and then completing with respect to this induced filtration (for more details see [12], Section 7.3). We further need that if \(W\) is a finitely generated chain complex and \(V\) a filtered chain complex, then \(\hom(W,V)\) is also filtered, with filtration

\[F^{n}\hom(W,V):=\hom(W,F^{n}V).\]

If the filtration on \(V\) is complete, then so is the filtration on \(\hom(W,V)\). Similarly, we can equip the tensor product of a filtered chain complex \(V\) and an unfiltered finitely generated chain complex \(W\) with the filtration

\[F^{n}(W\otimes V):=W\otimes F^{n}V.\]

Again, if the filtration on \(V\) is complete, then so is the filtration on \(W\otimes V\). Using this completed tensor product, the \(r\)-fold tensor product of a dg-\(R\)-module \(V\) with itself also becomes a complete \(R\)-module. For a cooperad \(\mathcal{C}\), we can extend this induced filtration to \(\mathcal{C}(V)\). An \(\Omega\mathcal{C}\)-algebra is called complete or pro-nilpotent if the square-zero coderivation \(\tilde{Q}:u\mathcal{C}(V)\to V\) respects the filtrations, i.e. if

\[\tilde{Q}_{r}\left(\mathcal{C}(r)\otimes F^{i_{1}}V\otimes\ldots\otimes F^{i_{r}}V\right)\subseteq F^{i_{1}+\ldots+i_{r}}V.\]

**Convention 2.1**.: _From now on, we will always assume that the objects we are working with have a filtration and are complete with respect to this filtration._

#### 2.3.4 Curved \(\mathcal{P}_{\infty}\)-algebras and \(\infty\)-morphisms

If we assume that \(u\mathcal{C}\) is a unitary reduced cooperad and that all our chain complexes are equipped with complete filtrations, then we can generalize \(\mathcal{P}_{\infty}\)-algebras to curved \(\mathcal{P}_{\infty}\)-algebras. Curved algebras were first introduced by Positselski in [25], where he showed the remarkable fact that there is a Koszul duality between unital algebras and curved coalgebras.

**Definition 2.1**.: _Let \(u\mathcal{C}\) be a unitary reduced cooperad with non-unitary part \(\mathcal{C}\), and let \(V\) be a filtered graded \(R\)-module. Since \(u\mathcal{C}\) is reduced, its non-unitary part \(\mathcal{C}\) automatically has a weight grading. We denote by \(\widehat{u\mathcal{C}(V)}\) the completion of \(\bigoplus_{r\geq 0}\left(u\mathcal{C}(r)\otimes V^{\otimes r}\right)_{\mathbf{S}_{r}}\) with respect to this weight grading, where we put \(\mathbb{1}\in u\mathcal{C}(0)\) in weight \(0\). We call this the completed conilpotent cofree \(\mathcal{C}\)-coalgebra._

By using the cooperadic decomposition map, \(u\mathcal{C}(V)\) becomes a \(u\mathcal{C}\)-coalgebra.

**Definition 2.2**.: _Let \(u\mathcal{C}\) be a unitary cooperad and \(\mathcal{C}\) its non-unitary part. A curved \(\Omega\mathcal{C}\)-algebra on a graded \(R\)-module \(V\) is defined as a square-zero coderivation \(Q:u\mathcal{C}(V)\to u\mathcal{C}(V)\). Similar to Lemma 2.1, every square-zero coderivation is determined by its image on the cogenerators, \(\tilde{Q}:u\mathcal{C}(V)\to V\)._

_The curvature \(\mathcal{R}\) of a curved \(\Omega\mathcal{C}\)-algebra \((V,\tilde{Q}:u\mathcal{C}(V)\to V)\) is defined as \(\mathcal{R}:=\tilde{Q}(\mathbb{1})\in V\), where \(\mathbb{1}\) is the unitary operation of \(u\mathcal{C}\). We call a curved \(\Omega\mathcal{C}\)-algebra flat if \(\mathcal{R}=\tilde{Q}(\mathbb{1})=0\)._

**Remark 2.2**.: _For simplicity, we have chosen to only use the definition of curved \(\Omega\mathcal{C}\)-algebras as coderivations on the cofree conilpotent unitary \(u\mathcal{C}\)-coalgebra._
_There is also a theory of curved algebras using curved operads; it seems plausible that most of the theory would also work in that description, but that is beyond the scope of this paper. For more details, see for example [1]._

For the definition of the exponential map, we further need the notion of the tangent vector of an element.

**Definition 2.3**.: _Let \(\tilde{Q}:u\mathcal{C}(V)\to V\) be a curved complete \(\Omega\mathcal{C}\)-algebra; then the cooperadic counit map \(\epsilon:u\mathcal{C}\to I\) induces a map \(T:u\mathcal{C}(V)\to I(V)\cong V\), where \(I(V)\) is the cofree coalgebra over the cooperad \(I\). We call this the tangent space of \(u\mathcal{C}(V)\). If \(v\in u\mathcal{C}(V)\), then we call \(T(v)\) the tangent vector of \(v\)._

**Lemma 2.2**.: _Let \(V\) be a flat curved \(\Omega\mathcal{C}\)-algebra; then \(V\) is an algebra over the operad \(\Omega\mathcal{C}\)._

Proof.: Since \(V\) is a flat curved \(\Omega\mathcal{C}\)-algebra, it is defined by a coderivation \(\tilde{Q}:u\mathcal{C}(V)\to V\) with \(\tilde{Q}(\mathbb{1})=0\). We can therefore restrict the coderivation \(\tilde{Q}\) to a coderivation \(\tilde{Q}^{\prime}:\mathcal{C}(V)\to V\); this coderivation also squares to zero, because \(\tilde{Q}\) squares to zero and \(\mathcal{R}=0\). It then follows from the Rosetta Stone (Proposition 2.1) that this is an \(\Omega\mathcal{C}\)-algebra. 

**Definition 2.4**.: _A curved \(\Omega\mathcal{C}\)-algebra is called nilpotent if there exists an \(N\in\mathbb{N}\) such that the arity-\(r\) part of the coderivation \(\tilde{Q}:u\mathcal{C}(V)\to V\) is zero for all elements of arity \(r>N\)._

Curved \(\Omega\mathcal{C}\)-algebras have two types of morphisms: strict morphisms and \(\infty\)-morphisms. Let \(A\) and \(B\) be two curved \(\Omega\mathcal{C}\)-algebras together with coderivations \(\tilde{Q}^{A}:u\mathcal{C}(A)\to A\) and \(\tilde{Q}^{B}:u\mathcal{C}(B)\to B\). A strict morphism is then a linear map \(f:A\to B\) that strictly commutes with all structure maps and filtrations, i.e. if \(a_{1},\ldots,a_{r}\in A\) and \(\delta\in u\mathcal{C}(r)\), then \(f\tilde{Q}^{A}_{r}\left(\delta\otimes a_{1}\otimes\ldots\otimes a_{r}\right)=\tilde{Q}^{B}_{r}\left(\delta\otimes f(a_{1})\otimes\ldots\otimes f(a_{r})\right)\).

The more interesting notion of a morphism is called an \(\infty\)-morphism and is defined as follows. An \(\infty\)-morphism \(\Phi:A\rightsquigarrow B\) is a \(u\mathcal{C}\)-coalgebra map

\[\Phi:u\mathcal{C}(A)\to u\mathcal{C}(B)\]

such that \(\Phi\circ Q^{A}=Q^{B}\circ\Phi\). Since \(u\mathcal{C}(B)\) is cofree, this is equivalent to a sequence of maps

\[\phi_{r}:u\mathcal{C}(r)\otimes A^{\otimes r}\to B,\]

satisfying a certain equation coming from the cooperadic decomposition map. We call the map \(\phi_{r}\) the arity-\(r\) component of \(\Phi\). Note that an \(\infty\)-morphism for which \(\phi_{r}=0\) for all \(r\neq 1\) is the same as a strict morphism.

**Remark 2.3**.: _There is a second notion of an \(\infty\)-morphism, which is given by replacing \(u\mathcal{C}(A)\) and \(u\mathcal{C}(B)\) by their completions \(\widehat{u\mathcal{C}(A)}\) and \(\widehat{u\mathcal{C}(B)}\). These notions can be very different: for example, in the non-completed case the only grouplike element in \(u\mathcal{C}(A)\) (resp. \(u\mathcal{C}(B)\)) is \(\mathbb{1}_{A}\) (resp. \(\mathbb{1}_{B}\)), so \(\Phi(\mathbb{1}_{A})=\mathbb{1}_{B}\)._
_In the completed case, this is no longer true, and the homotopy theory of these algebras behaves very differently (see for example [16] in the case of modules). Since the non-completed notion of \(\infty\)-morphism coincides with the classical notion of \(\infty\)-morphism, we have chosen to only focus on this notion._

## 3 The (generalized) shuffle product

In this section, we recall a generalization of the shuffle product; but before we can do that, we first recall the notion of a Hopf (co)operad. For more details see for example [24] or [19].

### 3.1 Hopf (co)operads and the tensor product of (co)algebras

Let \(\mathcal{P}\) be an operad and \(A\) and \(B\) two \(\mathcal{P}\)-algebras. In general, the tensor product \(A\otimes B\) does not have the structure of a \(\mathcal{P}\)-algebra but only the structure of a \(\mathcal{P}\otimes\mathcal{P}\)-algebra, where \(\mathcal{P}\otimes\mathcal{P}\) denotes the aritywise tensor product of the operad \(\mathcal{P}\) with itself. There is a special class of operads, called Hopf operads, for which the tensor product has a natural \(\mathcal{P}\)-algebra structure. A Hopf operad \(\mathcal{P}\) is an operad in the category of coassociative coalgebras, i.e. in each arity \(r\) we have a coproduct \(\Delta_{r}:\mathcal{P}(r)\rightarrow\mathcal{P}(r)\otimes\mathcal{P}(r)\) such that the operadic composition maps commute with the coproducts. A Hopf operad is called counital if the coproducts \(\Delta_{r}\) are all counital.

Suppose that \(A\) and \(B\) are algebras over a Hopf operad \(\mathcal{P}\); then we can equip the tensor product \(A\otimes B\) with a \(\mathcal{P}\)-algebra structure, where the multiplication map

\[\mathcal{P}(r)\otimes\left(A\otimes B\right)^{\otimes r}\to A\otimes B\]

is given by the following composite:

\[\mathcal{P}(r)\otimes(A\otimes B)^{\otimes r}\xrightarrow{\Delta_{r}\otimes\operatorname{id}_{(A\otimes B)^{\otimes r}}}\mathcal{P}(r)\otimes\mathcal{P}(r)\otimes\left(A\otimes B\right)^{\otimes r}\xrightarrow{\tau}\mathcal{P}(r)\otimes A^{\otimes r}\otimes\mathcal{P}(r)\otimes B^{\otimes r}\xrightarrow{\gamma_{A}\otimes\gamma_{B}}A\otimes B,\]

where \(\tau\) is the map shuffling the tensor factors and \(\gamma_{A}\) and \(\gamma_{B}\) are the structure maps of \(A\) and \(B\).

Similarly, we can define Hopf cooperads; these are cooperads in the category of associative algebras. More precisely, a Hopf cooperad \(\mathcal{C}\) is a cooperad together with maps \(\mu_{r}:\mathcal{C}(r)\otimes\mathcal{C}(r)\to\mathcal{C}(r)\), such that each \(\mu_{r}\) forms an associative product on \(\mathcal{C}(r)\) and the maps \(\mu_{r}\) commute with the cooperadic decomposition. We remark that technically this should be called a co-Hopf cooperad (as it is called in [14]), but by a small abuse of terminology we just call it a Hopf cooperad. A Hopf cooperad is called unital if all the products \(\mu_{r}\) are unital; the units are denoted by \(\eta_{r}\in\mathcal{C}(r)\). Since the cooperadic decomposition maps respect the units, every unitary unital Hopf cooperad \(u\mathcal{C}\) comes equipped with a map \(\eta:u\mathcal{COCOM}\to u\mathcal{C}\), where \(u\mathcal{COCOM}\) is the unitary cocommutative cooperad. The map is given by sending the arity-\(r\) operation of \(u\mathcal{COCOM}\) to \(\eta_{r}\).
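In the simplest example, the associative operad is a Hopf operad, with \(\Delta_{r}\) the diagonal; unwinding the composite above for a binary operation then recovers the familiar componentwise product on the tensor product of two associative algebras. A minimal sketch (our own toy code, ignoring differentials and signs):

```python
def tensor_product_algebra(mul_A, mul_B):
    """Given the products of two associative algebras A and B, return the
    induced product on A (x) B: apply the diagonal Delta_2 to the binary
    operation, shuffle the factors, and multiply componentwise."""
    def mul(x, y):
        a, b = x
        a2, b2 = y
        return (mul_A(a, a2), mul_B(b, b2))
    return mul

# Example: Z (x) Z with both factors carrying ordinary multiplication
mul = tensor_product_algebra(lambda a, b: a * b, lambda a, b: a * b)
assert mul((2, 3), (5, 7)) == (10, 21)
```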
Similar to algebras, it turns out that the tensor product of two coalgebras with divided symmetries \(C\) and \(D\) over a Hopf cooperad \(\mathcal{C}\) is naturally a \(\mathcal{C}\)-coalgebra with divided symmetries. The coproduct

\[\delta_{r}^{C\otimes D}:C\otimes D\to\left(\mathcal{C}(r)\otimes\left(C\otimes D\right)^{\otimes r}\right)_{\mathbf{S}_{r}}\]

is defined dually to the algebra case and is explicitly given by the composite

\[C\otimes D\xrightarrow{\delta_{r}^{C}\otimes\delta_{r}^{D}}\left(\mathcal{C}(r)\otimes C^{\otimes r}\otimes\mathcal{C}(r)\otimes D^{\otimes r}\right)_{\mathbf{S}_{r}}\xrightarrow{\tau}\left(\mathcal{C}(r)\otimes\mathcal{C}(r)\otimes\left(C\otimes D\right)^{\otimes r}\right)_{\mathbf{S}_{r}}\xrightarrow{\mu_{r}\otimes\operatorname{id}_{\left(C\otimes D\right)^{\otimes r}}}\left(\mathcal{C}(r)\otimes\left(C\otimes D\right)^{\otimes r}\right)_{\mathbf{S}_{r}},\]

where \(\delta_{r}^{C}\) and \(\delta_{r}^{D}\) are the coproduct maps of \(C\) and \(D\) and \(\tau\) is the map that permutes the tensor factors.

### 3.2 The generalized shuffle product

It is a well-known fact that the counital cofree conilpotent cocommutative coalgebra on a chain complex \(V\) is not just a cocommutative coalgebra but also carries an additional product, called the shuffle product. With this shuffle product, the cofree conilpotent cocommutative coalgebra becomes a Hopf algebra. In this section, we recall a generalization of the shuffle product which is originally due to Moerdijk (see Example 2.3 of [24]). Moerdijk showed that the free algebra over a Hopf operad carries a coassociative coproduct. For the constructions in this paper, we need the dual of Moerdijk's construction, which we describe here. For more details see also Section 3.2.1 of [19].

Let \(u\mathcal{C}\) be a reduced unitary Hopf cooperad and \(V\) a chain complex. Since \(u\mathcal{C}\) is reduced, it is canonically coaugmented. Then \(u\mathcal{C}(V)\), the cofree conilpotent \(u\mathcal{C}\)-coalgebra on \(V\), has an associative product

\[\star:u\mathcal{C}(V)\otimes u\mathcal{C}(V)\to u\mathcal{C}(V)\]

which is called the generalized shuffle product and is defined as follows. Because \(u\mathcal{C}\) is a Hopf cooperad, the tensor product \(u\mathcal{C}(V)\otimes u\mathcal{C}(V)\) becomes a \(u\mathcal{C}\)-coalgebra as well. Since \(u\mathcal{C}(V)\) is cofree, we only need to specify the image of the product on the cogenerators of \(u\mathcal{C}(V)\). The coaugmentation \(\eta:I\to u\mathcal{C}\) of \(u\mathcal{C}\) induces a map \(V\cong I(V)\to u\mathcal{C}(V)\); by abuse of notation, we denote the image of this map by \(V\). The restriction to cogenerators \(\tilde{\star}:u\mathcal{C}(V)\otimes u\mathcal{C}(V)\to V\) of the generalized shuffle product is defined by

\[\mathbb{1}\star v=v\star\mathbb{1}=v,\]

for \(v\in V\), and zero otherwise. By cofreeness this extends to a product \(\star:u\mathcal{C}(V)\otimes u\mathcal{C}(V)\to u\mathcal{C}(V)\), which we call the generalized shuffle product.

The generalized shuffle product has the following compatibility with the coalgebra structure of \(u\mathcal{C}(V)\).

**Lemma 3.1**.: _Let \(u\mathcal{C}\) be a unitary Hopf cooperad and \(V\) a graded \(R\)-module._
_The generalized shuffle product \(\star:u\mathcal{C}(V)\otimes u\mathcal{C}(V)\to u\mathcal{C}(V)\) is a morphism of \(u\mathcal{C}\)-coalgebras; in particular, the following square commutes:_

\[\begin{array}{ccc}u\mathcal{C}(V)\otimes u\mathcal{C}(V)&\xrightarrow{\;\star\;}&u\mathcal{C}(V)\\ \Delta_{u\mathcal{C}(V)\otimes u\mathcal{C}(V)}\Big\downarrow&&\Big\downarrow\Delta_{u\mathcal{C}(V)}\\ u\mathcal{C}\big(u\mathcal{C}(V)\otimes u\mathcal{C}(V)\big)&\xrightarrow{\;u\mathcal{C}(\star)\;}&u\mathcal{C}\big(u\mathcal{C}(V)\big)\end{array}\]

_where \(\Delta_{u\mathcal{C}(V)\otimes u\mathcal{C}(V)}\) and \(\Delta_{u\mathcal{C}(V)}\) are the coalgebra structure maps._

The proof of the lemma is omitted, since it is a straightforward consequence of the definition of the generalized shuffle product and the compatibility of the Hopf product with the cooperad structure. By the same arguments as in Appendix A of [6], we can extend the generalized shuffle product to the completed \(u\mathcal{C}\)-coalgebras.

**Lemma 3.2**.: _Let \(u\mathcal{C}\) be a reduced unitary Hopf cooperad and \(V\) a graded \(R\)-module. The generalized shuffle product extends to the completed cofree \(u\mathcal{C}\)-coalgebras_

\[\star:\widehat{u\mathcal{C}(V)}\otimes\widehat{u\mathcal{C}(V)}\to\widehat{u\mathcal{C}(V)}.\]

Both \(u\mathcal{C}(V)\) and \(\widehat{u\mathcal{C}(V)}\) therefore become \(u\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebras.

## 4 The twisting procedure for algebras over Hopf operads

In the previous section, we saw that \(\widehat{u\mathcal{C}(V)}\), the completed cofree conilpotent \(u\mathcal{C}\)-coalgebra on a graded \(R\)-module \(V\) over a Hopf cooperad \(u\mathcal{C}\), becomes a \(u\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebra with the generalized shuffle product. In this section, we explain how this bialgebra structure can be interpreted as the analog of a Lie group. In this interpretation, the "group structure" is given by the generalized shuffle product, and the manifold structure is given by the \(u\mathcal{C}\)-coalgebra structure and its differential \(Q:u\mathcal{C}(V)\to u\mathcal{C}(V)\). We assume that the reader is familiar with the basics of classical Lie theory; otherwise we refer the reader to, for example, [17] for an introduction to classical Lie theory.

In this analogy, the tangent space of the "Lie group" \(u\mathcal{C}(V)\) is given by \(V\). It turns out that many concepts from the classical Lie theory of manifolds also extend to our setting. For example, the exponential map and the adjoint representation both have analogs in this setting, and together they give rise to the twisting procedure. Most of the ideas in this section can be seen as a generalization of Dolgushev's approach to the twisting procedure of \(L_{\infty}\)-algebras from [7]. This approach to the twist was later extended to \(A_{\infty}\)-algebras by the authors in [6].

### 4.1 The exponential map

In this section, we define the exponential map and show that it has properties similar to those of the exponential map in the theory of classical Lie groups. Recall that for a Lie group \(G\) with Lie algebra \(\mathfrak{g}\), the exponential map is a function \(\exp:\mathfrak{g}\to G\) that maps \(\mathfrak{g}\) to \(G\) and captures its local structure. For an element \(v\in\mathfrak{g}\), it is explicitly defined by first considering the unique one-parameter subgroup \(\gamma_{v}:\mathbb{R}\to G\) whose tangent vector at the identity is equal to \(v\). The exponential of \(v\) is then defined as \(\exp(v)=\gamma_{v}(1)\), where \(1\in\mathbb{R}\) is the multiplicative unit of \(\mathbb{R}\).

We want to generalize this construction to \(u\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebras. For this we first need to find the analog of the real line, which is given by \(R[R]\), the group ring of \(R\), viewed as an additive group, with coefficients in \(R\).
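A concrete model of \(R[R]\) is easy to write down; the following minimal sketch (our own illustrative code, with \(R=\mathbb{Q}\) for concreteness) implements the multiplication \(g_{\lambda}\cdot g_{\mu}=g_{\lambda+\mu}\) and the grouplike coproduct introduced below.

```python
from fractions import Fraction

# elements of R[G] as formal sums {lambda: coefficient}, here with R = G = Q
def mul(x, y):
    # g_lambda * g_mu = g_{lambda + mu}, extended bilinearly
    out = {}
    for lam, a in x.items():
        for mu, b in y.items():
            out[lam + mu] = out.get(lam + mu, 0) + a * b
    return out

def coproduct(x):
    # Delta(g_lambda) = g_lambda (x) g_lambda, extended linearly
    return {(lam, lam): a for lam, a in x.items()}

e = {Fraction(0): 1}    # the element g_0 = e, the unit of R[G]
g1 = {Fraction(1): 1}   # the element g_1
assert mul(g1, g1) == {Fraction(2): 1}
assert mul(e, g1) == g1
```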
This needs to be turned into a \(u\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebra where the \(u\mathcal{C}\)-coalgebra structure has divided symmetries. The group ring \(R[R]\) has a natural cocommutative coproduct given by declaring the elements of the additive group \(R\) to be grouplike; this needs to be extended to a \(u\mathcal{C}\)-coalgebra structure. To do this we use that every unitary unital Hopf cooperad \(u\mathcal{C}\) comes with a natural map \(\eta:u\mathcal{COCOM}\to u\mathcal{C}\).

#### 4.1.1 The analog of the real line

As mentioned earlier, the role of the real line \(\mathbb{R}\) in our setting is played by the group ring \(R[R]\). In the setting of classical Lie groups, the real line \(\mathbb{R}\) has a group structure given by addition and a manifold structure given by the usual manifold structure on \(\mathbb{R}\). In our setting the group ring \(R[R]\) has a "group" structure (strictly speaking a monoid structure) but not yet a \(u\mathcal{C}\)-coalgebra structure, which would be the analog of the manifold structure on \(\mathbb{R}\). To avoid confusion between \(R\) as a coefficient ring and \(R\) as an additive group, we will from now on denote \(R\) as an additive group by \(G\). The elements of \(G\) are denoted by \(g_{\lambda}\) with \(\lambda\in G\); since \(G\) and \(R\) are the same as additive groups, we will occasionally also use \(R\) as an indexing set for the elements \(g_{\lambda}\). The unit of the group \(G\) is denoted by \(e\) and the element corresponding to the multiplicative unit \(1\in R\) by \(g_{1}\). With this notation the group ring has a basis given by the elements \(g_{\lambda}\), and the ring structure of the group ring is defined on these basis elements by \(g_{\lambda}\cdot g_{\mu}=g_{\lambda+\mu}\), with \(\lambda,\mu\in G\). The group ring \(R[G]\) further becomes a cocommutative coalgebra with the classical coproduct \[\Delta:R[G]\to R[G]\otimes R[G]\] defined on the basis \(\{g_{\lambda}\}\) by \[\Delta(g_{\lambda})=g_{\lambda}\otimes g_{\lambda}\] for \(\lambda\in G\). The coproduct of \(R[G]\) naturally lands in the invariants and not in the coinvariants, so this coalgebra does not have divided symmetries. Since we want to look at morphisms from \(R[G]\) to \(u\mathcal{C}(V)\), we need to give \(R[G]\) a \(u\mathcal{C}\)-coalgebra structure with divided symmetries. First, we turn \(R[G]\) into a \(u\mathcal{C}\)-coalgebra (without divided symmetries) by using the morphism \(\eta:u\mathcal{COCOM}\to u\mathcal{C}\), so we have a map \[R[G]\rightarrow\bigoplus_{r\geq 0}\widehat{(u\mathcal{C}(r)\otimes R[G]^{\otimes r})^{\mathbf{S}_{r}}}. \tag{2}\] Since we assumed that \(u\mathcal{C}\) is free as an \(R[\mathbf{S}_{r}]\)-module in each arity, the invariants and coinvariants are isomorphic. We can therefore apply this isomorphism to Equation 2 to get a map \[R[G]\rightarrow\bigoplus_{r\geq 0}\widehat{(u\mathcal{C}(r)\otimes R[G]^{\otimes r})}_{\mathbf{S}_{r}},\] which turns the group algebra into a \(u\mathcal{C}\)-coalgebra with divided symmetries.

**Remark 4.1**.: _When \(\mathbb{Q}\subseteq R\), the norm map is always invertible and in this case our constructions work for all unitary reduced unital Hopf cooperads._
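Before moving on, here is a throwaway Python sketch (our own illustration in code, not part of the paper) of the group ring \(R[G]\) with its convolution product and its grouplike coproduct; the \(u\mathcal{C}\)-coalgebra structure obtained by pushing forward along \(\eta:u\mathcal{COCOM}\to u\mathcal{C}\) is not modeled here.

```python
from collections import defaultdict

# Elements of R[G] are finite R-linear combinations of basis elements g_lam,
# stored as {lam: coefficient}; here both R and G are modeled by Python floats.

def product(a, b):
    # Convolution product determined by g_lam * g_mu = g_{lam + mu}.
    c = defaultdict(float)
    for lam, x in a.items():
        for mu, y in b.items():
            c[lam + mu] += x * y
    return dict(c)

def coproduct(a):
    # Classical cocommutative coproduct: every g_lam is grouplike,
    # Delta(g_lam) = g_lam (x) g_lam, extended R-linearly.
    return {(lam, lam): x for lam, x in a.items()}

e, g1 = {0.0: 1.0}, {1.0: 1.0}            # group unit e = g_0 and g_1
assert product(g1, g1) == {2.0: 1.0}       # g_1 * g_1 = g_2
assert product(e, g1) == g1                # e is the unit of the product
assert coproduct(g1) == {(1.0, 1.0): 1.0}  # g_1 is grouplike
```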
#### 4.1.2 The exponential map

We want to define a map which generalizes the exponential map of classical Lie algebras. In particular, we want to associate to each degree \(0\) element \(v\in V_{0}\) the unique "one parameter subgroup" of \(\widehat{u\mathcal{C}(V)}\) with tangent vector \(v\). With this we mean a \(u\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebra map \(\gamma_{v}:R[G]\to\widehat{u\mathcal{C}(V)}\) from \(R[G]\), the analog of the real line, to \(\widehat{u\mathcal{C}(V)}\) with \(\gamma_{v}(e)=\mathbb{1}\) and \(T(\gamma_{v}(g_{1}))=v\), i.e. the tangent vector at the multiplicative unit \(g_{1}\in G\) is given by \(v\) (see Definition 2.3). This does not necessarily make the map \(\gamma_{v}\) unique, but it becomes unique once we further require that \(T(\gamma_{v}(\lambda g_{1}))=\lambda T(\gamma_{v}(g_{1}))\), for \(\lambda\in R\). So in particular, if the "speed" of the exponential becomes \(\lambda\) times bigger, the tangent vector needs to become \(\lambda\) times bigger as well.

**Proposition 4.1**.: _Let \(v\in V_{0}\); there is a unique map \(\gamma_{v}:R[G]\to\widehat{u\mathcal{C}(V)}\) of \(u\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebras determined by the following properties:_

1. _The tangent vector of_ \(\gamma_{v}:R[G]\to\widehat{u\mathcal{C}(V)}\) _is given by_ \(T(\gamma_{v}(g_{1}))=v\)_, and for_ \(\lambda\in R\) _we have that_ \(T(\gamma_{v}(\lambda\cdot g_{1}))=\lambda\cdot v\)_._
2. _The map_ \(\gamma_{v}:R[G]\to\widehat{u\mathcal{C}(V)}\) _is a morphism of bialgebras, i.e. we have_ \(\gamma_{v}(\alpha\cdot\beta)=\gamma_{v}(\alpha)\star\gamma_{v}(\beta)\)_, with_ \(\alpha,\beta\in R[G]\)_._

Proof.: Since \(\widehat{u\mathcal{C}(V)}\) is completed cofree, every map of \(u\mathcal{C}\)-coalgebras is determined by its image on the cogenerators. As a morphism of \(u\mathcal{C}\)-coalgebras, the map \(\gamma_{v}\) is therefore uniquely defined by the \(R\)-linear map \[\widetilde{\gamma_{v}}:R[G]\to V\] which is defined on basis elements \(g_{\lambda}\in R[G]\) (with \(\lambda\in R\)) by \[\widetilde{\gamma_{v}}(g_{\lambda})=\lambda\cdot v.\] This determines \(\gamma_{v}\) uniquely as a \(u\mathcal{C}\)-coalgebra map; what is left to check is whether this map is also a bialgebra map. So we need to check that it commutes with the associative products of \(R[G]\) and \(\widehat{u\mathcal{C}(V)}\). To show that the exponential map is a morphism of bialgebras, we need to check that \[\gamma_{v}(g_{\alpha}\cdot g_{\beta})=\gamma_{v}(g_{\alpha})\star\gamma_{v}(g_{\beta}).\] This is equivalent to showing that the following diagram of \(u\mathcal{C}\)-coalgebra maps commutes: where \(\star\) is the generalized shuffle product and \(\mu_{R[G]}\) is the multiplication of the group ring \(R[G]\). Since \(u\mathcal{C}(R[G])\) is cofree in the completed sense, every map is determined by its image on the cogenerators. We therefore only need to check that the maps \(\gamma_{v}\circ\mu_{R[G]}\) and \(\star\circ(\gamma_{v}\otimes\gamma_{v})\) have the same tangent vector, which is equal to the image on the cogenerators. In other words, we have to show that for \(\alpha\otimes\beta\in R[G]\otimes R[G]\) \[T\left(\gamma_{v}\circ\mu_{R[G]}\right)=T\left(\star\circ(\gamma_{v}\otimes\gamma_{v})\right). \tag{3}\] If we do this explicit computation, we see that for basis elements \(g_{\alpha}\otimes g_{\beta}\) the left hand side of Equation 3 is equal to \[T(\gamma_{v}\circ\mu_{R[G]}(g_{\alpha}\otimes g_{\beta}))=(\alpha+\beta)\cdot v.\] The right hand side of Equation 3 is computed as follows. First of all, the elements \(\gamma_{v}(g_{\alpha})\) (resp. \(\gamma_{v}(g_{\beta})\)) are of the form \(\mathbb{1}+\alpha\cdot v+\text{higher order terms}\) (resp. \(\mathbb{1}+\beta\cdot v+\text{higher order terms}\)), where the higher order terms are given by coproducts of arity \(2\) and greater.
The generalized shuffle product is then given by \(\gamma_{v}(g_{\alpha})\star\gamma_{v}(g_{\beta})=\mathbb{1}+(\alpha+\beta)\cdot v+\text{higher order terms}\), so after projecting onto the cogenerators it is given by \((\alpha+\beta)\cdot v\). The two maps have the same projection onto the cogenerators and are therefore equal. The map \(\gamma_{v}\) is therefore a map of bialgebras. 

From these properties, it follows that \(\gamma_{v}\) is determined by the image of \(g_{1}\).

**Lemma 4.1**.: _From the properties of Proposition 4.1, it also follows that the \(u\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebra map \(\gamma_{v}:R[G]\to\widehat{u\mathcal{C}(V)}\) is equivalent to a \(u\mathcal{C}\)-coalgebra map \(\gamma^{\prime}_{v}:R\to\widehat{u\mathcal{C}(V)}\)._

Proof.: It is clear that a \(u\mathcal{C}\)-\(\mathcal{ASS}\)-bialgebra map \(\gamma_{v}:R[G]\to\widehat{u\mathcal{C}(V)}\) determines a \(u\mathcal{C}\)-coalgebra map \(\gamma^{\prime}_{v}:R\to\widehat{u\mathcal{C}(V)}\). This map is given by \(\gamma^{\prime}_{v}(\lambda):=\gamma_{v}(\lambda g_{1})\), with \(\lambda\in R\). To go in the other direction, we assume that we have a map \(\gamma^{\prime}_{v}:R\to\widehat{u\mathcal{C}(V)}\) and extend it to a map \(\gamma_{v}:R[G]\to\widehat{u\mathcal{C}(V)}\). Since \(\widehat{u\mathcal{C}(V)}\) is cofree, we only need to define its projection on the cogenerators; the image of the generators \(g_{\lambda}\in R[G]\) is defined by the following formula \[T(\gamma_{v}(g_{\lambda})):=T(\gamma^{\prime}_{v}(\lambda)).\] It follows from the properties of Proposition 4.1 that this can indeed be extended to a bialgebra map. 

Now that we have the definition of the analog of a "one-parameter subgroup", we can define the exponential map in exactly the same way as it is done in classical Lie theory.

**Definition 4.1**.: _The exponential map is defined by_ \[\exp:V_{0}\to\widehat{u\mathcal{C}(V)}\] \[\exp(v):=\gamma_{v}(g_{1}).\]
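Unwinding the proof of Proposition 4.1 gives a closed formula for the exponential map (this unraveling is ours, but it follows directly from cofreeness and the grouplike coproduct of \(g_{1}\)): writing \(\eta_{r}\) for the image of \(u\mathcal{COCOM}(r)\) in \(u\mathcal{C}(r)\), one finds
\[\exp(v)\;=\;\sum_{r\geq 0}\eta_{r}\otimes v^{\otimes r}\;\in\;\widehat{u\mathcal{C}(V)},\]
which for \(u\mathcal{C}=u\mathcal{COCOM}\) recovers the familiar divided-power exponential \(\sum_{r\geq 0}v^{\otimes r}\) in the completed cofree cocommutative coalgebra.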
Similar to the classical exponential map, our exponential map has the following properties.

**Proposition 4.2**.: _Let \(v\in V_{0}\), \(\kappa,\lambda\in R\) and \(x\in u\mathcal{C}(V)\). The exponential map has the following properties:_

1. \(\exp(\kappa v)\star\exp(\lambda v)=\exp((\kappa+\lambda)v)\)_._
2. \(\exp(-v)\star\exp(v)=\mathbb{1}=\exp(v)\star\exp(-v)\)_; in other words,_ \(\exp(-v)\) _is a multiplicative inverse to_ \(\exp(v)\)_._
3. _The map_ \(\exp(v)\star-:\widehat{u\mathcal{C}(V)}\to\widehat{u\mathcal{C}(V)}\) _is a morphism of_ \(u\mathcal{C}\)_-coalgebras._

Proof.: Parts 1 and 2 follow immediately from Proposition 4.1 and their proofs are omitted. We prove Part 3 as follows. We need to show that the map \(\exp(v)\star-:\widehat{u\mathcal{C}(V)}\to\widehat{u\mathcal{C}(V)}\) commutes with the coproduct, so if \(x\in\widehat{u\mathcal{C}(V)}\) then we need to show that \[\Delta_{\widehat{u\mathcal{C}(V)}}(\exp(v)\star x)=\Delta(\star)(\Delta_{\widehat{u\mathcal{C}(V)}\otimes\widehat{u\mathcal{C}(V)}}(\exp(v)\otimes x)), \tag{4}\] where \(\Delta(\star)\) denotes the coproduct applied to the generalized shuffle product \(\star\). To show that Equation 4 holds, we need two ingredients. The first one is Lemma 3.1 and the second one is that \(\exp(v)\) satisfies a grouplike property. Since the element \(g_{1}\) is grouplike in \(R[G]\), its coproduct is given by \(\Delta(g_{1})=\sum_{r\geq 0}\eta_{r}\otimes g_{1}^{\otimes r}\), where \(\eta_{r}\) is the image of \(u\mathcal{COCOM}(r)\) in \(u\mathcal{C}(r)\). Since \(\gamma_{v}\) is a morphism of \(u\mathcal{C}\)-bialgebras, the coproduct of \(\exp(v)\) is given by \(\Delta(\exp(v))=\Delta(\gamma_{v}(g_{1}))=\sum_{r\geq 0}\eta_{r}\otimes\exp(v)^{\otimes r}\). If we now use that \(\eta_{r}\) is the unit for the product \(\mu_{r}:u\mathcal{C}(r)\otimes u\mathcal{C}(r)\to u\mathcal{C}(r)\) and Lemma 3.1, we see that Equation 4 holds. 

The following corollary follows almost immediately from Proposition 4.2. Note that in the second part of the following corollary we do not use the completion of the coalgebra \(u\mathcal{C}(V)\).

**Corollary 4.1**.: _The element \(\exp(v)\) is invertible in \(\widehat{u\mathcal{C}(V)}\) with respect to the generalized shuffle product; its inverse is given by \(\exp(-v)\). The morphism_ \[\exp(v)\star-:u\mathcal{C}(V)\to u\mathcal{C}(V)\] _is therefore an isomorphism with inverse \(\exp(-v)\star-\)._

Proof.: The fact that \(\exp(v)\star-\) is invertible follows immediately from Proposition 4.2. The fact that the map \(\exp(v)\star-\) preserves the subspace \(u\mathcal{C}(V)\subset\widehat{u\mathcal{C}(V)}\) follows from the same arguments as in Appendix A of [6]. 

### The twisting procedure and the Maurer-Cartan equation

Using the exponential map from the previous section, we can define the twisting procedure for \(\Omega\mathcal{C}\)-algebras. This is similar to the adjoint representation of a Lie group on its Lie algebra.

**Definition 4.2**.: _Let \((u\mathcal{C}(V),Q)\) be a curved \(\Omega\mathcal{C}\)-algebra and \(v\in V_{0}\). Then we define \(Q^{v}:u\mathcal{C}(V)\to u\mathcal{C}(V)\), the differential of \((u\mathcal{C}(V),Q)\) twisted by \(v\), as_ \[Q^{v}:u\mathcal{C}(V)\to u\mathcal{C}(V)\] \[Q^{v}(x):=\exp(-v)\star Q\left(\exp(v)\star x\right),\] _with \(x\in u\mathcal{C}(V)\)._

**Theorem 4.1**.: _The twisted differential \(Q^{v}\) is a coderivation and squares to zero. The twist of a curved \(\Omega\mathcal{C}\)-algebra is therefore again a curved \(\Omega\mathcal{C}\)-algebra._

Proof.: We need to show that the twisted differential \(Q^{v}\) is again a coderivation and squares to zero. First note that by Proposition 4.2, multiplication by \(\exp(v)\) is a \(u\mathcal{C}\)-coalgebra isomorphism with inverse \(\exp(-v)\). Since the conjugation of a coderivation by an isomorphism is again a coderivation, the map \(Q^{v}\) is again a coderivation. The map \(Q^{v}\) squares to zero since \[(Q^{v})^{2}(x) =\exp(-v)\star Q(\exp(v)\star\exp(-v)\star Q(\exp(v)\star x))\] \[=\exp(-v)\star Q(\mathbb{1}\star Q(\exp(v)\star x))\] \[=\exp(-v)\star Q^{2}(\exp(v)\star x)\] \[=0,\] where we used that multiplication by \(\mathbb{1}\) is the identity. So the twisted differential squares to zero, which proves the theorem. 

Using this notion of the twist, we can define Maurer-Cartan elements as those elements that produce a flat \(\Omega\mathcal{C}\)-algebra after twisting.

**Definition 4.3**.: _Let \((u\mathcal{C}(V),Q)\) be a curved \(\Omega\mathcal{C}\)-algebra and let \(v\in V_{0}\); then the Maurer-Cartan equation is defined as_ \[\tilde{Q}(\exp(v))=0.\] _An element is called a Maurer-Cartan element if it satisfies the Maurer-Cartan equation; the set of Maurer-Cartan elements is denoted by \(\mathrm{MC}(V)\subseteq V\)._

The Maurer-Cartan equation can be made more explicit by using the morphism \(\eta:u\mathcal{COCOM}\to u\mathcal{C}\).
If \(\eta^{\prime}_{r}\) denotes the basis element of \(u\mathcal{COCOM}(r)\) and \(\eta_{r}\) denotes its image in \(u\mathcal{C}(r)\) under the morphism \(\eta\), then the Maurer-Cartan equation can be rewritten as \[\sum_{r\geq 0}\tilde{Q}_{r}\left(\eta_{r}\otimes v^{\otimes r}\right)=0.\] Using the same arguments as in Section 4 of [6], we get the following proposition, which shows that the Maurer-Cartan equation is indeed a flatness equation.

**Proposition 4.3**.: _Let \((u\mathcal{C}(V),Q)\) be a curved \(\Omega\mathcal{C}\)-algebra; then \((u\mathcal{C}(V),Q^{v})\), the twist of \(Q\) by an element \(v\in V_{0}\), is flat if and only if \(v\) is a Maurer-Cartan element._
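As a sanity check, consider the following specialization (our remark, to be read as a sketch under the standard identifications and up to suspension conventions): when \(u\mathcal{C}\) is the linear dual of the unital associative operad, \(\Omega\mathcal{C}\)-algebras are curved \(A_{\infty}\)-algebras, and writing \(m_{r}:=\tilde{Q}_{r}(\eta_{r}\otimes-)\) for the corresponding operations, the equation above becomes
\[\sum_{r\geq 0}m_{r}(v,\ldots,v)=0,\]
the usual curved \(A_{\infty}\) Maurer-Cartan equation, in agreement with the twisting procedure of [6].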
## 5 The Maurer-Cartan simplicial set

In the theory of \(L_{\infty}\)- and \(A_{\infty}\)-algebras, the set of Maurer-Cartan elements can be extended to a simplicial set which encodes all the relevant gauges. These Maurer-Cartan simplicial sets have many applications in deformation theory, rational homotopy theory and related fields. In this section, we show that these Maurer-Cartan simplicial sets can be constructed in much greater generality. In particular, we show that one can construct a Maurer-Cartan simplicial set for every unitary reduced unital Hopf cooperad \(u\mathcal{C}\) with a map \(\mathcal{E}_{\infty}\to\mathcal{C}\), where \(\mathcal{E}_{\infty}\) denotes the cochains on the Barratt-Eccles operad and \(\mathcal{C}\) is the non-unitary part of \(u\mathcal{C}\).

**Convention 5.1**.: _From now on we only work with flat \(\Omega\mathcal{C}\)-algebras._

### The Maurer-Cartan simplicial set

We construct the Maurer-Cartan simplicial set as follows. Suppose that \(u\mathcal{C}\) is a unitary reduced unital Hopf cooperad. Then we have seen in Section 4 that there exists a morphism \(u\mathcal{COCOM}\to u\mathcal{C}\) and that we have an exponential map and a Maurer-Cartan equation. We further saw that if \(V\) is a curved \(\Omega\mathcal{C}\)-algebra, then the exponential map is defined via the "one parameter subgroup" \(\gamma_{v}:R[G]\to u\mathcal{C}(V)\), which in turn is determined by a morphism of \(u\mathcal{C}\)-coalgebras \(R\to u\mathcal{C}(V)\). Since \(R\) can be identified with \(N_{*}(\Delta^{0};R)\), the chains on \(\Delta^{0}\) with coefficients in \(R\), the set of Maurer-Cartan elements is equal to the set of morphisms from \(N_{*}(\Delta^{0};R)\), considered as a \(u\mathcal{C}\)-coalgebra via the morphism \(u\mathcal{COCOM}\to u\mathcal{C}\), to \(u\mathcal{C}(V)\). To define the higher simplices of the Maurer-Cartan simplicial set, we use the higher standard simplices \(\Delta^{n}\). It is well known that the collection of standard simplices forms a cosimplicial object in the category of simplicial sets (see [15] for example). So if we apply \(N_{*}(-,R)\), the normalized chains functor with coefficients in \(R\), we get a cosimplicial object in the category of chain complexes. To shorten the notation a little, we will from now on omit the coefficients in the notation for the normalized chains and always implicitly assume that they are the ring \(R\). To define the Maurer-Cartan simplicial set, we need to turn this into a cosimplicial object in the category of \(\mathcal{C}\)-coalgebras. The Maurer-Cartan simplicial set is then morally defined as \[\mathrm{MC}_{n}(V):=\hom_{\mathcal{C}-\text{coalg}}(N_{*}(\Delta^{n}),\widehat{u\mathcal{C}(V)}). \tag{5}\] To equip \(N_{*}(\Delta^{n})\) with a \(\mathcal{C}\)-coalgebra structure, we first use the fact that it is an \(\mathcal{E}_{\infty}\)-coalgebra. There are several choices possible for an \(\mathcal{E}_{\infty}\)-coalgebra structure on the normalized chains, for example via the surjection operad or the Barratt-Eccles operad (see [22] and [2]). In the rest of this paper, we use the Barratt-Eccles operad from [2] and, instead of working with coalgebras over operads, we work with the dual of the Barratt-Eccles operad, which we denote by \(\mathcal{E}_{\infty}\). Since the Barratt-Eccles operad is of finite type, coalgebras over an operad are equivalent to coalgebras over the dual cooperad. So to turn \(N_{*}(\Delta^{n})\) into a \(\mathcal{C}\)-coalgebra, we need to assume that we have a morphism of cooperads \(\varphi:\mathcal{E}_{\infty}\to\mathcal{C}\). Since the Barratt-Eccles cooperad has a free symmetric group action, we can identify invariants and coinvariants, so there is no difference between coalgebras with divided symmetries and ordinary coalgebras. Since we assumed that all \(\Omega\mathcal{C}\)-algebras are flat, we see that a flat \(\Omega\mathcal{C}\)-algebra structure on a chain complex \(V\) is equivalent to a square-zero coderivation on \(\mathcal{C}(V)\) (so the non-counital version). In this case, the set of \(n\)-simplices of the Maurer-Cartan simplicial set from Equation 5 is given by the set of maps of \(\mathcal{C}\)-coalgebras \[\mathrm{MC}_{n}(V):=\hom_{\mathcal{C}-\text{coalg}}(N_{*}(\Delta^{n}),\widehat{\mathcal{C}(V)}). \tag{6}\] This is because we can identify \(\mathcal{C}(V)\) with the relative bar construction on \(V\) (see [20], Chapter 11). Recall that since \((\mathcal{C}(V),Q)\) is cofree, every map to it is determined by its image on the cogenerators. The converse is however not true, since not every map commutes with the differential \(Q\). We can however form the convolution algebra \(\hom_{R}\left(N_{*}(\Delta^{n}),V\right)\), which becomes an algebra over the convolution operad \(\hom(\mathcal{C},\Omega\mathcal{C})\) (see [3]). As is explained in Chapter 11 of [20], there is a Maurer-Cartan equation in this convolution algebra whose solutions correspond to the maps that commute with the differential \(Q\). To define this Maurer-Cartan equation, we first define the \(\star_{\tau}\)-operator. Let \(\mathcal{C}\) be a cooperad and \(\mathcal{P}\) an operad, both with free symmetric group actions. Let \((C,\Delta_{C})\) be a \(\mathcal{C}\)-coalgebra and \((V,\gamma_{V})\) a \(\mathcal{P}\)-algebra; since the symmetric group actions are free, there is no difference between coalgebras with and without divided symmetries. Further suppose that we have an operadic twisting morphism \(\tau:\mathcal{C}\to\mathcal{P}\). In our case \(\mathcal{P}\) is given by \(\Omega\mathcal{C}\) and \(\tau\) is given by the canonical twisting morphism \(\iota:\mathcal{C}\to\Omega\mathcal{C}\). The \(\star_{\tau}\)-operator is the (non-linear) map of degree \(-1\) \[\star_{\tau}:\hom_{R}(C,V)\to\hom_{R}(C,V)\] defined as the composite \[\star_{\tau}(\psi):=C\xrightarrow{\Delta_{C}}\mathcal{C}\circ C\xrightarrow{\tau\circ\psi}\mathcal{P}\circ V\xrightarrow{\gamma_{V}}V.\] We further define, for \(r\geq 2\), the operations \[\mu_{r}:\hom_{R}(C,V)^{\otimes r}\to\hom_{R}(C,V)\] by \[\mu_{r}(\psi_{1},...,\psi_{r}):=\gamma_{V}\circ(\psi_{1}\otimes...\otimes\psi_{r})\circ\Delta_{C}^{r},\] with \(\psi_{1},...,\psi_{r}\in\hom_{R}(C,V)\) and \(\Delta_{C}^{r}:C\to(\mathcal{C}(r)\otimes C^{\otimes r})_{\mathbf{S}_{r}}\) the arity \(r\) part of the coproduct of \(C\).
The operator \(\star_{\tau}\) can be written as \(\star_{\tau}(\psi)=\sum_{r\geq 2}\mu_{r}(\psi,...,\psi)\). A Maurer-Cartan element is a degree \(0\) morphism \(\psi\in\hom_{R}(C,V)\) satisfying the Maurer-Cartan equation \[\partial(\psi)+\star_{\tau}(\psi)=0, \tag{7}\] where \(\partial\) is the differential of \(\hom_{R}(C,V)\). The set of solutions to the Maurer-Cartan equation is denoted by \(\mathrm{MC}\left(\hom_{R}(C,V)\right)\). From Chapter 11 of [20] we get the following proposition.

**Proposition 5.1** ([20], Proposition 11.3.1).: _Under the previous assumptions we have the following bijection_ \[\mathrm{MC}\left(\hom_{R}(C,V)\right)\cong\hom_{\mathcal{C}-\text{coalg}}\left(C,(\mathcal{C}(V),Q)\right).\]

In our specific situation, the twisting morphism is given by the canonical twisting morphism \(\iota:\mathcal{C}\rightarrow\Omega\mathcal{C}\) (see [20], Section 6.5). Because of Proposition 5.1, we have a bijection between the set of \(n\)-simplices from Equation 6 and the Maurer-Cartan elements in \(\hom_{R}(N_{*}(\Delta^{n}),V)\). Further notice that when \(n=0\), the chains on \(\Delta^{0}\) are isomorphic to \(R\). So there is an isomorphism \(\hom_{R}(N_{*}(\Delta^{0}),V)\cong V\). It is straightforward to see that under this isomorphism the Maurer-Cartan equation in \(\hom_{R}(N_{*}(\Delta^{0}),V)\), given by \(\partial(\psi)+\star_{\iota}(\psi)=0\), is equivalent to the Maurer-Cartan equation from Definition 4.3. An equivalent formulation of the Maurer-Cartan simplicial set is then given by \[\mathrm{MC}_{n}(V):=\mathrm{MC}\left(\hom_{R}\left(N_{*}(\Delta^{n}),V\right)\right). \tag{8}\] The face and degeneracy maps are the maps induced by the face and degeneracy maps of \(\{\Delta^{n}\}_{n\geq 0}\).

**Lemma 5.1**.: _Under the earlier assumptions from this section, the induced face maps \(d_{i}:\mathrm{MC}_{n}(V)\rightarrow\mathrm{MC}_{n-1}(V)\) and degeneracy maps \(s_{j}:\mathrm{MC}_{n}(V)\rightarrow\mathrm{MC}_{n+1}(V)\) preserve Maurer-Cartan elements._

This follows from the fact that \(\mathrm{MC}\) is a bifunctor in the coalgebra and the algebra. Since the cosimplicial maps are coalgebra maps, they preserve the Maurer-Cartan equation. The Maurer-Cartan simplicial set is therefore indeed a well-defined simplicial set.

**Remark 5.1**.: _In characteristic \(0\), the convolution algebra canonically becomes an \(L_{\infty}\)-algebra with the \(L_{\infty}\)-structure defined in [27]. This is unfortunately not the case when working over rings that do not contain \(\mathbb{Q}\) as a subring._

**Remark 5.2**.: _In our definition of the Maurer-Cartan simplicial set we have chosen to use the chains on the standard simplex. But this construction could of course also be done for other models of the simplex. It is a natural question to ask whether other models would give homotopy equivalent Maurer-Cartan simplicial sets. It seems highly likely that the methods of Milham and Rogers (see [23]) would also apply to this more general setting, but this is beyond the scope of this paper._

### The Maurer-Cartan simplicial set is a Kan complex

In this section, we prove that the Maurer-Cartan simplicial set is a Kan complex.

**Theorem 5.1**.: _Let \(V\) be a complete \(\Omega\mathcal{C}\)-algebra; then the Maurer-Cartan simplicial set \(\mathrm{MC}_{\bullet}(V)\) is a Kan complex._

To prove the theorem, we follow Getzler's proof of the fact that the Maurer-Cartan simplicial set, or Deligne-Getzler-Hinich groupoid, associated to an \(L_{\infty}\)-algebra is a Kan complex (see [13], Section 4).
The main idea behind Getzler's proof is that the chains on \(\Delta^{n}\) come with a retraction onto each of the vertices of \(\Delta^{n}\). Given a horn \(\Lambda_{k}^{n}\) in \(V\), we can use this retraction to inductively build a horn filler. In [6], we generalized Getzler's proof to the case of \(A_{\infty}\)-algebras by using the cochains on the simplices. In the case of a general \(\Omega\mathcal{C}\)-algebra, we need to replace the cochains by the chains and replace the tensor product by the mapping space. The biggest difference is that in the \(A_{\infty}\)- and \(L_{\infty}\)-case the tensor product with the cochains (resp. polynomial de Rham forms) is again an \(A_{\infty}\)-algebra (resp. \(L_{\infty}\)-algebra). In our case, the tensor product is replaced by the convolution algebra, which is not naturally an \(\Omega\mathcal{C}\)-algebra. We therefore need to work with the more general Maurer-Cartan equation from Equation 7, which has as a consequence that our formulas are slightly different from those in Getzler's proof in [13]. Before we prove Theorem 5.1, we first need a contraction on the chains of the simplex \(\Delta^{n}\). Since all the arguments are completely analogous to the arguments in [6], we have left the proofs to the reader. For what follows we use the following notation. The \(k\)-dimensional subsimplex of \(\Delta^{n}\) with vertices \(i_{0},...,i_{k}\) is denoted by \(e_{i_{0},...,i_{k}}\), with \(0\leq i_{0}<...<i_{k}\leq n\). A set of generators for \(N_{k}(\Delta^{n})\) as a chain complex is then given by \(\{e_{I}\}\), where \(I\subseteq\{0,...,n\}\) runs over all subsets of order \(k+1\). The differential is then given by \[d(e_{i_{0},...,i_{k}})=\sum_{j=0}^{k}(-1)^{j}e_{i_{0},...,\widehat{i_{j}},...,i_{k}},\] where \(\widehat{i_{j}}\) means that we omit the index \(i_{j}\). The counit of \(N_{*}(\Delta^{n})\) is the map induced by the map of simplicial sets \(\Delta^{n}\to\Delta^{0}\). Explicitly, the counit \(\epsilon:N_{*}(\Delta^{n})\to R\) is defined on generators by \(\epsilon(e_{k})=\mathbb{1}\) and zero otherwise. The inclusion of the \(k\)th vertex is denoted by \(p_{n}^{k}:\Delta^{0}\to\Delta^{n}\) and is given by \(p_{n}^{k}(e_{0})=e_{k}\). The composition \(p_{n}^{k}\circ\epsilon:N_{*}(\Delta^{n})\to N_{*}(\Delta^{n})\) is homotopic to the identity \(\mathrm{id}:N_{*}(\Delta^{n})\to N_{*}(\Delta^{n})\) via the chain homotopy \(h_{n}^{k}:N_{*}(\Delta^{n})\to N_{*+1}(\Delta^{n})\) given by \[h_{n}^{k}(e_{I}):=(-1)^{s}e_{I\cup k},\] with \(e_{I}\in N_{*}(\Delta^{n})\) and where \(e_{I\cup k}\) is defined to be \(0\) if \(k\) is already an element of \(I\). The sign \(s\) is given by the number of elements in \(I\) smaller than \(k\). The map \(h_{n}^{k}\) is a chain homotopy between \(p_{n}^{k}\circ\epsilon\) and the identity, i.e. it satisfies the following equation \[dh_{n}^{k}+h_{n}^{k}d=\mathrm{id}_{N_{*}(\Delta^{n})}-p_{n}^{k}\circ\epsilon. \tag{9}\]
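As a quick consistency check (our own script, not part of the paper), one can implement these formulas literally and verify the identity (9) on all generators of \(N_{*}(\Delta^{n})\):

```python
from itertools import combinations

n, k = 3, 1  # check N_*(Delta^3) with the retraction onto vertex 1

def d(I):
    # Simplicial differential d(e_I) = sum_j (-1)^j e_{I minus i_j}; zero on vertices.
    if len(I) == 1:
        return {}
    return {I[:j] + I[j+1:]: (-1) ** j for j in range(len(I))}

def h(I):
    # Chain homotopy h_n^k(e_I) = (-1)^s e_{I u {k}}, zero if k already lies in I.
    if k in I:
        return {}
    s = sum(1 for i in I if i < k)
    return {tuple(sorted(I + (k,))): (-1) ** s}

def p_eps(I):
    # (p_n^k o eps)(e_I): sends every vertex to e_k, higher simplices to zero.
    return {(k,): 1} if len(I) == 1 else {}

def add(*terms):
    out = {}
    for t in terms:
        for J, c in t.items():
            out[J] = out.get(J, 0) + c
    return {J: c for J, c in out.items() if c != 0}

def compose(f, g, I):  # (f o g)(e_I) for maps given on generators
    out = {}
    for J, c in g(I).items():
        for K, e in f(J).items():
            out[K] = out.get(K, 0) + c * e
    return {K: c for K, c in out.items() if c != 0}

for r in range(1, n + 2):
    for I in combinations(range(n + 1), r):
        lhs = add(compose(d, h, I), compose(h, d, I))
        rhs = add({I: 1}, {J: -c for J, c in p_eps(I).items()})
        assert lhs == rhs, (I, lhs, rhs)
print("identity (9) holds on all generators of N_*(Delta^%d)" % n)
```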
The map \(p_{n}^{k}\) induces a map \[\tilde{P_{n}^{k}}:\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\to\hom_{R}(N_{*}(\Delta^{0}),V)_{d}\cong V\] given by \[\tilde{P_{n}^{k}}(\varphi):=\varphi\circ p_{n}^{k},\] with \(\varphi\in\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\). Similarly, the counit \(\epsilon:N_{*}(\Delta^{n})\to R\) induces a map \[E:V\cong\hom_{R}(N_{*}(\Delta^{0}),V)_{d}\to\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\] given by \[E(\phi):=\phi\circ\epsilon,\] with \(\phi\in\hom_{R}(N_{*}(\Delta^{0}),V)_{d}\). The map \[P_{n}^{k}:\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\to\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\] is defined by \[P_{n}^{k}:=E\circ\tilde{P_{n}^{k}}.\] The homotopy \(h_{n}^{k}\) induces a similar homotopy on the level of convolution algebras \[H_{n}^{k}:\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\to\hom_{R}(N_{*}(\Delta^{n}),V)_{d+1},\] given by \[H_{n}^{k}(\varphi):=\varphi\circ h_{n}^{k},\] with \(\varphi\in\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\). The maps \(\partial\), \(E\), \(P_{n}^{k}\) and \(H_{n}^{k}\) satisfy the following identity \[\partial H_{n}^{k}+H_{n}^{k}\partial=\mathrm{id}_{\hom_{R}(N_{*}(\Delta^{n}),V)}-P_{n}^{k}. \tag{10}\] We further define the map \[R_{n}^{k}:\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\to\hom_{R}(N_{*}(\Delta^{n}),V)_{d}\] as \[R_{n}^{k}:=\partial\circ H_{n}^{k}.\] Using these maps we can now prove Theorem 5.1.

Proof of Theorem 5.1.: As mentioned earlier, we use a variation of Getzler's proof in which we start with a certain element in \(\hom_{R}(N_{*}(\Delta^{n}),V)\) and extend this to a Maurer-Cartan element. Suppose that we have a \(k\)-horn in \(\mathrm{MC}_{\bullet}(V)\), i.e. a map \(\varphi:N_{*}(\Lambda_{k}^{n})\to V\). Then we need to construct a horn filler, i.e. a map \(\psi:N_{*}(\Delta^{n})\to V\) that fills \(\varphi\). We define \(\psi\) inductively and start with \(\psi_{1}\), which we define as \(\psi_{1}=\xi+\rho\), where \[\xi(e_{k}):=\varphi(e_{k})\] and zero otherwise, and \[\rho(e_{I}):=\begin{cases}\varphi(e_{I})&\text{if }e_{I}\neq e_{k},e_{0\dots\hat{k}\dots n}\text{ or }e_{01\dots n},\\ \sum_{i\neq k}\varphi(e_{0\dots\hat{i}\dots n})&\text{if }e_{I}=e_{0\dots\hat{k}\dots n},\\ 0&\text{if }e_{I}=e_{k}\text{ or }e_{01\dots n}.\end{cases}\] Since \(\varphi\) is a Maurer-Cartan element, the element \(\xi\) is also a Maurer-Cartan element. It is further straightforward to see that \(\partial\rho=0\), so it is a cycle. In most cases, the element \(\psi_{1}\) is not a Maurer-Cartan element. Since \(\xi\) is a Maurer-Cartan element and \(\rho\) a cycle, it does however satisfy the Maurer-Cartan equation in \[F_{1}\hom(N_{*}(\Delta^{n}),V)/F_{2}\hom(N_{*}(\Delta^{n}),V),\] i.e. it satisfies the Maurer-Cartan equation modulo elements of filtration degree 2. We proceed by adding a "correction" term \(\gamma_{1}\) that defines an element \(\psi_{2}\) which satisfies the Maurer-Cartan equation up to elements of filtration degree 3. First, we define \[\gamma_{1}:=H_{n}^{k}(\partial\psi_{1}+\star_{\iota}(\psi_{1})).\] The element \(\gamma_{1}\) is of filtration degree greater than or equal to \(2\) because both \(\star_{\iota}(\psi_{1})\) and \(\partial\psi_{1}\) are of filtration degree \(\geq 2\). We define \(\psi_{2}\) as \[\psi_{2}:=\psi_{1}-\gamma_{1}.\] Next we show that the element \(\psi_{2}\) satisfies the Maurer-Cartan equation up to filtration degree \(3\). We have \[\partial(\psi_{2})+\star_{\iota}(\psi_{2})=\partial\psi_{1}-\partial\gamma_{1}+\star_{\iota}(\psi_{1}-\gamma_{1}). \tag{11}\] Since \(\gamma_{1}\) is of filtration degree \(\geq 2\), the element \(\star_{\iota}(\psi_{1}-\gamma_{1})\) can be rewritten as \[\star_{\iota}(\psi_{1}-\gamma_{1})=\star_{\iota}(\psi_{1})+\text{terms of filtration degree}\geq 3.\] So modulo elements of filtration degree \(\geq 3\), Equation 11 reduces to
\[\partial\psi_{1}-\partial\gamma_{1}+\star_{\iota}(\psi_{1}). \tag{12}\] If we apply Equation 10 to \(\partial\gamma_{1}\), we get \[\partial\gamma_{1} =\partial H^{k}_{n}(\partial\psi_{1}+\star_{\iota}\psi_{1}) \tag{13}\] \[=\partial\psi_{1}+\star_{\iota}(\psi_{1})-P^{k}_{n}(\partial(\psi_{1}))-P^{k}_{n}(\star_{\iota}(\psi_{1}))-H^{k}_{n}(\partial^{2}\psi_{1})-H^{k}_{n}(\partial(\star_{\iota}(\psi_{1}))). \tag{14}\] So if we combine this with Equation 12, we get \[P^{k}_{n}(\partial(\psi_{1}))+P^{k}_{n}(\star_{\iota}(\psi_{1}))+H^{k}_{n}(\partial^{2}\psi_{1})+H^{k}_{n}(\partial(\star_{\iota}(\psi_{1}))). \tag{15}\] Because \(\xi\) satisfies the Maurer-Cartan equation, the terms \(P^{k}_{n}(\partial(\psi_{1}))+P^{k}_{n}(\star_{\iota}(\psi_{1}))\) are zero. The term \(H^{k}_{n}(\partial^{2}\psi_{1})\) is also zero since it involves \(\partial^{2}\). So we are left with the term \(H^{k}_{n}(\partial(\star_{\iota}(\psi_{1})))\) and we need to show that this is of filtration degree \(\geq 3\). If we use the Leibniz rule for algebras over operads, we get \[\partial(\star_{\iota}(\psi_{1}))=\sum_{r\geq 2}\partial(\mu_{r})(\psi_{1}^{\otimes r})+\sum_{r\geq 2,0\leq l\leq r-1}\mu_{r}(\psi_{1}^{\otimes l}\otimes\partial(\psi_{1})\otimes\psi_{1}^{\otimes r-l-1}).\] Since \(\partial\psi_{1}\) is of filtration degree \(\geq 2\), all the terms of the form \(\mu_{r}(\psi_{1}^{\otimes l}\otimes\partial(\psi_{1})\otimes\psi_{1}^{\otimes r-l-1})\) are of filtration degree \(\geq 3\). Further, for \(r\geq 3\), the terms \(\partial(\mu_{r})(\psi_{1}^{\otimes r})\) are all of filtration degree \(\geq 3\). The term \(\partial(\mu_{2})(\psi_{1}\otimes\psi_{1})=0\) because \(\mu_{2}\) is the arity two component of an operadic twisting morphism. The element \(\psi_{2}\) is therefore Maurer-Cartan up to terms of filtration degree \(3\). We continue inductively by defining the next "correction" terms as \[\gamma_{i}:=H^{k}_{n}\left(\partial\psi_{i}+\star_{\iota}(\psi_{i})\right)\] and \[\psi_{i+1}=\psi_{i}-\gamma_{i}.\] By exactly the same arguments it follows that \(\psi_{i}\) is Maurer-Cartan up to elements of filtration degree \(\geq i+1\). The element \(\psi\) is then defined as \[\psi:=\lim_{i\to\infty}\psi_{i};\] by completeness this limit converges and is a Maurer-Cartan element. It can further be shown that it satisfies the properties of a horn filler for \(\varphi\), which proves that \(\mathrm{MC}_{\bullet}(V)\) is a Kan complex. 

## 6 Comparison to other approaches and examples

Recently, in [8] another approach to the twisting procedure was described by using the gauge group. It is currently unclear how their twisting procedure compares to ours exactly. In both cases, the unitary operation plays an essential role, but it is not clear how the unital Hopf (co)operad condition compares to their unital extendability condition. However, the approach in this paper has multiple advantages compared to [8]. First of all, our constructions work over arbitrary rings and not just fields of characteristic \(0\) or rings that contain \(\mathbb{Q}\). Second, the constructions of this paper also apply to differential graded operads and not just operads defined by quadratic data. We finish this paper by showing that the Koszul dual of every unitary operad \(\mathcal{P}\) in simplicial sets admits a twisting procedure. By unitary in simplicial sets we mean \(\mathcal{P}(0)=*\). We further show that one can construct a Maurer-Cartan simplicial set for the Barratt-Eccles operad and its \(\mathcal{E}_{n}\)-suboperads.
**Theorem 6.1**.: _Let \(\mathcal{P}\) be a unitary operad in simplicial sets with finitely many non-degenerate simplices in each arity; then \(N^{*}(\mathcal{P})\) satisfies the conditions of Theorem 4.1 and therefore admits a twisting procedure._

Proof.: We need to show that \(N^{*}(\mathcal{P})\) is a unital Hopf cooperad and that there is a map of cooperads \(u\mathcal{COCOM}\to N^{*}(\mathcal{P})\). Since \(\mathcal{P}\) is an operad of finite type, the cochains on \(\mathcal{P}\) are naturally a cooperad. The Hopf structure comes from the chain level cup product; it follows from the properties of the cochain functor that the cup product is unital and compatible with the cooperad structure. Since \(u\mathcal{COM}\) is the terminal operad in simplicial sets, every operad \(\mathcal{P}\) comes equipped with a unique map \(\mathcal{P}\to u\mathcal{COM}\). The induced map on cochains is the map \(u\mathcal{COCOM}\to N^{*}(\mathcal{P})\) we need. 

The main example of a class of cooperads that admit a Maurer-Cartan simplicial set is given by the \(\mathcal{E}_{n}\)-subcooperads of the dual Barratt-Eccles operad. The Barratt-Eccles operad \(\mathcal{B}\mathcal{E}_{\infty}\) is an operad in simplicial sets which naturally acts on the chains and cochains of a simplicial set (see [2] for more details). It comes with a sequence of suboperads \[\mathcal{B}\mathcal{E}_{1}\hookrightarrow\mathcal{B}\mathcal{E}_{2}\hookrightarrow\mathcal{B}\mathcal{E}_{3}\hookrightarrow...\hookrightarrow\mathcal{B}\mathcal{E}_{\infty}, \tag{16}\] where each \(\mathcal{B}\mathcal{E}_{n}\) is a model for the little \(n\)-disks operad. We denote the normalized cochains on \(\mathcal{B}\mathcal{E}_{n}\) by \(\mathcal{E}_{n}:=N^{*}(\mathcal{B}\mathcal{E}_{n})\). Since the cochains are contravariant, the sequence of maps from Equation 16 induces a sequence of maps of cooperads \[\mathcal{E}_{\infty}\rightarrow...\rightarrow\mathcal{E}_{3}\rightarrow\mathcal{E}_{2}\rightarrow\mathcal{E}_{1}.\] So every cooperad \(\mathcal{E}_{n}\) comes with a map \(\mathcal{E}_{\infty}\rightarrow\mathcal{E}_{n}\) and therefore satisfies the conditions from Section 5. The Koszul dual of \(\mathcal{E}_{n}\) is given by \(\Omega\mathcal{E}_{n}\), and Fresse showed in [11] that \(\Omega\mathcal{E}_{n}\) is weakly equivalent to \(\Lambda^{-n}\mathcal{E}_{n}^{\vee}\), where \(\Lambda^{-n}\) denotes the \(n\)-fold operadic desuspension (note that we exchanged \(\mathcal{E}_{n}\) and \(\mathcal{E}_{n}^{\vee}\) from [11]). By the results of Lurie from [21], it turns out that these Maurer-Cartan simplicial sets control \(\mathcal{E}_{n}\)-deformation problems.
2303.09626
Spectral localizer for line-gapped non-hermitian systems
Short-ranged and line-gapped non-hermitian Hamiltonians have strong topological invariants given by an index of an associated Fredholm operator. It is shown how these invariants can be accessed via the signature of a suitable spectral localizer. This numerical technique is implemented in an example with relevance to the design of topological photonic systems, such as topological lasers.
Alexander Cerjan, Lars Koekenbier, Hermann Schulz-Baldes
2023-03-16T20:06:30Z
http://arxiv.org/abs/2303.09626v1
# Spectral localizer for line-gapped non-hermitian systems

###### Abstract

Short-ranged and line-gapped non-hermitian Hamiltonians have strong topological invariants given by an index of an associated Fredholm operator. It is shown how these invariants can be accessed via the signature of a suitable spectral localizer. This numerical technique is implemented in an example with relevance to the design of topological photonic systems, such as topological lasers.

## 1 Overview

In a series of recent works, Terry Loring and one of the authors [16, 17] proved that the integer-valued strong topological invariants of solid state systems can be computed as the signature of suitable finite-volume approximations of the so-called spectral localizer. Roughly stated, the localizer is the sum of the Dirac operator with the Hamiltonian as a topological mass term. This provides a very effective numerical tool for the local computation of these invariants. The technique has been extended to weak invariants [21], spin Chern numbers [7], \(\mathbb{Z}_{2}\)-invariants in presence of real symmetries [8] as well as to the detection of local topological data in semimetals [22] and metals [3, 5]. All of these works suppose that the Hamiltonian is selfadjoint. It is the purpose of this note to show that the spectral localizer can also be used in non-hermitian topological systems with a line-gap. While the spectral localizer was recently used to study a specific class of non-hermitian phenomena that can manifest in anomalous Floquet topological insulators [15], this approach still employed a selfadjoint spectral localizer. The literature on non-hermitian systems has grown very rapidly in the last years, as non-hermitian Hamiltonians are relevant for dissipative, bosonic and photonic systems, among others. There are numerous physics reviews available [18, 13, 6, 2, 1] that contain an abundance of further references.

Let us directly outline the construction of the non-hermitian spectral localizer and its main properties, focussing on bounded Hamiltonians \(H\) on a \(d\)-dimensional tight-binding Hilbert space \({\cal H}=\ell^{2}({\mathbb{Z}}^{d},{\mathbb{C}}^{L})\) with \(L\) internal degrees of freedom. The Hamiltonian is supposed to be short-range in the sense that there is an \(\alpha>d+2\) and a constant \(C\) such that \[\|\langle n|H|m\rangle\|\;\leq\;\frac{C}{1+|n-m|^{\alpha}}\;,\qquad n,m\in{\mathbb{Z}}^{d}\;,\;\;\alpha>d+2\;. \tag{1}\] The second main assumption is that \(H\) has a line-gap along the imaginary axis quantified by \[g\;=\;\inf_{s\in{\mathbb{R}}}\|(H^{s})^{-1}\|^{-1}\;,\] where \(H^{s}=H+\imath s{\bf 1}\). One can readily check that \(g>0\) if and only if \(H\) has no spectrum on the imaginary axis. If the resolvent set contains a different straight line, one can shift and rotate the Hamiltonian into the above standard form. The line-gap allows one to define a Riesz projection \(P=\oint_{\gamma}\frac{dz}{2\pi\imath}(z{\bf 1}-H)^{-1}\) onto the part of the spectrum with negative real part, by using any path \(\gamma\) encircling it. Even though \(P\) is merely an idempotent and not necessarily selfadjoint, it is possible that \(P\) contains topological content in the form of the so-called strong invariant. Let us introduce this invariant as an index of a Fredholm operator. Later on its connections with more widely used strong Chern numbers will be mentioned.
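As a concrete illustration (our own sketch, not from the paper): for a finite line-gapped matrix the Riesz projection \(P\) can be evaluated from the eigendecomposition instead of the contour integral, and one sees explicitly that it is idempotent but in general not selfadjoint.

```python
import numpy as np

# Toy non-normal matrix with a line-gap along the imaginary axis:
# eigenvalues 1 + 0.2i and -1 - 0.3i, coupled by an off-diagonal term.
H = np.array([[1.0 + 0.2j, 0.7],
              [0.0,       -1.0 - 0.3j]])

w, V = np.linalg.eig(H)
# Riesz projection onto the part of the spectrum with negative real part.
P = V @ np.diag((w.real < 0).astype(float)) @ np.linalg.inv(V)

assert np.allclose(P @ P, P)        # idempotent
print(np.allclose(P, P.conj().T))   # False: P is not selfadjoint here
```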
The index is introduced using the (dual) Dirac operator \[D\;=\;\sum_{j=1}^{d}\Gamma_{j}X_{j}\;,\] where \(\Gamma_{1},\ldots,\Gamma_{d}\) form an irreducible selfadjoint representation of the Clifford algebra with \(d\) generators and \(X_{1},\ldots,X_{d}\) are the selfadjoint position operators on \({\cal H}=\ell^{2}({\mathbb{Z}}^{d},{\mathbb{C}}^{L})\). The irreducible representation acts on \({\mathbb{C}}^{d^{\prime}}\) with \(d^{\prime}=2^{\lfloor\frac{d}{2}\rfloor}\) so that \(D\) acts on \({\cal H}\otimes{\mathbb{C}}^{d^{\prime}}\). Note that \(D\) has compact resolvent. In the case that \(d\) is even, there exists a selfadjoint unitary \(\Gamma=\Gamma_{d+1}\) anti-commuting with \(\Gamma_{1},\ldots,\Gamma_{d}\). In a suitable representation, \(\Gamma\) is diagonal and \(D\) off-diagonal: \[\Gamma\;=\;\begin{pmatrix}{\bf 1}&0\\ 0&-{\bf 1}\end{pmatrix}\;,\qquad D\;=\;\begin{pmatrix}0&D_{0}^{*}\\ D_{0}&0\end{pmatrix}\;.\] The Hamiltonian \(H\cong H\otimes{\bf 1}\) is naturally extended to \({\cal H}\otimes{\mathbb{C}}^{d^{\prime}}\). In Section 3 it will be shown that the short-range Hamiltonian leaves the domain of \(D\) invariant and that \([D,H]\) extends to a bounded operator. In other words [10, 9], a short-range Hamiltonian \(H\) is differentiable w.r.t. \(D\) and the Dirac operator \(D\) specifies a Fredholm module for \(H\) (or more precisely the algebra of polynomials in \(H\)) which is even/odd if \(d\) is even/odd. Let us focus on even \(d\). Then the Dirac phase is introduced as the unitary operator \(F_{0}=D_{0}|D_{0}|^{-1}\) (strictly speaking \(D_{0}\) has a \(d^{\prime}\)-dimensional kernel, but on this subspace \(F_{0}\) can simply be set to the identity). Then a modification of standard arguments discussed in Section 3 shows that the restriction \(PF_{0}P^{*}|_{\text{Ran}(P)}\) of \(F_{0}\) to the Hilbert space \(\text{Ran}(P)\) is a Fredholm operator. Its index is referred to as the even strong index pairing: \[\text{Ind}\big{(}PF_{0}P^{*}|_{\text{Ran}(P)}\big{)}\;.\] By construction, it is a homotopy invariant. Moreover, if \(H\) is periodic or, more generally, a homogeneous system, then an index theorem [20] shows that the index pairing is equal to the \(d\)th Chern number \(\mathrm{Ch}_{d}(P)\) which in turn is equal to the Chern number \(\mathrm{Ch}_{d}(Q)\) of the selfadjoint projection \(Q\) onto \(\mathrm{Ran}(P)\) (for the latter, see [19] or use the homotopy spelled out in Section 6). As already stated above, this paper is about a non-hermitian generalization of the spectral localizer and the focus will be on even dimension \(d\). For a tuning parameter \(\kappa>0\), the even non-hermitian spectral localizer is introduced by \[L_{\kappa}(H)\;=\;\begin{pmatrix}-H&\kappa D_{0}^{*}\\ \kappa D_{0}&H^{*}\end{pmatrix}\;. \tag{2}\] This operator acts on \(\mathcal{H}\otimes\mathbb{C}^{d^{\prime}}\) and is here written in the grading of \(\Gamma\). Note that for selfadjoint \(H\) this reduces to the even spectral localizer used in [17, 21]. Clearly one has \[L_{\kappa}(H^{s})\;=\;L_{\kappa}(H)\,-\,\imath\,s\,\mathbf{1}\;. \tag{3}\] This indicates that \(L_{\kappa}(H)\) may have a line-gap, a fact that can indeed be confirmed for \(\kappa\) sufficiently small (see Theorem 1 below).
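In dimension \(d=2\) the construction is easy to set up numerically. The following sketch is ours (conventions as in Section 2 below, where \(D_{0}\) is built from the position operators via \(\Gamma_{1}=\sigma_{1}\), \(\Gamma_{2}=\sigma_{2}\)) and assembles \(L_{\kappa}(H)\) from finite matrices for \(H\) and the positions:

```python
import numpy as np

def localizer(H, X1, X2, kappa):
    """Even non-hermitian spectral localizer of eq. (2) for d = 2,
    with the standard choice D0 = X1 + i X2."""
    D0 = X1 + 1j * X2
    return np.block([[-H,          kappa * D0.conj().T],
                     [kappa * D0,  H.conj().T]])
```

The half-signature of Theorem 1 below is then read off from the signs of the real parts of the eigenvalues of this matrix; a complete toy example follows after the theorem.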
Next let us introduce finite-volume approximations, just as in prior works. Let \((\mathcal{H}\oplus\mathcal{H})_{\rho}\) be the range of the finite-dimensional projection \(\chi(|D|\leq\rho)\) and let \(\pi_{\rho}:\mathcal{H}\oplus\mathcal{H}\to(\mathcal{H}\oplus\mathcal{H})_{\rho}\) be the associated surjective partial isometry. Note that \(\mathbf{1}_{\rho}=\pi_{\rho}\pi_{\rho}^{*}\) is then the identity on \((\mathcal{H}\oplus\mathcal{H})_{\rho}\). For any operator \(A\) acting on \(\mathcal{H}\oplus\mathcal{H}\) denote its compression to \((\mathcal{H}\oplus\mathcal{H})_{\rho}\) by \(A_{\rho}=\pi_{\rho}A\pi_{\rho}^{*}\). The finite-volume non-hermitian spectral localizer is then given by \(L_{\kappa}(H)_{\rho}\) and denoted \(L_{\kappa,\rho}(H)=L_{\kappa}(H)_{\rho}\).

**Theorem 1**: _Suppose that \(H\) is short range and set \(N=\max\{\|[D,H]\|,\|[|D|,H]\|\}<\infty\) where \(H\cong H\otimes\mathbf{1}\) and \(|D|=(D^{*}D)^{\frac{1}{2}}\) is the absolute value of the Dirac operator. If_
\[\kappa\;\leq\;c_{\kappa}\,\frac{g^{3}}{\|H\|\,N}\qquad\text{and}\qquad c_{\rho}\,\frac{g}{\kappa}\Big{(}1+\frac{\|\Im m(H)\|}{g}\Big{)}\;\leq\;\rho\;, \tag{4}\]
_for \(c_{\kappa}=\frac{1}{12}\) and \(c_{\rho}=6\), then \(L_{\kappa,\rho}(H)\) has a quantitative line-gap on the imaginary axis in the sense that, for all \(s\in\mathbb{R}\),_
\[L_{\kappa,\rho}(H^{s})^{*}L_{\kappa,\rho}(H^{s})\;\geq\;\frac{g^{2}}{4}\,\mathbf{1}_{\rho} \tag{5}\]
_and_
\[\mathrm{Ind}\big{(}PF_{0}P^{*}|_{\mathrm{Ran}(P)}\big{)}\;=\;\frac{1}{2}\,\mathrm{Sig}(L_{\kappa,\rho}(H))\;, \tag{6}\]
_where here the signature denotes the difference of the joint algebraic multiplicities of eigenvalues with positive and negative real parts._

Let us make a few comments. First of all, compared with earlier works the second bound in (4) has a supplementary factor \(1+\frac{\|\Im m(H)\|}{g}\) which is needed to control the non-hermitian part of the localizer. It is not needed for the proof of the bound (5) in Section 4, but merely for the proof of the constancy of the signature in Section 5. Numerical implementation shows that (4) is far from optimal, and indeed in applications one rather verifies that the line-gap of \(L_{\kappa,\rho}(H)\) is open before confidently using its signature. Let us also stress that the supplementary factor does not alter the invariance of the two bounds (4) under the scaling \(H\mapsto\lambda H\) which implies \(g\mapsto\lambda g\) and \(\kappa\mapsto\lambda\kappa\), so that the condition on \(\rho\) remains unchanged. As in all prior works, the constants \(c_{\kappa}\) and \(c_{\rho}\) in (4) are not optimal, but rather a result of the method of proof and the choices made in the proof. Second of all, it is, in general, _not_ sufficient to compute the spectrum of the real part \(\Re e(L_{\kappa,\rho}(H))=\frac{1}{2}(L_{\kappa,\rho}(H)+L_{\kappa,\rho}(H)^{*})\) because \(H\) may be non-normal. However, as in applications one typically only needs to consider relatively small \(\rho\) and thus relatively small non-hermitian matrices \(L_{\kappa,\rho}(H)\), this is not really a limitation, as the examples in Section 2 show. Third of all, let us mention that Appendix A describes two efficient techniques to access the signature, one via spectral flow and one by a Routh-Hurwitz theorem. Finally, let us note that in the earlier works [17, 9] only the constant \(\|[D,H]\|\) entered the bounds, while here also the norm of the commutator \([|D|,H]\) is of relevance. Its boundedness can also be shown if \(H\) satisfies the short-range condition (1), see Section 3. The Fredholm module is then referred to as Lipschitz regular. An alternative way to guarantee the Lipschitz regularity automatically is to replace the Dirac operator \(D\) by \(D(\mathbf{1}+D^{2})^{-\beta}\) for some \(\beta>0\)[12, 23]. The index pairing remains unchanged during the homotopy \(\beta^{\prime}\in[0,\beta]\mapsto D(\mathbf{1}+D^{2})^{-\beta^{\prime}}\). Clearly, also the signature in (6) does not change as long as \(\beta\) is sufficiently small.
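To see Theorem 1 in action, here is a self-contained toy computation (our own example, not from the paper): a Qi-Wu-Zhang Chern insulator on an \(N\times N\) patch of \(\mathbb{Z}^{2}\) with a small balanced gain/loss term, fed into the `localizer` helper sketched above. Under these assumptions the half-signature comes out as \(\pm 1\), the sign depending on orientation conventions.

```python
import numpy as np

N, m, mu, kappa = 12, 1.0, 0.1, 0.5
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

sites = [(x, y) for x in range(N) for y in range(N)]
idx = {site: i for i, site in enumerate(sites)}
dim = 2 * N * N
H = np.zeros((dim, dim), dtype=complex)
for (x, y), i in idx.items():
    # On-site mass plus a small anti-hermitian term that makes H genuinely
    # non-hermitian while (for small mu) keeping the line-gap open.
    H[2*i:2*i+2, 2*i:2*i+2] = (m + 1j * mu) * s3
    for dx, dy, s in ((1, 0, s1), (0, 1, s2)):
        j = idx.get((x + dx, y + dy))
        if j is not None:
            hop = (s3 + 1j * s) / 2   # QWZ hopping in the +x / +y direction
            H[2*j:2*j+2, 2*i:2*i+2] += hop
            H[2*i:2*i+2, 2*j:2*j+2] += hop.conj().T

# Position operators relative to the center of the patch.
X1 = np.kron(np.diag([x - (N - 1) / 2 for (x, y) in sites]), np.eye(2))
X2 = np.kron(np.diag([y - (N - 1) / 2 for (x, y) in sites]), np.eye(2))

ev = np.linalg.eigvals(localizer(H, X1, X2, kappa))
print("localizer line-gap:", np.min(np.abs(ev.real)))
print("half-signature:", (np.sum(ev.real > 0) - np.sum(ev.real < 0)) // 2)
```

Note that no truncation radius \(\rho\) is needed here, since the patch is already finite; the edge modes of the patch do not close the localizer gap probed at its center.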
Up to now, only the case of even dimension \(d\) was considered. For odd \(d\) and hermitian systems, a strong topological invariant is only defined if \(H\) has a chiral symmetry of the form \(JHJ=-H\) where \(J=J^{*}=J^{-1}\). Then there are odd index pairings and odd Chern numbers [20] which can be computed with an odd spectral localizer [16]. In Section 7 it will be explained that this story directly transposes to the study of non-hermitian line-gapped chiral Hamiltonians.

## 2 Numerical implementation

To provide an explicit example of the utility of the non-hermitian generalization of the spectral localizer, let us consider a finite heterostructure comprised of two lattices in different topological phases. More specifically, suppose given a Haldane model over a bi-partite honeycomb lattice \(\Gamma=\Gamma_{A}\cup\Gamma_{B}\)[11], whose tight-binding model is \[H = \sum_{n_{A},n_{B}}\,\big{(}M\,|n_{A}\rangle\langle n_{A}|\,-\,M\,|n_{B}\rangle\langle n_{B}|\big{)}\,-\,t\sum_{\langle n_{A},m_{B}\rangle}\big{(}|n_{A}\rangle\langle m_{B}|\,+\,|m_{B}\rangle\langle n_{A}|\big{)} \tag{7}\] \[-\,t_{c}\sum_{\alpha=A,B}\sum_{\langle\!\langle n_{\alpha},m_{\alpha}\rangle\!\rangle}\big{(}e^{\imath\phi(n_{\alpha},m_{\alpha})}\,|n_{\alpha}\rangle\langle m_{\alpha}|\,+\,e^{-\imath\phi(n_{\alpha},m_{\alpha})}\,|m_{\alpha}\rangle\langle n_{\alpha}|\big{)}\] Here the first sum runs over all sites in the lattice and is a staggered potential giving the \(A\) and \(B\) lattices opposite on-site energies \(M\) and \(-M\), the second sum is a kinetic energy with nearest neighbor coupling coefficient \(t\), and the third sum is over next-nearest-neighbor pairs and has a direction-dependent phase factor that breaks time-reversal symmetry with a periodic magnetic field, namely \(\phi(n_{\alpha},m_{\alpha})=\pm\phi\) with a geometrically chosen sign [11]. The Hamiltonian is known to have a spectral gap at \(0\) with a topological Fermi projection \(P\) for \(M\ll t_{c}\), and it is a topologically trivial insulator for \(t_{c}\ll M\) (see [11] for the phase diagram). Furthermore, the model can be made lossy with absorption strength \(\mu\) if \(M\) is replaced by \(M\mp\imath\mu\) on the \(A\) and \(B\) sublattices respectively. Altogether, the heterostructure is made up of a topological Haldane model in the central part, surrounded first by a ring of trivial insulator and then a ring of a lossy trivial insulator, see Fig. 1(a). The choice of loss distribution around the lattice's perimeter is guided by analogy to photonic systems, as such systems are one of the most common platforms where non-hermiticity can manifest in topological materials characterized by line-gaps [18, 2, 4]. Unlike electronic systems, for which free space is a trivial insulator, many photonic systems will radiate into their surrounding free-space environment. This radiation can be accounted for by surrounding a region of interest with an absorbing boundary condition, such as perfectly matched layers [25], which necessarily makes the full system non-hermitian.
Heuristically, the purpose of the absorbing boundary condition is to replicate the infinite extent of the environment in a finite simulation domain without introducing spurious reflections. The local density of states (LDOS) of the heterostructure at energy \(E=0\) is shown in Fig. 1(b) and the complete spectrum in Fig. 1(c), for parameter values as described there. Note that essentially the only eigenvalues with very small imaginary part are the surface states in the topological central part, as they are separated from the lossy region by the trivial insulator which has an energy gap at \(E=0\). The different local topologies can be identified in the finite non-hermitian heterostructure using the local topological invariant (local marker) given in (6) with a position shift \(x,y\) of the Dirac operator, namely by the half-signature of \[L_{\kappa}(H,x,y)\;=\;\begin{pmatrix}-H&\kappa\big{(}(X-x)-\imath(Y-y)\big{)}\\ \kappa\big{(}(X-x)+\imath(Y-y)\big{)}&H^{*}\end{pmatrix}\;,\] where \(X\) and \(Y\) are the two position operators (denoted by \(X_{1}\) and \(X_{2}\) above) and there is no finite-volume restriction as all matrices are already finite here. The size of the line-gap of \(L_{\kappa}(H,x,y)\) at \(\Re e(E)=0\) is shown in Fig. 1(d) and the value of the half-signature as defined in Theorem 1 in Fig. 1(e). This is computed by the spectral flow method described in Appendix A by using the path \(t\in[0,T]\mapsto L_{\kappa}(H,x+t,y)\) and the fact that \(\text{Sig}(L_{\kappa}(H,x+T,y))=0\) for sufficiently large \(T\), say such that \(x+T\) lies outside of the boundary of the heterostructure. An example of a spectral flow diagram for the real part of the spectrum is given in Fig. 1(f), where the eigenvalue responsible for the signature change is readily visible. To complement the picture, Fig. 1(g), (h) and (i) show the full complex spectrum of \(L_{\kappa}(H,x,y)\) for three different values of \(x\). Here Fig. 1(g) and (i) correspond to the exterior and central regions where one clearly sees the line-gap at \(\Re e(E)=0\), which corresponds to part of the statement of Theorem 1 for the trivial and topological insulator respectively. Finally let us note that, as expected, in Fig. 1(e) the local invariant changes near the interface between the two lattices due to the presence of the chiral interface-localized states visible in Fig. 1(b).

Figure 1: (a) Diagram of the tight-binding heterostructure consisting of a topological insulating lattice in the center surrounded by a trivial insulator whose perimeter contains loss. For the topological insulator, \(M=0\), \(t_{c}=0.5\), and \(\phi=\pm\frac{\pi}{2}\). For the trivial insulator, \(M=0.5\sqrt{3}\) and \(t_{c}=0\). Both lattices have \(t=1\). The black vertices are lossless, while the gray vertices have \(\mu=0.2\). (b) Local density of states for this heterostructure at \(E=0\). (c) Full complex spectrum of the heterostructure. (d) The localizer gap given by the smallest of the absolute values of the real parts of the eigenvalues of \(L_{\kappa}(H,x,y)\), namely \(\min|\Re e(\sigma(L_{\kappa}(H,x,y)))|\). (e) Spatially resolved local index. The red region shows where the index is non-trivial and equal to 1. (f) Real part of the spectral flow of \(L_{\kappa,\rho}(H,x,0)\) as a function of position in \(x\). The eigenvalue responsible for the index change is highlighted in teal. (g),(h),(i) Full complex spectrum of \(L_{\kappa}(H,x,0)\) for three different choices of \(x\); the choices of \(x\) are indicated by orange dashed lines in (f).
Again, the eigenvalue responsible for the change in index is shown in teal. The scales of (a),(b),(d),(e) and (f) are the same, and \(\kappa=0.1\) for all spectral localizer calculations.

## 3 Fredholm properties

**Lemma 2**: _If \(H\) satisfies the short-range condition (1), then \(H\) leaves the domain of \(D\) invariant and the commutators \([D,H]\) and \([|D|,H]\) extend to bounded operators._

**Proof.** As \(D^{2}=\sum_{j=1}^{d}X_{j}^{2}=X^{2}\), its domain is \({\cal D}(D)=\{\psi\in{\cal H}\otimes{\mathbb{C}}^{d^{\prime}}\::\:\sum_{n\in{\mathbb{Z}}^{d}}|n|^{2}\|\psi_{n}\|^{2}<\infty\}\). Now \[\sum_{n\in{\mathbb{Z}}^{d}}|n|^{2}\|(H\psi)_{n}\|^{2} = \sum_{n,m,k\in{\mathbb{Z}}^{d}}\psi_{k}^{*}\langle k|H^{*}|n\rangle\,|n|^{2}\,\langle n|H|m\rangle\,\psi_{m}\] \[\leq \sum_{n,m,k\in{\mathbb{Z}}^{d}}\|\psi_{k}\|\,\frac{C}{1+|n-k|^{\alpha}}\,|n|^{2}\,\frac{C}{1+|n-m|^{\alpha}}\,\|\psi_{m}\|\] \[\leq \sum_{n,m,k\in{\mathbb{Z}}^{d}}\|\psi_{k}\|^{2}\,\frac{C}{1+|n-k|^{\alpha}}\,|n|^{2}\,\frac{C}{1+|n-m|^{\alpha}}\] \[\leq \sum_{k\in{\mathbb{Z}}^{d}}|k|^{2}\|\psi_{k}\|^{2}\,\sup_{k^{\prime}\in{\mathbb{Z}}^{d}}\frac{1}{1+|k^{\prime}|^{2}}\sum_{n,m\in{\mathbb{Z}}^{d}}\,\frac{C}{1+|n-k^{\prime}|^{\alpha}}\,|n|^{2}\,\frac{C}{1+|n-m|^{\alpha}}\] \[\leq \Big{(}\sum_{k\in{\mathbb{Z}}^{d}}|k|^{2}\|\psi_{k}\|^{2}\Big{)}\sup_{k^{\prime}\in{\mathbb{Z}}^{d}}\frac{1}{1+|k^{\prime}|^{2}}\sum_{n\in{\mathbb{Z}}^{d}}\,\frac{C^{\prime}(|n-k^{\prime}|^{2}+|k^{\prime}|^{2})}{1+|n-k^{\prime}|^{\alpha}}\;,\] which is bounded for \(\psi\in{\cal D}(D)\) as \(\alpha-2>d\). Hence \(H\psi\in{\cal D}(D)\). Next note that \[\langle n|[D,H]|m\rangle\;=\;D(n)\,\langle n|H|m\rangle\,-\,\langle n|H|m\rangle\,D(m)\;=\;D(n-m)\,\langle n|H|m\rangle\;,\] where \(D(n)=\sum_{j=1}^{d}n_{j}\Gamma_{j}=\langle n|D|n\rangle\). One has \(\|D(n-m)\|\leq\sqrt{d}\,|n-m|\) by the Cauchy-Schwarz inequality. Furthermore, it was used that \(H\cong H\otimes{\bf 1}\) commutes with the \(\Gamma_{j}\)'s. Estimating the norm using Holmgren's bound (which contains the maximum of two expressions, but they are bounded in the same manner) gives \[\|[D,H]\|\;\leq\;\sup_{n\in{\mathbb{Z}}^{d}}\sum_{m\in{\mathbb{Z}}^{d}}\|D(n-m)\|\,\|\langle n|H|m\rangle\|\;\leq\;\sup_{n\in{\mathbb{Z}}^{d}}\sum_{m\in{\mathbb{Z}}^{d}}\sqrt{d}\,|n-m|\,\frac{C}{1+|n-m|^{\alpha}}\;,\] which is bounded because \(\alpha>d+1\). In order to bound the second commutator, let us set \(F=D|D|^{-1}\) and use \[[|D|,H]\;=\;[F^{*}D,H]\;=\;[F^{*},H]D\,+\,F^{*}[D,H]\;.\] As \(F\) is unitary, it is hence sufficient to show that \([F^{*},H]D\) extends to a bounded operator. Let us write out the matrix elements using \(F(n)=\langle n|F|n\rangle\): \[\langle n|[F^{*},H]D|m\rangle\;=\;(F(n)^{*}-F(m)^{*})D(m)\,\langle n|H|m\rangle\;.\] Next let us note the bound \[\|F(n)-F(m)\|\;\leq\;\sqrt{d}\,\big{|}\tfrac{n}{|n|}-\tfrac{m}{|m|}\big{|}\;\leq\;2\,\sqrt{d}\;|n-m|\;\min\!\big{\{}\tfrac{1}{|n|},\tfrac{1}{|m|}\big{\}}\;,\] which can be checked using the Cauchy-Schwarz inequality as above. Hence again appealing to Holmgren's bound gives \[\|[F^{*},H]D\| \leq \sup_{n\in{\mathbb{Z}}^{d}}\sum_{m\in{\mathbb{Z}}^{d}}\|F(n)-F(m)\|\,\|D(m)\|\,\|\langle n|H|m\rangle\|\] \[\leq \sup_{n\in{\mathbb{Z}}^{d}}\sum_{m\in{\mathbb{Z}}^{d}}2\,d\;|n-m|\;\min\{\tfrac{1}{|n|},\tfrac{1}{|m|}\}\,|m|\,\frac{C}{1+|n-m|^{\alpha}}\;,\] which is bounded as can be seen by splitting the sum into \(|m|<|n|\) and \(|m|\geq|n|\).
\(\Box\)

**Corollary 3**: _If \(H\) satisfies the short-range condition (1), then the commutators \([P,F_{0}]\) and \([P^{*},F_{0}]\) are compact._

**Proof.** Given the results of Lemma 2, the compactness of \([P,F_{0}]\) follows directly from the standard arguments (_e.g._ Theorem 10.1.4 in [9], which at no point depends on the selfadjointness of \(H\); note that the \(F\) there is denoted by \(F_{0}\) here). \(\Box\)

Now let us construct Fredholm operators from \(P\) and \(F_{0}\). For this purpose, let us set \[R\;=\;P({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}}\;,\] which exists as \(-(P^{*}-P)^{2}=|P^{*}-P|^{2}\geq 0\). One furthermore readily checks that \([P,(P^{*}-P)^{2}]=0\) so that \((P-P^{*})^{2}\) and functions thereof leave \({\rm Ran}(P)\) and \({\rm Ran}(P^{*})\) invariant. Then the (orthogonal) projection \(Q\) onto the range of \(P\) is given by \[Q\;=\;RR^{*}\;=\;P({\bf 1}+(P^{*}-P)(P-P^{*}))^{-1}P^{*}\;=\;P(P^{*}P)^{-1}P^{*}\;.\]

**Proposition 4**: _If \(H\) satisfies the short-range condition (1), then \(RF_{0}R^{*}+({\bf 1}-RR^{*})\) and \(PF_{0}P^{*}|_{{\rm Ran}(P)}\) are Fredholm operators and their indices are equal._

**Proof.** First let us note that \[[R,F_{0}]\;=\;[P,F_{0}]({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}}\,+\,P[({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}},F_{0}]\;.\] The first summand is compact by Corollary 3. To verify the compactness of the second summand, one can use the norm convergent Riemann integral \[({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}}\;=\;\frac{1}{\pi}\int_{0}^{\infty}\frac{d\lambda}{\lambda^{\frac{1}{2}}}\,(\lambda+{\bf 1}-(P^{*}-P)^{2})^{-1}\;,\] which shows that \[[({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}},F_{0}]\;=\;\frac{1}{\pi}\int_{0}^{\infty}\frac{d\lambda}{\lambda^{\frac{1}{2}}}\,(\lambda+{\bf 1}-(P^{*}-P)^{2})^{-1}[(P^{*}-P)^{2},F_{0}](\lambda+{\bf 1}-(P^{*}-P)^{2})^{-1}\] is also compact, again by Corollary 3. Now let us set \(T=RF_{0}R^{*}+({\bf 1}-RR^{*})\). Then \[T^{*}T = RF_{0}^{*}R^{*}RF_{0}R^{*}+({\bf 1}-RR^{*})\] \[= RR^{*}F_{0}^{*}RF_{0}R^{*}+R[F_{0}^{*},R^{*}]RF_{0}R^{*}+({\bf 1}-RR^{*})\] \[= RR^{*}F_{0}^{*}F_{0}RR^{*}+RR^{*}F_{0}^{*}[R,F_{0}]R^{*}+R[R,F_{0}]^{*}RF_{0}R^{*}+({\bf 1}-RR^{*})\] \[= {\bf 1}+QF_{0}^{*}[R,F_{0}]R^{*}+R[R,F_{0}]^{*}RF_{0}R^{*}\;.\] As the last two summands are compact, this implies the desired Fredholm property of \(T\). As the two summands in \(T\) are orthogonal and one is trivial (given by \({\bf 1}-RR^{*}={\bf 1}-Q\) with vanishing index), one concludes that also \(RF_{0}R^{*}|_{{\rm Ran}(Q)}\) is Fredholm with the same index as \(T\). Furthermore, \[{\rm Ind}\big{(}RF_{0}R^{*}+({\bf 1}-RR^{*})\big{)} =\ {\rm Ind}\big{(}RF_{0}R^{*}|_{{\rm Ran}(Q)}\big{)}\] \[=\ {\rm Ind}\big{(}P({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}}F_{0}({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}}P^{*}|_{{\rm Ran}(P)}\big{)}\] \[=\ {\rm Ind}\big{(}({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}}PF_{0}P^{*}({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}}|_{{\rm Ran}(P)}\big{)}\] \[=\ {\rm Ind}\big{(}PF_{0}P^{*}|_{{\rm Ran}(P)}\big{)}\;,\] because \(({\bf 1}-(P^{*}-P)^{2})^{-\frac{1}{2}}\) is invertible and leaves \({\rm Ran}(P)\) and \({\rm Ran}(P^{*})\) invariant. This proves the claim. \(\Box\)

## 4 Line-gap of the spectral localizer

This section is entirely devoted to the proof of (5) under the condition that (4) holds. While the strategy is similar to earlier arguments [17, 9], there are some novel difficulties linked to the non-hermitian nature of the Hamiltonian and the spectral localizer that we hope to address clearly in this section.
For this reason we merely restrict to the proof of (5), even though the very same strategy will be expanded (and thus to some extent repeated) in the proof of the constancy of \({\rm Sig}(L_{\kappa,\rho}(H))\) in Section 5. Let us start from \[L_{\kappa,\rho}(H^{s})^{*}L_{\kappa,\rho}(H^{s}) =(L_{\kappa,\rho}(H)-\imath\,s\,{\bf 1}_{\rho})^{*}(L_{\kappa,\rho}(H)-\imath\,s\,{\bf 1}_{\rho})\] \[=L_{\kappa,\rho}(H)^{*}L_{\kappa,\rho}(H)\,+\,s^{2}{\bf 1}_{\rho}\,-\,2s\,\Im m(L_{\kappa,\rho}(H))\] \[=L_{\kappa,\rho}(H)^{*}L_{\kappa,\rho}(H)\,+\,s^{2}{\bf 1}_{\rho}\,-\,2s\,(\Im m(-H_{\rho})\oplus\Im m(H^{*}_{\rho}))\;,\] where \(\Im m(A)=\frac{1}{2\imath}(A-A^{*})\) is the imaginary part of the operator \(A\). Hence one has for all \(s\) \[L_{\kappa,\rho}(H^{s})^{*}L_{\kappa,\rho}(H^{s})\;\geq\;L_{\kappa,\rho}(H)^{*}L_{\kappa,\rho}(H)+(s^{2}\,-\,2\,\|\Im m(H)\|\cdot|s|){\bf 1}_{\rho}\;. \tag{8}\] Note that \(s^{2}-2\,\|\Im m(H)\|\cdot|s|\geq 0\) for all \(s\) with \(|s|\geq 2\,\|\Im m(H)\|\). Thus for the proof of (5) it is sufficient to show that, for all \(|s|\leq 2\,\|\Im m(H)\|\), \[L_{\kappa,\rho}(H^{s})^{*}L_{\kappa,\rho}(H^{s})\;\geq\;\tfrac{g^{2}}{4}\,{\bf 1}_{\rho}\;. \tag{9}\] Multiplying out, one finds \[L_{\kappa,\rho}(H^{s})^{*}L_{\kappa,\rho}(H^{s})\] \[=\;\kappa^{2}\pi_{\rho}D^{2}\pi^{*}_{\rho}\;+\;\pi_{\rho}((-H^{s})^{*}\oplus H^{s}){\bf 1}_{\rho}((-H^{s})\oplus(H^{s})^{*})\pi^{*}_{\rho}\;+\;\kappa\pi_{\rho}\begin{pmatrix}0&[H,D_{0}]^{*}\\ [H,D_{0}]&0\end{pmatrix}\pi^{*}_{\rho}\;,\] where the last step is based on the algebraic identity \[D((-H^{s})\oplus(H^{s})^{*})+((-H^{s})^{*}\oplus H^{s})D\;=\;\begin{pmatrix}0&[H,D_{0}]^{*}\\ [H,D_{0}]&0\end{pmatrix}\;.\] The first two summands in \(|L_{\kappa,\rho}(H^{s})|^{2}\) are non-negative and on each a quantitative (positive) lower bound will be proved below such that the sum of the two is strictly positive; the third summand will then be shown to be a perturbation that does not spoil the positivity. For that purpose, let us use an even differentiable function \(G_{\rho}\colon\mathbb{R}\to[0,1]\) constructed in references [17, 9] which satisfies \(G_{\rho}(x)=1\) for all \(|x|\leq\frac{1}{2}\rho\) and \(G_{\rho}(x)=0\) for all \(|x|\geq\rho\), and for which, moreover, the Fourier transform \(\widehat{G^{\prime}_{\rho}}\colon\mathbb{R}\to\mathbb{R}\) of the derivative \(G^{\prime}_{\rho}\) has an \(L^{1}\)-norm bounded by \(8\rho^{-1}\). Then (by Lemma 10.15 in [10]) one has for all selfadjoint operators \(A\) and bounded operators \(B\) \[\|[G_{\rho}(A),B]\|\;\leq\;\tfrac{8}{\rho}\;\|[A,B]\|\;. \tag{10}\] With this function, one can bound the first summand by showing \[\kappa^{2}\pi_{\rho}D^{2}\pi_{\rho}^{*}\;\geq\;g^{2}\pi_{\rho}(\mathbf{1}-G_{\rho}(D)^{2})\pi_{\rho}^{*}\;.\] Indeed, using a rough version of the second hypothesis in (4), one has \(\kappa^{2}\geq g^{2}(\tfrac{1}{2}\rho)^{-2}\) so that \(G_{\rho}\) satisfies for \(x\in[\tfrac{1}{2}\rho,\rho]\): \[\kappa^{2}x^{2}\;\geq\;g^{2}(\tfrac{1}{2}\rho)^{-2}x^{2}\;\geq\;g^{2}\;\geq\;g^{2}(\mathbf{1}-G_{\rho}(x)^{2})\;,\] since \(0\leq G_{\rho}(x)\leq 1\). On the other hand, for \(x\in[0,\tfrac{1}{2}\rho]\) the bound holds trivially since there \(\mathbf{1}-G_{\rho}(x)^{2}=0\).
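As a concrete sanity check of this pointwise inequality, the following minimal numerical sketch (ours, not part of the argument) verifies \(\kappa^{2}x^{2}\geq g^{2}(1-G_{\rho}(x)^{2})\) on a grid. A smoothstep polynomial serves as a stand-in for the taper \(G_{\rho}\) of [17, 9]; it shares the plateau and support properties but is not the specific function with the Fourier bound entering (10). The values of \(g\) and \(\rho\) are arbitrary, with \(\kappa=2g/\rho\) saturating the rough form of the second hypothesis in (4).

```python
import numpy as np

def G(x, rho):
    # Illustrative taper: equals 1 for |x| <= rho/2 and 0 for |x| >= rho,
    # with a C^1 smoothstep interpolation in between. It is a stand-in for
    # the G_rho of the cited references, not that specific function.
    u = np.clip((np.abs(x) - rho / 2) / (rho / 2), 0.0, 1.0)
    return 1.0 - (3 * u**2 - 2 * u**3)

rho, g = 10.0, 1.0
kappa = 2 * g / rho   # rough form of the second hypothesis in (4)

x = np.linspace(0.0, rho, 2001)
# kappa^2 x^2 >= g^2 (1 - G(x)^2) holds on the whole grid
assert np.all(kappa**2 * x**2 >= g**2 * (1.0 - G(x, rho)**2) - 1e-12)
```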
In the second summand, one uses the lower bound \(\mathbf{1}_{\rho}\geq G_{\rho}(D)^{2}\) implying \[|(-H^{s}_{\rho})\oplus(H^{s}_{\rho})^{*}|^{2} \geq \pi_{\rho}((-H^{s})^{*}\oplus H^{s})G_{\rho}(D)^{2}((-H^{s})\oplus(H^{s})^{*})\pi_{\rho}^{*}\] \[= \pi_{\rho}G_{\rho}(D)|(-H^{s})^{*}\oplus H^{s}|^{2}G_{\rho}(D)\pi_{\rho}^{*}\] \[+\;\pi_{\rho}((-H^{s})^{*}\oplus H^{s})G_{\rho}(D)[G_{\rho}(D),((-H^{s})\oplus(H^{s})^{*})]\pi_{\rho}^{*}\] \[+\;\pi_{\rho}[((-H^{s})^{*}\oplus H^{s}),G_{\rho}(D)]((-H^{s})\oplus(H^{s})^{*})G_{\rho}(D)\pi_{\rho}^{*}\;.\] Here the first summand can be bounded below by \(|(-H^{s})^{*}\oplus H^{s}|^{2}\geq g^{2}\,\mathbf{1}\), using the line-gap. Collecting the above lower bounds shows \[L_{\kappa,\rho}(H^{s})^{*}L_{\kappa,\rho}(H^{s})\;\geq\;g^{2}\mathbf{1}_{\rho}\,+\,E\;,\] with an error term given by \[E\;= \kappa\pi_{\rho}\begin{pmatrix}0&[H,D_{0}]^{*}\\ [H,D_{0}]&0\end{pmatrix}\pi_{\rho}^{*}\] \[\;+\;\pi_{\rho}((-H^{s})^{*}\oplus H^{s})G_{\rho}(D)[G_{\rho}(D),((-H^{s})\oplus(H^{s})^{*})]\pi_{\rho}^{*}\] \[\;+\;\pi_{\rho}[((-H^{s})^{*}\oplus H^{s}),G_{\rho}(D)]((-H^{s})\oplus(H^{s})^{*})G_{\rho}(D)\pi_{\rho}^{*}\;.\] Note that \(G_{\rho}\) is an even function and \(|D|^{2}=D^{2}\), so that one can replace \(G_{\rho}(D)=G_{\rho}(|D|)\) which is diagonal in the \(2\times 2\) grading. Hence \[[G_{\rho}(D),((-H^{s})\oplus(H^{s})^{*})] = [G_{\rho}(|D_{0}|),(-H^{s})]\oplus[G_{\rho}(|D_{0}^{*}|),(H^{s})^{*}]\] \[= [H,G_{\rho}(|D_{0}|)]\oplus[H,G_{\rho}(|D_{0}^{*}|)]^{*}\;.\] (Note that for the particular choice of \(D\) made here one actually has \(|D_{0}^{*}|=|D_{0}|\).) Therefore using \(\|G_{\rho}(D)\|\leq 1\) and then (10), one has \[\|E\| \leq\ \kappa\,\|[H,D_{0}]\|\ +\ 2\,\|H^{s}\|\,\max\big{\{}\|[H,G_{\rho}(|D_{0}|)]\|,\|[H,G_{\rho}(|D_{0}^{*}|)]\|\big{\}}\] \[\leq\ \kappa\,\|[H,D_{0}]\|\ +\ \tfrac{16}{\rho}\,\|H^{s}\|\,\max\big{\{}\|[H,|D_{0}|]\|,\|[H,|D_{0}^{*}|]\|\big{\}}\;.\] Finally let us use \(\|H^{s}\|\leq\|H\|+|s|\leq\|H\|+2\,\|\Im m(H)\|\leq 3\,\|H\|\) (note that the factor \(3\) can be omitted if \(H\) is selfadjoint, improving the bound below). Then using the quantity \(N\) introduced in the statement of Theorem 1 and the bound \(\tfrac{1}{\rho}\leq\tfrac{\kappa}{c_{\rho}g}\) following from (4), one deduces \[\|E\|\,\leq\,\big{(}\kappa+3\,\|H\|\,\tfrac{16}{\rho}\big{)}N\,\leq\,\kappa\big{(}1+\tfrac{\|H\|}{g}\tfrac{48}{c_{\rho}}\big{)}N\,\leq\,\kappa N\tfrac{\|H\|}{g}\big{(}1+\tfrac{48}{c_{\rho}}\big{)}\,\leq\,c_{\kappa}\big{(}1+\tfrac{48}{c_{\rho}}\big{)}g^{2}\;, \tag{11}\] due to \(\|H\|\geq g\) and (4). Now \(c_{\kappa}(1+\tfrac{48}{c_{\rho}})=\tfrac{3}{4}\), so combining with the above, one deduces (9) for all \(|s|\leq 2\,\|\Im m(H)\|\).

## 5 Constancy of the signature

It is the object of this section to prove that the signature \(\operatorname{Sig}(L_{\kappa,\rho}(H))\) does not change with \(\kappa\) and \(\rho\), as long as the bounds (4) hold. For changes in \(\kappa\), this follows directly from the results of Section 4; changing \(\rho\), on the other hand, means changing the size of the matrix, which is not a continuous procedure. To address this issue, it will be shown as in [17, 9] that the Hamiltonian can be tapered off away from the origin without changing the signature. Once the corresponding path of tapered spectral localizers is constructed, it is again sufficient to show that the line-gap remains open along the path, because then there is no spectral flow across the imaginary axis and the signature remains constant.
This will be achieved by a suitable modification of the arguments of Section 4. In particular, the objects and stated bounds of the last section will be freely used. Let us begin by introducing the family of functions \(G_{\rho,\lambda}(x)=(1-\lambda)+\lambda G_{\rho}(x)\) for all \(\lambda\in[0,1]\) and then set \[L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)\ =\kappa\pi_{\rho^{\prime}}D\pi_{\rho^{\prime}}^{*}\ +\ \pi_{\rho^{\prime}}G_{\rho,\lambda}(D)((-H)\oplus H^{*})G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\;,\] which is an operator acting on \((\mathcal{H}\oplus\mathcal{H})_{\rho^{\prime}}\). This formula clearly shows how the Hamiltonian is tapered off. One has \(L_{\kappa,\rho,\rho^{\prime}}(H;0)=L_{\kappa,\rho^{\prime}}(H)\) and \(L_{\kappa,\rho,\rho^{\prime}}(H;1)=\kappa\pi_{\rho^{\prime},\rho}D\pi_{\rho^{\prime},\rho}^{*}+L_{\kappa,\rho,\rho}(H;1)\), where \(\pi_{\rho^{\prime},\rho}\) is the partial isometry onto the subspace of \(\operatorname{Ran}(\chi(|D|\leq\rho^{\prime}))\) that is orthogonal to \(\operatorname{Ran}(\chi(|D|\leq\rho))\). One finds by essentially the same argument leading to (8) that \[\big{(}L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)-\imath s\mathbf{1}_{\rho^{\prime}}\big{)}^{*}\big{(}L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)-\imath s\mathbf{1}_{\rho^{\prime}}\big{)}\] \[\qquad\qquad\geq\,L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)^{*}L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)+(s^{2}\,-\,2\,\|\Im m(H)\|\cdot|s|)\mathbf{1}_{\rho^{\prime}}\,.\] Again this shows that it is sufficient to deal with \(|s|\leq 2\|\Im m(H)\|\). For such \(s\), let us compute again in a similar manner as in Section 4, but with a few more algebraic manipulations, \[\big{(}L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)-\imath{s}{\bf 1}_{\rho^{\prime}}\big{)}^{*}\big{(}L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)-\imath{s}{\bf 1}_{\rho^{\prime}}\big{)}\] \[\qquad=\kappa^{2}\pi_{\rho^{\prime}}D^{2}\pi_{\rho^{\prime}}^{*}\,+\,s^{2}\pi_{\rho^{\prime}}({\bf 1}-G_{\rho,\lambda}(D)^{4})\pi_{\rho^{\prime}}^{*} \tag{12}\] \[\qquad\qquad+\ \pi_{\rho^{\prime}}G_{\rho,\lambda}(D)((-H^{s})^{*}\oplus H^{s})G_{\rho,\lambda}^{2}(D)((-H^{s})\oplus(H^{s})^{*})G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*} \tag{13}\] \[\qquad\qquad+\ \kappa\,\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)\begin{pmatrix}0&[H,D_{0}]^{*}\\ [H,D_{0}]&0\end{pmatrix}G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*} \tag{14}\] \[\qquad\qquad+\ 2s\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)\,\Im m\big{[}((-H)^{*}\oplus H)({\bf 1}-G_{\rho,\lambda}(D)^{2})\big{]}G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\;. \tag{15}\] It is important here that \(H^{s}\), and not just \(H\), appears in (13), because in this manner the line-gap of \(H\) can be used efficiently. The first three summands in (12) and (13) are non-negative and on each a quantitative (positive) lower bound will be proved below such that the sum of the three is strictly positive; the last two summands (14) and (15) will then be shown to be a perturbation that does not spoil the positivity.
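Before entering the estimates, it may help to see the tapered localizer at work numerically. The following sketch (entirely ours, with an arbitrary toy model and illustrative parameter values) builds \(L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)\) on a one-dimensional lattice with \(D_{0}=X\) the position operator and an upper-triangular, hence exactly line-gapped, non-hermitian \(H\), and monitors the line-gap \(\min|\Re e(\sigma(L))|\) as \(\lambda\) runs from \(0\) to \(1\); for these parameters the printed gap stays bounded away from zero, in line with the claim proved in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(-20, 21).astype(float)   # 1D toy lattice, positions n
N = n.size
X = np.diag(n)                         # D_0 = X, so D = offdiag(X, X)

# Toy line-gapped, non-hermitian, local H: upper triangular with diagonal
# entries +/-1, so its spectrum is exactly {+1, -1} (line-gap g = 1)
H = np.diag(np.where(np.arange(N) % 2 == 0, 1.0, -1.0)).astype(complex)
H += 0.3 * np.diag(rng.standard_normal(N - 1), k=1)

def G(x, rho):                         # same illustrative taper as above
    u = np.clip((np.abs(x) - rho / 2) / (rho / 2), 0.0, 1.0)
    return 1.0 - (3 * u**2 - 2 * u**3)

def L_tapered(lam, kappa=0.4, rho=15.0):
    # L_{kappa,rho,rho'}(H;lam), with rho' chosen so large that pi_{rho'}
    # is the identity on the finite lattice
    Gl = np.diag((1 - lam) + lam * G(n, rho))   # G_{rho,lam}(|D_0|)
    return np.block([[-Gl @ H @ Gl, kappa * X],
                     [kappa * X, Gl @ H.conj().T @ Gl]])

for lam in np.linspace(0.0, 1.0, 11):
    gap = np.min(np.abs(np.linalg.eigvals(L_tapered(lam)).real))
    print(f"lambda = {lam:.1f}   line-gap of L = {gap:.3f}")
```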
Let us start out with a lower bound on \[(13) =\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{2}((H^{s})^{*}H^{s}\oplus H^{s}(H^{s})^{*})G_{\rho,\lambda}(D)^{2}\pi_{\rho^{\prime}}^{*}\] \[\qquad-\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{2}((H^{s})^{*}\oplus H^{s})[(H^{s}\oplus(H^{s})^{*}),G_{\rho,\lambda}(D)]G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\] \[\qquad-\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)[G_{\rho,\lambda}(D),((H^{s})^{*}\oplus H^{s})]G_{\rho,\lambda}(D)(H^{s}\oplus(H^{s})^{*})G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\] \[\geq\ g^{2}\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{4}\pi_{\rho^{\prime}}^{*}\] \[\qquad-\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{2}((H^{s})^{*}\oplus H^{s})[(H^{s}\oplus(H^{s})^{*}),G_{\rho,\lambda}(D)]G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\] \[\qquad+\ \pi_{\rho^{\prime}}G_{\rho,\lambda}(D)[((H^{s})^{*}\oplus H^{s}),G_{\rho,\lambda}(D)]G_{\rho,\lambda}(D)(H^{s}\oplus(H^{s})^{*})G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\] \[=\ g^{2}\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{4}\pi_{\rho^{\prime}}^{*}\] \[\qquad-\lambda\ \pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{2}((H^{s})^{*}\oplus H^{s})[H\oplus H^{*},G_{\rho}(D)]G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\] \[\qquad+\ \lambda\ \pi_{\rho^{\prime}}G_{\rho,\lambda}(D)[H\oplus H^{*},G_{\rho}(D)]G_{\rho,\lambda}(D)(H^{s}\oplus(H^{s})^{*})G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\;.\] The first summand is positive and will nicely combine with those in (12); the others combine with (14) and (15) into an error term \[E_{\rho,\rho^{\prime}}(s,\lambda) =\kappa\,\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)\begin{pmatrix}0&[H,D_{0}]^{*}\\ [H,D_{0}]&0\end{pmatrix}G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\] \[\qquad+\ \lambda\,\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)[H^{*}\oplus H,G_{\rho}(D)]G_{\rho,\lambda}(D)(H^{s}\oplus(H^{s})^{*})G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\] \[\qquad+\ \lambda\,\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{2}((H^{s})^{*}\oplus H^{s})[G_{\rho}(D),H\oplus H^{*}]G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\] \[\qquad+\ 2\,s\,\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)\,\Im m\big{[}((-H)^{*}\oplus H)({\bf 1}-G_{\rho,\lambda}(D)^{2})\big{]}G_{\rho,\lambda}(D)\pi_{\rho^{\prime}}^{*}\;.\] Then, neglecting also the \(s^{2}\)-term in (12), \[|L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)-\imath\,s\,{\bf 1}_{\rho^{\prime}}|^{2}\ \geq\ \kappa^{2}\,D_{\rho^{\prime}}^{2}\,+\,g^{2}\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{4}\pi_{\rho^{\prime}}^{*}\,+\,E_{\rho,\rho^{\prime}}(s,\lambda)\;.\] Note that this is an inequality for matrices on the finite-dimensional space \(\operatorname{Ran}(\pi_{\rho^{\prime}})\). This latter space will be decomposed into \(\operatorname{Ran}(\pi_{\frac{\rho}{2}})\oplus\operatorname{Ran}(\pi_{\rho^{\prime},\frac{\rho}{2}})\) where \(\pi_{\rho^{\prime},\frac{\rho}{2}}=\pi_{\rho^{\prime}}\ominus\pi_{\frac{\rho}{2}}\). Then the strict positivity of the r.h.s. is proved by providing quantitative positive lower bounds on the two diagonal terms, and then showing that the positivity of the \(2\times 2\) block matrix is not spoiled by the two off-diagonal terms. Note that the first summands are diagonal in this decomposition; hence the only off-diagonal contribution stems from \(E_{\rho,\rho^{\prime}}(s,\lambda)\). Let us start with the positive term on \({\rm Ran}(\pi_{\frac{\rho}{2}})\).
As \(\pi_{\frac{\rho}{2}}G_{\rho,\lambda}(D)^{4}\pi_{\frac{\rho}{2}}^{*}=\pi_{\frac{\rho}{2}}\pi_{\frac{\rho}{2}}^{*}={\bf 1}_{\frac{\rho}{2}}\), one gets \[\kappa^{2}\,D_{\frac{\rho}{2}}^{2}\,+\,g^{2}\pi_{\frac{\rho}{2}}G_{\rho,\lambda}(D)^{4}\pi_{\frac{\rho}{2}}^{*}\ \geq\ g^{2}\,{\bf 1}_{\frac{\rho}{2}}\;.\] The error term \(E_{\rho,\rho^{\prime}}(s,\lambda)\) restricted to \({\rm Ran}(\pi_{\frac{\rho}{2}})\) only contains the first three summands because \(({\bf 1}-G_{\rho,\lambda}(D)^{2})\pi_{\frac{\rho}{2}}^{*}=0\). Thus with \(\lambda\leq 1\) and \(\|H^{s}\|\leq 3\|H\|\) for \(|s|\leq 2\|\Im m(H)\|\) and (10), it follows as in (11) that \[\big{\|}\pi_{\frac{\rho}{2}}E_{\rho,\rho^{\prime}}(s,\lambda)\pi_{\frac{\rho}{2}}^{*}\big{\|}\ \leq\ \kappa\|[H,D_{0}]\|+2\lambda\|H^{s}\|\cdot\|[G_{\rho}(D),(H\oplus H^{*})]\|\ \leq\ c_{\kappa}\left(1+\tfrac{48}{c_{\rho}}\right)g^{2}\ =\ \tfrac{3}{4}\,g^{2}\;.\] Together one concludes that \(\pi_{\frac{\rho}{2}}|L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)-\imath s{\bf 1}_{\rho^{\prime}}|^{2}\pi_{\frac{\rho}{2}}^{*}>\tfrac{1}{4}\,g^{2}\,{\bf 1}_{\frac{\rho}{2}}\). Next let us come to the other diagonal part. Using merely the first term, one has for the positive contribution \[\pi_{\rho^{\prime},\frac{\rho}{2}}\Big{(}\kappa^{2}D_{\rho^{\prime}}^{2}+g^{2}\pi_{\rho^{\prime}}G_{\rho,\lambda}(D)^{4}\pi_{\rho^{\prime}}^{*}\Big{)}\pi_{\rho^{\prime},\frac{\rho}{2}}^{*}\ \geq\ \tfrac{\kappa^{2}\rho^{2}}{4}\,{\bf 1}_{\rho^{\prime},\frac{\rho}{2}}\ \geq\ \tfrac{c_{\rho}^{2}}{4}\left(1+\tfrac{\|\Im m(H)\|}{g}\right)^{2}g^{2}\,{\bf 1}_{\rho^{\prime},\frac{\rho}{2}}\;.\] Let us next bound the error \(E_{\rho,\rho^{\prime}}(s,\lambda)\) on \({\rm Ran}(\pi_{\rho^{\prime},\frac{\rho}{2}})\). The last summand needs particular care, based on the following identity: \[\Im m\big{[}((-H)^{*}\oplus H)({\bf 1}-G_{\rho,\lambda}(D)^{2})\big{]}=({\bf 1}-G_{\rho,\lambda}(D)^{2})\Im m((-H)^{*}\oplus H)+\frac{1}{2\imath}[G_{\rho,\lambda}(D)^{2},(-H)^{*}\oplus H]\;.\] Using the fact that \((1-x^{2})x\leq\tfrac{2}{9}\sqrt{3}\leq\tfrac{1}{2}\) for all \(x\in[0,1]\), \[\big{\|}\pi_{\rho^{\prime},\frac{\rho}{2}}E_{\rho,\rho^{\prime}}(s,\lambda)\pi_{\rho^{\prime},\frac{\rho}{2}}^{*}\big{\|} \ \leq\ \kappa\|[H,D_{0}]\|+2\lambda\|[(H^{*}\oplus H),G_{\rho}(D)]\|\cdot\|H^{s}\|\] \[\ \ \ \ +2|s|\big{(}\tfrac{1}{2}\|\Im m(H)\|+\lambda\|[G_{\rho}(D),(-H)^{*}\oplus H]\|\big{)}\] \[\ \leq\ \tfrac{3}{4}\,g^{2}\,+4\,\|\Im m(H)\|\big{(}\tfrac{1}{2}\|\Im m(H)\|+\tfrac{8}{\rho}\,N\big{)}\] \[\ \leq\ \tfrac{3}{4}\,g^{2}\,+2\,\|\Im m(H)\|^{2}+\tfrac{32\,c_{\kappa}\,\|\Im m(H)\|\,g^{2}}{c_{\rho}\,\|H\|\,(1+g^{-1}\|\Im m(H)\|)}\] \[\ \leq\ \tfrac{3}{4}\,g^{2}\,+2\,\|\Im m(H)\|^{2}+\tfrac{32\,c_{\kappa}}{c_{\rho}}\,g^{2}\;,\] where in the last two steps the bounds in (4) and \(\|\Im m(H)\|\leq\|H\|\) were respectively used. Thus one obtains \[\pi_{\rho^{\prime},\frac{\rho}{2}}|L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)-\imath\,s\,{\bf 1}_{\rho^{\prime}}|^{2}\pi_{\rho^{\prime},\frac{\rho}{2}}^{*}\ \geq\ \big{(}\tfrac{c_{\rho}^{2}}{4}\big{(}1+\tfrac{\|\Im m(H)\|}{g}\big{)}^{2}-\tfrac{3}{4}-2\,\tfrac{\|\Im m(H)\|^{2}}{g^{2}}-\tfrac{32\,c_{\kappa}}{c_{\rho}}\big{)}\,g^{2}\,{\bf 1}_{\rho^{\prime},\frac{\rho}{2}}\;.\] Finally let us bound the off-diagonal term \(\pi_{\frac{\rho}{2}}E_{\rho,\rho^{\prime}}(s,\lambda)\pi_{\rho^{\prime},\frac{\rho}{2}}^{*}\).
Again by \(\pi_{\frac{\rho}{2}}({\bf 1}-G_{\rho,\lambda}(D)^{2})=0\), the first summand in the above formula for \(\Im m\big{[}((-H)^{*}\oplus H)({\bf 1}-G_{\rho,\lambda}(D)^{2})\big{]}\) drops out. Hence by the estimate above \[\big{\|}\pi_{\frac{\rho}{2}}E_{\rho,\rho^{\prime}}(s,\lambda)\pi_{\rho^{\prime},\frac{\rho}{2}}^{*}\big{\|}\ \leq\ \tfrac{3}{4}\,g^{2}+\tfrac{32\,c_{\kappa}}{c_{\rho}}\,g^{2}\;.\] The matrix \(\pi_{\rho^{\prime},\frac{\rho}{2}}E_{\rho,\rho^{\prime}}(s,\lambda)\pi_{\frac{\rho}{2}}^{*}\) satisfies the same norm bound. Therefore in the grading of \(\operatorname{Ran}(\pi_{\frac{\rho}{2}})\oplus\operatorname{Ran}(\pi_{\rho^{\prime},\frac{\rho}{2}})\) one has \[|L_{\kappa,\rho,\rho^{\prime}}(H;\lambda)-\imath\,s\,{\bf 1}_{\rho^{\prime}}|^{2}\ \geq\ g^{2}\begin{pmatrix}\frac{1}{4}&M\\ M^{*}&\frac{c_{\rho}^{2}}{4}(1+\frac{\|\Im m(H)\|}{g})^{2}-\frac{3}{4}-2\,\frac{\|\Im m(H)\|^{2}}{g^{2}}-\frac{32\,c_{\kappa}}{c_{\rho}}\end{pmatrix}\;,\] with off-diagonal error term \(M\) satisfying \(\|M\|\leq\frac{3}{4}+\frac{32\,c_{\kappa}}{c_{\rho}}\). This is strictly positive as long as \[\tfrac{1}{4}\Big{(}\tfrac{c_{\rho}^{2}}{4}(1+\tfrac{\|\Im m(H)\|}{g})^{2}-\tfrac{3}{4}-2\,\tfrac{\|\Im m(H)\|^{2}}{g^{2}}-\tfrac{32\,c_{\kappa}}{c_{\rho}}\Big{)}\;>\;\Big{(}\tfrac{3}{4}+\tfrac{32\,c_{\kappa}}{c_{\rho}}\Big{)}^{2}\;,\] which can readily be verified using \(c_{\kappa}=\frac{1}{12}\) and \(c_{\rho}=6\). This concludes the proof of the constancy of the signature.

## 6 Homotopy arguments

This section proves the equality (6), which concludes the proof of Theorem 1. The strategy will be to homotopically deform the Hamiltonian \(H\) and the Riesz projection \(P\) into selfadjoint objects for which (6) is already known by previous works [17, 9]. One then has to show that along those homotopies both sides of the equality (6) remain constant. Let us start with the index. It is well-known [10, 9] that the Riesz projection \(P\) can be deformed into its (selfadjoint) range projection \(Q\) by the linear path \(t\in[0,1]\mapsto P_{t}=tQ+(1-t)P\) of idempotents. Set \(R_{t}=P_{t}({\bf 1}-(P_{t}-P_{t}^{*})^{2})^{-\frac{1}{2}}\). The Fredholm property of \(R_{t}F_{0}R_{t}^{*}+{\bf 1}-R_{t}R_{t}^{*}\) follows by the argument of the proof of Proposition 4 because the commutator \([P_{t},F_{0}]\) is compact (since \([Q,F_{0}]=[R,F_{0}]R^{*}+R[R^{*},F_{0}]\) is compact). Therefore the index is constant along the path so that \(\operatorname{Ind}(PF_{0}P^{*}|_{\operatorname{Ran}(P)})=\operatorname{Ind}(RF_{0}R^{*}+{\bf 1}-RR^{*})=\operatorname{Ind}(QF_{0}Q+{\bf 1}-Q)\). Furthermore, by a similar argument one checks that \([D,Q]\) extends to a bounded operator. One can thus use the result of [17] (see also [21] or Theorem 10.3.1 in [9]) applied to the flat-band selfadjoint Hamiltonian \({\bf 1}-2Q\) to conclude that \[\operatorname{Ind}(QF_{0}Q+{\bf 1}-Q)\;=\;\tfrac{1}{2}\;\text{Sig}(L_{\kappa^{\prime},\rho^{\prime}}({\bf 1}-2Q))\;,\] where \(\kappa^{\prime}\) can be chosen sufficiently small and \(\rho^{\prime}\) sufficiently large such that bounds similar to (4) hold. Furthermore, as \([D,Q]\) and \([D,P]\) both extend to bounded operators, so does \([D,P_{t}]\) for all \(t\in[0,1]\). One thus disposes of the bound (5) for all \(t\in[0,1]\), provided that \(\kappa^{\prime}\) and \(\rho^{\prime}\) are chosen sufficiently small and large respectively (note that \(g\), \(N\) and \(\|P_{t}\|\) all depend continuously on \(t\), and the gap of the spectral localizer remains open).
This implies that \(\text{Sig}(L_{\kappa^{\prime},\rho^{\prime}}({\bf 1}-2Q))=\text{Sig}(L_{\kappa^{\prime},\rho^{\prime}}({\bf 1}-2P))\). Finally, one connects the non-selfadjoint flat-band Hamiltonian \({\bf 1}-2P\) to \(H\) by the homotopy \(t\in[0,1]\mapsto(1-t)({\bf 1}-2P)+tH\), which lies in the set of line-gapped local Hamiltonians. Hence again the line-gap of the spectral localizer remains open along this path and therefore \(\text{Sig}(L_{\kappa^{\prime},\rho^{\prime}}((1-t)({\bf 1}-2P)+tH))\) is constant in \(t\). Combining all the above facts, one concludes that \[\operatorname{Ind}\bigl{(}PF_{0}P^{*}|_{\operatorname{Ran}(P)}\bigr{)}\;=\;\tfrac{1}{2}\;\text{Sig}(L_{\kappa^{\prime},\rho^{\prime}}(H))\;,\] for suitable \(\kappa^{\prime}\) and \(\rho^{\prime}\). Moreover, by the results of Section 5 the signature is constant for all \(\kappa>0\) and \(\rho>0\) satisfying (4).

## 7 Odd-dimensional chiral systems with a line-gap

Let us briefly explain why the spectral localizer technique for odd-dimensional chiral systems [16, 21, 9] directly transposes to the study of non-hermitian line-gapped chiral Hamiltonians (local as throughout the paper). Suppose that \(H\) and \(P\) are given in the spectral representation of \(J\): \[J\;=\;\begin{pmatrix}{\bf 1}&0\\ 0&-{\bf 1}\end{pmatrix}\;,\qquad H\;=\;\begin{pmatrix}0&B\\ A&0\end{pmatrix}\;,\qquad P\;=\;\frac{1}{2}\begin{pmatrix}{\bf 1}&V^{-1}\\ V&{\bf 1}\end{pmatrix}\;. \tag{16}\] The entries \(A\) and \(B\) are invertible, and \(B=A^{*}\) for \(H\) selfadjoint. The particular form of the entries of \(P\) follows from \(JPJ={\bf 1}-P\), and \(V^{-1}=V^{*}\) is unitary if and only if \(P=P^{*}\). For each of \(A\), \(B\) and \(V\), one computes an odd index pairing, _e.g._ \(\mbox{Ind}(EAE+{\bf 1}-E)\) where \(E=\chi(D>0)\) is the Hardy projection. If \(H\) and hence \(A\) is covariant, then this index is equal to an odd Chern number by an index theorem [20].

**Proposition 5**: _Let \(H\) be a line-gapped chiral Hamiltonian with a Riesz projection \(P\) on the spectrum with negative real part. Then there exists a smooth path \(t\in[0,1]\mapsto H_{t}\) of line-gapped and local chiral Hamiltonians such that \(H_{0}=H\) and \(H_{1}={\bf 1}-2P\). In particular, the odd index pairings satisfy_ \[\mbox{Ind}(EAE+{\bf 1}-E)\;=\;-\,\mbox{Ind}(EBE+{\bf 1}-E)\;=\;\mbox{Ind}(EVE+{\bf 1}-E)\;.\]

**Proof.** Let \(\gamma\) be a positively oriented path winding once around each point of the spectrum with negative real part so that \(P=\oint_{\gamma}\frac{dz}{2\pi\imath}\,(z{\bf 1}-H)^{-1}\). The path can be chosen (sufficiently large) such that \(-\gamma\) encircles the part of the spectrum with positive real part also with a winding number \(1\). Further introduce the interpolating functions \(f_{t}^{\pm}(z)=(1-t)z\pm t\) which are analytic in the interior of \(\gamma\) and \(-\gamma\). Hence one can set \[H_{t}\;=\;\oint_{\gamma}\frac{dz}{2\pi\imath}\,f_{t}^{-}(z)\,(z{\bf 1}-H)^{-1}\;+\;\oint_{-\gamma}\frac{dz}{2\pi\imath}\,f_{t}^{+}(z)\,(z{\bf 1}-H)^{-1}\;.\] The first summand acts non-trivially merely on the range of \(P\), while the second on the range of \({\bf 1}-P\). The spectral mapping theorem implies that \(H_{t}\) has a line-gap for all \(t\in[0,1]\). Furthermore, one readily checks that \(JH_{t}J=-H_{t}\). As clearly \(H_{0}=H\) and \(H_{1}=-P+({\bf 1}-P)={\bf 1}-2P\), the path has all the properties claimed in the statement. This homotopy directly implies that the index pairings of \(A\) and \(V\) coincide, as do those of \(B\) and \(V^{-1}\).
As those of \(V\) and \(V^{-1}\) differ by a sign, the claim follows. \(\Box\)

The index of \(A\) and \(B\) can separately be accessed by the selfadjoint odd spectral localizers [16, 9], but alternatively one can also use the non-selfadjoint one involving the Hamiltonian: \[\mbox{Ind}(EAE+{\bf 1}-E)\;=\;\frac{1}{2}\;\mbox{Sig}(L^{\mbox{\tiny od}}_{\kappa,\rho}(H))\;,\qquad L^{\mbox{\tiny od}}_{\kappa,\rho}(H)\;=\;\pi_{\rho}\begin{pmatrix}\kappa D&B\\ A&-\kappa D\end{pmatrix}\pi_{\rho}^{*}\;,\] provided \(\kappa\) and \(\rho\) satisfy (4).

## Appendix A: Formulas for the signature

The signature of an \(N\times N\) matrix \(L\) with no spectrum on the imaginary axis is equal to the difference of the total algebraic multiplicities of the eigenvalues with positive and negative real parts. According to Theorem 1, the signature of the finite volume spectral localizer is the topological invariant of interest. This appendix discusses two ways to access \(\mathrm{Sig}(L)\), one via a spectral flow and one via a winding number. Let us begin by recalling (_e.g._ Section 1.6 of [9]) that for a continuous path \(t\in[0,1]\mapsto L_{t}\) of matrices such that the endpoints \(L_{0}\) and \(L_{1}\) have no spectrum on the imaginary axis, the spectral flow of the path is given by \[\mathrm{Sf}(t\in[0,1]\mapsto L_{t})\;=\;\frac{1}{2}\big{(}\mathrm{Sig}(L_{1})\,-\,\mathrm{Sig}(L_{0})\big{)}\;. \tag{17}\] Let us stress that this is in general _not_ the spectral flow of the path \(t\in[0,1]\mapsto\Re e(L_{t})=\frac{1}{2}(L_{t}+L_{t}^{*})\). The formula (17) can be used to compute the signature \(\mathrm{Sig}(L)\) if one chooses a suitable path with \(L_{1}=L\) and for which the signature \(\mathrm{Sig}(L_{0})\) is known. An example of such a path is certainly given by \(L_{t}=L+2(1-t)\|L\|\), for which \(\mathrm{Sig}(L_{0})=N\). Section 2 rather exhibits a path for which \(\mathrm{Sig}(L_{0})=0\). Such paths are advantageous (in numerical applications) if the signature of \(L\) is small compared to the size \(N\). The spectral flow of the path can be obtained numerically by computing the low-lying spectrum of \(L_{t}\) for all \(t\in[0,1]\) (of course, the path is discretized and typical paths are actually analytic in \(t\)). Another formula for the signature is known as the Routh-Hurwitz theorem. As the short proof below shows, it is a basic consequence of the argument principle. It is a way to access the non-hermitian signature as a suitable winding number. Again this is potentially of use for numerics in situations where the signature is small compared to the size of the matrix so that the winding number appearing below is also small. While this formula is not implemented in the present work, it is certainly of theoretical interest in this context.

**Proposition 6**: _Let \(L\) be an \(N\times N\) matrix with a line-gap on the imaginary axis. Then its half-signature is given by_ \[\frac{1}{2}\;\mathrm{Sig}(L) = \int_{-\infty}^{\infty}\frac{ds}{2\pi\imath}\;\partial_{s}\,\ln\big{(}\det(L+\imath\,s\,{\bf 1})\big{)} \tag{18}\] \[= \int_{-\infty}^{\infty}\frac{ds}{2\pi}\;\frac{1}{1+s^{2}}\;\mathrm{Tr}\big{(}({\bf 1}+\imath\,s\,L)(L+\imath\,s\,{\bf 1})^{-1}\big{)}\;. \tag{19}\]

**Proof.** The characteristic polynomial \(z\in\mathbb{C}\mapsto\det(L-z{\bf 1})\) is analytic and of the form \(\det(L-z{\bf 1})=(-z)^{N}+{\cal O}(|z|^{N-1})\).
Let us introduce the meromorphic function \[f(z)\;=\;\frac{1}{2\pi\imath}\;\frac{\partial_{z}\,\det(L-z{\bf 1})}{\det(L-z{\bf 1})}\;.\] Even though not used in the following, let us note that the fundamental theorem of algebra and the argument principle imply that \[N\;=\;\oint_{\Gamma_{R}}dz\,f(z)\;,\] where \(\Gamma_{R}\) is a positively oriented circle of sufficiently large radius \(R\), centered at the origin. Let us split \(\Gamma_{R}=\Gamma_{R}^{+}\,+\,\Gamma_{R}^{-}\) into the half-circles with positive and negative real part. Then an explicit computation shows \[\frac{N}{2}\ =\ \lim_{R\to\infty}\oint_{\Gamma_{R}^{\pm}}dz\,f(z)\;.\] Furthermore, let \(\Gamma_{R}^{0}\) be the path \(s\in[-R,R]\mapsto\imath s\in\mathbb{C}\). By hypothesis \(f\) has no pole on \(\Gamma_{R}^{0}\). If now \(N_{+}\) and \(N_{-}\) denote the number of zeros of \(\det(L-z{\bf 1})\) (counted with their multiplicity) in the right and left half-plane respectively, then by the argument principle \[N_{-}\ =\ \oint_{\Gamma_{R}^{-}\,+\,\Gamma_{R}^{0}}dz\,f(z)\;,\qquad N_{+}\ =\ \oint_{\Gamma_{R}^{+}\,-\,\Gamma_{R}^{0}}dz\,f(z)\;.\] Taking the difference then shows \[N_{+}\,-\,N_{-}\ =\ \oint_{\Gamma_{R}^{+}}dz\,f(z)\ -\ \oint_{\Gamma_{R}^{-}}dz\,f(z)\ -\ 2\oint_{\Gamma_{R}^{0}}dz\,f(z)\;.\] Taking the limit \(R\to\infty\) now implies the first equality (18) (note that the sign is obtained by the change of orientation in the statement). Using the identity \(\ln\det=\operatorname{Tr}\ln\) and then differentiating directly implies \[\frac{1}{2}\,\operatorname{Sig}(L)\ =\ \lim_{R\to\infty}\int_{-R}^{R}\frac{ds}{2\pi}\,\operatorname{Tr}\bigl{(}(L+\imath\,s\,{\bf 1})^{-1}\bigr{)}\;.\] Now the integrand only decays as \(\frac{1}{s}\) at \(s\to\pm\infty\) and hence is not integrable. However, one can regularize with \[\int_{-R}^{R}\frac{ds}{2\pi}\ \frac{\imath s}{1+s^{2}}\ =\ 0\;.\] Due to \[\operatorname{Tr}\Bigl{(}(L+\imath\,s\,{\bf 1})^{-1}+\frac{\imath s}{1+s^{2}}\,{\bf 1}\Bigr{)}\ =\ \frac{1}{1+s^{2}}\,\operatorname{Tr}\bigl{(}({\bf 1}+\imath\,s\,L)(L+\imath\,s\,{\bf 1})^{-1}\bigr{)}\;,\] this leads to the second equality (19) because the integral is now absolutely convergent. \(\Box\)

**Acknowledgements:** We thank Enrique Zuazua for reminding us of the Routh criterion. This work was supported by the DFG grant SCHU 1358/6-2. A.C. acknowledges support from the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science, and the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. DOE's National Nuclear Security Administration under contract DE-NA-0003525. The views expressed in the article do not necessarily represent the views of the U.S. DOE or the United States Government.
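To make the two formulas of this appendix concrete, here is a minimal numerical sketch (ours, not taken from the text or its references) that evaluates \(\mathrm{Sig}(L)\) for a small line-gapped test matrix in two ways: by directly counting eigenvalues according to the sign of their real part, and via the regularized winding integral (19) with a simple trapezoidal rule. The test matrix, integration range and tolerances are arbitrary illustrations; taking the real part of the integrand is harmless since the integral itself is real.

```python
import numpy as np

def signature(L):
    # Sig(L): algebraic count of eigenvalues with positive real part
    # minus that of eigenvalues with negative real part
    ev = np.linalg.eigvals(L)
    assert np.min(np.abs(ev.real)) > 1e-8, "no line-gap on the imaginary axis"
    return int(np.sum(ev.real > 0) - np.sum(ev.real < 0))

def half_signature_winding(L, smax=1e4, num=200001):
    # the regularized integral (19), evaluated by a trapezoidal rule
    Id = np.eye(L.shape[0])
    s = np.linspace(-smax, smax, num)
    vals = [np.trace((Id + 1j * sk * L) @ np.linalg.inv(L + 1j * sk * Id)).real
            / (1.0 + sk**2) for sk in s]
    return np.trapz(vals, s) / (2.0 * np.pi)

rng = np.random.default_rng(1)
D0 = np.diag([4.0, 4.0, 4.0, 4.0, -4.0, -4.0]).astype(complex)
# a small non-normal perturbation keeps the eigenvalues near +/-4, so Sig = 2
L = D0 + 0.2 * (rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6)))

print(signature(L), f"{2 * half_signature_winding(L):.2f}")  # both should give 2
```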
2305.15181
Gravitational wave signatures from the phase-transition-induced collapse of a magnetized neutron star
Strong magnetic fields make neutron stars potential sources of detectable electromagnetic and gravitational-wave signals. Hence, inferring these magnetic fields is critical to understand the emissions of neutron stars. However, due to the lack of direct observational evidence, the interior magnetic field configuration remains ambiguous. Here, for the first time, we show that the internal magnetic field strength along with the composition of a neutron star can be directly constrained by detecting the gravitational waves from the phase-transition-induced collapse of a magnetized neutron star. By dynamically simulating this collapsing event, we first find that the dominant peaks in the gravitational waveform are the fundamental $l=0$ quasi-radial $F$ mode and the fundamental $l=2$ quadrupolar $^2f$ mode. We next show that the maximum gravitational wave amplitude $|h|_\mathrm{max}$ increases with the maximum magnetic field strength of the interior toroidal field $\mathcal{B}_\mathrm{max}$ until the maximum rest-mass density at bounce $\rho_\mathrm{max,b}$ decreases due to the increasing $\mathcal{B}_\mathrm{max}$. We then demonstrated that the magnetic suppression of fundamental modes found in our previous work remains valid for the hybrid stars formed after the phase-transition-induced collapses. We finally show that measuring the frequency ratio between the two fundamental modes $f_{^2f}/f_{F}$ allows one to infer $\mathcal{B}_\mathrm{max}$ and the baryonic mass fraction of matter in the mixed phase $M_\mathrm{mp} / M_{0}$ of the resulting hybrid star. Consequently, taking $\mathcal{B}_\mathrm{max}$ and $M_\mathrm{mp} / M_{0}$ as examples, this work has demonstrated that much information inside neutron stars could be extracted similarly through measuring the oscillation modes of the stars.
Anson Ka Long Yip, Patrick Chi-Kit Cheong, Tjonnie Guang Feng Li
2023-05-24T14:14:04Z
http://arxiv.org/abs/2305.15181v1
# Gravitational wave signatures from the phase-transition-induced collapse of a magnetized neutron star

###### Abstract

Strong magnetic fields make neutron stars potential sources of detectable electromagnetic and gravitational-wave signals. Hence, inferring these magnetic fields is critical to understand the emissions of neutron stars. However, due to the lack of direct observational evidence, the interior magnetic field configuration remains ambiguous. Here, for the first time, we show that the internal magnetic field strength along with the composition of a neutron star can be directly constrained by detecting the gravitational waves from the _phase-transition-induced collapse_ of a magnetized neutron star. By dynamically simulating this collapsing event, we first find that the dominant peaks in the gravitational waveform are the fundamental \(l=0\) quasi-radial \(F\) mode and the fundamental \(l=2\) quadrupolar \({}^{2}f\) mode. We next show that the maximum gravitational wave amplitude \(|h|_{\rm max}\) increases with the maximum magnetic field strength of the interior toroidal field \(\mathcal{B}_{\rm max}\) until the maximum rest-mass density at bounce \(\rho_{\rm max,b}\) decreases due to the increasing \(\mathcal{B}_{\rm max}\). We then demonstrate that the magnetic suppression of fundamental modes found in our previous work remains valid for the hybrid stars formed after the phase-transition-induced collapses. We finally show that measuring the frequency ratio between the two fundamental modes \(f_{{}^{2}f}/f_{F}\) allows one to infer \(\mathcal{B}_{\rm max}\) and the baryonic mass fraction of matter in the mixed phase \(M_{\rm mp}/M_{0}\) of the resulting hybrid star. Consequently, taking \(\mathcal{B}_{\rm max}\) and \(M_{\rm mp}/M_{0}\) as examples, this work has demonstrated that much information inside neutron stars could be extracted similarly through measuring the oscillation modes of the stars.

## Introduction

The strongest magnetic fields in the universe, up to \(10^{14-15}\) G, have been discovered on the surface of neutron stars. Some puzzling astronomical phenomena can be explained by these highly magnetized neutron stars, including soft gamma-ray repeaters and anomalous X-ray pulsars [1, 2, 3, 4, 5]. Besides, it has been demonstrated that ultrahigh magnetic fields can deform neutron stars in a way that depends on the field geometry. Neutron stars become prolate under a purely toroidal field [6, 7, 8], while they become oblate under a purely poloidal field [9, 10, 11]. These deformations make rotating neutron stars possible sources of continuous gravitational waves [12]. As the magnetic field governs the emissions of neutron stars, it is vital to interpret the magnetic field configurations of neutron stars. With radio astronomical data, the dipole-spin-down model is typically used to estimate the surface magnetic field strength of neutron stars [13, 14]. Recently, the Neutron Star Interior Composition Explorer (NICER) has also allowed for the deduction of the geometry and strength of the surface magnetic field from the X-ray emitting hotspots in pulsars [15, 16]. Nevertheless, all these measurements can only provide clues to the surface magnetic field but not the interior magnetic field of neutron stars. Therefore, it is still challenging to determine the nature of the magnetic field inside neutron stars. Detection of gravitational-wave signals from neutron stars provides a novel way of probing information inside the stars.
The first signal detected was produced by a binary neutron star merger, GW170817 [17]. Aside from merger events, the phase-transition-induced collapse of a neutron star is also a potential scenario for producing observable gravitational wave signals. When the rest-mass density in the neutron star core exceeds a certain threshold, a gravitational collapse is triggered by the phase transition from hadronic matter to deconfined quark matter in the core. This collapse results in a more compact 'hybrid star' composed of hadrons and deconfined quarks. This collapse could occur in a newly born neutron star from a supernova explosion or in an accreting neutron star in a binary system [18, 19]. Dynamical simulations were employed to study the dynamics and gravitational-wave signals of such a phase-transition-induced collapse [20, 19]. In particular, they initiated the phase-transition-induced collapse by replacing the original polytropic equation of state with a "softer" equation of state that takes deconfined quarks into account. They demonstrated that fundamental modes of the resulting hybrid star can be excited and these modes give rise to detectable gravitational wave signals. However, the magnetic field was not considered in these studies. Recently, we took a magnetic field into account for the first time and investigated the properties of a magnetized hybrid star formed from phase-transition-induced collapse [21]. In this work, we further extract the gravitational wave signatures from the _phase-transition-induced collapse_ of a magnetized neutron star through dynamical simulations, and demonstrate for the first time that these signatures can be used to probe the internal magnetic field strength together with the composition of the star. Specifically, we first find that the waveform is primarily composed of the fundamental \(l=0\) quasi-radial \(F\) mode and the fundamental \(l=2\) quadrupolar \({}^{2}f\) mode. We then show that the maximum wave amplitude \(|h|_{\rm max}\) increases with the maximum magnetic field strength of the interior toroidal field \(\mathcal{B}_{\rm max}\) until the maximum rest-mass density at bounce \(\rho_{\rm max,b}\) decreases due to the increasing \(\mathcal{B}_{\rm max}\). We finally demonstrate that measuring the frequency ratio between the two fundamental modes \(f_{{}^{2}f}/f_{F}\) can infer \(\mathcal{B}_{\rm max}\) and the baryonic mass fraction of matter in the mixed phase \(M_{\rm mp}/M_{0}\).

## Results

### Waveform

Similar features in the waveform are observed in different simulations (see 'Simulations' in the Methods), so we pick one to describe the waveform. Here, we take the simulation with the initial magnetized neutron star model T1K6 (see 'Initial neutron star models' in the Methods) and an exponent quantifying the pressure contribution due to deconfined quarks in the mixed phase \(\delta=3\) (see 'Hybrid star models and evolution' in the Methods). We plot the time evolution of the gravitational wave amplitude \(h\) of a magnetized hybrid star at a distance of 10 kpc (top panel) and the corresponding power spectrum \(\hat{h}\) in an arbitrary unit (bottom panel) in Fig. 1.
The fundamental \(l=0\) quasi-radial \(F\) mode (red dashed line) and the fundamental \(l=2\) quadrupolar \({}^{2}f\) mode (green dash-dotted line) are the two dominant peaks in the spectrum (see 'Gravitational wave extraction and mode identification' in the Methods). The peak of the \({}^{2}f\) mode is observed in all models, while the peak of the \(F\) mode is observed starting from the initial model T1K4. The appearance of the \(F\) mode peak is mainly due to the deformation of the star by a strong magnetic field. Similar to the effect of rotation [22], the magnetic pressure breaks the spherical symmetry of the star. The radial mode then becomes quasi-radial and can emit gravitational waves.

### Wave amplitude

The top panel of Fig. 2 shows the maximum gravitational wave amplitude \(|h|_{\rm max}\) at a distance of 10 kpc against the maximum magnetic field strength of the resulting hybrid star \(\mathcal{B}_{\rm max}\). The data points are arranged into 3 sequences with 3 values of \(\delta\in\{1,2,3\}\), where \(\delta\) is an exponent quantifying the pressure contribution due to quark matter in the mixed phase. First, \(|h|_{\rm max}\) increases with \(\mathcal{B}_{\rm max}\) when \(\mathcal{B}_{\rm max}\lesssim 5\times 10^{17}\) G. After obtaining a maximum value at \(\mathcal{B}_{\rm max}\sim 5\times 10^{17}\) G, \(|h|_{\rm max}\) decreases promptly with \(\mathcal{B}_{\rm max}\). This behavior could be interpreted in terms of magnetic deformation (illustrated as the absolute value of the surface deformation \(|\epsilon_{\rm s}|\) in the middle panel of Fig. 2) and the oscillation amplitude of the rest-mass density (described by the maximum rest-mass density at bounce \(\rho_{\rm max,b}\) in the bottom panel of Fig. 2). Here, we define the surface deformation as \(\epsilon_{\rm s}=r_{\rm e}/r_{\rm p}-1\) [8], where \(r_{\rm p}\) and \(r_{\rm e}\) are the polar radius and the equatorial radius of the resulting hybrid star respectively. The detailed definitions of the polar and equatorial radius can be found in our previous work of Yip et al. [21]. The maximum rest-mass density at bounce \(\rho_{\rm max,b}\) refers to the peak value of the maximum rest-mass density during the time evolution (similar to the central rest-mass density at bounce \(\rho_{\rm c,b}\) in Abdikamalov et al. [19]). According to Eq. (9), the oscillation amplitude of the quadrupole moment depends on both the deformation and the rest-mass density of the star. In the lower \(\mathcal{B}_{\rm max}\) regime, \(|\epsilon_{\rm s}|\) increases with \(\mathcal{B}_{\rm max}\) while \(\rho_{\rm max,b}\) is not sensitive to \(\mathcal{B}_{\rm max}\), which contributes to an increasing quadrupole moment and thus an increasing \(|h|_{\rm max}\). On the other hand, \(\rho_{\rm max,b}\) decreases rapidly when \(\mathcal{B}_{\rm max}\gtrsim 5\times 10^{17}\) G, so it greatly reduces the oscillation amplitude of the quadrupole moment and gives a lower \(|h|_{\rm max}\). Furthermore, increasing \(\delta\) gives a larger \(|h|_{\rm max}\), which could also result from increasing \(\rho_{\rm max,b}\) as \(\delta\) increases. Accordingly, \(|h|_{\rm max}\) increases with \(\mathcal{B}_{\rm max}\) until \(\rho_{\rm max,b}\) decreases due to the increasing \(\mathcal{B}_{\rm max}\).

### Fundamental mode frequencies

We plot the fundamental \(l=0\) quasi-radial mode frequency \(f_{F}\) and the fundamental \(l=2\) quadrupolar mode frequency \(f_{{}^{2}f}\) against the maximum magnetic field strength \(\mathcal{B}_{\rm max}\) in Fig. 3.
Our data points are arranged into 3 sequences according to the values of \(\delta\in\{1,2,3\}\) used, where \(\delta\) is an exponent describing the pressure contribution due to quark matter in the mixed phase. The data points of our previous study of Leung et al. [23] are also included as a comparison. The oscillation modes of magnetized neutron stars without deconfined quark matter were considered in Leung et al. Both fundamental modes decrease similarly to those in Leung et al. Nonetheless, \(f_{F}\) in our models is smaller, while \(f_{{}^{2}f}\) is slightly larger, than in the models of Leung et al. when \(\mathcal{B}_{\rm max}\lesssim 5\times 10^{17}\) G. These frequency differences depend on \(\delta\): \(f_{F}\) decreases with \(\delta\) while \(f_{{}^{2}f}\) increases with it. Hence, the magnetic suppression of stellar oscillations found by Leung et al. is still valid in our magnetized hybrid star models and the mode frequency is also sensitive to \(\delta\). These behaviors of \(f_{F}\) and \(f_{{}^{2}f}\) against \(\delta\) were also observed in Abdikamalov et al. [19]. They investigated the oscillation modes in the gravitational wave signals from the formation of unmagnetized rotating hybrid stars. The decrease in \(f_{F}\) was interpreted as the result of forming a more rapidly rotating hybrid star with a larger \(\delta\), leading to a more significant mode suppression by the rotation. Since these fundamental mode behaviors reappear in our models of magnetized non-rotating hybrid stars, they may be intrinsic properties of such a phase transition or of the resulting hybrid star. We plan to investigate this aspect thoroughly in future studies.

### Constraining the magnetic field and the composition

To better illustrate the correlation between the fundamental mode frequencies and the properties of the resulting magnetized hybrid star, we show a contour plot of the frequency ratio between the fundamental \(l=0\) quasi-radial mode and the fundamental \(l=2\) quadrupolar mode \(f_{{}^{2}f}/f_{F}\) against the maximum magnetic field strength \(\mathcal{B}_{\rm max}\) (horizontal axis) and the baryonic mass fraction of matter in the mixed phase \(M_{\rm mp}/M_{0}\) (vertical axis) in Fig. 4. We constructed this plot by the cubic radial basis function interpolation of the data points of our models and of those in our previous work of Leung et al. [23]. A colored dot with a black edge labels each data point. The dash-dotted lines denote the contour lines for particular values of \(f_{{}^{2}f}/f_{F}\). For a fixed value of \(f_{{}^{2}f}/f_{F}\), there are localized regions in the \(\mathcal{B}_{\rm max}\)-\(M_{\rm mp}/M_{0}\) plane. Therefore, the measurement of \(f_{{}^{2}f}/f_{F}\) constrains the values of \(\mathcal{B}_{\rm max}\) and \(M_{\rm mp}/M_{0}\).

## Discussion

In this work, for the first time, we dynamically simulate the collapse of magnetized neutron stars induced by a phase transition and demonstrate that the fundamental modes of neutron stars can be used to constrain the internal magnetic field strength together with the composition of the stars. In particular, we first found that the waveform is primarily composed of the fundamental \(l=0\) quasi-radial \(F\) mode and the fundamental \(l=2\) quadrupolar \({}^{2}f\) mode. We next investigate the maximum gravitational wave amplitude \(|h|_{\rm max}\). \(|h|_{\rm max}\) first rises with \(\mathcal{B}_{\rm max}\) due to the increasing magnetic deformation when \(\mathcal{B}_{\rm max}\lesssim 5\times 10^{17}\) G.
On the other hand, when \(\mathcal{B}_{\rm max}\gtrsim 5\times 10^{17}\) G, \(|h|_{\rm max}\) decreases substantially with \(\mathcal{B}_{\rm max}\) due to the drop in the oscillation amplitude of the rest-mass density. Then, we have demonstrated that the magnetic suppression of stellar oscillations found in our previous work of Leung et al. [23] remains valid in our models, with the mode frequency value being sensitive to the pressure contribution due to quark matter \(\delta\). Finally, we have shown that the maximum magnetic field strength \(\mathcal{B}_{\rm max}\) and the baryonic mass fraction of matter in the mixed phase \(M_{\rm mp}/M_{0}\) could be constrained by measuring the frequency ratio between the two fundamental modes \(f_{{}^{2}f}/f_{F}\). The fundamental modes in this work are in the frequency range of \(f\sim 600-1800\) Hz. Current gravitational wave detectors, including Advanced LIGO [24], Advanced Virgo [25], and KAGRA [26, 27], are sensitive to gravitational wave signals with frequencies of \(f\sim 20-2000\) Hz. Thus, these detectors can barely detect the higher-frequency \({}^{2}f\) modes. In contrast, the third-generation gravitational wave detectors, such as the Einstein Telescope (ET) [28] and Cosmic Explorer (CE) [29, 30, 31], are designed to have a broader sensitivity band with a frequency range of \(f\sim 1-10000\) Hz. Moreover, it has been shown that the mode frequency of the post-merger signal from binary neutron star coalescences can be measured up to an accuracy of \(\sim\mathcal{O}(10)\) Hz with the third-generation gravitational wave detectors [32]. Hence, we expect the fundamental modes in this work could also be measured to roughly the same accuracy using these detectors. However, a detailed analysis targeting the gravitational waves from phase-transition-induced collapses of neutron stars is necessary to verify this claim. This kind of analysis is beyond the scope of this study, so we leave it for future work. By examining \(\mathcal{B}_{\rm max}\) and \(M_{\rm mp}/M_{0}\) as examples, this work reveals that studying the fundamental modes of neutron stars can yield important information about the interior of the stars. Several extensions can be made to the current work. Firstly, the study can be repeated with a more realistic equation of state that includes thermal and magnetic effects. Next, other kinds of magnetic field geometries, including purely poloidal fields and twisted torus configurations, should also be investigated. Furthermore, since the instability of the purely toroidal field has been suppressed due to the restriction to axisymmetry in this work, 3D simulations without axisymmetry should also be performed. Finally, it is also necessary to consider the rotation of neutron stars, as different observations suggest that they rotate.

## Methods

The set-ups of the simulations in this work are identical to those in our previous work of Yip et al. [21]. Here, we briefly highlight these set-ups for completeness.

### Initial neutron star models

Equilibrium models of neutron stars in axisymmetry are constructed with the open-source code XNS [33, 34, 35, 36, 37]; these models serve as the initial data for the simulations.
We construct these equilibrium models with a polytropic equation of state, \[P=K\rho^{\gamma}, \tag{1}\] where \(P\) denotes the pressure, \(\rho\) denotes the rest-mass density and we adopt a polytropic constant \(K=1.6\times 10^{5}\) cm\({}^{5}\) g\({}^{-1}\) s\({}^{-2}\) (which is equivalent to 110 in the unit of \(c=G=M_{\odot}=1\)) and a polytropic index \(\gamma=2\). The specific internal energy \(\varepsilon\) on the initial time-slice is specified by \[\varepsilon=\frac{K}{\gamma-1}\rho^{\gamma-1}. \tag{2}\] The toroidal magnetic field _enclosed in the star_ follows a magnetic polytropic law \[\mathcal{B}_{\phi}=\alpha^{-1}K_{\rm m}(\rho h\varpi^{2})^{m}, \tag{3}\] where \(\alpha\) denotes the lapse function, \(K_{\rm m}\) denotes the toroidal magnetization constant, \(h\) denotes the specific enthalpy, \(\varpi^{2}=\alpha^{2}\psi^{4}r^{2}\sin^{2}\theta\), \(\psi\) denotes the conformal factor, \((r,\theta)\) denote the radial and angular coordinates in 2D spherical coordinates, and \(m\geq 1\) denotes the toroidal magnetization index. There are 9 models constructed in total, with 'REF' corresponding to the unmagnetized reference model and the others being magnetized neutron star models. These models are included in our previous work of Leung et al. [23]. As this work does not aim to study neutron stars with various masses, all models have a fixed baryonic mass of \(M_{0}=1.68\ M_{\odot}\), within the mass range of ordinary neutron stars. Also, each magnetized model has the same toroidal magnetization index \(m=1\) but has different values of the toroidal magnetization constant \(K_{\rm m}\). The models are sorted by increasing maximum magnetic field strength \(\mathcal{B}_{\rm max}\), where 'T1K1' has the lowest field strength, 'T1K2' has the second lowest field strength, and so on ('T1' means the toroidal magnetization index \(m=1\) and 'K' represents the toroidal magnetization constant \(K_{\rm m}\)). These models allow for a phase transition within the stellar core and enable comparison with Leung et al. An overview of the detailed properties of all nine models can be found in Table 1.

### Hybrid star models and evolution

Based on the framework introduced by Lin et al. [20], we assume that the phase transition happens instantaneously in the initial time slice and is triggered by changing the original polytropic equation of state to a "softer" equation of state for describing hybrid stars. The MIT bag model equation of state [38] for massless and non-interacting quarks is given by \[P_{\rm q}=\frac{1}{3}(e-4B), \tag{4}\] where \(P_{\rm q}\) denotes the pressure of deconfined quarks, \(e\) denotes the total energy density and \(B\) denotes the bag constant. We apply the ideal gas equation of state to describe the evolution of normal hadronic matter \[P_{\rm h}=(\gamma-1)\rho\varepsilon, \tag{5}\] where \(P_{\rm h}\) denotes the pressure of hadrons and \(\gamma\) is chosen to be 2. A hybrid star formed after the phase-transition-induced collapse can be made up of two or three parts: (i) a hadronic phase for the region having a rest-mass density less than the lower threshold density \(\rho<\rho_{\rm hm}\), (ii) a mixed phase of the deconfined quarks and hadrons for the region having a rest-mass density in between the lower threshold density and the upper threshold density \(\rho_{\rm hm}<\rho<\rho_{\rm qm}\), and (iii) a region of pure quark matter phase with a rest-mass density beyond \(\rho_{\rm qm}\) (the maximum density might or might not correspond to this phase in practice).
Following Abdikamalov et al. [19], the equation of state for hybrid stars can be expressed as follows: \[P=\begin{cases}P_{\rm h}&\text{for }\rho<\rho_{\rm hm},\\ \alpha_{\rm q}P_{\rm q}+(1-\alpha_{\rm q})P_{\rm h}&\text{for }\rho_{\rm hm}\leq\rho\leq\rho_{\rm qm},\\ P_{\rm q}&\text{for }\rho_{\rm qm}<\rho,\end{cases} \tag{6}\] where \[\alpha_{\rm q}=1-\left(\frac{\rho_{\rm qm}-\rho}{\rho_{\rm qm}-\rho_{\rm hm}}\right)^{\delta} \tag{7}\] quantifies the relative contribution of hadrons and deconfined quarks to the total pressure in the mixed phase. By using \(\delta\) as the exponent, the pressure contribution due to deconfined quarks can be adjusted. We use 3 values of \(\delta\in\{1,2,3\}\) to vary the pressure contribution due to deconfined quarks in the mixed phase. We take \(\rho_{\rm hm}=6.97\times 10^{14}\) g cm\({}^{-3}\), \(\rho_{\rm qm}=24.3\times 10^{14}\) g cm\({}^{-3}\) and \(B^{1/4}=170\) MeV.

### Simulations

Simulations are performed three times for each of the 9 equilibrium models, once for each value of \(\delta\in\{1,2,3\}\). Thus, \(9\times 3=27\) simulations are conducted. The stellar models are evolved in dynamical spacetime using the new general relativistic magnetohydrodynamics code Gmunu [39, 40, 41]. Gmunu adopts a multigrid method for solving the Einstein equations in the conformally flat condition approximation. Ideal general-relativistic magnetohydrodynamics simulations are carried out in 2D cylindrical coordinates \((R,z)\). Axisymmetry with respect to the \(z\)-axis and equatorial symmetry are imposed for the simulations. The computational domain covers the region [0,100] for both \(R\) and \(z\) directions, with the base grid resolution \(N_{R}\times N_{z}=32\times 32\) and allowing 6 AMR levels (effective resolution \(=1024\times 1024\)). The refinement criteria of AMR we used are equivalent to those in Cheong et al. [40] and Leung et al. [23]. The TVDLF approximate Riemann solver [42], the 3rd-order reconstruction method PPM [43] and the 3rd-order accurate SSPRK3 time integrator [44] are used for the simulations. The region surrounding the star is filled with an artificially low-density 'atmosphere', with a rest-mass density of \(\rho_{\rm atm}\sim 10^{-10}\rho_{\rm c}\,(t=0)\). In addition, as the simulations are restricted to magnetized stars with a purely toroidal field in axisymmetry, no divergence cleaning method is adopted.

### Gravitational wave extraction and mode identification

The gravitational wave signal due to the phase-transition-induced collapse is computed by the quadrupole moment formula for an axisymmetric source [45] \[h=\frac{1}{2D_{\rm obs}}\left(2\ddot{I}_{zz}-\ddot{I}_{RR}\right), \tag{8}\] where \(h\) is the gravitational wave amplitude observed in the equatorial plane, \(I_{ij}\) is the quadrupole moment, \(\ddot{I}_{ij}\) is its second-time derivative, and \(D_{\rm obs}\) is the distance from the source. Since there is no unique choice for the definition of quadrupole moment in dynamical spacetime, we choose [46] \[I_{ij}=\int\rho_{\rm s}x^{i}x^{j}d^{3}x, \tag{9}\] where \(x^{i}\) is the spatial coordinate, \(\rho_{\rm s}\equiv\rho W\sqrt{\gamma}\) is the conserved rest-mass density, \(\rho\) is the rest-mass density, \(W\equiv 1/\sqrt{1-v^{i}v_{i}}\) is the Lorentz factor, \(v_{i}\) is the 3-velocity and \(\gamma_{ij}\) is the spatial metric.
With the continuity equation \[\partial_{t}\rho_{\rm s}+\partial_{i}\left(\rho_{\rm s}\psi^{i}\right)=0, \tag{10}\] where \(\psi^{i}\equiv\left(\alpha v^{i}-\beta^{i}\right)\), \(\alpha\) is the lapse function and \(\beta^{i}\) is the space-like shift vector, the first time derivative of the quadrupole moment \(I_{ij}\) can be computed by \[\dot{I}_{ij}=\int\rho_{\rm s}\left(\psi^{i}x^{j}+x^{i}\psi^{j}\right)d^{3}x. \tag{11}\] The second time derivative \(\ddot{I}_{ij}\) is then computed by the finite difference method. After computing the waveform, we follow the method introduced by Lin et al.[20] to identify the fundamental modes in the gravitational wave signal. Specifically, we compare our results with the perturbed equilibrium models in our previous work of Leung et al.[23], which have similar structures to those of the resulting hybrid stars in our simulations.
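A schematic of the extraction step of Eqs. (9)-(11) (our own sketch, assuming the grid data are available as flat arrays; the production pipeline evolves these quantities inside Gmunu):

```python
import numpy as np

def quadrupole_first_derivative(rho_s, psi, x, dV):
    """Eq. (11): Idot_ij = int rho_s (psi^i x^j + x^i psi^j) dV, evaluated on a
    grid. rho_s is the conserved density; psi[i] and x[i] are coordinate-wise
    arrays; dV is the cell volume (all names are ours, not Gmunu's)."""
    dim = len(x)
    Idot = np.zeros((dim, dim))
    for i in range(dim):
        for j in range(dim):
            Idot[i, j] = np.sum(rho_s * (psi[i] * x[j] + x[i] * psi[j]) * dV)
    return Idot

def second_time_derivative(Idot_series, dt):
    """Finite-difference time derivative of a series of Idot snapshots,
    giving the I-double-dot entering Eq. (8)."""
    return np.gradient(Idot_series, dt, axis=0)
```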
2302.11340
Are there any Landau poles in wavelet-based quantum field theory?
Following previous work by one of the authors [M.V.Altaisky, Unifying renormalization group and the continuous wavelet transform, Phys. Rev. D 93, 105043 (2016).], we develop a new approach to the renormalization group, where the effective action functional $\Gamma_A[\phi]$ is a sum of all fluctuations of scales from the size of the system $L$ down to the scale of observation $A$. It is shown that the renormalization flow equation of the type $ \frac{\partial \Gamma_A}{\partial \ln A}=-Y(A) $ is a limiting case of such consideration, when the running coupling constant is assumed to be a differentiable function of scale. In this approximation, the running coupling constant, calculated at one-loop level, suffers from the Landau pole. In general case, when the scale-dependent coupling constant is a non-differentiable function of scale, the Feynman loop expansion results in a difference equation. This keeps the coupling constant finite for any finite value of scale $A$. As an example we consider Euclidean $\phi^4$ field theory.
Mikhail Altaisky, Michal Hnatich
2023-02-21T15:05:40Z
http://arxiv.org/abs/2302.11340v3
# Are there any Landau poles in wavelet-based quantum field theory? ###### Abstract Following [1], we develop a new approach to the renormalization group, where the effective action functional \(\Gamma_{A}[\phi]\) is a sum of all fluctuations with scales from the size of the system (\(L\)) down to the scale of observation (\(A\)). It is shown that the renormalization flow equation of the type \(\frac{\partial\Gamma_{A}}{\partial\ln A}=-\beta(A)\) is a limiting case of such consideration, when the running coupling constant is assumed to be a differentiable function of scale. In this approximation, the running coupling constant, calculated at one-loop level, suffers from the Landau pole. In the general case, when the scale-dependent coupling constant is a non-differentiable function of scale, the Feynman loop expansion results in a difference equation. This keeps the coupling constant finite for any finite value of the scale \(A\). As an example we consider the Euclidean \(\phi^{4}\) field theory. ## I Introduction The renormalization group (RG) was discovered by Stueckelberg and Petermann as a group of parametrizations of the \(S\)-matrix emerging after the cancellation of UV divergences in quantum electrodynamics [2]. The RG method has become popular in high-energy physics since Gell-Mann and Low, using the functional equation for the renormalized photon propagator in QED, showed that the charge distribution surrounding a test charge in vacuum does not depend on the coupling constant at small scales, except for a scale factor, i.e., possesses a kind of self-similarity [3]. The breakthrough in the RG approach was achieved by Kenneth Wilson, who applied it to statistical mechanics, where continuously many degrees of freedom are correlated over long distances. It was found that if there are many degrees of freedom within the correlation length \(\xi\), the behaviour of the system is primarily determined by the cooperative behaviour and the number of degrees of freedom, rather than by the type of the interaction Hamiltonian [4]. The core of Wilson's formulation was to successively integrate out the fluctuations of small scales to obtain progressively coarse-grained descriptions of fluctuations at larger scales [5; 6]. By doing so, the RG approach unifies the theory of phase transitions, quantum field theory, turbulence and many other branches of physics. The RG approach not only provides an explanation of critical phenomena, but also renders a practical tool for the calculation of second-order phase transitions [4; 7]. Having started from the weak interaction limit, where one-loop corrections to the correlation functions have a tractable physical meaning, the RG approach has gradually evolved into the scaling equations for the exact ("dressed") correlation functions. In this paper, following the previous papers [1; 8; 9], we sum up the fluctuations starting from the IR edge and go down to the observation scale. If the fluctuations are summed up in a thin shell of scales, the beta function coincides with the known results, regardless of whether the summation starts from the IR or from the UV edge [1]. We have shown that summing up the fluctuations from the IR edge (from the size of the system) down to the observation scale in a finite range of scales renders a finite renormalization of the coupling constant without any Landau poles. The use of _continuous_ wavelet transform is not the only way of wavelet regularisation in quantum field theory aimed at summing up the fluctuations of different scales. 
Different authors also used the _discrete wavelet transform_ based on orthogonal wavelets [10; 11; 12]. This gives a natural cutoff at both the UV and the IR scale, but remains a lattice theory suitable for numerical simulations, rather than for analytical evaluation of the scaling behaviour. The remainder of this paper is organised as follows. In _Section II_ we briefly recall some definitions of the Euclidean field theory in its statistical interpretation. _Section III_ presents the formalism of continuous wavelet transform in Euclidean QFT. In _Section IV_ we present the one-loop contribution to the vertex in the \(\phi^{4}\) model, and show that accurate summation of the contributions of all scales from the size of the system down to the observation scale does not produce any singularities such as Landau poles. A few concluding remarks are given in the last section. ## II Statistical mechanics view on quantum field theory Let us briefly recall the statistical view on the formalism of Euclidean quantum field theory. At the state of thermodynamic equilibrium the distribution of a continuous field \(\phi(x)\), say a magnetisation, is given by the canonical partition function \[Z=\mathrm{Tr}e^{-\beta H},\] where \(H=H[\phi]\) is the Hamiltonian, \(\beta=\frac{1}{T}\) is the inverse temperature, and the trace operator assumes the summation over all degrees of freedom. The trace can be expressed in terms of the Feynman integral: \[Z[J]=\int\mathcal{D}\phi\exp\left(-S[\phi]+\int J(x)\phi(x)d^{d}x\right), \tag{1}\] where the formal source \(J(x)\) can be understood as an external magnetic field. The Euclidean action functional \(S[\phi]\) is proportional to the Hamiltonian of the field \(\phi(x)\), \[S[\phi]=\frac{1}{T}\int d^{d}x\left[\frac{1}{2}(\partial\phi)^{2}+\frac{m^{2}} {2}\phi^{2}+\frac{\lambda}{4!}\phi^{4}\right], \tag{2}\] in the Ginzburg-Landau theory of phase transitions [13]. The correlation functions of the field \(\phi(x)\) can be derived as functional derivatives \[G^{(n)}(x_{1},\ldots,x_{n})=\left.\frac{\delta^{n}W[J]}{\delta J(x_{1})\ldots \delta J(x_{n})}\right|_{J=0}, \tag{3}\] where \(W[J]=\ln Z[J]\) is the connected Green functions generating functional, which is proportional to the Helmholtz free energy \(F[J]=-T\ln Z[J]\). The effective action functional \(\Gamma[\phi]\) is defined via the Legendre transform of \(W[J]\): \[\Gamma[\phi]=-W[J]+\int J(x)\phi(x)d^{d}x. \tag{4}\] (Here we keep the notation of [14].) The functional derivative of \(W[J]\) with respect to the external source \(J(x)\) determines the _mean field_\(\phi=\phi[J]\): \[\frac{\delta W[J]}{\delta J(x)}=\phi(x).\] The functional derivatives of the effective action \(\Gamma[\phi]\) are the _vertex functions_\(\Gamma^{(n)}[\phi]\). In the above considered \(\phi^{4}\) model, the (renormalised) vertex function \(\Gamma^{(4)}[\phi]\) accounts for the value of the coupling constant calculated at some reference scale; the \(\Gamma^{(2)}[\phi]\) function is the renormalised inverse propagator, which defines the renormalization of the mass at the same reference scale. The most instructive case of a locally known microscopic interaction is the Ising model, described by the microscopic Hamiltonian \[H=-J\sum_{<ij>}S_{i}S_{j}-B\sum_{i}S_{i}, \tag{5}\] where \(J\) is the coupling constant of the interaction between neighbouring spins, \(B\) is the external magnetic field, and the Ising spins, with values \(S_{i}=\pm 1\), are located on some regular lattice. 
In the continuum limit, the Hamiltonian of the Ising model (5) with the nearest-neighbour interaction turns into the Euclidean QFT model with \(\phi^{4}\) interaction (2), which corresponds to the Ginzburg-Landau theory [13; 4]. In many cases, the interaction Hamiltonian or the bare action functional is known at some _macroscopic_ scale \(\mu\), but the microscopic theory at smaller scales (higher momentum transfer) should be unveiled. Typical cases are QED and quantum gravity, both having \(1/r\) asymptotic behaviour at macroscopic scales, but different behaviour at smaller scales [15; 16]. The renormalization group method displays its best merits when the microscopic fluctuations of atomic scales cooperate into large-scale fluctuations, which are well described by classical mean-field equations. This happens in the theory of phase transitions, critical behaviour, kinetic description of gases, etc. [17; 4; 18]. However, if fluctuations of all scales do matter equally, the averaging of fluctuations from the atomic scales up to the larger scales (say, by Bogolubov's chain) becomes notoriously difficult. It turns out to be easier, say in hydrodynamics, to start with the laminar large-scale motion and to sum up all fluctuations arising from instabilities down to the atomic scales, where these fluctuations are completely damped by viscosity [17]. ## III Using continuous wavelet transform in quantum field theory models ### Continuous wavelet transform To separate fluctuations of different scales in quantum field theory, it is convenient to use the formalism of continuous wavelet transform (CWT), as described e.g., in [19; 20]. Let us briefly recall the basics of the wavelet transform; see the monographs [21; 22] for a detailed introduction. Let \(\phi(x)\in L^{2}(\mathbb{R}^{d})\) be a square-integrable function. If \(\chi(x)\in L^{2}(\mathbb{R}^{d})\) is a suitably well localised function which satisfies the _admissibility condition_ \[C_{\chi}=\int|\tilde{\chi}(k)|^{2}\frac{d^{d}k}{S_{d}|k|^{d}}<\infty, \tag{6}\] where tilde denotes the Fourier transform, \[\tilde{\chi}(k):=\int_{\mathbb{R}^{d}}e^{ikx}\chi(x)d^{d}x,\] and \(S_{d}=\frac{2\pi^{d/2}}{\Gamma(d/2)}\) is the area of the unit sphere in \(\mathbb{R}^{d}\), then it is possible to decompose the function \(\phi\) with respect to the basis provided by shifted, dilated, and rotated copies of \(\chi(x)\). This decomposition is known as the _continuous wavelet transform_ (CWT) [23; 24]: \[\phi(x)=\frac{1}{C_{\chi}}\int\frac{1}{a^{d}}\chi\left(R^{-1}(\theta)\frac{x-b} {a}\right)\phi_{a\theta}(b)\frac{dad^{d}b}{a}d\mu(\theta), \tag{7}\] where \(R(\theta)\) is the rotation matrix and \(d\mu(\theta)\) is the left-invariant measure on the \(SO(d)\) rotation group, usually written in terms of the Euler angles: \[d\mu(\theta)=2\pi\prod_{k=1}^{d-2}\sin^{k}\theta_{k}\,d\theta_{k},\quad 0\leq\theta_{k}\leq\pi.\] The functions \[\phi_{a,\theta}(b):=\int_{\mathbb{R}^{d}}\frac{1}{a^{d}}\overline{\chi}\left( R^{-1}(\theta)\frac{x-b}{a}\right)\phi(x)d^{d}x \tag{8}\] are known as the _wavelet coefficients_ of the function \(\phi\) with respect to the mother wavelet \(\chi\). 
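To make the definitions concrete, a small self-contained sketch (ours, in Python; one dimension, no rotations) computes the wavelet coefficients (8) by direct quadrature and checks the admissibility constant (6) for the first Gaussian-derivative wavelet introduced in the Mother wavelets subsection below:

```python
import numpy as np

def chi1(x):
    """First derivative of a Gaussian (the chi_1 wavelet of Sec. III.C)."""
    return -x * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def wavelet_coefficients(phi, x, a):
    """Eq. (8) in d = 1, no rotations: phi_a(b) = (1/a) int chi((x-b)/a) phi(x) dx."""
    dx = x[1] - x[0]
    return np.array([np.sum(chi1((x - b) / a) * phi) * dx / a for b in x])

# Admissibility constant (6) in d = 1 (S_1 = 2): C_chi = int_0^inf k e^{-k^2} dk.
k = np.linspace(1e-6, 20.0, 200_000)
print(np.trapz(k * np.exp(-k**2), k))   # ~ 0.5 = Gamma(1)/2, as quoted later in the text
```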
The decomposition (8) and the reconstruction (7) formulae represent a particular case of the _partition of unity_ in a Hilbert space \(\mathcal{H}\) with respect to a representation \(U(g)\) of a Lie group \(G\) acting transitively on \(\mathcal{H}\)[25; 26]: \[\hat{\mathbb{I}}=\frac{1}{C_{\chi}}\int_{G}U(g)|\chi\rangle d\mu(g)\langle \chi|U^{*}(g),\] with \(G\) being the group of affine transformations: \[G:x^{\prime}=aR(\theta)x+b,\,x,b\in\mathbb{R}^{d},\,a\in\mathbb{R}_{+},\,\theta\in SO (d). \tag{9}\] Wavelet coefficients (8) have a clear physical meaning: the convolution of the analysed function \(\phi\) with a well localised function \(\chi\) at a fixed window width \(a\) comprises only the fluctuations with typical scales close to \(a\) and is insensitive to all other fluctuations. ### Scale-dependent fields The reconstruction (7) of the function \(\phi\) from the set of its wavelet coefficients is generally non-orthogonal, and the wavelet basis is overcomplete [22]. Although the integration \(\int_{0}^{\infty}\frac{da}{a}\ldots\) in (7) provides a formally exact reconstruction formula, depending on the physics of the considered problem we can restrict the integration by the minimal scale \(A\) from below (the lattice size in the case of a ferromagnet) and by the system size \(L\) from above, \(\int_{A}^{L}\frac{da}{a}\ldots\) Moreover, as we know from the Heisenberg uncertainty principle, the value of a _quantum_ field \(\phi\) sharp at a point \(x\) is physically meaningless, since any measurement with \(\Delta x\!\rightarrow\!0\) implies an infinite momentum transfer \(\Delta p\!\rightarrow\!\infty\), which definitely drives us out of the applicability range of the model. For this reason we have to consider \(A\) as the best available scale of measurement (observation). In the remainder of this paper, following the previous papers [8; 20], we will assume the mother wavelet \(\chi(x)\) to be an isotropic function of \(x\) and thus ignore the rotation factor \(R(\theta)\). In this setting, the scale component of the field \(\phi\), measured at a point \(b\) at the scale \(a\) with respect to the mother wavelet \(\chi\) (considered as an aperture function, by analogy with optics [27]), is given by the wavelet coefficient: \[\phi_{a}(b)\equiv\langle a,b;\chi|\phi\rangle=\int\frac{1}{a^{d}}\overline{ \chi}\left(\frac{x-b}{a}\right)\phi(x)d^{d}x. \tag{10}\] However, the space of scale-dependent functions \(\{\phi_{a}(b)\}\) is more general than the space of point-dependent functions \(\phi(x)\in L^{2}(\mathbb{R}^{d})\). Even if all fields \(\phi_{a}(b)\) are well defined \(\forall a\in\mathbb{R}_{+},b\in\mathbb{R}^{d}\), the limit \[\phi(x)=\lim_{A\to 0}\frac{1}{C_{\chi}}\int_{A}^{\infty}\frac{da}{a}\int_{\mathbb{R}^{d}} \frac{1}{a^{d}}\chi\left(\frac{x-b}{a}\right)\phi_{a}(b)d^{d}b\] does not necessarily exist. The divergence of the sum of all scale components happens in UV-divergent theories, where the value of a physical field \(\phi\) sharp at a point \(x\) is meaningless. If \(\phi(x)\) is understood as a wave function of a physical particle, its normalization \[\langle\phi|\phi\rangle=\int\overline{\phi}(x)\phi(x)d^{d}x=1\] is the statement of existence: the probability of finding this particle anywhere in space \(\mathbb{R}^{d}\) is exactly \(1\). The wavelet approach, based on the affine group (9), generalises the statement of existence in the form \[\frac{1}{C_{\chi}}\int_{g\in G}|\langle\phi|U(g)|\chi\rangle|^{2}d\mu(g)=1, \quad g=(a,b,\theta). 
\tag{11}\] The latter equation states that sweeping the measurement parameters (the position \(b\), the resolution \(a\), and the direction \(\theta\)) over all possible values will necessarily imply the registration of the particle. Technically, the use of the scale-dependent functions \(\phi_{a}(b)\) in a local quantum field theory is rather straightforward: one can express local fields in terms of their wavelet transform \[\phi(x)=\frac{1}{C_{\chi}}\int\frac{da}{a}\int\frac{d^{d}k}{(2\pi)^{d}}e^{- ikx}\tilde{\chi}(ak)\tilde{\phi}_{a}(k), \tag{12}\] where \(\tilde{\phi}_{a}(k)\) are the Fourier images of the wavelet coefficients (10). This defines an easy rule to redefine the Feynman diagram technique: \[\tilde{\phi}(k)\rightarrow\tilde{\phi}_{a}(k)=\overline{\tilde{\chi}(ak)} \tilde{\phi}(k) \tag{13}\] Doing so, we have the following modification of the Feynman diagram technique [8; 19]: 1. Each field \(\tilde{\phi}(k)\) is substituted by the scale component: \(\tilde{\phi}(k)\rightarrow\tilde{\phi}_{a}(k)=\overline{\tilde{\chi}(ak)} \tilde{\phi}(k)\). 2. Each integration in the momentum variable is accompanied by the corresponding scale integration: \[\frac{d^{d}k}{(2\pi)^{d}}\rightarrow\frac{d^{d}k}{(2\pi)^{d}}\frac{da}{a}\frac{1 }{C_{\chi}}.\] 3. Each interaction vertex is substituted by its wavelet transform; for the \(N\)th power interaction vertex, this gives multiplication by the factor \(\prod_{i=1}^{N}\tilde{\chi}(a_{i}k_{i})\). According to these rules, the bare Green function of a massive scalar field in the wavelet representation takes the form \[G_{0}^{(2)}(a_{1},a_{2},p)=\frac{\tilde{\chi}(a_{1}p)\tilde{\chi}(-a_{2}p)}{p^ {2}+m^{2}}.\] The finiteness of loop integrals is provided by the following rule: _there should be no scales \(a_{i}\) in internal lines smaller than the minimal scale of all external lines_ [19; 20]. Therefore, the integration in the \(a_{i}\) variables is performed from the minimal scale of all external lines up to infinity, or up to the system size. This corresponds to the summation of all fluctuations of all scales from the system size down to the finest scale of observation. The cutoff in the scale variables \(a\) is a milder assumption than the momentum cutoff \(\Lambda\) in the usual theory. Since the scale \(a\) is _a setting of observation_, rather than a measurable quantity like momentum, the cutoff in it results neither in a violation of momentum conservation, nor in a violation of other important symmetries. The summation of all fluctuations from the system size down to the finest observation scale, but not below it, seems quite natural, for the integration over infinitely small scales is often beyond the applicability range of a particular physical model. This happens in ferromagnets below the grid spacing, in turbulence below the mean free path, etc. For a theory with a local \(\phi^{N}(x)\) interaction, the presence of two conjugated factors \(\tilde{\chi}(ak)\) and \(\overline{\tilde{\chi}(ak)}\) on each diagram line connected to an interaction vertex simply means that each internal line of the Feynman diagram carrying momentum \(p\) is supplied by the cutoff factor \(f^{2}(Ap)\), where \[f(x):=\frac{1}{C_{\chi}}\int_{x}^{\infty}|\tilde{\chi}(a)|^{2}\frac{da}{a}, \quad f(0)=1, \tag{14}\] with \(A\) being the minimal scale of all external lines of this diagram. ### Mother wavelets In our calculations, we use different derivatives of the Gaussian as mother wavelets. 
The admissibility condition (6) is rather loose: practically any well-localized function with a Fourier image vanishing at zero momentum (\(\tilde{\chi}(0)=0\)) obeys this requirement. As for the Gaussian-derivative functions \[\chi_{n}(x)=(-1)^{n+1}\frac{d^{n}}{dx^{n}}\frac{e^{-\frac{x^{2}}{2}}}{\sqrt{2 \pi}},\quad n>0, \tag{15}\] where \(x\) is a dimensionless argument, they are easy to integrate in Feynman diagrams. The graphs of the first two wavelets of the family (15), \[\chi_{1}(x)=-\frac{xe^{-\frac{x^{2}}{2}}}{\sqrt{2\pi}},\quad\chi_{2}(x)=\frac {(1-x^{2})e^{-\frac{x^{2}}{2}}}{\sqrt{2\pi}},\] are shown in Fig. 1. Figure 1: First two wavelets of the Gaussian wavelet family (15). Their Fourier images are \[\tilde{\chi}_{n}(k)=-(\imath k)^{n}e^{-\frac{k^{2}}{2}}. \tag{16}\] Respectively, the normalization constants and the wavelet cutoff functions are: \[C_{\chi_{n}}=\frac{\Gamma(n)}{2},\quad f_{\chi_{n}}(x)=\frac{\Gamma(n,x^{2})} {\Gamma(n)},\] where \(\Gamma(\cdot)\) is the Euler gamma function, and \(\Gamma(\cdot,\cdot)\) is the incomplete gamma function. For the first two wavelets of the family (15) the cutoff functions are: \[f_{\chi_{1}}(x)=e^{-x^{2}},\quad f_{\chi_{2}}(x)=(1+x^{2})e^{-x^{2}}. \tag{17}\] ## IV An example of the \(\phi^{4}\) model Let us consider the Euclidean action of a massive scalar field with the local \(\phi^{4}\) interaction (2): \[S[\phi]=\int d^{d}x\left[\frac{1}{2}(\partial\phi)^{2}+\frac{m^{2}}{2}\phi^{ 2}+\frac{\lambda}{4!}\phi^{4}\right].\] This model is an extrapolation of a classical interacting spin model to the continuum limit [28]. Known as the Ginzburg-Landau model [13], it describes phase transitions in superconductors and other magnetic systems fairly well, but it produces divergences when the correlation functions are evaluated from the generating functional (1) by perturbation expansion; see, e.g., [29] for a discussion. The parameter \(\lambda\) in the action functional (2) is a phenomenological coupling constant, which knows nothing about the scale of observation, and becomes the running coupling constant once fluctuations of different scales are taken into account. In the wavelet representation, the interaction term of the action is now modulated by the wavelet factor \(V^{a_{1}a_{2}a_{3}a_{4}}_{x_{1}x_{2}x_{3}x_{4}}\), which is the Fourier transform of \(\prod_{i=1}^{4}\tilde{\chi}(a_{i}k_{i})\). As usual in the functional renormalization group technique [30], we can introduce the effective action functional (4), the functional derivatives of which are the vertex functions \(\Gamma^{(n)}_{(A)}\): \[\Gamma_{(A)}[\phi_{a}]=\Gamma^{(0)}_{(A)}+\sum_{n=1}^{\infty}\int\Gamma^{(n)} _{(A)}(a_{1},b_{1},\ldots,a_{n},b_{n})\phi_{a_{1}}(b_{1})\ldots\phi_{a_{n}}(b_ {n})\frac{da_{1}d^{d}b_{1}}{C_{\chi}a_{1}}\ldots\frac{da_{n}d^{d}b_{n}}{C_{ \chi}a_{n}}.\] The subscript \((A)\) indicates the presence in the theory of some minimal scale, the observation scale. In one-loop approximation, the two-point and the four-point vertex functions, \(\Gamma^{(2)}\) and \(\Gamma^{(4)}\), are given by the one-loop diagrams (19) and (20): the tadpole correction to the propagator and the 'fish' correction to the vertex (diagrams not reproduced here); their analytic expressions are written out in (21) and (22) below. Each vertex of the Feynman diagram corresponds to \(-\lambda\), and each external line of the 1PI diagram contains a wavelet factor \(\tilde{\chi}(ak)\). 
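As a quick numerical cross-check of the cutoff functions (17) against the general formula \(f_{\chi_{n}}(x)=\Gamma(n,x^{2})/\Gamma(n)\), here is a small sketch of ours using SciPy's regularized upper incomplete gamma function:

```python
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(n, y)

x = np.linspace(0.0, 3.0, 301)
# f_{chi_n}(x) = Gamma(n, x^2) / Gamma(n) is exactly gammaincc(n, x**2):
assert np.allclose(gammaincc(1, x**2), np.exp(-x**2))                # Eq. (17), n = 1
assert np.allclose(gammaincc(2, x**2), (1 + x**2) * np.exp(-x**2))   # Eq. (17), n = 2
```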
In one-loop approximation, we have the following expressions for the renormalized inverse propagator \(\Gamma^{(2)}_{(A)}\) and the renormalized vertex function \(\Gamma^{(4)}_{(A)}\), respectively: \[\frac{\Gamma^{(2)}_{(A)}(a_{1},a_{2},p)}{\tilde{\chi}(a_{1}p)\tilde{\chi}(-a _{2}p)}=p^{2}+m^{2}+\frac{\lambda}{2}T^{d}_{\chi}(A), \tag{21}\] \[\frac{\Gamma^{(4)}_{(A)}}{\tilde{\chi}(a_{1}p_{1})\tilde{\chi}(a_{2}p_{2}) \tilde{\chi}(a_{3}p_{3})\tilde{\chi}(a_{4}p_{4})}=\lambda-\frac{3}{2}\lambda^ {2}X^{d}_{\chi}(A), \tag{22}\] where \(A\) is the minimal scale of all external lines of the corresponding diagram. The tadpole integral in Eq.(21), \[T^{d}_{\chi}(A)=\int\frac{d^{d}q}{(2\pi)^{d}}\frac{f_{\chi}^{2}(Aq)}{q^{2}+m ^{2}},\] determines the contribution of all fluctuations with scales from \(A\) to \(\infty\) to the 'dressed mass' at the observation scale \(A\). In the local theory with \(\phi^{4}\) interaction, the natural length scale is the bare mass, the parameter of the action (2). Expressing the momenta in units of the mass \(m\), we get \[T^{d}_{\chi}(A)=\frac{S_{d}m^{d-2}}{(2\pi)^{d}}\int_{0}^{\infty}f_{\chi}^{2}( Amx)\frac{x^{d-1}dx}{x^{2}+1}, \tag{23}\] where \(x\) is dimensionless, and \(\alpha=Am\) is the dimensionless scale of observation. Similarly, the one-loop contribution to the vertex function is given by the 'fish' integral \[X_{\chi}^{d}(A)=\int\frac{d^{d}q}{(2\pi)^{d}}\frac{f_{\chi}^{2}(qA)f_{\chi}^{2}( (q-s)A)}{[q^{2}+m^{2}]\,[(q-s)^{2}+m^{2}]}. \tag{24}\] Let us consider the loop integrals (23) and (24) in \(d\!=\!4\) dimensions, where the coupling constant \(\lambda\) is dimensionless, for the mother wavelets \(n=1,2\) of the family (15). For the \(n=1\) wavelet we get \[T_{\chi_{1}}^{4}(A) = \frac{m^{2}}{8\pi^{2}}\int_{0}^{\infty}e^{-2\alpha^{2}x^{2}}\frac {x^{3}dx}{x^{2}+1} \tag{25}\] \[= \frac{m^{2}}{32\pi^{2}}\left(\frac{1}{\alpha^{2}}-2e^{2\alpha^{2 }}\text{Ei}_{1}(2\alpha^{2})\right),\] where \(\alpha=Am\) and \[\text{Ei}_{1}(z):=\int_{1}^{\infty}\frac{e^{-xz}}{x}dx\] is the exponential integral of the first kind. Similarly, for the \(n=2\) wavelet we get \[T_{\chi_{2}}^{4}(A)=\frac{m^{2}}{8\pi^{2}}\int_{0}^{\infty}e^{-2\alpha^{2}x^{ 2}}(1+\alpha^{2}x^{2})^{2}\frac{x^{3}dx}{x^{2}+1}=\frac{m^{2}}{32\pi^{2}}\Big{(} \frac{5}{2\alpha^{2}}-\frac{5}{2}+\alpha^{2}+2e^{2\alpha^{2}}\text{Ei}_{1}(2 \alpha^{2})[2\alpha^{2}-\alpha^{4}-1]\Big{)}. \tag{26}\] For small scales (\(Am\!\ll\!1\)) the one-loop contribution to the effective mass in (21) is dominated by the quadratic term \(\propto\frac{\lambda}{A^{2}}\). The 'fish' integral contribution (24) to the vertex function (22) can be evaluated by a symmetrisation of the loop momenta, \(q\to q+s/2\), where \(s=p_{1}+p_{2}\) is the sum of the incoming momenta. In terms of the dimensionless momentum \(\mathbf{y}=\mathbf{q}/|s|\), the integral takes the form \[X_{\chi}^{d}(A)=\frac{S_{d-1}s^{d-4}}{(2\pi)^{d}}\int_{0}^{\pi}d\theta\sin^{d -2}\theta\int_{0}^{\infty}dyy^{d-3}\frac{f_{\chi}^{2}\left(As\sqrt{y^{2}+y \cos\theta+\frac{1}{4}}\right)f_{\chi}^{2}\left(As\sqrt{y^{2}-y\cos\theta+ \frac{1}{4}}\right)}{\left[\frac{y^{2}+\frac{1}{4}+\frac{m^{2}}{s^{2}}}{y}+\cos \theta\right]\left[\frac{y^{2}+\frac{1}{4}+\frac{m^{2}}{s^{2}}}{y}-\cos\theta \right]}, \tag{27}\] where \(\theta\) is the angle between the loop momentum \(q\) and the total momentum \(s\). The integral (27) can be evaluated in the relativistic limit \(s^{2}\gg 4m^{2}\). 
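Before proceeding to the relativistic limit, the closed form (25) can be verified numerically against the defining integral (23) (a check of ours, using SciPy's `exp1` for \(\text{Ei}_{1}\)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1        # Ei_1(z) = E_1(z), the exponential integral

alpha = 0.5                            # test value of the dimensionless scale A*m

# The bare x-integral of Eq. (23) for chi_1 in d = 4, where f^2(Amx) = exp(-2 alpha^2 x^2):
num, _ = quad(lambda x: np.exp(-2 * alpha**2 * x**2) * x**3 / (x**2 + 1), 0, np.inf)

# The same quantity from the closed form (25), with the m^2/(8 pi^2) prefactor stripped:
closed = 0.25 * (1 / alpha**2 - 2 * np.exp(2 * alpha**2) * exp1(2 * alpha**2))

print(num, closed)                     # agree to quadrature accuracy (~0.5385 each)
```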
In the logarithmic dimension \(d=4\), where the coupling constant \(\lambda\) is dimensionless, the relativistic approximation drastically simplifies the integral: the dependence on the total momentum \(s\) is manifested only through the dimensionless scale \(As\) in the wavelet cutoff factors \(f_{\chi}^{2}\). For \(n=1\) this gives \[X_{\chi_{1}}^{4}(A)=\frac{1}{16\pi^{2}}\Big{[}2\text{Ei}_{1}(2\alpha^{2})- \text{Ei}_{1}(\alpha^{2})+e^{-\alpha^{2}}\frac{1-e^{-\alpha^{2}}}{\alpha^{2}} \Big{]}, \tag{28}\] where \(\alpha=As\). Similarly, for \(n=2\) we have \[X_{\chi_{2}}^{4}(A)=\frac{1}{16\pi^{2}}\Big{[}2\text{Ei}_{1}(2\alpha^{2})- \text{Ei}_{1}(\alpha^{2})-e^{-2\alpha^{2}}\left(\frac{5}{2\alpha^{2}}+\frac{1 }{2}\right)+e^{-\alpha^{2}}\big{(}\frac{67}{128}+\frac{9}{128}\alpha^{2}+\frac{ 1}{256}\alpha^{4}+\frac{5}{2\alpha^{2}}\big{)}\Big{]}. \tag{29}\] The details of the integral evaluation can be found in the Appendix of [9]. Since equation (22) gives the exact (in one-loop approximation) contribution of all fluctuations with scales from \(A\) to infinity to the dependence of the effective coupling constant on the observation scale \(A\), this dependence can be written as an explicit function of the dimensionless scale \(\alpha=As\). This dependence, calculated with the \(\chi_{1}\) wavelet (28), is \[\lambda_{eff}(\alpha^{2})=\lambda+\frac{3}{2}\frac{\lambda^{2}e^{-\alpha^{2}}}{ 16\pi^{2}}\Big{[}e^{\alpha^{2}}(2\text{Ei}_{1}(2\alpha^{2})-\text{Ei}_{1}( \alpha^{2}))+\frac{1-e^{-\alpha^{2}}}{\alpha^{2}}\Big{]}, \tag{30}\] where we have changed the sign in (22) to invert it from \(\lambda=\lambda_{bare}\) to \(\lambda=\lambda_{phys}\). Let us consider the _contribution of a finite shell of scales_ \((A,L)\), when a classical field is known at a certain finite scale \(L\), in contrast to the previous construction (28), (30), where we have integrated out all fluctuations in the semi-infinite range \((A,L=\infty)\). The value of the effective coupling constant of the type (30) does not diverge for any finite scale \(A>0\) (in contrast to its differential analogue (37), presented below, which suffers from the Landau pole). The reason for this can be understood physically, if we assume a system of size \(L\) in equilibrium, with a well-defined coupling constant \(\lambda_{L}\). Any measurements on such a system can be executed at scales \(A<L\). The effective coupling constant relevant to a measurement at the scale \(A\) is \(\lambda_{A}\). Its particular value is determined by all fluctuations in the range of scales \([A,L]\). In one-loop approximation for the \(\phi^{4}\) theory this effective coupling constant is \[\lambda_{A}=\lambda_{L}+\frac{3}{2}\lambda_{L}^{2}[X(A)-X(L)], \tag{31}\] where the function \(X(A)\) is the 'fish' integral of the type (28). If the scales \(A\) and \(L\) are sufficiently close to each other, the difference equation (31), \[-\frac{\Delta\lambda}{\lambda^{2}}=-\frac{3}{2}\Delta X,\] can be transformed into the differential equation \(d\frac{1}{\lambda}=-\frac{3}{2}dX\), which has the solution \[\lambda(A)=\frac{\lambda_{L}}{1-\frac{3}{2}\lambda_{L}(X(A)-X(L))}, \tag{32}\] which coincides with the solution of the original equation (31) _only for small values of \(\lambda_{L}\)_; otherwise it suffers from the pole. 
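A direct evaluation of Eq. (30) (our own sketch) makes the absence of a pole at any finite scale explicit:

```python
import numpy as np
from scipy.special import exp1         # exponential integral Ei_1

def lambda_eff(lam, alpha2):
    """Effective coupling constant of Eq. (30), chi_1 wavelet."""
    bracket = (np.exp(alpha2) * (2 * exp1(2 * alpha2) - exp1(alpha2))
               + (1 - np.exp(-alpha2)) / alpha2)
    return lam + 1.5 * lam**2 * np.exp(-alpha2) / (16 * np.pi**2) * bracket

for a2 in [1e-6, 1e-3, 1e-1, 1.0, 10.0]:
    print(a2, lambda_eff(10.0, a2))    # finite for all a2 > 0, growing only logarithmically
```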
The formal differentiation of the effective coupling constant (30) with respect to the logarithmic scale argument gives the scaling equation \[\alpha^{2}\frac{\partial\lambda_{eff}}{\partial\alpha^{2}}=\frac{3}{2}\lambda ^{2}\alpha^{2}\frac{\partial X_{\chi_{1}}^{4}}{\partial\alpha^{2}}=\frac{3 \lambda^{2}}{32\pi^{2}}\frac{e^{-\alpha^{2}}-1}{\alpha^{2}}e^{-\alpha^{2}}, \tag{33}\] which for small values \(\alpha\ll 1\) coincides with the standard result \[\frac{\partial\lambda_{eff}}{\partial\mu}\approx\frac{3\lambda^{2}}{16\pi^{2} },\quad\mu=-\ln\alpha.\] In the latter limit of small \(\alpha\) the RG equation for the coupling constant \[\frac{\partial\lambda}{\partial\ln\alpha}=-\frac{3\lambda^{2}}{16\pi^{2}} \tag{34}\] has the well-known solution \[\lambda(\alpha)=\frac{\lambda_{1}}{1+\frac{3\lambda_{1}}{16\pi^{2}}\ln\frac{ \alpha}{\alpha_{1}}}, \tag{35}\] where \(\lambda_{1}=\lambda(\alpha_{1})\) is a reference value of the coupling constant at a certain reference scale \(\alpha_{1}\). The solution (35) suffers from a Landau pole. In its full form, the ordinary differential equation (33) can be solved as an RG-type equation, \[d\left(\frac{1}{\lambda}\right)=-\frac{3}{32\pi^{2}}\frac{e^{-\alpha^{2}}(e^ {-\alpha^{2}}-1)}{\alpha^{4}}d\alpha^{2}. \tag{36}\] If the value of the effective coupling constant \(\lambda\) is known at a certain squared dimensionless scale, \(\lambda_{1}=\lambda(x_{1}=(A_{1}s)^{2})\), its value at other scales \(x\!=\!(As)^{2}\) is given by the explicit solution \[\lambda(x)=\frac{1}{\frac{1}{\lambda_{1}}+\frac{3}{32\pi^{2}}\left[F(x)-F(x_{ 1})\right]}, \tag{37}\] where \(F(x):=2\Gamma(-1,2x)-\Gamma(-1,x)\), with \[\Gamma(a,z)=\int_{z}^{\infty}t^{a-1}e^{-t}dt\] being the incomplete gamma function. Similar to the small-scale case, the solution (37) also suffers from the Landau pole. In the actual sense of the Ginzburg-Landau model, we cannot really assert that the \(\phi^{4}(x)\) interaction is a realistic large-scale interaction from which one can derive the small-scale interaction of fields at \(A\to 0\) by means of RG and loop corrections to the large-scale theory. Instead, what we can do is to approximate some medium-scale interaction from the known parameters of the Hamiltonian at the microscopic scale, i.e., at the UV-cutoff scale. In the case of a ferromagnetic model this is the grid size. From this microscopic interaction we can infer the interaction strength \(\lambda\) for bigger (Kadanoff's) blocks, but not for the ground state of the whole crystal of finite size [5]. In this case the Ginzburg-Landau model is no longer a fairly good approximation. However, there are QFT models in which the large-scale fields provide a good approximation for the measured physical fields, and the renormalization group with the loop corrections provides a good estimation of the field interactions at smaller scales. Quantum electrodynamics is a well-known example. ## V Conclusions We have shown in this paper that the summation of all fluctuations with scales from the size of the system down to the observation scale by means of the continuous wavelet transform results in a finite renormalization of the coupling constant without any Landau poles. This was demonstrated on the simple example of the \(\phi^{4}\) field theory. Our conclusion seems rather general, since the same technique can be applied to QED [9], QCD [8], and other models. The Landau poles then appear to be artefacts of approximating the results in a _finite_ range of scales by the results obtained from a differential equation in an infinitesimally thin shell. 
In a probabilistic sense, the summation of all fluctuations from the large scale down to smaller scales may be related to the probabilities of small-scale fluctuations constrained by the fluctuations of larger scales [31]. ## Acknowledgement M.H. acknowledges the support from the project VEGA 1/0535/21 of the Ministry of Education, Science, Research and Sport of the Slovak Republic.
2308.04070
ConDistFL: Conditional Distillation for Federated Learning from Partially Annotated Data
Developing a generalized segmentation model capable of simultaneously delineating multiple organs and diseases is highly desirable. Federated learning (FL) is a key technology enabling the collaborative development of a model without exchanging training data. However, the limited access to fully annotated training data poses a major challenge to training generalizable models. We propose "ConDistFL", a framework to solve this problem by combining FL with knowledge distillation. Local models can extract the knowledge of unlabeled organs and tumors from partially annotated data from the global model with an adequately designed conditional probability representation. We validate our framework on four distinct partially annotated abdominal CT datasets from the MSD and KiTS19 challenges. The experimental results show that the proposed framework significantly outperforms FedAvg and FedOpt baselines. Moreover, the performance on an external test dataset demonstrates superior generalizability compared to models trained on each dataset separately. Our ablation study suggests that ConDistFL can perform well without frequent aggregation, reducing the communication cost of FL. Our implementation will be available at https://github.com/NVIDIA/NVFlare/tree/dev/research/condist-fl.
Pochuan Wang, Chen Shen, Weichung Wang, Masahiro Oda, Chiou-Shann Fuh, Kensaku Mori, Holger R. Roth
2023-08-08T06:07:49Z
http://arxiv.org/abs/2308.04070v1
# ConDistFL: Conditional Distillation for Federated Learning from Partially Annotated Data ###### Abstract Developing a generalized segmentation model capable of simultaneously delineating multiple organs and diseases is highly desirable. Federated learning (FL) is a key technology enabling the collaborative development of a model without exchanging training data. However, the limited access to fully annotated training data poses a major challenge to training generalizable models. We propose "ConDistFL", a framework to solve this problem by combining FL with knowledge distillation. Local models can extract the knowledge of unlabeled organs and tumors from partially annotated data from the global model with an adequately designed conditional probability representation. We validate our framework on four distinct partially annotated abdominal CT datasets from the MSD and KiTS19 challenges. The experimental results show that the proposed framework significantly outperforms FedAvg and FedOpt baselines. Moreover, the performance on an external test dataset demonstrates superior generalizability compared to models trained on each dataset separately. Our ablation study suggests that ConDistFL can perform well without frequent aggregation, reducing the communication cost of FL. Our implementation will be available at [https://github.com/NVIDIA/NVFlare/tree/dev/research/condist-fl](https://github.com/NVIDIA/NVFlare/tree/dev/research/condist-fl). Keywords:Federated learning Partially labeled datasets Multi-organ and tumor segmentation Abdominal CT. ## 1 Introduction Accurately segmenting abdominal organs and malignancies from computed tomography (CT) scans is crucial for clinical applications such as computer-aided diagnosis and therapy planning. While significant research has focused on segmenting individual organs [1, 7] and multiple classes of organs without malignancies [4, 6], a generalized model capable of handling multiple organs and diseases simultaneously is desirable in real-world healthcare scenarios. Traditional supervised learning methods, on the other hand, rely on the amount and quality of the training data. Regrettably, the cost of high-quality medical image data contributed to a paucity of training data. For many anatomies, only trained professionals can produce accurate annotations on medical images. On top of this, even experts often only have specialized knowledge for a specific task, making it challenging to annotate the organs and corresponding malignancies of various anatomies and imaging modalities. The lack of sufficient annotated datasets for multiple organs and tumors poses a significant challenge in developing generalized segmentation models. To address this issue, several studies have explored partially annotated datasets, where only a subset of targeted organs and malignancies are annotated in each image, to build generalized segmentation models [21, 10, 5, 11, 23]. However, sharing private medical datasets among institutions raises privacy and regulatory concerns. To overcome these challenges, federated learning (FL) was introduced [16]. FL enables collaborative training of a shared (or "global") model across multiple institutions without centralizing the data in one location. FL has emerged as a promising technology to enhance the efficiency of medical image segmentation [19, 25, 24]. In FL, each client trains a local model using its data and resources while only sending model updates to the server. 
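The aggregation step described next, "FedAvg", amounts to a dataset-size-weighted average of the client updates; a minimal sketch of ours (models as plain arrays; the paper's experiments delegate this step to NVIDIA FLARE):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (the FedAvg server update)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Example: three clients with unbalanced local dataset sizes.
clients = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
global_model = fedavg(clients, client_sizes=[100, 50, 25])
```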
The server then combines these updates into a global model using "FedAvg" [16]. Recent studies have utilized FL to develop unified multi-organ segmentation models using partially annotated abdominal datasets [15, 26], as illustrated in Fig. 1. Figure 1: An illustration of the ConDistFL framework for multi-organ and tumor segmentation from partial labels. Each client has only a subset of the targeted organs and malignancies annotated in their local datasets. However, these approaches often neglect lesion areas. Only a few studies attempt to segment the various organs and their cancers simultaneously [29, 20]. The model aggregation in FL is a major hurdle because of the data heterogeneity problem brought on by data diversity [30]. Merging models from different sources with non-IID data can lead to performance degradation. This issue is further exacerbated when clients use data annotated for different tasks, introducing more domain shifts in the label space. Additionally, unbalanced dataset sizes among clients may affect the global model's performance on tasks with limited data. In this work, we propose a framework to tackle data heterogeneity in FL for multi-class organ and tumor segmentation from partially annotated abdominal CT images. The main contributions of this work are as follows: 1. Our proposed conditional distillation federated learning (ConDistFL) framework enables joint multi-task segmentation of abdominal organs and malignancies without additional fully annotated datasets. 2. The proposed framework exhibits stability and performance with long local training steps and a limited number of aggregations, reducing data traffic and training time in real-world FL scenarios. 3. We further validate our models on an unseen fully annotated public dataset, AMOS22 [13]. The robustness of our approach is supported by both the qualitative and quantitative evaluation results. ## 2 Method ConDistFL extends the horizontal FL paradigm [27] to handle partially annotated datasets distributed across clients. An illustration of our ConDistFL framework for multi-organ and tumor segmentation from partial labels is shown in Fig. 1. In the client training of ConDistFL, we combine supervised learning on ground truth labels and knowledge distillation learning [9] using the global model's predictions. During supervised learning, we adopt the design of marginal loss [21] to avoid knowledge conflicts caused by missing labels. To improve the knowledge distillation in FL settings, we propose a conditional distillation loss to maximize the agreement between the global model and local model predictions on unlabeled voxels. ### Conditional Distillation for Federated Learning In ConDistFL, the client keeps the latest global model as the teacher model for knowledge distillation and uses the local model as the student model. Figure 2 illustrates the training data flow in client \(k\) and the relationship between the global model, the local model, and the loss functions. Figure 2: ConDistFL data flow diagram for client \(k\); \(x\) is a batch of image patches from the local dataset; \(y\) is the corresponding label; \(\hat{y}_{g}\) is the output of the global model; and \(\hat{y}_{k}\) is the output of the local model. ### Supervised Loss We adopt the design of marginal loss [21] for the supervised loss \(\mathcal{L}_{sup}\). Let \(N\) be the total number of classes across all datasets, \(F_{k}\) be a collection of foreground classes on 
client \(k\), \(B_{k}\) be the background class, and all unlabeled classes on client \(k\), and \(\hat{y}_{k,i}\) be the output logits of client \(k\)'s model for class \(i\). By applying softmax normalization on the output logits \(\hat{y}_{k,i}\) for each \(i\!\in\!N\), we can derive the output probability \(\hat{p}_{k,i}\) for each class \(i\) as \[\hat{p}_{k,i}\!=\!\frac{e^{\hat{y}_{k,i}}}{\sum_{j=0}^{N}e^{\hat{y}_{k,j}}}\ \text{for}\ i\!=\!0,\,1,\,2,\,...,\,N\!-\!1. \tag{1}\] Similar to the marginal loss, all probabilities in the background \(B_{k}\) are merged into one for a new non-foreground class. The probabilities remain the same as \(\hat{p}_{k,i}\) for all \(i\!\in\!F_{k}\). Then we apply Dice loss [17] with cross-entropy loss [28] (DiceCELoss) for supervised learning of the segmentation model. The final loss term \(\mathcal{L}_{sup}\) is defined as \[\mathcal{L}_{sup}\!=\!\text{DiceCELoss}(\hat{p}^{\prime}_{k}\!,\!y^{\prime}), \tag{2}\] where the background merged probability is \(\hat{p}^{\prime}_{k}\), and the corresponding one-hot label is \(y^{\prime}\). ### ConDist Loss For the conditional distillation (ConDist) loss \(\mathcal{L}_{ConDist}\), we normalize the output logits of both the global and the local model using softmax with temperature \(\tau\). The normalized logits from the global model \(\hat{p}^{\tau}_{g}\) and the \(k\)-th client's local model \(\hat{p}^{\tau}_{k}\) is defined as \[\hat{p}^{\tau}_{k}\!=\!\text{softmax}(\hat{y}_{k}/\tau)\quad\text{and}\quad \hat{p}^{\tau}_{g}\!=\!\text{softmax}(\hat{y}_{g}/\tau), \tag{3}\] where \(\tau\) is set to 0.5 to enhance the confidence of the model output. #### 2.3.1 Foreground Merging and Background Grouping. Contrary to the supervised loss, we merge the probabilities for class \(i\) for all \(i\!\in\!F_{k}\) in \(\hat{p}^{\tau}_{k}\) and \(\hat{p}^{\tau}_{g}\). Then we define \[\hat{p}_{k,F_{k}}\!=\!\sum_{i\in F_{k}}\!\hat{p}^{\tau}_{k,i}\quad\text{and} \quad\hat{p}_{g,F_{k}}\!=\!\sum_{i\in F_{k}}\!\hat{p}^{\tau}_{g,i}, \tag{4}\] where the \(\hat{p}^{\tau}_{k,i}\) and \(\hat{p}^{\tau}_{g,i}\) are the probabilities for class \(i\) in \(\hat{p}^{\tau}_{k}\) and \(\hat{p}^{\tau}_{g}\), respectively. Moreover, we group the background classes in client \(k\) by the organs in the background \(B_{k}\). Let \(M_{k}\) be the number of unlabeled organs in client \(k\), and \(\mathcal{O}_{k}\!=\!\{G_{0}\),\(G_{1}\),\(G_{2}\),...\(G_{M_{k}}\}\). \(G_{0}\) is the set containing the global background class. The probability for each background group can be calculated as \[\hat{p}_{k,G_{i}}\!=\!\sum_{j\in G_{i}}\!\hat{p}^{\tau}_{k,j}\quad\text{and} \quad\hat{p}_{g,G_{i}}\!=\!\sum_{j\in G_{i}}\!\hat{p}^{\tau}_{g,j}\quad\text{ for}\quad i\!=\!0,\,1,\,...,\,M_{k}, \tag{5}\] where \(G_{i}\) is a set containing the class of the healthy part of unlabeled organ \(i\) and all the classes of associated lesions to the organ \(i\). 
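A small NumPy sketch of ours (shapes and names are illustrative) of the probability-merging step behind Eqs. (1)-(2), i.e., softmax over all \(N\) classes followed by collapsing the non-foreground classes into a single channel:

```python
import numpy as np

def marginal_probs(logits, fg_classes):
    """Softmax over all N classes, Eq. (1), then merge every class outside
    fg_classes into one non-foreground channel (the merging that precedes
    the DiceCE loss of Eq. (2))."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p = e / e.sum(axis=-1, keepdims=True)
    fg = sorted(fg_classes)
    bg = [i for i in range(p.shape[-1]) if i not in fg]
    merged_bg = p[..., bg].sum(axis=-1, keepdims=True)
    return np.concatenate([merged_bg, p[..., fg]], axis=-1)

# Example: N = 5 global classes; this client only annotates classes {1, 2}.
logits = np.random.randn(4, 5)             # a batch of 4 voxels
p_prime = marginal_probs(logits, {1, 2})   # shape (4, 3): [non-foreground, class 1, class 2]
```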
**Conditional Probability for Background Organs.** We define conditional probabilities \(\hat{p}_{k,\mathcal{O}_{k}|B_{k}}\) and \(\hat{p}_{g,\mathcal{O}_{k}|B_{k}}\) as \[\hat{p}_{k,\mathcal{O}_{k}|B_{k}} = \bigg{(}\frac{\hat{p}_{k,G_{0}}}{1\!-\!\hat{p}_{k,F_{k}}},\frac{ \hat{p}_{k,G_{1}}}{1\!-\!\hat{p}_{k,F_{k}}},...,\frac{\hat{p}_{k,G_{M_{k}}}}{1 \!-\!\hat{p}_{k,F_{k}}}\bigg{)}, \tag{6}\] \[\hat{p}_{g,\mathcal{O}_{k}|B_{k}} = \bigg{(}\frac{\hat{p}_{g,G_{0}}}{1\!-\!\hat{p}_{g,F_{k}}},\frac{ \hat{p}_{g,G_{1}}}{1\!-\!\hat{p}_{g,F_{k}}},...,\frac{\hat{p}_{g,G_{M_{k}}}}{1 \!-\!\hat{p}_{g,F_{k}}}\bigg{)}, \tag{7}\] where \(1\!-\!\hat{p}_{k,F_{k}}\) and \(1\!-\!\hat{p}_{g,F_{k}}\) are the total probabilities of all classes in \(B_{k}\) with respect to \(\hat{p}_{k}^{\tau}\) and \(\hat{p}_{g}^{\tau}\). The conditional probability \(\hat{p}_{k,\mathcal{O}_{k}|B_{k}}\) and \(\hat{p}_{g,\mathcal{O}_{k}|B_{k}}\) are the probabilities given that the prediction is only in \(B_{k}\). **Foreground Filtering.** To avoid learning from incorrect predictions and reduce the potential conflict with the supervised loss, we filter off undesired voxels with a mask operation \(\mathcal{M}\), which removes the union of foreground area in ground truth label \(y\) and all the area in \(\hat{y}_{g}\) where the predictions are in \(F_{k}\). **Segmentation ConDist Loss** Combining the conditional probability \(\hat{p}_{k,\mathcal{O}_{k}|B_{k}}\), \(\hat{p}_{g,\mathcal{O}_{k}|B_{k}}\), and the foreground filtering mask \(\mathcal{M}\), we define the ConDist loss \(\mathcal{L}_{ConDist}\) for segmentation task by applying soft Dice loss as \[\mathcal{L}_{ConDist}\!=\!DiceLoss(\mathcal{M}(\hat{p}_{k,\mathcal{O}_{k}|B_{ k}})\!,\!\mathcal{M}(\hat{p}_{g,\mathcal{O}_{k}|B_{k}})). \tag{8}\] To handle meaningless global model predictions in the initial FL rounds, we incorporate an adjustable weight \(w\) for the ConDist loss, gradually increasing it as the FL round number increments. The total loss \(\mathcal{L}\) for ConDistFL is defined as \[\mathcal{L}\!=\!\mathcal{L}_{sup}\!+\!w\!*\mathcal{L}_{ConDist}. \tag{9}\] In practice, we schedule the weight \(w\) from 0.01 to 1.0 linearly. ## 3 Experiments We conducted our experiments on the Medical Segmentation Decathlon (MSD) [22] and the KiTS19 Challenge [8] datasets. In the MSD dataset, we only used the liver, pancreas, and spleen subsets. Except for the spleen dataset, each above dataset includes annotations for the organs and tumors. We split the dataset into training, validation, and testing subsets by 60%, 20%, and 20%, respectively. For the non-FL standalone training, each model only uses a single dataset. For FL training, we distributed the four datasets to four independent clients. In addition, we evaluated our models on the multi-modality Abdominal Multi-Organ Segmentation Challenge 2022 (AMOS22) dataset [13], which consists of 300 CT volumes with 15 abdominal organ annotations. To accommodate the labeling format of AMOS22, where healthy organs and lesions are not distinguished, we merged the tumor predictions with associated organs from our model output before computing the metrics using ground truth labels. The nnU-Net [12] data preprocessing pipeline was adopted with minor modifications. We first resampled the images to a median spacing of [1.44, 1.44, 2.87] millimeters and clipped the intensity to the range [\(-\)54, 258]. 
Then we applied z-score normalization by assuming the mean intensity under ROI to be 100 and its standard deviation to be 50 since the complete multi-organ ROI is unavailable. We set the input patch size to [224, 224, 64] and the training batch size to 4. Our neural network backbone was built using the 3D DynU-Net from MONAI [3], an effective and flexible U-Net implementation. Deep supervision was also enabled to speed up the training process and enhance model performance. The deep supervision loss is identical to the supervised loss with an extra exponential weight decay. We trained our models using stochastic gradient descent (SGD) and cosine annealing schedule. The initial learning rate was set to \(10^{-2}\) and decreased gradually to \(10^{-7}\). The loss function for the non-FL standalone baselines is Dice loss with cross-entropy. For the FL experiments, we evaluated FedAvg [16], FedOpt [2], FedProx [14], and ConDistFL. To assess the effectiveness of the marginal loss in section 2.2, we trained two sets of FedAvg models: one using the standard Dice loss and the other employing the marginal loss. The FedProx model was trained with the FedAvg aggregator and \(\mu=0.01\). For FedOpt and ConDistFL, we utilized the Federated Optimization (FedOpt) aggregation method, with an additional SGD optimizer with momentum \(m=0.6\) implemented on the server. We employed a single NVIDIA V100 GPU for each standalone experiment and FL client. The FL experiments were implemented using NVIDIA FLARE [18]. ## 4 Results & Discussion Our experiments encompass standalone baselines using a single dataset, an ablation study of standard Dice loss on FedAvg (FedAvg*), marginal loss on FedAvg, FedProx, and FedOpt, and the combined marginal loss and ConDist loss on ConDistFL. To establish a fair comparison between related works, we trained a ConDistFL (Union) model with the same learning targets as [15] and [26], i.e., only to segment the union of the organs and tumors. Additionally, we evaluated the proposed method on the unseen AMOS22 dataset to demonstrate its generalizability. Table 1 compares the average Dice score of each task between standalone baseline models and the best performance server models for FedAvg, FedProx, FedOpt, and ConDistFL. All the models are trained for a total of 120,000 steps to allow for a fair comparison. For the best FedAvg*, FedAvg, and ConDistFL models, we utilized 60 FL aggregation rounds with 2000 local steps per round. As for the FedProx and FedOpt best model, we conducted 120 FL rounds and 1000 local steps per round. The results of FedAvg* and FedAvg demonstrate that the marginal loss effectively resolves conflicts between inconsistent labels and yields reasonable performance. ConDistFL stands out as the top-performing method among all experiments utilizing the marginal loss. FedAvg, FedProx, and FedOpt show similar performance overall, with FedAvg and FedOpt delivering acceptable results for most tasks, except for the pancreas and tumor. In contrast, FedProx performs well for the pancreas and tumor, but there is a notable drop in performance on other tasks. This suggests that although the FedProx loss can regularize the models for heterogeneous clients like the proposed ConDist loss, its task-agnostic nature harms the performance when training on multiple tasks with different partial labels. The ablation study in Fig. 3 investigates the impact of the number of local training steps on the global model performance. 
Increasing the number of local training steps from 100 to 1000 for all tested methods improved performance. However, when more local training steps were used, both FedAvg and FedOpt encountered model divergence issues, with FedAvg experiencing a more significant performance drop than FedOpt. In contrast, ConDistFL consistently delivered better performance across different local step experiments. This can be attributed to ConDistFL providing a common task, preventing model divergence in FL, and the inherent complexity of tumor segmentation requiring more local training steps. By maintaining consistent representations of unknown classes, ConDistFL allows for the use of larger local step sizes to learn the tumor segmentation task effectively. Table 2 compares the average Dice scores of standalone baselines, ConDistFL, and ConDistFL (Union) on the unseen AMOS22 dataset. ConDistFL demonstrates significant generalizability improvements over the standalone baselines, while ConDistFL (Union) further enhances performance. This highlights the challenge of segmenting tumors and organs together compared to considering them a single class. \begin{table} \begin{tabular}{l|c c|c c|c c|c|c} \hline & Kidney & Tumor & Liver & Tumor & Pancreas & Tumor & Spleen & Average \\ \hline Standalone & 0.9563 & 0.8117 & 0.9525 & 0.7071 & 0.7974 & 0.5012 & 0.9632 & 0.8102 \\ \hline \hline FedAvg* & 0.7707 & 0.4894 & 0.4937 & 0.3202 & 0.5403 & 0.1396 & 0.0000 & 0.3934 \\ FedAvg & 0.9419 & 0.6690 & 0.9381 & 0.6500 & 0.6933 & 0.2985 & 0.9059 & 0.7281 \\ FedProx & 0.9247 & 0.6799 & 0.8972 & 0.6244 & 0.7419 & **0.4033** & 0.7060 & 0.7111 \\ FedOpt & 0.9473 & 0.7212 & 0.9386 & 0.6087 & 0.6734 & 0.2390 & 0.9394 & 0.7239 \\ ConDistFL & **0.9477** & **0.7333** & **0.9446** & **0.6944** & **0.7478** & 0.3660 & **0.9562** & **0.7700** \\ \hline \end{tabular} \end{table} Table 1: Comparison between non-FL results and each FL result. The average Dice score of each organ for the standalone model is computed separately from four distinct models. FedAvg* indicates the model trained with FedAvg and standard Dice loss. Figure 3: The ablation study results on the test set. The x-axis is the number of local training steps (s) and rounds numbers (r), while the y-axis is the average Dice score of all organs and tumors. Table 3 compares ConDistFL (Union) and the reported performance of FL PSMOS and MENU-Net on the MSD test set. FL PSMOS utilizes a fully annotated dataset for model pre-training, while MENU-Net introduces a fifth client with a fully annotated dataset. The results demonstrate that ConDistFL achieves comparable performance without needing fully annotated data. Additionally, ConDistFL significantly reduces the number of aggregation rounds, leading to substantial savings in data traffic and synchronization overheads. Fig. 4 showcases 3D visualizations of our proposed ConDistFL, demonstrating effective and simultaneous segmentation of multiple organs and tumors without ensembling. Compared to FedAvg and FedOpt, ConDistFL achieves smoother and more continuous segmentations. The comparison with the ground truth of AMOS22 validates the generalizability of our FL framework. 
\begin{table} \begin{tabular}{l|c c c c c} \hline & Kidney & Liver & Pancreas & Spleen & Average \\ \hline Standalone & 0.5916 & 0.9419 & 0.5944 & 0.8388 & 0.7417 \\ FedAvg & 0.5032 & 0.8718 & 0.4637 & 0.5768 & 0.6039 \\ FedProx & 0.4698 & 0.6994 & 0.5185 & 0.7120 & 0.5999 \\ FedOpt & 0.5171 & 0.6740 & 0.4113 & 0.6418 & 0.5611 \\ ConDistFL & 0.7218 & 0.9191 & 0.6188 & 0.8556 & 0.7788 \\ ConDistFL (Union) & **0.8746** & **0.9471** & **0.7401** & **0.9079** & **0.8674** \\ \hline \end{tabular} \end{table} Table 2: External test results for AMOS22 dataset in average Dice scores. Figure 4: 3D renderings of segmentation on the best performed FL server model using (a) FedAvg, (b) FedOpt, (c) FedProx, (d) ConDistFL on KITS19 data, and (e) ground truth and (f) the external segmentation using ConDistFL on AMOS22 data. \begin{table} \begin{tabular}{l|c|c c c c c} \hline & Rounds & Kidney & Liver & Pancreas & Spleen & Average \\ \hline FL PSMOS [15] & 2,000 & **0.966** & 0.938 & 0.788 & **0.965** & 0.9143 \\ MENU-Net [26] & 400 & 0.9594 & 0.9407 & 0.8005 & 0.9465 & 0.9118 \\ ConDistFL (Union) & 120 & 0.9657 & **0.9619** & **0.8210** & 0.9626 & **0.9278** \\ \hline \end{tabular} \end{table} Table 3: Comparing the average Dice scores of ConDistFL to reported performance of related works. “Rounds” is the number of FL aggregation rounds. ## 5 Conclusion This work offers a promising FL approach for generalized segmentation models from partially annotated abdominal organs and tumors, reducing annotation costs and speeding up model development. Moreover, the proposed method requires less frequent aggregation, making it suitable for real-world FL scenarios with limited communication bandwidth.
2303.06515
Multistage Stochastic Optimization via Kernels
We develop a non-parametric, data-driven, tractable approach for solving multistage stochastic optimization problems in which decisions do not affect the uncertainty. The proposed framework represents the decision variables as elements of a reproducing kernel Hilbert space and performs functional stochastic gradient descent to minimize the empirical regularized loss. By incorporating sparsification techniques based on function subspace projections we are able to overcome the computational complexity that standard kernel methods introduce as the data size increases. We prove that the proposed approach is asymptotically optimal for multistage stochastic optimization with side information. Across various computational experiments on stochastic inventory management problems, our method performs well in multidimensional settings and remains tractable when the data size is large. Lastly, by computing lower bounds for the optimal loss of the inventory control problem, we show that the proposed method produces decision rules with near-optimal average performance.
Dimitris Bertsimas, Kimberly Villalobos Carballo
2023-03-11T23:19:32Z
http://arxiv.org/abs/2303.06515v1
# Multistage Stochastic Optimization via Kernels ###### Abstract We develop a non-parametric, data-driven, tractable approach for solving multistage stochastic optimization problems in which decisions do not affect the uncertainty. The proposed framework represents the decision variables as elements of a reproducing kernel Hilbert space and performs functional stochastic gradient descent to minimize the empirical regularized loss. By incorporating sparsification techniques based on function subspace projections we are able to overcome the computational complexity that standard kernel methods introduce as the data size increases. We prove that the proposed approach is asymptotically optimal for multistage stochastic optimization with side information. Across various computational experiments on stochastic inventory management problems, our method performs well in multidimensional settings and remains tractable when the data size is large. Lastly, by computing lower bounds for the optimal loss of the inventory control problem, we show that the proposed method produces decision rules with near-optimal average performance. Keywords: data-driven optimization, kernel methods, prescriptive analytics, orthogonal matching pursuit ## 1 Introduction Multistage stochastic optimization arises in numerous applications (e.g., supply chain management, energy planning, inventory management among others) and remains an important research area in the optimization community (Birge and Louveaux, 2011; Shapiro et al., 2014; Bertsimas et al., 2011). In these problems, the decision variables are split across multiple periods and decisions are made sequentially as more information becomes available. The goal is to make high-quality decisions that minimize the expectation of a given cost function by accurately modeling future uncertainty. In practice, decision makers can use historical data to get a sense of the future uncertainty. For instance, consider a retailer selling products with short life cycles who needs to make frequent orders to restock inventory without knowing the future demands. To minimize costs the retailer must use the remaining inventory quantities as well as historical data to gain insight into future demands. Another example is energy planning, in which operators decide daily production levels without knowing how weather conditions will affect the output of the wind turbines. In this case historical wind patterns are valuable for better planning. Besides historical data, auxiliary covariates are often available and can help predict uncertainty. For example, in the fashion industry, color and brand are useful factors to predict demand of a new item. Accordingly, recent work has focused on using predictive analytics to leverage available side information and historical data to make better decisions. Ban et al. (2019), for instance, fit covariate and historical data to a regression model and prove theoretical guarantees for the dynamic procurement problem. Another approach is that of Bertsimas et al. (2022b), which considers an uncertainty set around each data sample and applies robust optimization tools to find linear decision rules that are asymptotically optimal under mild conditions. This framework was generalized in Bertsimas and McCord (2019), where machine learning methods are incorporated to find weights that produce more accurate approximations of the objective.
However, these dynamic methods are affected by the curse of dimensionality; they require scenario tree enumeration and can require many hours for solving problems with only a few stages. In this paper, we propose a non-parametric, data-driven and tractable approach to solving multistage stochastic optimization problems. By restricting the decision variables to be in a reproducing kernel Hilbert space (RKHS) generated by a universal kernel, we can approximate a large class of functions using non-parametric functional representations. We incorporate sparsification techniques based on function subspace projections that allow our proposed algorithm to overcome the complexity growth that kernel methods introduce when directly applying the Representer Theorem to large data sets. The input to our algorithm is historical data and we make no assumptions on the correlation structure of the uncertainties across stages. We perform computational experiments on real-world multistage stochastic problems, and show how our method not only produces near-optimal solutions but also remains tractable in higher dimensions and with large data sizes. ### Related Literature Kernel methods have been used in recent work to solve stochastic multistage optimization problems with side information. Hanasusanto and Kuhn (2013), for example, approximate the objective using kernel regression, and Pflug and Pichler (2016) apply a kernel density estimator to the historical data to develop a non-parametric predict-then-optimize approach that comes with asymptotic optimality guarantees under strong conditions. However, these are local methods in which the predictions are made based only on those data points that are similar to the current observation. As noted in Bertsimas and Koduri (2022), such approaches require more data and perform worse in high dimensions compared to global methods, which instead optimize over functional variables that make the predictions. The Machine Learning community has long applied kernel methods to solve online learning problems (Wheeden, 2015; Norkin and Keyzer, 2009), but they have focused purely on predictive and not on prescriptive tasks. More recently, Bertsimas and Koduri (2022) has aimed to extend kernel methods to data-driven, single-period optimization problems with auxiliary information by using the Representer Theorem to transform the optimization over functions into an optimization over parameters. They show that this approach overcomes the curse of dimensionality; however, its main disadvantage is that the number of parameters per decision grows linearly with the number of observations, resulting in function representations that are as complex as the size of the data and that become potentially intractable, especially in multistage settings. Works on stochastic optimization in a RKHS have developed multiple heuristics to reduce the number of parameters in the function representation. For instance, Zhang et al. (2013) uses random dropping, Kivinen et al. (2004) introduces forgetting factors, and Honeine (2011) as well as Engel et al. (2004) apply compressive sensing techniques. These approaches successfully achieve sparser functional representations, but they usually produce suboptimal approximations (Honeine, 2011; Engel et al., 2004). We instead follow the approach from Koppel et al.
(2016) of applying Functional Stochastic Gradient Descent (FSGD) and projecting the iterates onto sparse subspaces that are found by removing parameters associated with data points that do not contribute much to the value of the decisions (Pati et al., 1993). This approach maintains optimality while addressing the complexity growth that kernel methods exhibit as the data size increases. Intuitively, since stochastic gradient descent iterates are a noisy signal for the optimal solution, by projecting the iterates to have small model order we can ignore some of the noise while preserving the underlying signal. The sparse subspaces of the RKHS onto which projections are made can be effectively found using kernel orthogonal matching pursuit (Vincent and Bengio, 2002), an algorithm which, given a function \(f\) and an error bound \(\epsilon\), generates a sparse approximation of \(f\) that is in a neighborhood of \(f\) of radius \(\epsilon\) in Hilbert norm. Koppel et al. (2016) show that for a specific choice of \(\epsilon\) and of step-size for the FSGD algorithm, the projected FSGD iterates produce decisions that converge in mean to the optimal solution. ### Contributions In this paper, we propose a novel data-driven approach for solving multistage stochastic optimization problems with side information using kernels. Specifically, we represent the controls as elements of a reproducing kernel Hilbert space and use loss-minimizing machine learning methods to predict them. In addition, we incorporate sparsification techniques to reduce the total number of parameters per control. We prove that this approach is asymptotically optimal, guaranteeing near-optimal approximations for problems with large amounts of data. We also show that our approach remains computationally tractable in high dimensions and with large data sizes. In detail, our contributions are as follows. 1. We propose a novel data-driven approach for multistage stochastic optimization problems with side information based on reproducing kernel Hilbert spaces. The approach takes as input historical data and minimizes the regularized empirical loss by applying functional stochastic gradient descent to optimize the decision rules, i.e., the functions which specify what decision to make in each stage. To the best of our knowledge, this is the first tractable application of reproducing kernel Hilbert spaces to multistage optimization problems with large data sizes. While a kernel-based formulation of the multistage stochastic optimization problem is briefly suggested (without any computational experiments) in Bertsimas and Koduri (2022), their non-stochastic and non-sparse approach is not tractable for large data sizes since both time and memory requirements increase cubically with the amount of data. 2. We extend sparsification techniques used by Koppel et al. (2016) to multistage optimization settings in order to reduce both the space and time complexities of our algorithm. Specifically, we use Functional Stochastic Gradient Descent (FSGD) to minimize the objective and project each iterate onto a sparse subspace that is found by removing parameters corresponding to data points with small contributions. We show that applying FSGD without any sparsification results in methods that do not scale to larger numbers of periods or data sizes. If sparsity is not added, the computational cost and the storage requirement increase quadratically with the data size.
With the proposed method, however, both space and time complexities present linear growth with a constant factor that depends on the step size of the FSGD algorithm. 3. We prove that if the loss function is convex, Lipschitz and differentiable almost everywhere, then the expected loss achieved with our algorithm converges in probability to the expected loss of the optimal decision rules in the space of continuous functions. 4. We demonstrate across several instances of inventory management problems that the proposed method finds near-optimal solutions using only a few parameters and with very low computational times. We show that increasing the number of periods, the dimension of the data, the dimension of the controls or the data size does not affect the tractability of our approach. The paper is organized as follows: Section 2 introduces the exact framework for the problem being solved, Section 3 contains the data-driven formulation of the multistage stochastic optimization problem with side information, Section 4 presents the proposed algorithm, Section 5 states the convergence theorems, Section 6 analyses the complexity of the proposed method, and Section 7 shows the results for several computational experiments. ## 2 Problem Setting We consider a discrete-time, convex, multistage stochastic problem over a finite horizon \(T\). Initially, we observe some auxiliary covariates \(\mathbf{x}\in\mathcal{X}\subseteq\mathbb{R}^{q_{0}}\). Then, random disturbances \(\mathbf{w}_{t}\) that belong to a known set \(\mathcal{W}_{t}\subseteq\mathbb{R}^{q_{t}}\) are sequentially observed over time. At every stage \(t\), after observing the covariates \(\mathbf{x}\) and the previous disturbances \((\mathbf{w}_{1},\ldots,\mathbf{w}_{t-1})\), a decision \(\mathbf{u}_{t}\in\mathbb{R}^{r_{t}}\) is made. The total cost for the observed sequence of covariates, disturbances and decisions is \(c(\mathbf{u}_{1},\ldots,\mathbf{u}_{T},\mathbf{x},\mathbf{w}_{1},\ldots, \mathbf{w}_{T})\). A standard decision rule \(\bar{\mathbf{u}}(\cdot)=(\bar{\mathbf{u}}_{1}(\cdot),\ldots,\bar{\mathbf{u}}_{ T}(\cdot))\) consists of functions \(\bar{\mathbf{u}}_{t}:\mathcal{W}_{1}\times\ldots\times\mathcal{W}_{t-1} \rightarrow\mathbb{R}^{r_{t}}\) that at each stage \(t\) take as input the disturbances up to that point and output a decision for the given stage. Specifically, denoting \(\mathbf{w}\coloneqq(\mathbf{w}_{1},\ldots,\mathbf{w}_{T})\) and \(\mathbf{w}_{1:t}\coloneqq(\mathbf{w}_{1},\ldots,\mathbf{w}_{t})\), we have that the standard decision rule \(\bar{\mathbf{u}}(\cdot)\) applied to \(\mathbf{w}\) outputs \(\bar{\mathbf{u}}(\mathbf{w})=\big{(}\bar{\mathbf{u}}_{1},\bar{\mathbf{u}}_{2} (\mathbf{w}_{1:1}),\ldots,\bar{\mathbf{u}}_{T}(\mathbf{w}_{1:T-1})\big{)}\). The multistage optimization problem over the space of continuous decision rules \(\hat{\mathcal{F}}\) conditioned on some observed covariates \(\mathbf{x}_{0}\) can then be written as \[\min_{\bar{\mathbf{u}}\in\hat{\mathcal{F}}}\quad\mathbb{E}_{\mathbf{w}|\mathbf{ x}}\left[c(\bar{\mathbf{u}}(\mathbf{w}),\mathbf{x},\mathbf{w})\mid\mathbf{x}= \mathbf{x}_{0}\right], \tag{1}\] where \(c(\cdot)\) is a convex loss function. As noted in Bertsimas and Koduri (2022), the conditional problem in Eq. (1) can be formulated as an unconditional optimization problem by augmenting the domain of the decision rules to also take the covariates as input, and then evaluating the observed covariates in the decision rules found. 
In this paper, we adopt the same approach and therefore we consider augmented decision rules \(\mathbf{u}(\cdot)=\big{(}\mathbf{u}_{1}(\cdot),\ldots,\mathbf{u}_{T}(\cdot)\big{)}\) with augmented domains \(\mathbf{u}_{t}:\mathcal{X}\times\mathcal{W}_{1}\times\ldots\times\mathcal{W}_{t-1}\rightarrow\mathbb{R}^{r_{t}}\). The augmented decision rule applied to the data point \(\mathbf{w}\) with covariates \(\mathbf{x}\) outputs \[\mathbf{u}(\mathbf{x},\mathbf{w})=\big{(}\mathbf{u}_{1}(\mathbf{x}),\mathbf{u}_{2}(\mathbf{x},\mathbf{w}_{1:1}),\ldots,\mathbf{u}_{T}(\mathbf{x},\mathbf{w}_{1:T-1})\big{)}.\] From now on we will join the covariates and the disturbances into a single random variable \(\mathbf{z}\coloneqq(\mathbf{x},\mathbf{w})\) to simplify notation, and we index \(\mathbf{z}\) starting at time \(0\) instead of time \(1\), so that \(\mathbf{z}_{0:t}\coloneqq(\mathbf{x},\mathbf{w}_{1},\ldots,\mathbf{w}_{t})\). Defining \(\mathcal{F}\) as the space of continuous augmented decision rules, and \(\mathcal{Z}\coloneqq\mathcal{X}\times\mathcal{W}_{1}\times\ldots\times\mathcal{W}_{T}\), we obtain that solving Eq. (1) is equivalent to solving the problem \[\min_{\mathbf{u}\in\mathcal{F}}\quad\mathbb{E}_{\mathbf{z}}\big{[}c\big{(}\mathbf{u}(\mathbf{z}),\mathbf{z}\big{)}\big{]} \tag{2}\] and evaluating the optimal solution \(\mathbf{u}^{*}(\cdot)\) at \(\mathbf{x}=\mathbf{x}_{0}\) to obtain the standard decision rule \(\bar{\mathbf{u}}^{*}(\mathbf{w})=\mathbf{u}^{*}(\mathbf{x}_{0},\mathbf{w})\). ## 3 Reproducing Kernel Hilbert Space Formulation for Multistage Optimization We now propose a data-driven approach for multistage stochastic optimization problems with side information based on a Reproducing Kernel Hilbert space (RKHS). We include an overview of these spaces in Appendix A. We will assume that we have historical observations \(\mathcal{S}=\{\mathbf{z}^{n}\}_{n=1}^{N}=\{(\mathbf{x}^{n},\mathbf{w}_{1}^{n},\ldots,\mathbf{w}_{T}^{n})\}_{n=1}^{N}\) that are independently and identically distributed according to some unknown distribution. Let \(K_{t}:(\mathcal{X}\times\mathcal{W}_{1}\times\ldots\times\mathcal{W}_{t-1})^{2}\to\mathbb{R}\) be a positive universal kernel and \(\mathcal{H}_{t}\) the reproducing kernel Hilbert space generated by \(K_{t}\). We consider the Cartesian product Hilbert space \(\mathcal{H}\coloneqq\mathcal{H}_{1}^{r_{1}}\times\ldots\times\mathcal{H}_{T}^{r_{T}}\) with inner product defined by \[\begin{split}&\left\langle\big{(}(u_{1,1},\ldots,u_{1,r_{1}}),\ldots,(u_{T,1},\ldots,u_{T,r_{T}})\big{)},\big{(}(v_{1,1},\ldots,v_{1,r_{1}}),\ldots,(v_{T,1},\ldots,v_{T,r_{T}})\big{)}\right\rangle_{\mathcal{H}}\\ &\coloneqq\sum_{t=1}^{T}\sum_{i=1}^{r_{t}}\langle u_{t,i},v_{t,i}\rangle_{\mathcal{H}_{t}},\end{split}\] where \(\langle u,v\rangle_{\mathcal{H}_{t}}\) corresponds to the inner product between \(u\) and \(v\) with respect to the Hilbert space \(\mathcal{H}_{t}\). We can approximate the solution of problem (2) by applying its empirical regularized version and restricting the decision rules to be in \(\mathcal{H}\): \[\min_{\mathbf{u}\in\mathcal{H}}\;\frac{1}{N}\sum_{n=1}^{N}c\big{(}\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n}\big{)}+\frac{\lambda}{2}\|\mathbf{u}\|_{\mathcal{H}}^{2}. \tag{3}\] Even though problem (3) is not equivalent to problem (2), if \(\lambda\) vanishes with the data size then the regularized empirical average becomes a closer estimate of the expectation as \(N\) increases.
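To make the finite-dimensional machinery concrete, the following is a minimal NumPy sketch (our own illustration, not code from the paper) of a Gaussian kernel, of evaluating a decision rule stored in the finite form \(\mathbf{u}_{t}(\cdot)=\mathbf{A}_{t}\mathbf{K}_{t}(\mathbf{D}_{t},\cdot)\) used below, and of the Hilbert-norm penalty appearing in problem (3); the function names and the bandwidth `gamma` are hypothetical choices.

```python
import numpy as np

def gaussian_kernel(Z1, Z2, gamma=1.0):
    """K[i, j] = exp(-gamma * ||Z1[i] - Z2[j]||^2) for row-wise data points."""
    sq = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def evaluate_rule(A_t, D_t, z_hist, gamma=1.0):
    """u_t(z_{0:t-1}) = A_t @ [K_t(d, z_{0:t-1}) for d in D_t]; A_t is (r_t, M)."""
    k = gaussian_kernel(D_t, z_hist[None, :], gamma)[:, 0]  # (M,)
    return A_t @ k                                          # (r_t,)

def hilbert_norm_sq(A_t, D_t, gamma=1.0):
    """||u_t||_{H_t}^2 = trace(A_t K_t[D_t, D_t] A_t^T) for u_t = A_t K_t(D_t, .)."""
    return np.trace(A_t @ gaussian_kernel(D_t, D_t, gamma) @ A_t.T)
```

The trace formula follows from bilinearity of the inner product: each scalar component \(u_{t,i}\) is a linear combination of kernel sections, so its squared norm is a quadratic form in the kernel matrix.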
We will then focus on solving problem (3), and later in Corollary 1 we show that as the data size goes to infinity, the expected loss converges in probability to the optimal value of problem (2). One way to solve the regularized empirical problem (3) is to use the multidimensional version of the Representer Theorem (Wahba, 1990; Soentpiet et al., 1999; Scholkopf et al., 2002; Shawe-Taylor et al., 2004), which says that for each \(t=1,\ldots,T\) there exists a scalar matrix \(\mathbf{A}_{t}\) such that the optimal solution to (3) satisfies \[\mathbf{u}_{t}(\cdot)=\mathbf{A}_{t}\mathbf{K}_{t}(\mathbf{Z}_{t},\cdot),\] where \(\mathbf{K}_{t}(\mathbf{Z},\cdot)\coloneqq[K_{t}(\mathbf{z}^{1},\cdot),\ldots,K_{t}(\mathbf{z}^{N},\cdot)]^{T}\), and the time subscript for a data matrix \(\mathbf{D}=[\mathbf{d}^{1},\ldots,\mathbf{d}^{N}]\) refers to \(\mathbf{D}_{t}=[\mathbf{d}^{1}_{0:t-1},\ldots,\mathbf{d}^{N}_{0:t-1}]\). However, with this approach each decision \(u_{t,i}\) has as many scalar parameters as data points, which generates both memory and performance problems as the number of data points becomes large. We instead want an algorithm for which more data yields better results overall, without increasing its complexity or worsening performance. General sparsification techniques like those found in Kivinen et al. (2004), Zhang et al. (2013) or Engel et al. (2004) successfully reduce the number of parameters; however, they do so at the cost of compromising optimality. We therefore take the pruning approach developed in Koppel et al. (2016) to solve problem (3); we apply functional gradient descent to minimize the objective and at each iteration we drop those parameters that add near-zero contribution to the value of the decisions, ensuring convergence to an optimal solution. ## 4 Sparse Multistage Optimization with Kernels In this section, we extend sparsification techniques used by Koppel et al. (2016) to the multistage optimization setting described in the previous section in order to reduce both the space and time complexities of our algorithm. Specifically, we describe an iterative algorithm for solving (3) using Functional Stochastic Gradient Descent and sparse projections. In order to ease notation, we first make the following definitions for an augmented decision rule \(\mathbf{u}\): \[E(\mathbf{u})\coloneqq\mathbb{E}_{\mathbf{z}}\left[c(\mathbf{u}(\mathbf{z}),\mathbf{z})\right], \tag{4}\] \[E^{\lambda}(\mathbf{u})\coloneqq E(\mathbf{u})+\frac{\lambda}{2}\|\mathbf{u}\|_{\mathcal{H}}^{2}, \tag{5}\] \[E^{\lambda}_{\mathcal{S}}(\mathbf{u})\coloneqq\frac{1}{N}\sum_{n=1}^{N}c\big{(}\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n}\big{)}+\frac{\lambda}{2}\|\mathbf{u}\|_{\mathcal{H}}^{2}, \tag{6}\] \[E^{\lambda}_{n}(\mathbf{u})\coloneqq c\big{(}\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n}\big{)}+\frac{\lambda}{2}\|\mathbf{u}\|_{\mathcal{H}}^{2}. \tag{7}\] The algorithm relies on the fact that the expectation of \(E^{\lambda}_{n}(\mathbf{u})\) over data yields \(E^{\lambda}(\mathbf{u})\) to make stochastic gradient updates that converge to the optimal solution, while at the same time removing unnecessary parameters along the descent trajectory. ### Functional Stochastic Gradient Descent (FSGD) Thanks to the fact that a RKHS preserves distance and to the continuity properties of real spaces, a derivative with respect to an element \(f\) of a RKHS (a function) can be well defined and it satisfies the standard properties of derivatives of real functions. Following Kivinen et al.
(2004), we can then derive a generalization of the Stochastic Gradient Descent algorithm for elements of \(\mathcal{H}\). This method is referred to as _functional stochastic gradient descent_. We compute the gradient of \(E_{n}^{\lambda}(\mathbf{u})\) with respect to the functions \(\mathbf{u}\) using the identity \(u_{t,i}(\mathbf{z}_{0:t-1})=\langle K_{t}(\mathbf{z}_{0:t-1},\cdot),u_{t,i}\rangle_{\mathcal{H}_{t}}\), which is known as the _reproducing property_ of kernels. Differentiating on both sides of this equation we obtain \[\frac{\partial u_{t,i}(\mathbf{z}_{0:t-1})}{\partial u_{t,i}}=\frac{\partial\big{\langle}u_{t,i},K_{t}(\mathbf{z}_{0:t-1},\cdot)\big{\rangle}}{\partial u_{t,i}}=K_{t}(\mathbf{z}_{0:t-1},\cdot),\ \forall\ i\in[r_{t}],\ \ t\in[T], \tag{8}\] where \([K]=\{1,\ldots,K\}\). The stochastic functional gradient can then be computed using the chain rule: \[\nabla_{\mathbf{u}_{t}}c\big{(}\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n}\big{)} =\nabla_{u_{t}(\mathbf{z}_{0:t-1})}c(\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n})\,K_{t}(\mathbf{z}_{0:t-1}^{n},\cdot), \tag{9}\] \[\implies\nabla_{\mathbf{u}_{t}}E_{n}^{\lambda}(\mathbf{u}) =\nabla_{u_{t}(\mathbf{z}_{0:t-1})}c(\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n})\,K_{t}(\mathbf{z}_{0:t-1}^{n},\cdot)+\lambda\mathbf{u}_{t}, \tag{10}\] where \(\nabla_{u_{t}(\mathbf{z}_{0:t-1})}c\big{(}\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n}\big{)}\) corresponds to the derivative of \(c\big{(}\mathbf{u}(\mathbf{z}),\mathbf{z}\big{)}\) with respect to its scalar arguments \(u_{t}^{1}(\mathbf{z}_{0:t-1}),\ldots,u_{t}^{r_{t}}(\mathbf{z}_{0:t-1})\) evaluated at \(\mathbf{z}^{n}\): \[\nabla_{u_{t}(\mathbf{z}_{0:t-1})}c(\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n})=\left[\frac{\partial c(\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n})}{\partial u_{t}^{1}(\mathbf{z}_{0:t-1})},\ldots,\frac{\partial c(\mathbf{u}(\mathbf{z}^{n}),\mathbf{z}^{n})}{\partial u_{t}^{r_{t}}(\mathbf{z}_{0:t-1})}\right].\] Thus, the update rule for the standard functional stochastic gradient descent (FSGD) algorithm becomes \[\mathbf{u}_{t}^{n+1}= \,\mathbf{u}_{t}^{n}-\eta_{n}\nabla_{\mathbf{u}_{t}}E_{n}^{\lambda}(\mathbf{u}^{n})\] \[= \,(1-\eta_{n}\lambda)\mathbf{u}_{t}^{n}-\eta_{n}\nabla_{\mathbf{u}_{t}(\mathbf{z}_{0:t-1})}c(\mathbf{u}^{n}(\mathbf{z}^{n}),\mathbf{z}^{n})\,K_{t}(\mathbf{z}_{0:t-1}^{n},\cdot), \tag{11}\] where \(\eta_{n}\) is the step-size of the algorithm and the sequence of controllers is initialized at some fixed function \(\mathbf{u}_{0}\in\mathcal{H}\). Using the update rule in Eq. (11), we can easily show by induction on \(n\) that if the initial decision is of the form \(\mathbf{u}_{t}^{0}(\cdot)=\mathbf{A}_{t}^{0}\mathbf{K}_{t}(\mathbf{D}_{t}^{0},\cdot)\) for some initial data matrix \(\mathbf{D}^{0}\) and initial parameters \(\mathbf{A}_{t}^{0}\), then the solutions \(\mathbf{u}^{n}\) produced at every iteration also have this form. Specifically, for each \(n>0\) and for all \(t\in[T]\), there exist a scalar matrix \(\mathbf{A}_{t}^{n}\) and a data matrix \(\mathbf{D}^{n}\) such that \(\mathbf{u}_{t}^{n}(\cdot)=\mathbf{A}_{t}^{n}\cdot\mathbf{K}_{t}(\mathbf{D}_{t}^{n},\cdot)\). In fact, this parametrization allows us to rewrite the functional update rule in Eq.
(11) as a nonfunctional (scalar) update on the data matrix \(\mathbf{D}^{n}\) and the parameters \(\mathbf{A}_{1}^{n},\ldots,\mathbf{A}_{T}^{n}\) as follows: \[\mathbf{D}^{n+1}=[\mathbf{D}^{n},\ \mathbf{z}^{n}],\quad\mathbf{A}^{n+1}=\left[(1-\eta_{n}\lambda)\mathbf{A}^{n},\ -\eta_{n}\nabla_{\mathbf{u}(\mathbf{z})}c\big{(}\mathbf{u}^{n}(\mathbf{z}^{n}),\mathbf{z}^{n}\big{)}\right],\] where \[\mathbf{A}^{n}\coloneqq\begin{bmatrix}\mathbf{A}_{1}^{n}\\ \vdots\\ \mathbf{A}_{T}^{n}\end{bmatrix},\quad\text{and}\quad\nabla_{\mathbf{u}(\mathbf{z})}c\big{(}\mathbf{u}(\mathbf{z}),\mathbf{z}\big{)}\coloneqq\begin{bmatrix}\nabla_{\mathbf{u}_{1}(\mathbf{z}_{0})}c\big{(}\mathbf{u}(\mathbf{z}),\mathbf{z}\big{)}\\ \vdots\\ \nabla_{\mathbf{u}_{T}(\mathbf{z}_{0:T-1})}c\big{(}\mathbf{u}(\mathbf{z}),\mathbf{z}\big{)}\end{bmatrix}.\] Notice that this update forces the data matrix to have one more column after every iteration, which brings us back to the same problem we had when applying the Representer Theorem. However, because this is an iterative algorithm, we will reduce the dimension of the data matrix \(\mathbf{D}^{n}\) after every iteration by measuring the contribution of each individual observation \(\mathbf{z}^{n}\) and removing those observations that added almost no value to the decision. ### Proximal Projection We now describe how to reduce the number of observations in the data matrix \(\mathbf{D}^{n}\) with the goal of reducing the dimension of the parameters \(\mathbf{A}^{n}\). We observed that the Representer Theorem as well as the FSGD algorithm generate decisions \(u_{t,i}\) that belong to the subspace of \(\mathcal{H}_{t}\) spanned by the functions \(K_{t}(\mathbf{z}^{1}_{0:t-1},\cdot),\ldots,K_{t}(\mathbf{z}^{N}_{0:t-1},\cdot)\). What we want is to produce decisions that belong to a smaller subspace, one generated using fewer observations. Suppose that \(\tilde{\mathbf{D}}^{n+1}\) and \(\tilde{\mathbf{A}}^{n+1}\) are the values resulting from the FSGD iterative rule in Eq. (11), i.e., \[\tilde{\mathbf{D}}^{n+1}=[\mathbf{D}^{n},\ \mathbf{z}^{n}]\quad\text{and}\quad\tilde{\mathbf{A}}^{n+1}=\left[(1-\eta_{n}\lambda)\mathbf{A}^{n},\ -\eta_{n}\nabla_{\mathbf{u}(\mathbf{z})}c(\mathbf{u}^{n}(\mathbf{z}^{n}),\mathbf{z}^{n})\right],\] which represent the decisions \(\tilde{\mathbf{u}}^{n+1}_{t}(\cdot)=\tilde{\mathbf{A}}^{n+1}_{t}\boldsymbol{K}_{t}(\tilde{\mathbf{D}}^{n+1}_{t},\cdot)\), and assume that we want to generate a decision that only uses observations from a smaller data matrix \(\mathbf{D}^{n+1}\). We can approximate \(\tilde{\mathbf{u}}^{n+1}\) with a decision \(\mathbf{u}^{n+1}\) that only depends on observations in \(\mathbf{D}^{n+1}\) by projecting each decision \(\tilde{u}^{n+1}_{t,i}\) onto the subspace of \(\mathcal{H}_{t}\) that is spanned by the functions \(\boldsymbol{K}_{t}(\mathbf{D}^{n+1}_{t},\cdot)\). If we denote this projection by \(\Pi_{\mathbf{D}^{n+1}}(\cdot)\), then we can define \[\mathbf{u}^{n+1}\coloneqq\Pi_{\mathbf{D}^{n+1}}(\tilde{\mathbf{u}}^{n+1})=\Pi_{\mathbf{D}^{n+1}}\big{(}(1-\eta_{n}\lambda)\mathbf{u}^{n}-\eta_{n}\nabla_{\mathbf{u}}c(\mathbf{u}^{n}(\mathbf{z}^{n}),\mathbf{z}^{n})\big{)}.
\tag{12}\] The projection operator can be computed by solving the least squares problem \[\mathbf{A}^{n+1}=\operatorname*{arg\,min}_{\hat{\mathbf{A}}^{n+1}}\ \sum_{t=1}^{T}\ \Big{\|}\tilde{\mathbf{A}}^{n+1}_{t}\boldsymbol{K}_{t}(\tilde{\mathbf{D}}^{n+1}_{t},\cdot)-\hat{\mathbf{A}}^{n+1}_{t}\boldsymbol{K}_{t}(\mathbf{D}^{n+1}_{t},\cdot)\Big{\|}^{2}_{\mathcal{H}^{r_{t}}_{t}}, \tag{13}\] which has a closed-form solution given by \[\mathbf{A}^{n+1}_{t}=\left(\mathbf{K}_{t}[\mathbf{D}^{n+1}_{t},\mathbf{D}^{n+1}_{t}]\right)^{-1}\mathbf{K}_{t}[\mathbf{D}^{n+1}_{t},\tilde{\mathbf{D}}^{n+1}_{t}]\tilde{\mathbf{A}}^{n+1}_{t},\quad\text{for all }t\in[T]. \tag{14}\] We then have a simple way to project the FSGD solution onto the Hilbert subspace generated by a smaller data matrix \(\mathbf{D}^{n+1}\), but we are still left with the question: how do we find the right data matrix \(\mathbf{D}^{n+1}\)? As in Koppel et al. (2016), we use a method called destructive _kernel orthogonal matching pursuit_ (KOMP) with pre-fitting, which was developed in Vincent and Bengio (2002). The KOMP algorithm takes as input a function \(\tilde{\mathbf{u}}\in\mathcal{H}\) (represented by its data matrix \(\tilde{\mathbf{D}}\) as well as the corresponding parameters \(\tilde{\mathbf{A}}\)), and a maximum error bound \(\epsilon\). For each element \(\mathbf{d}\) in the data matrix \(\tilde{\mathbf{D}}\), the algorithm computes the approximation function \(\mathbf{u}=\Pi_{\tilde{\mathbf{D}}\setminus\{\mathbf{d}\}}(\tilde{\mathbf{u}})\) obtained by removing observation \(\mathbf{d}\) from \(\tilde{\mathbf{D}}\). Next, the algorithm removes the observation that produced the lowest error, updates the current function accordingly and then repeats this procedure to remove the next element. The algorithm stops removing elements when the difference between the current function and the best approximation function is larger than \(\epsilon\). The exact algorithm can be found in Algorithm 1. ### The Algorithm By combining Functional Stochastic Gradient Descent with the Kernel Orthogonal Matching Pursuit we are able to develop an algorithm that approximates the minimizer of \(E^{\lambda}(\mathbf{u})\) with decision rules that are represented using only a few parameters. The algorithm is initialized with a decision rule \(\mathbf{u}_{t}^{0}=\mathbf{A}_{t}^{0}\boldsymbol{K}_{t}(\mathbf{D}_{t}^{0},\cdot)\), which in practice is usually set to \(0\). Then, in each iteration, it performs one FSGD step and then applies the KOMP algorithm in order to obtain an approximated decision with fewer observations. Notice that if we define the projected gradient \(\tilde{\nabla}\) by \[\tilde{\nabla}_{\mathbf{u}}E_{n}^{\lambda}(\mathbf{u}^{n})\coloneqq\frac{\mathbf{u}^{n}-\Pi_{\mathbf{D}^{n+1}}[\mathbf{u}^{n}-\eta_{n}\nabla_{\mathbf{u}}E_{n}^{\lambda}(\mathbf{u}^{n})]}{\eta_{n}}, \tag{15}\] then we can write the iterative updates of this procedure in the same form as the standard iterative updates of FSGD: \[\mathbf{u}^{n+1}=\mathbf{u}^{n}-\eta_{n}\tilde{\nabla}_{\mathbf{u}}E_{n}^{\lambda}(\mathbf{u}^{n}). \tag{16}\] Since stochasticity does not guarantee a strict objective descent, the algorithm keeps track of the best decision rules observed and at the end it outputs the decision \(\mathbf{u}_{\mathcal{S}}^{*}\) with the lowest empirical error \(E_{\mathcal{S}}^{\lambda}\) with respect to the data set \(\mathcal{S}\). The exact formulation can be found in Algorithm 2.
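As a concrete illustration of Eqs. (11)-(16), here is a simplified NumPy sketch of one FSGD update and of a destructive KOMP-style pruning pass; it is our own reading of Algorithms 1-2, not the authors' code. For brevity it keeps a single dictionary rather than one per stage, applies the tolerance `eps` per removal rather than to the cumulative error, and adds a small ridge before the matrix inversion of Eq. (14), as the implementation notes in Section 7 suggest.

```python
import numpy as np

def fsgd_step(A, D, z, grad_c, eta, lam):
    """One FSGD update (Eq. 11): shrink old coefficients, append the new
    kernel center z with coefficient -eta * grad_c (grad_c has shape (r,))."""
    A = np.hstack([(1.0 - eta * lam) * A, -eta * grad_c[:, None]])
    D = np.vstack([D, z[None, :]])
    return A, D

def prune(A, D, kernel, eps):
    """KOMP-style destructive pass: repeatedly drop the center whose removal,
    after the re-projection of Eq. (14), changes the rule the least, while
    the Hilbert-norm error of that single removal stays below eps."""
    while D.shape[0] > 1:
        K = kernel(D, D)
        norm_sq = np.trace(A @ K @ A.T)                   # ||u||_H^2
        best_err, best_j, best_A = np.inf, None, None
        for j in range(D.shape[0]):
            keep = [i for i in range(D.shape[0]) if i != j]
            Kkk = K[np.ix_(keep, keep)] + 1e-7 * np.eye(len(keep))
            A_new = A @ K[:, keep] @ np.linalg.inv(Kkk)   # Eq. (14)
            proj_sq = np.trace(A_new @ Kkk @ A_new.T)     # ||Pi u||_H^2
            err = np.sqrt(max(norm_sq - proj_sq, 0.0))    # ||u - Pi u||_H
            if err < best_err:
                best_err, best_j, best_A = err, j, A_new
        if best_err > eps:
            break
        A, D = best_A, np.delete(D, best_j, axis=0)
    return A, D

def smok(samples, kernel, grad_c, r, eta, lam, P2):
    """Skeleton of Algorithm 2: alternate FSGD steps and pruning passes."""
    D, A, eps = samples[:1], np.zeros((r, 1)), P2 * eta ** 2
    for z in samples:
        u_z = A @ kernel(D, z[None, :])[:, 0]             # current decision
        A, D = fsgd_step(A, D, z, grad_c(u_z, z), eta, lam)
        A, D = prune(A, D, kernel, eps)
    return A, D
```

Here `kernel(Z1, Z2)` can be any positive kernel returning the cross-kernel matrix (e.g., the Gaussian kernel sketched in Section 3), and `grad_c(u_z, z)` is the user-supplied gradient of the loss with respect to the scalar decisions, as in Eq. (9).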
## 5 Convergence Analysis In this section, we show that for a specific choice of step size, the objective value of the decision output by the algorithm converges to the objective value of the true minimizer. We first present the three main assumptions that we make on the problem settings in order to guarantee convergence of the algorithm: **Assumption 1**: The data space \(\mathcal{Z}\) is compact, the kernels \(K_{t}\) are universal, and there exists a constant \(\kappa\) such that \[K_{t}(\mathbf{z}_{0:t-1},\mathbf{z}_{0:t-1})\leq\kappa,\quad\forall\,\mathbf{z}\in\mathcal{Z},\quad\forall\,\,t\in[T].\] **Assumption 2**: There exists a constant \(C\) such that for all \(\mathbf{z}\in\mathcal{Z}\) the loss function satisfies \[\big{|}c(\mathbf{u},\mathbf{z})-c(\mathbf{u}^{\prime},\mathbf{z})\big{|}\leq C\|\mathbf{u}-\mathbf{u}^{\prime}\|_{2},\quad\forall\,\,\mathbf{u},\mathbf{u}^{\prime}\in\mathbb{R}^{r_{1}+\ldots+r_{T}}.\] **Assumption 3**: The loss function \(c(\mathbf{u}(\mathbf{z}),\mathbf{z})\) is convex and differentiable with respect to the scalar arguments \(\mathbf{u}(\mathbf{z})\) for all \(\mathbf{z}\in\mathcal{Z}\). Assumption 1 naturally holds for most data domains, and this is a necessary assumption to ensure that the Hilbert norm of the optimizer of \(E^{\lambda}\) is bounded. Assumption 2 holds whenever the cost function \(c\) as well as the constraint functions \(g_{q}\) are Lipschitz. This assumption implies that the gradient of \(c\) with respect to the scalars \(\mathbf{u}(\mathbf{z})\) is bounded as \[\|\nabla_{\mathbf{u}(\mathbf{z})}c(\mathbf{u}(\mathbf{z}),\mathbf{z})\|_{2}\leq C, \tag{17}\] which in turn allows us to upper bound the expected norm of the gradient \(\mathbb{E}\left[\|\nabla_{\mathbf{u}}E_{n}^{\lambda}(\mathbf{u})\|_{\mathcal{H}}^{2}\right]\). Assumption 3 is a standard condition for convergence of descent methods, and it can be relaxed to the case in which the loss function is almost everywhere differentiable by applying subgradients instead of gradients. **Theorem 1**: _Let \(\mathbf{u}_{\mathcal{S}}^{*}\coloneqq\arg\min_{\mathbf{u}\in\{\mathbf{u}^{1},...,\mathbf{u}^{N}\}}E_{\mathcal{S}}^{\lambda}(\mathbf{u})\) be the decision generated by Algorithm 2 when given the set \(\mathcal{S}=\{\mathbf{z}^{n}\}_{n=1}^{N}\) as input, and let \(\mathbf{u}^{\lambda}\) be the true minimizer of \(E^{\lambda}(\mathbf{u})\) over \(\mathcal{H}\). If we use constant step size \(\eta\) and constant error bounds \(\epsilon=P_{2}\eta^{2}\) for some constant \(P_{2}>0\), then under Assumptions 1-3, we have that_ \[\mathbb{E}\left[E^{\lambda}(\mathbf{u}_{\mathcal{S}}^{*})-E^{\lambda}(\mathbf{u}^{\lambda})\right]\leq\mathcal{O}\left(\frac{\eta}{\lambda}\right).\] _Proof_ See Appendix C. **Corollary 1**: _Let \(\mathbf{u}^{*}\) be the true minimizer of \(E(\cdot)\) over \(\mathcal{F}\). If we use constant step size with \(\eta=\frac{P_{1}}{\sqrt{N}}<\frac{1}{\lambda}\), and \(P_{1}>0\), constant error bounds \(\epsilon=P_{2}\eta^{2}\) for some constant \(P_{2}>0\), and regularization parameter \(\lambda\) such that \(\lambda\xrightarrow[N\to\infty]{}0\) and \(\lambda\sqrt{N}\xrightarrow[N\to\infty]{}\infty\), then under Assumptions 1-3 we have that_ \[\lim_{N\to\infty}\mathbb{E}[|E(\mathbf{u}_{\mathcal{S}}^{*})-E(\mathbf{u}^{*})|]=0. \tag{18}\] _Proof_ See Appendix C. Since \(L_{1}\) convergence implies convergence in probability, the corollary also implies that the expected loss achieved with Algorithm 2 converges in probability to the optimal solution.
In addition, from Theorem 1 we observe that setting \(\eta=\frac{P_{1}}{\sqrt{N}}\) makes the objective value of the solution found by Algorithm 2 converge to the optimal value of problem (3) with a rate of convergence of \(\mathcal{O}\left(\frac{1}{\lambda\sqrt{N}}\right)\). Convergence can also be achieved under diminishing step size, although with a slower rate of \(\mathcal{O}\left(\frac{1}{\lambda\log N}\right)\). In practice, a diminishing step size or a very small constant step size might make our data matrix \(\mathbf{D}^{n}\) grow arbitrarily large, since little or no pruning would be done at each iteration. A constant step size is then what allows us to control the trade-off between accuracy and memory required; we want to use a step size \(\eta\) that is small enough to make the error in Theorem 1 small, but large enough for the pruning to be done. ## 6 Complexity Analysis Let \(M_{n}\) be the size of the data matrix \(\mathbf{D}^{n}\) during the \(n^{th}\) iteration of Algorithm 2. We analyze both space and time complexities per iteration in terms of \(M_{n}\). _Space:_ At each iteration we need to store the kernel matrix \(\mathbf{K}_{t}[\mathbf{D}_{t}^{n},\mathbf{D}_{t}^{n}]\in\mathbb{R}^{M_{n}\times M_{n}}\) and its inverse as well as the parameters \(\mathbf{A}_{t}^{n}\in\mathbb{R}^{r_{t}\times M_{n}}\) for each \(t\). This results in \(\mathcal{O}(TM_{n}^{2}+M_{n}\sum_{t=1}^{T}r_{t})\) memory requirement. _Time:_ For the FSGD step, computing the gradient takes \(\mathcal{O}(M_{n}\sum_{t=1}^{T}r_{t})\) time. Computing from scratch the kernel matrices \(\mathbf{K}_{t}[\mathbf{D}_{t}^{n},\mathbf{D}_{t}^{n}]\in\mathbb{R}^{M_{n}\times M_{n}}\) and their inverses (needed for the pruning step) takes \(\mathcal{O}(M_{n}^{2}\sum_{t=0}^{T}q_{t})\) and \(\mathcal{O}(TM_{n}^{3})\) time, respectively. However, by using a recursive rule to compute these matrices in terms of the corresponding values in the previous iteration, the times become \(\mathcal{O}(M_{n}\sum_{t=0}^{T}q_{t})\) and \(\mathcal{O}(M_{n}^{2})\), respectively. In addition, the matrix multiplication in Eq. (14) takes \(\mathcal{O}(M_{n}^{2})\) time for each \(t\). Since at most \(M_{n}\) elements can be removed from the dictionary at the \(n^{th}\) iteration, we obtain that in the worst case scenario the time per iteration becomes \(\mathcal{O}(TM_{n}^{3}+M_{n}^{2}\sum_{t=0}^{T}q_{t})\). Let us now discuss the size of \(M_{n}\). In the worst case, we know that for all iterations the size of the data matrix is upper bounded by the covering number \(M\) of the data domain (Zhou, 2002). More specifically, for fixed step size \(\eta\) and fixed error bound \(\epsilon=P_{2}\eta^{2}\), we have that if the data space \(\mathcal{Z}\) is compact (Assumption 1), then \(M_{n}\) is upper bounded by the minimum number of balls of radius \(\frac{P_{2}\eta}{C}\) needed to cover the compact set \(K_{1}(\mathcal{Z}_{0},\cdot)\times\ldots\times K_{T}(\mathcal{Z}_{0:T-1},\cdot)\) of kernel transformations (see for example the proof of Theorem 3.1 in Koppel et al. (2016)). While an exact expression for this cover number \(M\) is unknown, the number is finite (Anthony and Bartlett, 2009) and it decreases as \(\eta\) or \(P_{2}\) increases. In particular, the maximum number of samples in the data matrix depends on the step size \(\eta\) and the constant \(P_{2}\), but not on the data size \(N\).
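One standard way to realize the "recursive rule" for the inverse kernel matrix mentioned above is the block-inverse (Schur-complement) identity; the sketch below is our own illustration of that idea, not code from the paper, and it updates \(\mathbf{K}^{-1}\) in \(\mathcal{O}(M_{n}^{2})\) time when one center is appended to the dictionary.

```python
import numpy as np

def grow_inverse(K_inv, k_new, kappa):
    """Given K_inv = inv(K) for M centers, the cross-kernel vector k_new (M,)
    against the appended center, and kappa = K(z, z) (plus a small ridge for
    stability, as in Section 7), return the inverse of the (M+1) x (M+1)
    kernel matrix via the block-inverse identity with Schur complement
    s = kappa - k_new' K_inv k_new."""
    v = K_inv @ k_new
    s = kappa - k_new @ v
    top = K_inv + np.outer(v, v) / s
    col = -v / s
    return np.block([[top, col[:, None]],
                     [col[None, :], np.array([[1.0 / s]])]])
```

A symmetric downdate handles deletions, which is what keeps the per-iteration cost quadratic rather than cubic in \(M_{n}\).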
Denoting the cover number described above by \(M\) and considering fixed values of \(T\) and of the dimensions \(r_{1},\ldots,r_{T}\) and \(q_{0},\ldots,q_{T}\), we obtain that the worst-case total time across the \(N\) iterations of Algorithm 2 can be upper bounded by \(\mathcal{O}(NM^{3})\) and the worst-case total space required is \(\mathcal{O}(NM^{2})\). While the worst case scenario cannot happen for all iterations (for example, if \(M\) elements are pruned in one iteration, the next iteration is very fast), this bound is enough to conclude that total time and total space are in the worst case linear in the number of iterations. Notice that if we removed the pruning step, the entire algorithm would require \(\Omega(N^{2})\) space to store the kernel matrix and \(\Omega(N^{2})\) time for computations, showing that Algorithm 2 indeed reduces the overall complexity as the number of iterations becomes much larger than \(M\). ## 7 Computational Experiments We perform computational experiments for the inventory control and the shipment planning problems to analyze the average out-of-sample performance as well as the tractability of the proposed algorithms. For both applications we compare the SMOK algorithm proposed in Algorithm 2 to the MOK algorithm (Multistage Optimization with Kernels), which is the result of applying the FSGD algorithm without the pruning step. Moreover, we compare the SMOK and MOK algorithms against three other benchmarks: 1. **SRO:** Sample robust optimization approach from Bertsimas et al. (2022b), in which all samples are assigned equal weight \(\frac{1}{N}\). We use uncertainty sets bounded by \(\epsilon\) in the \(\ell_{1}\) norm as well as multi-policy approximation with linear decision rules. 2. **SRO-knn:** Sample robust optimization with covariates approach developed in Bertsimas and McCord (2019), using uncertainty sets bounded by \(\epsilon\) in the \(\ell_{1}\) norm as well as multi-policy approximation with linear decision rules. The weights were obtained using the \(k_{N}\)-nearest neighbors approach. 3. **SAA-knn:** Sample average approximation method, which is equivalent to the SRO-knn approach with \(\epsilon=0\). We analyze the computational results for several instances of the inventory control problem. First, we consider a high-dimensional instance of the problem to show the tractability of the SMOK algorithm as well as to compare its performance against other methods. Next, we analyze how the performance of the proposed algorithms varies with the dimensions of the problem (number of periods, data size, dimension of the data as well as dimension of the controllers). For instances in which the number of periods is at most 5 we are also able to compute lower bounds for the loss achieved by the optimal decision rules, which enables us to quantify the optimality gap of the proposed methods. For the shipment planning application we reproduce the results from Bertsimas and McCord (2019) to compare the SMOK and MOK algorithms against sample robust optimization (with and without covariates) and sample average approximation. For training all these benchmarks we use the same parameter values reported in Bertsimas and McCord (2019).
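For readers unfamiliar with the reweighting behind the SRO-knn and SAA-knn benchmarks, here is a minimal sketch (our illustration; `knn_weights` is a hypothetical helper, not the benchmarks' reference code): each historical sample is weighted by whether its covariate is among the \(k_{N}\) nearest neighbors of the observed covariate, and the empirical objective is then averaged with these weights.

```python
import numpy as np

def knn_weights(X_hist, x0, k):
    """Weight 1/k for the k historical covariates nearest to x0, 0 otherwise."""
    dists = np.linalg.norm(X_hist - x0[None, :], axis=1)
    w = np.zeros(len(X_hist))
    w[np.argsort(dists)[:k]] = 1.0 / k
    return w
```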
**Handling Constraints:** Often the sequence of decisions \(\mathbf{u}(\mathbf{z})\) must satisfy certain convex constraints for all possible disturbances, transforming the problem of interest into \[\begin{array}{ll}\min_{\mathbf{u}\in\mathcal{F}}&\mathbb{E}_{\mathbf{z}}\big{[}c\big{(}\mathbf{u}(\mathbf{z}),\mathbf{z}\big{)}\big{]}\\ \text{s.t.}&g_{q}\big{(}\mathbf{u}(\mathbf{z})\big{)}\leq 0,\quad\forall\,\mathbf{z}\in\mathcal{Z},\quad\forall\,q\in[Q].\end{array} \tag{19}\] We address this problem by relaxing the constraints into the objective with a penalty function. More specifically, in Algorithm 2 we replace the cost \(c(\mathbf{u}(\mathbf{z}),\mathbf{z})\) with a new loss function \(c^{\psi}\) defined as \[c^{\psi}\big{(}\mathbf{u}(\mathbf{z}),\mathbf{z}\big{)}\coloneqq c\big{(}\mathbf{u}(\mathbf{z}),\mathbf{z}\big{)}+\psi\sum_{q=1}^{Q}\max\Big{(}0,g_{q}\big{(}\mathbf{u}(\mathbf{z})\big{)}\Big{)}^{2}, \tag{20}\] where \(\psi\) is the penalty parameter. Although feasibility is not guaranteed, the constraint violation is expected to vanish for large enough \(\psi\) (see Lemma 7). Convergence analysis for the SMOK algorithm applied to this constrained problem can be found in Appendix C. **Parameter Settings:** We train the SMOK and MOK algorithms using Gaussian kernels and constant step size. The values for \(\lambda,\psi\) and \(\theta\) were found using validation, and the decisions were projected onto the space of feasible decisions before making any evaluations, both at training and testing stages (this means that the decisions evaluated had zero constraint violation). For each instance of the problem the constant step size \(\eta\) was initially set to \(10^{-5}\) and it was repeatedly increased by factors of 5 so long as the average training loss did not worsen and the iterations were reaching convergence. The parameter \(P_{2}\) for the error bound \(\epsilon\) was initially set to \(0.1\) and was repeatedly increased by factors of \(2\); we stopped increasing it when the average training loss significantly worsened. **Software Utilized:** Experiments were implemented in Python 3 (Van Rossum and Drake, 2009) using the NumPy library (Harris et al., 2020). We clarify that Eq. (14) can often be difficult to compute due to numerical instability in the calculations for the inverse matrix. To address this issue we add a small ridge value of \(10^{-7}\) to the diagonal of a matrix before computing its inverse. In terms of hardware, all experiments were run on an Intel(R) Core(TM) i7-8557U CPU @ 1.70GHz processor with 4 physical cores (hyper-threading enabled). The machine has a 32KB L1 cache and 256KB L2 cache per core, and an 8MB L3 cache. There is a total of 16GB DRAM. ### 7.1 Inventory Control Problem We consider a multistage inventory control problem with linear constraints. At each stage \(t\) with initial inventory \(s_{t-1}\), a retailer places procurement orders \(\mathbf{u}_{t}\in\mathbb{R}^{r}\) at various suppliers, and later observes the demands \(\mathbf{w}_{t}\in\mathbb{R}^{q}\). At the end of each stage, the firm incurs a per-unit holding cost of \(h_{t}\) and a back-order cost of \(b_{t}\). Unmet demand is backlogged, and therefore the initial inventory for the next period is given by the linear equation \(s_{t}=s_{t-1}+\mathbf{1}^{\top}\mathbf{u}_{t}-\mathbf{1}^{\top}\mathbf{w}_{t}\), with zero initial inventory for the first period.
In addition, the procurement orders are upper bounded by a constant \(L\) and the sum of procurement orders for two consecutive stages cannot exceed a constant \(\ell\). As in Ban et al. (2018), we consider the scenario in which retailers can observe auxiliary covariates \(\mathbf{x}\) that relate to the future demands (e.g., in the fashion industry, color and brand are useful factors for predicting demand of the products). For a problem with \(T\) periods, we can formulate this optimization problem as \[\min_{\mathbf{u}_{1:T}}\quad\mathbb{E}_{\mathbf{w}|\mathbf{x}}\left[\sum_{t=1}^{T}\Big{(}h_{t}\left[s_{t}\right]^{+}+b_{t}\left[-s_{t}\right]^{+}\Big{)}\,\bigg{|}\,\mathbf{x}=\mathbf{x}_{0}\right]\] \[\text{s.t.}\quad s_{t}=s_{t-1}+\mathbf{1}^{\top}\mathbf{u}_{t}-\mathbf{1}^{\top}\mathbf{w}_{t},\quad\forall t\in[T],\] \[\mathbf{u}_{t}\geq\mathbf{0},\quad\forall t\in[T],\] \[\mathbf{u}_{t}\leq L\mathbf{1},\quad\forall t\in[T],\] \[\mathbf{u}_{t}+\mathbf{u}_{t+1}\leq\ell\mathbf{1},\quad\forall t\in[T-1].\] The parameters \(h_{t},b_{t}\) were chosen to be \(2\) and \(1\), respectively. The data sets used in these experiments were generated by sampling \(\mathbf{x}\) from a truncated Gaussian distribution with mean \(2\) and standard deviation \(0.5\), and with truncating bounds \(0\) and \(6\). The demands \(\mathbf{w}_{t}\) were then obtained as a linear function of the covariates with some added noise; specifically, \(\mathbf{w}_{t}=\alpha_{t}\mathbf{x}+\boldsymbol{\epsilon}_{t}\), where \(\boldsymbol{\epsilon}_{t}\) was sampled from a standard normal distribution and the constants \(\alpha_{t}\) were selected to be close to \(50\). We first consider a large instance of the problem with \(T=q=r=10\), and we set the control bounds as \(L=150\) and \(\ell=200\). We use a training set with 2000 sample paths and we approximate the expected loss achieved by each method by averaging the losses across a common testing set with \(10^{4}\) sample paths. Since the SRO and SRO-knn methods become intractable for problems of this magnitude, in this experiment we only compare the SMOK and MOK methods to SAA-knn. We use validation to choose the best parameters for all methods and we evaluate the results on the testing set. In Table 1 we observe that both SMOK and MOK outperform SAA-knn in terms of average out-of-sample loss and computational time. Moreover, the number of parameters needed for the SMOK algorithm is smaller by two orders of magnitude compared to the other methods. Even though we observe an increase in computation time for SMOK with respect to MOK (due to the overhead computation time for the pruning step), we also see that adding sparsity helped SMOK achieve a better average loss. We next consider other instances of the inventory problem to analyze how the dimensions of the problem affect the overall performance of the SMOK and MOK algorithms. We compared these two methods to a third algorithm, ADR (Affine Decision Rules), which refers to the common approximation technique of restricting the space of decision rules to be affine functions. We train all methods using the same training sets and the same validation sets (with size equal to 30% of the training size), and we approximate the expected loss achieved by averaging across a common testing set of \(10^{5}\) sample paths. In addition, we compute lower bounds for the optimal expected loss when \(T\leq 5\) (see Appendix D), which allows us to analyze the optimality gap for the different methods.
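To fix ideas, the next sketch (our own hypothetical helpers, not the paper's code) evaluates the inventory cost above along one sample path, together with the penalized loss \(c^{\psi}\) of Eq. (20) applied to the bound constraints; the constants follow the text (\(h_{t}=2\), \(b_{t}=1\), and bounds \(L\), \(\ell\)), while the penalty value `psi` is an illustrative choice.

```python
import numpy as np

def inventory_cost(u, w, h=2.0, b=1.0):
    """Holding/back-order cost along one path; u and w are lists of per-stage
    order and demand vectors, with s_t = s_{t-1} + 1'u_t - 1'w_t and s_0 = 0."""
    s, cost = 0.0, 0.0
    for u_t, w_t in zip(u, w):
        s += u_t.sum() - w_t.sum()
        cost += h * max(s, 0.0) + b * max(-s, 0.0)
    return cost

def penalized_cost(u, w, L=150.0, ell=200.0, psi=1e3):
    """c^psi from Eq. (20): squared-hinge penalties for 0 <= u_t <= L and
    u_t + u_{t+1} <= ell, added to the inventory cost."""
    viol = sum(np.sum(np.maximum(0.0, -u_t) ** 2) +
               np.sum(np.maximum(0.0, u_t - L) ** 2) for u_t in u)
    viol += sum(np.sum(np.maximum(0.0, u[t] + u[t + 1] - ell) ** 2)
                for t in range(len(u) - 1))
    return inventory_cost(u, w) + psi * viol
```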
Multiple data sets were generated to analyze the performance of the algorithms as we increase the number of periods, the training size, the dimension of the data and the dimension of the controls. In each case we analyze the average out-of-sample loss and the size \(M\) of the data matrix, which refers to the number of parameters per control. We also analyze the computational time for each iteration of Stochastic Gradient Descent (projected or not projected), and the evaluation time (time it takes to evaluate the empirical loss function \(E_{\mathcal{S}}^{\lambda}(\mathbf{u})\) given the parameters for the functional representation of \(\mathbf{u}\)). Notice that since the stochastic gradient descent algorithm does not strictly descend, the empirical loss of the validation set needs to be evaluated every fixed number of iterations, which makes the evaluation time part of the total training time. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & **Avg OOS Loss** & **Total Time (hours)** & **No. of Params** \\ \hline \hline **SMOK** & 491.30 & 0.3 & \(1.5\times 10^{3}\) \\ \hline **MOK** & 493.74 & 0.1 & \(2\times 10^{5}\) \\ \hline **SAA-knn** & 496.04 & 14.36 & \(2.2\times 10^{5}\) \\ \hline \end{tabular} \end{table} Table 1: Average out-of-sample (OOS) loss and total computation time for inventory problem with \(T=q=r=10\). #### 7.1.1 Varying the Number of Periods: (\(L=150,\ell=200,q=r=1,N=2000,T=2,3,4,5\)) In Figure 1(a), we observe that the convergence trajectory is not significantly affected by the pruning step, and the number of iterations needed until convergence does not change much for \(T\geq 3\). In addition, we see in Figure 1(b) that ADR results in very poor performance, while both the SMOK and MOK algorithms are quite close to the lower bounds found for the optimal expected loss. In Figure 1(c) we observe that the time per iteration of stochastic gradient descent grows linearly for both SMOK and MOK, but SMOK takes longer times due to the overhead introduced by the pruning step. The evaluation time (Figure 1(d)) also grows linearly for both algorithms, although unlike the time per iteration, the slope is larger for MOK than for SMOK because the number of parameters is significantly smaller for this last method (the SMOK algorithm reduced the size \(M\) of the data matrix from 2000 to values below 15). Figure 1: Expected loss and computational time for varying number of periods. #### 7.1.2 Varying the Data Size: (\(L=150,\ell=200,q=r=1,T=3,N=10,100,1000,4000,7000,10000\)) Figure 2(b) shows that, as anticipated, the expected loss achieved by both MOK and SMOK algorithms decreases as the size of the training set becomes larger. The number of iterations required to reach convergence (Figure 2(a)) does not change much with the data size, and the expected loss achieved remains relatively constant after a large enough training size, which occurs around \(N=1000\). In Figures 2(c) and 2(d) we can observe a significant memory improvement of SMOK over MOK when \(N\) becomes very large. For \(N=10^{4}\), for example, SMOK outputs decision rules with only 11 parameters, while MOK requires \(10^{4}\) parameters per control. The evaluation time in Figure 2(d) grows quadratically with the number of parameters in each control (the quadratic factor comes from computing the kernel matrix \(\mathbf{K}_{t}[\mathbf{D}_{t},\mathbf{D}_{t}]\)), which in the case of MOK corresponds to the size of the training set.
Since the SMOK algorithm has far fewer parameters, it takes under half a second to evaluate the average loss of 1000 samples regardless of the training data size. Notice that the time per iteration (Figure 2c) is higher for SMOK than for MOK when \(N\) is small due to the pruning step. However, we observe that the time per iteration increases linearly for MOK while it stabilizes for SMOK, implying that for bigger values of \(N\) the SMOK method actually takes less time per iteration and per evaluation. Figure 2: Expected loss and computational time for varying data sizes. #### 7.1.3 Varying Data Dimension: (\(L=150,\ell=200,r=1,T=3,N=2000,q=1,10,20,30,40,50\)) When generating data sets for this part we enforce that the value \(\sum_{q}\left(\mathbf{w}_{t}\right)_{q}\) remains constant for all \(t\in[T]\), which guarantees that the optimal expected loss is the same across instances. In Figure 3a, we observe that the trajectories of the expected loss across the FSGD iterations are quite similar for all the different dimensions of the data. More importantly, the error gap does not worsen as the dimension of the data increases (Figure 3b), showing that the accuracy of our algorithms does not worsen for data sets in high-dimensional spaces. Additionally, in Figure 3d we observe that there is a slight linear increase in the evaluation time for both SMOK and MOK algorithms, which is expected since the dimension of the demand vector affects the computation of the exponent in the Gaussian kernel. In terms of the iteration time (Figure 3c), we can see that SMOK remains quite stable around 4 seconds per 1000 iterations, while MOK shows a linear increase. As in the previous examples, the number of parameters of the SMOK algorithm is quite similar across the different experiments and remains under 15. #### 7.1.4 Varying Control Dimension: (\(L=\frac{150}{r},\ell=\frac{200}{r},q=1,T=3,N=2000,r=1,3,5,10\)) In order to make a fair comparison, we set \(L=\frac{150}{r}\) and \(\ell=\frac{200}{r}\), which guarantees that the optimal expected loss is the same across instances. We observe in Figure 4b that the SMOK and MOK algorithms achieve very similar average out-of-sample loss across the different dimensions, and there are a couple of scenarios in which the pruning step helped to improve the expected loss. In addition, the number of iterations required for convergence (Figure 4a) does not seem to depend on the dimension of the control. Lastly, in Figure 4c we observe a slight linear increase in iteration time for both SMOK and MOK algorithms, with MOK having an advantage of around 4 seconds per 1000 iterations. In terms of evaluation time (Figure 4d), both algorithms grow linearly. As in the previous examples, the number of parameters for the SMOK algorithm is very low and varies between 13 and 14 across the different experiments. Figure 3: Expected loss and computational time for varying data dimensions. Figure 4: Expected loss and computational time for varying control dimensions. ### 7.2 Shipment Planning We next analyze a two-stage shipment planning problem, following the same problem setting as in Bertsimas et al. (2022a) and Bertsimas and Kallus (2020). In this example, a decision maker has access to side information \(\mathbf{x}\) (market trends, advertisements, etc.) and the goal is to ship items from the production facilities to multiple locations so as to satisfy demand at minimum cost. First, the decision maker chooses an initial inventory quantity \(u_{1f}\geq 0\) to be produced in each of the production facilities \(f\in[F]\) at a per unit cost of \(p_{1}\).
Next, the demands \(w_{\ell}\geq 0\) are observed in each location \(\ell\in[L]\). If needed, the decision maker can produce additional units in each facility to satisfy demand, but at a higher per unit cost \(p_{2}>p_{1}\). Finally, demand is fulfilled by shipping \(u_{2f\ell}\) units from facility \(f\) to location \(\ell\) at per-unit cost \(c_{f\ell}\), and each unit of satisfied demand generates revenue \(a>0\). The multistage optimization problem can then be written as \[\min_{\mathbf{u}_{1},\mathbf{u}_{2}}\quad\mathbb{E}_{\mathbf{w}|\mathbf{x}}\left[p_{1}\sum_{f=1}^{F}u_{1f}-a\sum_{\ell=1}^{L}w_{\ell}+p_{2}\sum_{f=1}^{F}\left[\sum_{\ell=1}^{L}u_{2f\ell}-u_{1f}\right]^{+}+\sum_{f=1}^{F}\sum_{\ell=1}^{L}c_{f\ell}\,u_{2f\ell}\,\Big{|}\,\mathbf{x}=\mathbf{x}_{0}\right]\] \[\text{s.t.}\quad\sum_{f=1}^{F}u_{2f\ell}\geq w_{\ell},\quad\forall\ell\in[L],\forall\mathbf{w}\in\mathcal{W},\] where \(\mathcal{W}\) is the set of all possible demand realizations. We reproduced the computational experiments performed in Bertsimas et al. (2022a) using the same parameters, the same data generation procedure as well as the same data set sizes. More specifically, we use \(F=4,L=12,p_{1}=5,p_{2}=100\) and \(a=90\). The costs \(\mathbf{c}\) and covariates \(\mathbf{x}\) are also generated in an identical manner as in Bertsimas et al. (2022a). We compare the SMOK and MOK algorithms against **SRO**, **SRO-knn** and **SAA-knn**. We train all methods over 100 independent training sets and evaluate them on a test set of size 100. The average out-of-sample profits achieved across the different methods are shown in Table 2. We observe that both MOK and SMOK outperform the other methods, with MOK achieving the highest revenues. However, as observed in Table 3, only the SAA and SMOK methods have tractable growth as the data size increases. In particular, the SMOK algorithm achieves high accuracies using only around 60 parameters per decision even when the data size increases to large numbers. ## 8 Conclusion In this work, we developed a tractable data-driven approach for solving multistage stochastic optimization problems in which the uncertainties are independent of previous decisions. We represented the decision rules as elements of a reproducing kernel Hilbert space and performed functional stochastic gradient descent to minimize the empirical regularized loss. We next incorporated sparsification techniques based on function subspace projections, which decreased the number of parameters per controller. We prove that the proposed approach is asymptotically optimal for multistage stochastic programming with side information.
## 8 Conclusion

In this work, we developed a tractable data-driven approach for solving multistage stochastic optimization problems in which the uncertainties are independent of previous decisions. We represented the decision rules as elements of a reproducing kernel Hilbert space and performed functional stochastic gradient descent to minimize the empirical regularized loss. We next incorporated sparsification techniques based on function subspace projections, which decreased the number of parameters per controller. We proved that the proposed approach is asymptotically optimal for multistage stochastic programming with side information. The practical value of the proposed data-driven approach was shown across various computational experiments on stochastic inventory management problems, demonstrating that it produces high-quality decisions, does not worsen in multidimensional settings, and remains tractable even with large data sizes. This approach does not rely on the traditional use of approximation with scenario trees, and provides a novel method for leveraging advances in machine learning to solve multistage stochastic optimization problems.

**Statements and Declarations.** The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. The authors have no relevant financial or non-financial interests to disclose.
2310.08400
Koszul homomorphisms and universal resolutions in local algebra
We define a local homomorphism $(Q,k)\to (R,\ell)$ to be Koszul if its derived fiber $R \otimes^{\mathsf{L}}_Q k$ is formal, and if $\operatorname{Tor}^Q(R,k)$ is Koszul in the classical sense. This recovers the classical definition when $Q$ is a field, and more generally includes all flat deformations of Koszul algebras. The non-flat case is significantly more interesting, and there is no need for examples to be quadratic: all complete intersection and all Golod quotients are Koszul homomorphisms. We show that the class of Koszul homomorphisms enjoys excellent homological properties, and we give many more examples, especially various monomial and Gorenstein examples. We then study Koszul homomorphisms from the perspective of $\mathrm{A}_\infty$-structures on resolutions. We use this machinery to construct universal free resolutions of $R$-modules by generalizing a classical construction of Priddy. The resulting (infinite) free resolution of an $R$-module $M$ is often minimal, and can be described by a finite amount of data whenever $M$ and $R$ have finite projective dimension over $Q$. Our construction simultaneously recovers the resolutions of Shamash and Eisenbud over a complete intersection ring, and the bar resolutions of Iyengar and Burke over a Golod ring, and produces analogous resolutions for various other classes of local rings.
Benjamin Briggs, James C. Cameron, Janina C. Letz, Josh Pollitz
2023-10-12T15:10:37Z
http://arxiv.org/abs/2310.08400v1
# Koszul homomorphisms and universal resolutions in local algebra ###### Abstract. We define a local homomorphism \((Q,k)\to(R,\ell)\) to be Koszul if its derived fiber \(R\otimes_{Q}^{\mathsf{L}}k\) is formal, and if \(\operatorname{Tor}^{Q}(R,k)\) is Koszul in the classical sense. This recovers the classical definition when \(Q\) is a field, and more generally includes all flat deformations of Koszul algebras. The non-flat case is significantly more interesting, and there is no need for examples to be quadratic: all complete intersection and all Golod quotients are Koszul homomorphisms. We show that the class of Koszul homomorphisms enjoys excellent homological properties, and we give many more examples, especially various monomial and Gorenstein examples. We then study Koszul homomorphisms from the perspective of \(\operatorname{A_{\infty}}\)-structures on resolutions. We use this machinery to construct universal free resolutions of \(R\)-modules by generalizing a classical construction of Priddy. The resulting (infinite) free resolution of an \(R\)-module \(M\) is often minimal, and can be described by a finite amount of data whenever \(M\) and \(R\) have finite projective dimension over \(Q\). Our construction simultaneously recovers the resolutions of Shamash and Eisenbud over a complete intersection ring, and the bar resolutions of Iyengar and Burke over a Golod ring, and produces analogous resolutions for various other classes of local rings. Key words and phrases: Free resolutions, Koszul algebras, Koszul duality, bar resolution, twisted tensor products, A-infinity algebras 2020 Mathematics Subject Classification: 13D02 (primary), 16S37, 16E45, 13H10, 13F55 ###### Contents * 1 Introduction * 2 Koszul homomorphisms * 2.8 Cohen Koszul local rings * 2.15 Properties of Koszul homomorphisms * 3 Examples of Koszul homomorphisms * 3.8 Rings of small codepth * 3.13 Almost Golod Gorenstein rings * 3.20 Monomial rings * 4 Background on \(\operatorname{A_{\infty}}\)-algebras and coalgebras * 5 Transfer of \(\operatorname{A_{\infty}}\)-algebra structures * 5.6 Cyclic \(\operatorname{A_{\infty}}\)-algebras * 6 Twisted tensor products * 7 \(\operatorname{A_{\infty}}\)-algebra presentations for Koszul homomorphisms * 7.2 Strictly Koszul presentations * 7.6 The Priddy resolution * 8 Examples of strictly Koszul presentations * 8.7 Complete intersection homomorphisms * 8.15 Almost Golod Gorenstein rings ## 1. Introduction The phenomenon of Koszul duality has been observed in many forms across algebra, geometry and topology. It provides explicit computational tools for answering homological questions and opens up deep connections between a number of seemingly unrelated areas of mathematics. The goal of the present work is to develop a relative theory of Koszul duality in local commutative algebra, and to give concrete applications for understanding infinite free resolutions. For a finite homomorphism \(\varphi\colon Q\to R\) of commutative noetherian local rings, the derived fiber \(F=R\otimes_{Q}^{\mathbb{L}}k\), where \(k\) is the residue field of \(Q\), is a differential graded \(k\)-algebra that encodes important ring theoretic properties of \(\varphi\). We define \(\varphi\) to be _Koszul_ if \(F\) is formal (see 2.3), and if its homology \(\operatorname{H}(F)=\operatorname{Tor}^{Q}(R,k)\) is a Koszul \(k\)-algebra (see 2.1). This recovers the classical definition when the source is a field.
Through the looking glass that connects local algebra with rational homotopy theory, the definition is directly analogous to Berglund's notion of a Koszul space. Flat local maps that have a Koszul fiber are natural examples of Koszul homomorphisms, but the non-flat case is significantly more interesting: all complete intersection and all Golod quotient homomorphisms are Koszul, and we give many other monomial and Gorenstein examples. In particular, there is no need for the homomorphism to be quadratic in any sense. The definition also has structural consequences connecting the homological algebra over \(R\) and \(Q\). Our main theorem provides an algorithmic way to transfer free resolutions over \(Q\) into free resolutions over \(R\). To achieve this we introduce a slightly stronger "strictly Koszul" property (see 7.2) that is satisfied in our main examples. These ideas borrow from a long history, and we will discuss the context and technology behind the construction following this summary of our main results. **Theorem A**.: _For any strictly Koszul local homomorphism \(\varphi\colon Q\to R\) there is a non-negatively graded, degreewise finite rank free \(Q\)-module \(C\) such that:_ 1. _For each finitely generated_ \(R\)_-module_ \(M\) _with a minimal_ \(Q\)_-free resolution_ \(G\to M\)_, there is a differential_ \(\partial^{\tau}\) _on the graded_ \(R\)_-module_ \(R\otimes C\otimes G\) _such that the resulting "twisted tensor product" complex_ \[(R\otimes C\otimes G,\partial^{\tau})\stackrel{{\simeq}}{{ \longrightarrow}}M\] _is an_ \(R\)_-free resolution of_ \(M\)_. If_ \(R\) _and_ \(M\) _have finite projective dimension over_ \(Q\)_, then both_ \(C\) _and the twisted tensor product differential can be explicitly described in their entirety with a finite amount of data._ 2. _Assume that_ \(\varphi\) _is small (a central case of interest is_ \((Q,\mathfrak{m}_{Q})\) _regular and_ \(\ker(\varphi)\subseteq\mathfrak{m}_{Q}^{2}\)_). The twisted tensor product complex is minimal for the residue field_ \(k\) _of_ \(R\)_. More generally, the resolution is minimal whenever_ \(M\) _is inert with respect to_ \(\varphi\)_, in the sense of Lescot. Moreover,_ \[\sum_{i}\operatorname{rank}_{Q}(C_{i})t^{i}=\tfrac{\operatorname{P}_{k}^{R}( t)}{\operatorname{P}_{k}^{Q}(t)}\,.\] _The following homomorphisms are strictly Koszul:_ (a) _Surjective complete intersection homomorphisms._ (b) _Surjective Golod homomorphisms._ (c) _Surjective Gorenstein homomorphisms of projective dimension three or less._ (d) _Cohen presentations of compressed artinian Gorenstein rings having characteristic zero and odd embedding dimension._ Part (1) of the Theorem, with an explicit description of the twisted tensor product differential, is Theorem 7.7, while part (2) is contained in Theorem 7.10. The examples (a)-(d), and several more, are introduced in Section 3 and treated again in Section 8, with a complete description of the corresponding coalgebra \(C\) in each case. Universal resolutions, that is, free resolutions over a ring that are defined in a uniform way for all finitely generated modules, have been of central interest in homological commutative algebra since at least the 60s, often importing tools such as Massey operations and bar resolutions from algebraic topology. Let \(\varphi\colon Q\to R\) be a local homomorphism.
Shamash constructed universal resolutions for \(R\)-modules when \(\varphi\) is a hypersurface quotient [10], and these were clarified and extended to complete intersection quotients by Eisenbud using the theory of higher homotopies [11]. Burke recognized in [1] that the higher homotopies are a manifestation of certain \(\mathrm{A}_{\infty}\)-structures (we will return to these later in the introduction). In the presence of a \(Q\)-free differential graded algebra resolution \(A\to R\), and a \(Q\)-free differential graded \(A\)-module resolution \(G\) of an \(R\)-module \(M\), Iyengar constructed a bar resolution for \(M\) over \(R\) [13]. By endowing \(A\) with an \(\mathrm{A}_{\infty}\)-algebra structure, and \(G\) with an \(\mathrm{A}_{\infty}\)-module structure over \(A\), Burke constructed a bar resolution even when associative multiplicative resolutions do not exist [1]. The resulting resolution is minimal when \(M\) is Golod with respect to \(\varphi\), and is otherwise typically far from minimal. Theorem A recovers both the resolutions of Shamash and Eisenbud, when \(\varphi\) is a complete intersection quotient, and the bar resolutions of Iyengar and Burke, when \(\varphi\) is a Golod quotient. In parallel, the universal resolutions introduced by Priddy over Koszul algebras have had far-reaching impact [14], not least as a computational tool. Our theory directly builds on and recovers his construction, while providing a common framework for the universal resolutions above. The technical foundation for our universal resolutions is in Section 6. Here we develop a general theory of twisted tensor products over a commutative ring \(Q\). The data that goes into this construction is a curved differential graded coalgebra \(C\) over \(Q\), a quasi-isomorphism \(\Omega(C)\to R\) from the cobar construction of \(C\) to \(R\), and a differential graded module structure over \(\Omega(C)\) on \(G\). These terms are defined in Section 4. From this, in Theorem 6.5, we construct a canonical resolution \[R\otimes^{\tau}C\otimes^{\tau}G=(R\otimes C\otimes G,\partial^{\tau}) \stackrel{{\simeq}}{{\longrightarrow}}M\,.\] The key to proving Theorem A is to show that \(C\) can be defined in an explicit, canonical, and minimal way when \(\varphi\) is strictly Koszul. We will return to this at the end of the introduction, with more context in hand. We turn our attention back to the Koszul homomorphisms. The first half of this work develops the theory of these maps; this part of the paper does not involve \(\mathrm{A}_{\infty}\)-structures, using only ordinary differential graded algebras. Similar Koszul-type conditions have been considered by other authors [1, 1, 13, 14], and we compare our definition with theirs in Remark 2.18. We motivate our condition as well by drawing connections with other areas, such as rational homotopy theory (Remarks 2.5 and 3.18) and toric topology (Remark 3.27). We pay particular attention to the case that \(Q\) is regular. In this situation, the resolutions constructed in Theorem A essentially depend only on the ring \(R\), and they are always finitely determined. When \(R\) is a local ring such that every Cohen presentation \(\varphi\colon Q\to\widehat{R}\) is a Koszul homomorphism, we say that \(R\) is _Cohen Koszul_; see Section 2.8. These rings enjoy excellent homological properties while being surprisingly abundant; they behave in many ways like classical Koszul algebras despite not necessarily being quadratic.
We show that Cohen Koszul local rings have rational Poincare series that can be computed explicitly from their Koszul homology: \[\mathrm{P}^{R}_{k}(t)=\frac{(1+t)^{e}}{\sum_{i,w}(-1)^{w}\operatorname{rank}_{k }(\mathrm{H}_{i}(K^{R})_{(w)})t^{i+w}}\,;\] see Proposition 2.11, where the notation is explained. Section 3 is devoted entirely to examples, and we prove that surjective complete intersection homomorphisms are Koszul (Example 3.2), along with surjective Golod homomorphisms (Example 3.5), and Gorenstein homomorphisms of projective dimension three (Example 3.12). We exactly determine the local rings of codepth three or less that are Cohen Koszul in terms of the classification into the types described in [1]; we find that in every type except one the local ring is Cohen Koszul (Theorem 3.10). Graded local rings having an almost linear resolution in the sense of [1] are also Cohen Koszul (Remark 3.17). We treat monomial rings in Section 3.20, making connections with combinatorial commutative algebra and with the topology of moment angle complexes. Monomial rings are classically Koszul exactly when they are quadratic [10], while Cohen Koszul monomial rings need not be, and we produce many nontrivial examples in Proposition 3.22. We further give examples that illuminate how the Koszul condition relates with classical Koszulity, formality, being quadratic, and various other technical conditions. One of our main examples is a class of rings that we call _almost Golod Gorenstein_, treated in Section 3.13. Within the class of Gorenstein local rings, these display extremal homological behavior analogous to Golod rings within the class of all local rings; cf. Proposition 3.19. In Theorem 3.16 we establish a characterization in terms of the derived fiber that is similar to Avramov's characterization of Golod rings [11], and, under some additional technical assumptions, we deduce that almost Golod Gorenstein rings are Cohen Koszul. In the second half of the paper we study Koszul homomorphisms from the perspective of \(\mathrm{A}_{\infty}\)-structures on resolutions. An \(\mathrm{A}_{\infty}\)-algebra is a complex \(A\) equipped with multilinear operations \(m_{n}\colon A^{\otimes n}\to A\) for \(n\geqslant 2\) that together satisfy certain associativity conditions generalizing the definition of a differential graded algebra (which one recovers by assuming \(m_{n}=0\) for \(n\geqslant 3\)). These objects were introduced by Stasheff to characterize loop spaces in algebraic topology [14]. Koszulity is well-known to be connected with formality (Remark 2.5), and in turn it has been understood since [13] that formality can be made visible through \(\mathrm{A}_{\infty}\)-structures. In the present context, these structures are important because they carry the information necessary to construct the universal resolutions in Theorem A while being flexible enough that all resolutions can always be given \(\mathrm{A}_{\infty}\)-structures. An introduction to \(\mathrm{A}_{\infty}\)-algebras and \(\mathrm{A}_{\infty}\)-modules over commutative rings is given in Section 4. In Section 5 we prove some quite general transfer results, in particular, constructing \(\mathrm{A}_{\infty}\)-structures on minimal resolutions of local rings and modules. Burke was one of the first to develop and apply the theory of \(\mathrm{A}_{\infty}\)-algebras over a commutative ring (rather than over a field) [1, 10], and our treatment owes a substantial intellectual debt to his work.
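As a quick sanity check of the Poincare series formula displayed earlier in this introduction (our illustration, not part of the paper): for a complete intersection of embedding dimension \(e\) and codimension \(c\), the Koszul homology is an exterior algebra on \(c\) classes whose weight and homological degrees both equal one, so the denominator becomes \(\sum_{i}(-1)^{i}\binom{c}{i}t^{2i}=(1-t^{2})^{c}\) and the formula recovers the classical series \((1+t)^{e}/(1-t^{2})^{c}\). The expansion can be verified in a few lines:

```python
# Illustration only (not from the paper): the Poincare series formula for a
# complete intersection, where H_i(K^R)_(w) is nonzero only for w = i, with
# rank binomial(c, i), so the denominator equals (1 - t^2)^c.
from sympy import binomial, series, symbols

t = symbols('t')
e, c = 4, 2  # hypothetical embedding dimension and codimension

denom = sum((-1)**i * binomial(c, i) * t**(2 * i) for i in range(c + 1))
print(series((1 + t)**e / denom, t, 0, 7))            # Betti numbers of k
print(series((1 + t)**e / (1 - t**2)**c, t, 0, 7))    # the same expansion
```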
We develop the theory of cyclic \(\mathrm{A}_{\infty}\)-algebras over commutative rings in Section 5.6. These were introduced by Kontsevich as part of his homological mirror symmetry program [11]. These \(\mathrm{A}_{\infty}\)-algebras possess extra structure that takes advantage of the Poincare duality on the minimal resolution of a Gorenstein ring, and we apply this theory to almost Golod Gorenstein rings. With this perspective in hand we return to Koszul homomorphisms in Section 7. The next result shows that, at the derived level, Koszul homomorphisms may be thought of as deformations of classical Koszul algebras. A more precise and more general statement is given in Theorem 7.1. We write \(\mathrm{T}(V)\) for the tensor algebra \(\bigoplus_{n\geqslant 0}V^{\otimes_{Q}n}\) on a graded \(Q\)-module \(V\). **Theorem B**.: _A surjective local homomorphism \(\varphi\colon Q\to R\) is Koszul if and only if there is a positively graded, degreewise finite rank free \(Q\)-module \(V\), a direct summand \(W\subseteq V\otimes_{Q}V\), and an \(\mathrm{A}_{\infty}\)-structure \(\{m_{n}\}\) on \(A=\mathrm{T}(V)/(W)\) such that_ 1. _the induced quotient_ \(A\to R\) _is an_ \(\mathrm{A}_{\infty}\)_-algebra quasi-isomorphism,_ 2. _modulo the maximal ideal of_ \(Q\)_, the_ \(\mathrm{A}_{\infty}\)_-structure_ \(\{m_{n}\}\) _on_ \(A\) _agrees with the usual algebra structure on_ \(\mathrm{T}(V)/(W)\)_, that is,_ \[m_{2}\otimes_{Q}k=\mu\otimes_{Q}k\quad\text{and}\quad m_{n}\otimes_{Q}k=0\ \ \text{for}\ \ n\neq 2\,,\] _where_ \(\mu\) _is the usual product on the quotient of a tensor algebra,_ 3. _the_ \(k\)_-algebra_ \(\mathrm{T}(V\otimes_{Q}k)/(W\otimes_{Q}k)\) _is Koszul with this algebra structure._ Coming full circle, we are now able to describe the coalgebra \(C\) that appears in Theorem A. Fixing a Koszul homomorphism \(\varphi\) with \(V\) and \(W\) as in Theorem B, we define \[C:=\bigoplus_{n}\left(\bigcap_{i+2+j=n}V^{\otimes_{Q}i}\otimes_{Q}W\otimes_{Q }V^{\otimes_{Q}j}\right).\] This is modeled on the work of Priddy [10]. By construction, the \(Q\)-dual \(C^{\vee}\) is the quadratic dual \(\mathrm{T}(V^{\vee})/(W^{\perp})\) of the algebra \(\mathrm{T}(V)/(W)\). The strict Koszul condition introduced in Section 7.2 guarantees that the \(\mathrm{A}_{\infty}\)-structure on \(A\) induces the structure of a curved differential graded coalgebra on \(C\); see Definition 7.3. We think of \(C\) as Koszul dual to \(R\) _relative to \(Q\)_, as justified by Theorem 7.5. We conclude the paper with a reexamination of examples, in Section 8. We start by showing that certain deformations of classical Koszul algebras yield strictly Koszul homomorphisms, and we obtain resolutions that directly deform the original resolutions of Priddy. We study Golod homomorphisms in Example 8.2; in this case \(C=\mathsf{B}(A)\) is the bar construction of the \(\mathrm{A}_{\infty}\)-algebra \(A\), and we recover the bar resolution of Iyengar and Burke. In Example 8.4 we show that Gorenstein homomorphisms of projective dimension three are strictly Koszul, and describe \(C\) as the dual of a noncommutative hypersurface. We show that complete intersection homomorphisms are strictly Koszul in Section 8.7, and we show that \(C\) is the free divided power algebra on \(V\); our twisted tensor products encode the theory of higher homotopies, and the resulting resolutions recover those of Shamash and Eisenbud.
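To make the shape of \(C\) concrete in the simplest classical case (an illustration of ours, not taken from the text): for \(Q=k\) and \(A=k[x]/(x^{2})\) we have \(V=kx\) and \(W=k\,(x\otimes x)=V\otimes_{k}V\), so every intersection in the definition of \(C\) is all of \(V^{\otimes n}\), giving \[C_{0}=k\,,\qquad C_{1}=V\,,\qquad C_{n}=V^{\otimes n}=k\,x^{\otimes n}\quad\text{for }n\geqslant 2\,,\] and the dual algebra is \(C^{\vee}=\mathrm{T}(V^{\vee})/(W^{\perp})=\mathrm{T}(V^{\vee})=k[y]\), recovering the classical Koszul duality between \(k[x]/(x^{2})\) and the polynomial ring \(k[y]\).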
We end by treating almost Golod Gorenstein rings in Section 8.15; this requires a substantial amount of machinery, and the result is a large class of interesting Gorenstein rings over which we have explicit, small, universal resolutions. _Acknowledgements_. We owe thanks to Steven Amelotte for many useful conversations, in particular improving our understanding of moment angle manifolds, the combinatorics of simplicial complexes, and almost linear resolutions. We thank Alexander Berglund for sharing important insights on Koszul-type phenomena in algebra and topology. For many fruitful discussions on systems of higher homotopies, and for allowing us to share some of these ideas here, we are grateful to Eloisa Grifo. We thank Srikanth Iyengar for his encouragement and many useful discussions on \(\mathrm{A}_{\infty}\)-structures. We also thank Trung Chau, Michael DeBellevue, and Keller VandeBogert for helpful conversations regarding concrete examples. Part of this work was done at the Hausdorff Research Institute for Mathematics, Bonn, when Briggs, Letz, and Pollitz were at the "Spectral Methods in Algebra, Geometry, and Topology" trimester, funded by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy-EXC-2047/1-390685813. Briggs is supported by the European Union under the Grant Agreement no. 101064551, and part of this work was completed when he was funded by NSF grant DMS-1928930. Letz was partly supported by the Deutsche Forschungsgemeinschaft (SFB-TRR 358/1 2023 - 491392403) and by the Alexander von Humboldt Foundation in the framework of a Feodor Lynen research fellowship endowed by the German Federal Ministry of Education and Research. Pollitz was supported by NSF grants DMS-1840190, DMS-2002173, and DMS-2302567. ## 2. Koszul homomorphisms In this section we discuss the Koszul property in various settings, starting from the classical notion for an algebra over a field and leading up to a definition of a Koszul local homomorphism. Examples have been collected in the Section 3. While later sections exploit the machinery of \(\mathrm{A}_{\infty}\)-algebras, this section requires only knowledge of differential graded (dg) algebras; a suitable reference for the latter is [1]. We fix a field \(k\), and work with local rings having residue field \(k\). We also consider modules with two \(\mathbb{Z}\)-gradings, the _weight grading_ and the _homological grading_. The weight grading is denoted \(M=M_{(\star)}\) and the corresponding shift \(M(d)\) is given by \(M(d)_{(w)}=M_{(w+d)}\). For the homological grading we write \(M=M_{\bullet}\) and the suspension \(\Sigma^{d}M\) is given by \((\Sigma^{d}M)_{i}=M_{i-d}\). The homological degree of an element \(m\in M\) is denoted \(|m|\). We assume that the two gradings are _compatible_ in the sense that \(M\) is bigraded by its submodules \(M_{i,(w)}:=M_{i}\cap M_{(w)}\). If \(M\) is a complex, the differential \(\partial^{M}\) should preserve the weight grading and decrease the homological grading by one, and we equip \(\Sigma M\) with the differential \(\partial^{\Sigma M}:=-\partial^{M}\). **Definition 2.1**.: An augmented \(k\)-algebra \(K\) is _Koszul (over \(k\))_ if it admits an algebra grading \(K=\bigoplus_{w\geq 0}K_{(w)}\), known as a _weight grading_, such that \(K_{(0)}=k\) and such that the minimal resolution of \(k\) is linear with respect to this grading. 
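For orientation, the smallest example (a standard fact, recorded here only as an illustration): the polynomial algebra \(K=k[x]\), with weight grading \(K_{(w)}=kx^{w}\), is Koszul, since the minimal resolution of \(k\) is \[0\longrightarrow K(-1)\xrightarrow{\;x\;}K\longrightarrow k\longrightarrow 0\,,\] where \(K(-1)\) is generated in weight one, so the resolution is linear. Trivial extension algebras \(k\ltimes U\), discussed in the remark below, provide the next family of examples.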
_Remark 2.2_.: Definition 2.1 is essentially the classical definition of a Koszul algebra due to Priddy [14, Section 2], see also [15, 16], except that we do not consider the weight grading to be part of the given data. We emphasize that the additional grading may be _different_ to the given one; cf. Remark 3.3. By [1, Section 2.3] a Koszul algebra is quadratic with respect to this new grading; that is, generated as an associative \(k\)-algebra by elements of weight one, subject to relations of weight two. For example if \(U\) is a graded vector space then the trivial extension algebra \(K=k\ltimes U\) is a graded augmented algebra, and the weight grading \(K_{(0)}=k\) and \(K_{(1)}=U\) makes \(K\) Koszul; see Example 3.4. Next we define a Koszul property for dg algebras. This definition appears implicitly in the work of Berglund [1], where it is applied to Sullivan models for topological spaces; it is _not_ the same as the Koszul property introduced in [10], which is too general for our purposes. **2.3**.: Let \(A\) and \(B\) be dg \(k\)-algebras. Recall that \(A\) is _quasi-isomorphic_ to \(B\), denoted \(A\simeq B\), if there exists a zig-zag of quasi-isomorphisms of dg \(k\)-algebras connecting \(A\) and \(B\). A dg \(k\)-algebra \(K\) is called _formal_ if it is quasi-isomorphic to \(\operatorname{H}(K)\). **Definition 2.4**.: An augmented dg \(k\)-algebra \(K\) is _Koszul_ if it is formal and \(\operatorname{H}(K)\) is Koszul in the sense of Definition 2.1. _Remark 2.5_.: It is well-known that formality is closely related with the Koszul property. In fact, an augmented \(k\)-algebra \(K\) is Koszul in the sense of Definition 2.1 if and only if the dg \(k\)-algebra \(\operatorname{\mathsf{RHom}}_{K}(k,k)\) is formal; see [11] and also [1, 2, 2] and [1, Theorem 2.9]. This condition is sometimes called _coformality_ of \(K\). From this perspective, a dg algebra \(K\) is Koszul in the sense of Definition 2.4 if and only if it is both formal and coformal. Before introducing the Koszul property for local homomorphisms, we need to recall the notion of the derived fiber. Let \(\varphi\colon Q\to R\) be a local homomorphism of commutative noetherian local rings having maximal ideals \(\mathfrak{m}_{Q}\) and \(\mathfrak{m}_{R}\), respectively, and common residue field \(k\). Let \(A\to R\) be a dg algebra resolution of \(R\) over \(Q\); that is, \(A\) is a dg algebra concentrated in non-negative degrees, such that \(A\) is degreewise a free \(Q\)-module, and \(A\to R\) is a morphism of dg algebras inducing an isomorphism in homology. The _derived fiber_ of \(\varphi\) is the dg \(k\)-algebra \[R\otimes_{Q}^{\mathsf{L}}k:=A\otimes_{Q}k\,.\] Up to a zig-zag of quasi-isomorphisms of dg \(k\)-algebras, \(R\otimes_{Q}^{\mathsf{L}}k\) is independent of the choice of \(A\). For more information see [1]. We remark that one can equally well use a different species of model for the resolution \(A\), such as simplicial algebras or \(\operatorname{A_{\infty}}\)-algebras. We use the latter in Section 7. **Definition 2.6**.: Let \(\varphi\colon Q\to R\) be a finite local homomorphism. We say that \(\varphi\) is _Koszul_ if \(R\otimes_{Q}^{\mathsf{L}}k\) is a Koszul dg \(k\)-algebra; that is, \(R\otimes_{Q}^{\mathsf{L}}k\) is formal and its homology \(\operatorname{Tor}^{Q}(R,k)\) is Koszul in the sense of Definition 2.1.
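To see Definition 2.6 in its simplest instance (a special case of Example 3.2 below, spelled out as an illustration): if \(f\in\mathfrak{m}_{Q}\) is a nonzerodivisor and \(R=Q/(f)\), then the Koszul complex \(A=(0\to Qe\xrightarrow{\;f\;}Q\to 0)\) is a dg algebra resolution of \(R\) over \(Q\), and since \(f\in\mathfrak{m}_{Q}\) the differential vanishes after applying \(-\otimes_{Q}k\), so \[R\otimes_{Q}^{\mathsf{L}}k=A\otimes_{Q}k=\Lambda_{k}(e)\,,\qquad|e|=1\,.\] This exterior algebra with zero differential is visibly formal, and it is Koszul with weight grading equal to homological degree; hence every such quotient map \(Q\to R\) is Koszul.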
According to Definition 2.1, when \(\varphi\) is Koszul, the Tor algebra \(\operatorname{Tor}^{Q}(R,k)\) admits a quadratic presentation, albeit not necessarily generated by elements in homological degree one. Taking \(Q=k\) to be a field, we recover the classical definition: The homomorphism \(k\to R\) is Koszul if and only if \(R\) is a Koszul \(k\)-algebra; cf. Definition 2.1. The examples given in the next section show that Koszul homomorphisms are extremely common. In particular, this class includes all flat local homomorphisms whose fibers are (classically) Koszul; all complete intersection and all Golod homomorphisms; Cohen presentations of most local rings having codepth at most \(3\), and of generic Gorenstein rings. _Remark 2.7_.: When the minimal \(Q\)-free resolution \(A\) of \(R\) admits a dg algebra structure, the derived fiber \(R\otimes^{\mathsf{L}}_{Q}k=A\otimes_{Q}k\) has zero differential, and is automatically formal. This is the case for complete intersection homomorphisms (Example 3.2) and when \(\operatorname{proj\,dim}_{Q}(R)\leqslant 3\) (Theorem 3.10). When \(\varphi\) is Golod the minimal resolution typically does not support a dg algebra structure, cf. Remark 8.3, but nonetheless \(R\otimes^{\mathsf{L}}_{Q}k\) is formal (Example 3.5). The monomial rings presented in Examples 3.6 and 3.24 yield non-Koszul homomorphisms; the corresponding minimal resolution does not admit a dg algebra structure in the former example (see [20, 2.2]), but it does in the latter example. ### Cohen Koszul local rings Recall that every local ring \(R\) admits a Cohen presentation, that is, a surjection \(\varphi\colon Q\to\widehat{R}\) from a regular local ring \(Q\), and one may assume that \(\varphi\) is minimal in the sense that \(\ker(\varphi)\subseteq\mathfrak{m}^{2}_{Q}\). **Definition 2.9**.: A local ring \(R\) is _Cohen Koszul_ if every homomorphism \(\varphi\colon Q\to\widehat{R}\) is Koszul, where \(Q\) is a regular local ring and \(\varphi\) is surjective with \(\ker(\varphi)\subseteq\mathfrak{m}^{2}_{Q}\). In other words, every minimal Cohen presentation of \(R\) is Koszul. _Remark 2.10_.: For equicharacteristic rings the minimal Cohen presentation \(Q\to\widehat{R}\) is essentially unique. In this situation, if \(R\) is already a quotient of a regular local ring, then by Proposition 2.16 there is no need to complete \(R\) to determine whether \(R\) is Cohen Koszul. Also in this equicharacteristic case, \(R\) can be given the structure of an algebra over its residue field \(k\), and then the Koszul complex \(K^{R}\) on the maximal ideal of \(R\) is a dg \(k\)-algebra as well. It is well-known that \[K^{R}\simeq\widehat{R}\otimes^{\mathsf{L}}_{Q}k\] as dg \(k\)-algebras, see for example [1, Theorem 8.1]. Therefore in this situation we can say that \(R\) is Cohen Koszul exactly when \(K^{R}\) is a Koszul dg \(k\)-algebra. In the mixed characteristic case, the fact that \(K^{R}\) is not a dg \(k\)-algebra introduces subtleties. The distinction between formality of dg \(k\)-algebras and formality of dg rings means that it is not clear whether Definition 2.9 is independent of the choice of Cohen presentation. In all of our examples however, the choice will be irrelevant. Complete intersection rings are Cohen Koszul by Example 3.2, Golod rings are Cohen Koszul by Example 3.5, and most rings of codepth 3 are Cohen Koszul according to Theorem 3.10. Cohen Koszul local rings have rational Poincare series.
Recall that the Poincare series of a finitely generated \(R\)-module \(M\) is \[\operatorname{P}^{R}_{M}(t)=\sum_{n\in\mathbb{Z}}\operatorname{rank}_{k}( \operatorname{Tor}^{R}_{n}(M,k))t^{n}\,.\] **Proposition 2.11**.: _Let \(R\) be a Cohen Koszul local ring with embedding dimension \(e\) and residue field \(k\). Fix a weight grading making the Koszul homology \(\operatorname{H}(K^{R})\) a Koszul \(k\)-algebra. Then_ \[\operatorname{P}^{R}_{k}(t)=\frac{(1+t)^{e}}{\sum_{i,w}(-1)^{w}\operatorname{ rank}_{k}(\operatorname{H}_{i}(K^{R})_{(w)})t^{i+w}}\,.\] _Remark 2.12_.: We note that since \(\operatorname{H}_{*}(K^{R})\) is generated in weight \(1\), the rank of \(\operatorname{H}_{i}(K^{R})_{(w)}\) is equal to the rank of \([\operatorname{H}_{>0}(K^{R})^{w}/\operatorname{H}_{>0}(K^{R})^{w+1}]_{i}\). Therefore it is not necessary to _choose_ a weight grading to calculate the Poincare series above. Proof.: Let \(T=\operatorname{H}(K^{R})\), bigraded by homological degree and by weight, and write \[\operatorname{H}_{T}(s,t):=\sum_{i,w}\operatorname{rank}_{k}(T_{i,(w)})t^{i}s^{w }\quad\text{and}\quad\operatorname{P}_{k}^{T}(s,t):=\sum_{i,w}\operatorname{ rank}_{k}(\operatorname{Tor}_{w}^{T}(k,k)_{i})t^{i}s^{w}\,.\] In \(\operatorname{Tor}_{w}^{T}(k,k)_{i}\), the index \(w\) is the usual homological grading of \(\operatorname{Tor}\), and \(i\) is the extra grading that comes from the homological grading on \(T\). Since \(T\) is Koszul with respect to its weight grading, the usual computation of the Poincare series of a Koszul algebra shows that \(\operatorname{H}_{T}(s,t)\operatorname{P}_{k}^{T}(-s,t)=1\); see [10]. Let \(Q\to\widehat{R}\) be a minimal Cohen presentation. Formality of \(\widehat{R}\otimes^{\operatorname{L}}_{Q}k\) implies that the spectral sequence [1, 6.2.1] is degenerate, and so from [1, 6.2 (b')] we obtain the first equality below, which yields the desired series \[\operatorname{P}_{k}^{R}(t)=(1+t)^{e}\operatorname{P}_{k}^{T}(t,t)=\frac{(1+t )^{e}}{\operatorname{H}_{T}(-t,t)}\,.\qed\] Proposition 2.11 recovers the known Poincare series for complete intersection rings, Golod rings, and almost Golod Gorenstein rings; see Section 3 for the latter. There are a number of results that apply to certain subsets of Cohen Koszul rings, motivating the study of whether such a property holds for these rings in general. We highlight a couple of instances below. _Remark 2.13_.: Recently, Brown-Dao-Sridhar have shown that over complete intersection and Golod rings, the ideals of minors of differentials in minimal free resolutions are eventually two-periodic [1]. It would be worthwhile, and seems plausible (in light of the structural result in Theorem 7.7), to determine whether (strictly) Cohen Koszul rings satisfy this property more generally. _Remark 2.14_.: Lower bounds on the Loewy length of the homology module of perfect complexes are of interest in both algebra and topology; see, for example, [1, 1, 2, 10]. For Cohen Koszul rings one can establish such bounds. Let \(R\) be a local ring with residue field \(k\), and let \(k[\chi_{1},\dots,\chi_{n}]\) denote a maximal polynomial subalgebra of the graded \(k\)-algebra \(\operatorname{Ext}_{R}(k,k)\), generated by elements in even degree. For example, if \(R\) is complete intersection, then \(n\) is the codimension of \(R\).
If \(R\) is Cohen Koszul, then for any finite free \(R\)-complex \(F\) with \(\operatorname{H}(F)\neq 0\) one has the inequality \[\sum_{i\in\mathbb{Z}}\ell\ell_{R}\operatorname{H}_{i}(F)\geqslant n+1\,.\] One can use similar ideas to those in [1], as well as [1], to establish this bound; here, however, formality of the derived fiber of a Cohen presentation of \(R\) is a main ingredient. Moreover, when \(R\) is complete intersection it agrees with the common bounds from [1, 1]. ### Properties of Koszul homomorphisms Before moving on to examples we establish some basic change of rings properties for Koszul homomorphisms. First, we note that being Koszul is invariant under certain flat base changes, and in particular under completion. **Proposition 2.16**.: _Given a finite local homomorphism \(\varphi\colon Q\to R\) and a flat local homomorphism \(\psi\colon Q\to Q^{\prime}\) inducing an isomorphism on residue fields, \(\varphi\) is Koszul if and only if \(\varphi\otimes Q^{\prime}\colon Q^{\prime}\to R\otimes_{Q}Q^{\prime}\) is Koszul._ Proof.: The natural map \(R\otimes_{Q}^{\mathsf{L}}k\to(R\otimes_{Q}Q^{\prime})\otimes_{Q^{\prime}}^{ \mathsf{L}}k\) is a quasi-isomorphism of dg \(k\)-algebras. Indeed, if \(A\) is a dg algebra resolution of \(R\) over \(Q\), then \(A\otimes_{Q}Q^{\prime}\) is a dg algebra resolution of \(R\otimes_{Q}Q^{\prime}\) over \(Q^{\prime}\), and \((A\otimes_{Q}Q^{\prime})\otimes_{Q^{\prime}}k\cong A\otimes_{Q}k\). The next proposition will often be useful in reducing the dimension of \(Q\) or \(R\). **Proposition 2.17**.: _Let \(\varphi\colon Q\to R\) be a finite local homomorphism, and let \(x\in\mathfrak{m}_{Q}\) and \(y\in\mathfrak{m}_{R}\)._ 1. _If_ \(x\) _is regular on_ \(Q\) _and_ \(y\) _is regular on_ \(R\)_, with_ \(\varphi(x)=y\)_, then_ \(\varphi\) _is Koszul if and only if the map of quotients_ \(Q/(x)\to R/(y)\) _is Koszul._ 2. _If_ \(y\) _is regular on_ \(R\) _then_ \(\varphi\) _is Koszul if and only if the composition_ \(Q\to R/(y)\) _is Koszul._ 3. _If_ \(\varphi(x)=0\) _and_ \(x\) _generates a free_ \(R\)_-module summand of_ \(\ker(\varphi)/\ker(\varphi)^{2}\)_, then_ \(\varphi\) _is Koszul if and only if the induced map_ \(Q/(x)\to R\) _is Koszul._ Proof.: For part (1), let \(A\xrightarrow{\simeq}R\) be a dg algebra resolution of \(R\) over \(Q\). The assumptions on \(x\) and \(y\) imply that \(A\otimes_{Q}Q/(x)\) is a dg algebra resolution of \(R/(y)\) over \(Q/(x)\). In particular there are quasi-isomorphisms \[R\otimes_{Q}^{\mathsf{L}}k\simeq A\otimes_{Q}k\cong(A\otimes_{Q}Q/(x))\otimes _{Q/(x)}k\simeq R/(y)\otimes_{Q/(x)}^{\mathsf{L}}k\] of dg \(k\)-algebras, and the claim follows. For part (2), if \(A\) is a dg algebra resolution of \(R\) over \(Q\), as above, then there is an element \(\tilde{y}\in\mathfrak{m}_{Q}A_{0}\) mapping to \(y\in R\). Taking an exterior variable \(e\) of degree \(1\), and setting \(\partial(e)=\tilde{y}\), the extension \(A\langle e\rangle\) is then a dg algebra resolution of \(R/(y)\) over \(Q\); see [1, 6.1]. We see that \[R/(y)\otimes_{Q}^{\mathsf{L}}k\simeq A\langle e\rangle\otimes_{Q}k\cong(A \otimes_{Q}k)\otimes_{k}\Lambda_{k}(e)\,,\] where \(\Lambda_{k}(e)\) is the exterior algebra over \(k\) on the degree \(1\) variable \(e\). Hence it remains to note that the tensor product of dg \(k\)-algebras is formal if and only if both of its factors are formal, and Koszul if and only if both of its factors are Koszul; see [10, Theorem 2] for the latter.
For part (3) we invoke [10, Proposition 2.1] to obtain a dg algebra resolution \(A\) of \(R\) over \(Q\) and an isomorphism of dg \(k\)-algebras \(A\otimes_{Q}k\cong W\otimes_{k}\Lambda_{k}(e)\), where \(W\) is a dg subalgebra of \(A\otimes_{Q}k\) and \(\Lambda_{k}(e)\) is the exterior algebra on a generator of degree \(1\) (this result is based upon Andre's theory of special cycles [1]). Moreover, the proof in [10] identifies the inclusion \(\Lambda_{k}(e)\to A\otimes_{Q}k\) with the natural map \(Q/(x)\otimes_{Q}^{\mathsf{L}}k\to R\otimes_{Q}^{\mathsf{L}}k\). It follows that \[R\otimes_{Q/(x)}^{\mathsf{L}}k\simeq(R\otimes_{Q}^{\mathsf{L}}k)\otimes_{Q/(x) \otimes_{Q}^{\mathsf{L}}k}^{\mathsf{L}}k\simeq(W\otimes_{k}\Lambda_{k}(e)) \otimes_{\Lambda_{k}(e)}k\cong W\,.\] As in part (2) we may deduce that \(R\otimes_{Q/(x)}^{\mathsf{L}}k\) is Koszul if and only if \(R\otimes_{Q}^{\mathsf{L}}k\simeq(R\otimes_{Q/(x)}^{\mathsf{L}}k)\otimes_{k} \Lambda_{k}(e)\) is Koszul. _Remark 2.18_.: Many other Koszul-like properties have appeared in the literature. Within local commutative algebra, Herzog, Reiner, and Welker introduced a notion of Koszul local ring in [11], and the same condition is investigated in [10]. The local ring \(k\llbracket x,y\rrbracket/(x^{2}-y^{3})\) is Koszul in the sense of these references, but it is not a Koszul \(k\)-algebra according to Definition 2.1, since it does not admit a quadratic presentation. However, the same ring \(k\llbracket x,y\rrbracket/(x^{2}-y^{3})\) is Koszul as a \(k\llbracket y\rrbracket\)-algebra by Example 3.1, and it is Koszul as a \(k\llbracket x,y\rrbracket\)-algebra by Example 3.2--in other words, it is Cohen Koszul. Myers studied a Koszulity condition in [13] that is closely related to ours. That work begins with a standard graded \(k\)-algebra \(R\), and its Koszul homology \(\operatorname{H}(K^{R})\) is said to be _strand Koszul_ if it is Koszul with respect to the induced weight grading by strands: \(\operatorname{H}(K^{R})_{(w)}=\bigoplus_{i+j=w}\operatorname{H}_{i}(K^{R})_{j}\) (the total of the homological and internal gradings). According to [13, Theorem B] the Koszul complex \(K^{R}\) is automatically quasi-formal in this situation. In contrast, \(R\) is Cohen Koszul if \(K^{R}\) is formal and \(\operatorname{H}(K^{R})\) is Koszul with respect to _any_ weight grading. In the next section we see that there are many natural examples for which \(\operatorname{H}(K^{R})\) is Koszul with respect to a different grading than the strand grading. The authors of [1] have also investigated how the Koszul condition on a local ring affects the algebra structure of the Koszul homology \(\operatorname{H}(K^{R})\). While this is connected to the present work, we note that there are many examples of local \(k\)-algebras that are Cohen Koszul but not Koszul as \(k\)-algebras. _Remark 2.19_.: We end this section with remarks on the generality of Definition 2.6. We have chosen to focus on the setting of finite \(Q\)-algebras because this is necessary to meaningfully talk about transferring homological information from \(Q\) to \(R\). However, the notion of Koszul homomorphism can be extended fruitfully to all local homomorphisms, with some additional technicalities. 
In particular, to accommodate non-finite algebras, Definition 2.1 should be adapted to require an isomorphism on the completion at the augmentation ideal \(\widehat{K}\cong\prod_{w\geqslant 0}K_{(w)}\), and that \(k\) admit a linear resolution over the corresponding graded ring \(\bigoplus_{w\geqslant 0}K_{(w)}\). For \(Q\) non-local, one can say a \(Q\)-algebra \(R\) is _Koszul at a prime \(\mathfrak{p}\in\operatorname{Spec}(Q)\)_ if \(\kappa(\mathfrak{p})\otimes_{Q}^{\mathsf{L}}R\) is a Koszul \(\operatorname{dg}\) algebra over \(\kappa(\mathfrak{p})=Q_{\mathfrak{p}}/\mathfrak{p}Q_{\mathfrak{p}}\). From this perspective it is natural to replace \(R\) with a sheaf of algebras on some scheme; examples related to this have appeared in the literature, such as the sheaf of Clifford algebras constructed by Buchweitz in [1]. In this work we focus on applications to commutative algebra. The natural generalization to non-commutative algebras is interesting as well, using exactly the same definitions. ## 3. Examples of Koszul homomorphisms This section contains examples (and counterexamples) demonstrating the ubiquity of the Koszul condition. The first class of examples generalizes the class of Koszul algebras over a field in a straightforward manner. **Example 3.1** (Flat homomorphisms with Koszul fiber).: A flat finite local homomorphism \(\varphi\colon Q\to R\) is Koszul if and only if its fiber \(R\otimes_{Q}k\) is a Koszul \(k\)-algebra. Such examples are readily constructed by deforming presentations of known Koszul algebras. For example, the \(k\)-algebra \(k[x]/(x^{2})\) is Koszul, and so the homomorphism \[Q\to Q[x]/(x^{2}-ax-b)\] is Koszul for any \(a,b\in\mathfrak{m}_{Q}\). We will see that the non-flat case is significantly more interesting, and crucially _there is no need for the map \(\varphi\colon Q\to R\) to be quadratic in any sense_, as the following examples demonstrate. Nonetheless, later we return to the idea that Koszul homomorphisms look like deformations of classical Koszul presentations; cf. Theorem 7.1. **Example 3.2** (Complete intersection homomorphisms).: Let \(\varphi\colon Q\to R\) be a surjective, local, complete intersection homomorphism of codimension \(c\). That is, \(\ker(\varphi)\) is generated by a \(Q\)-regular sequence \(\boldsymbol{f}=f_{1},\ldots,f_{c}\). In this case, the Koszul complex \(A=\operatorname{Kos}^{Q}(\boldsymbol{f})\) is a dg algebra resolution of \(R\) over \(Q\). Then \[R\otimes_{Q}^{\mathsf{L}}k=A\otimes_{Q}k=\Lambda_{k}(\Sigma A_{1}\otimes_{Q}k)\] is the exterior algebra on a \(k\)-space of rank \(c\) in homological degree one, with zero differential. Thus the derived fiber of \(\varphi\) is clearly formal, and it is well-known to be a Koszul \(k\)-algebra with its weight and homological gradings coinciding; cf. [13, Examples 2.2(2)]. In particular, a local complete intersection ring is Cohen Koszul. _Remark 3.3_.: For a Cohen Koszul ring \(R\), the homological and weight grading on \(\operatorname{H}(K^{R})\) coincide if and only if \(R\) is complete intersection. Indeed, the reverse implication was indicated in Example 3.2, and the forward implication follows from a Theorem of Wiebe [10]; see also [1, Theorem 2.3.15]. **Example 3.4** (Trivial extension algebras).: Given a graded ring \(B\) and a graded \(B\)-module \(U\), let \(B\ltimes U\) denote the trivial extension of \(B\) by \(U\).
This is the graded module \(B\oplus U\) with multiplication \[(b,u)\cdot(b^{\prime},u^{\prime})=(bb^{\prime},bu^{\prime}+b^{\prime}u)\,.\] The main case of interest is that \(B\) is augmented to \(k\), and \(U\) is a graded \(k\)-space thought of as a trivial \(B\)-module. If \(B\) is also a \(k\)-algebra, then \(B\) is Koszul if and only if \(B\ltimes U\) is, by [10]. In particular, for any \(k\)-space \(U\) the local \(k\)-algebra \(k\ltimes U\) is Koszul. It is also Cohen Koszul, according to Example 3.5. For a surjective local map \(\varphi\colon Q\to R\) and \(R\)-module \(M\) there is the following coefficientwise inequality of Poincare series: \[\operatorname{P}_{M}^{R}(t)\preccurlyeq\frac{\operatorname{P}_{M}^{Q}(t)}{1-t (\operatorname{P}_{R}^{Q}(t)-1)}\,. \tag{3.4.1}\] This fact is due to Serre; see, for example, [11, Proposition 3.3.2]. **Example 3.5** (Golod homomorphisms).: Let \(\varphi\colon Q\to R\) be a surjective, local, Golod homomorphism. That is, the Serre bound (3.4.1) is an equality for the residue field \(M=k\). For other, equivalent definitions, see [11, Theorem 2.3]. In particular, a surjective, local homomorphism is Golod if and only if there is a quasi-isomorphism of dg algebras \[R\otimes_{Q}^{\mathsf{L}}k\simeq k\ltimes U\,,\] where \(U\) is a positively graded vector space over \(k\). The trivial algebra \(k\ltimes U\) is Koszul by [13, Proposition 3.4.9], see also Example 3.4, and hence \(\varphi\) is Koszul. The examples above provide many instances of Cohen Koszul \(k\)-algebras appearing in local commutative algebra, and _some_ of these examples are Koszul in the classical sense: for example a local complete intersection \(k\)-algebra is Koszul if and only if it is quadratic; see [14, 3.1]. We give a small example of a local \(k\)-algebra that is neither Koszul nor Cohen Koszul. **Example 3.6**.: Suppose \(R=k\llbracket a,b,c\rrbracket/(a^{2},bc,ac+b^{2})\). A computation shows \(\Lambda_{k}(e_{1},e_{2},e_{3})/(e_{1}e_{2}e_{3})\) is an algebra retract of \(\operatorname{H}(K^{R})\); one could, for example, use Macaulay2 [GS] for this calculation. In particular, \(\operatorname{H}(K^{R})\) has a relation of weight \(3\) in any weight grading, and so \(\operatorname{H}(K^{R})\) cannot be a Koszul \(k\)-algebra. Thus, \(R\) is not Cohen Koszul. Moreover, \(R\) is the completion of a quadratic algebra that is not Koszul in the classical sense. One can see this by computing the third differential in the minimal free resolution of \(k\) over \(R\); alternatively, see [1]. In Theorem 3.10, we see that this is part of an exceptional class of non-Cohen Koszul local rings among rings having embedding dimension at most three. **Example 3.7** (Short Gorenstein local rings).: A local ring \(R\) with maximal ideal \(\mathfrak{m}_{R}\) is called _short Gorenstein_ if it is Gorenstein and \(\mathfrak{m}_{R}^{3}=0\). Equivalently, these are the local rings having Hilbert series \(\operatorname{H}_{R}(t)=1+nt+t^{2}\) for some \(n\). This is an important class of local rings that occurs frequently in what follows. If \(R\) is also a \(k\)-algebra, then \(R\) is Koszul by [10] or [11] (this follows as well from the slightly earlier computations in [1]). Short Gorenstein rings are also Cohen Koszul by Example 3.15.
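As a concrete numerical illustration of the Golod equality in (3.4.1) (our sketch, not taken from the text): for \(Q=k[x,y]\) and \(R=k[x,y]/(x,y)^{2}\) one has \(\operatorname{P}_{R}^{Q}(t)=1+3t+2t^{2}\) and \(\operatorname{P}_{k}^{Q}(t)=(1+t)^{2}\), so the right-hand side of (3.4.1) for \(M=k\) simplifies to \(1/(1-2t)\); since \(\mathfrak{m}_{R}^{2}=0\), the Betti numbers of \(k\) over \(R\) are indeed \(2^{n}\), so equality holds and \(R\) is Golod. The expansion is easily checked:

```python
# Illustration only: Serre's bound (3.4.1) for M = k over R = k[x, y]/(x, y)^2
# with Q = k[x, y]; equality gives P^R_k(t) = 1/(1 - 2t).
from sympy import series, symbols

t = symbols('t')
P_Q_k = (1 + t)**2           # Poincare series of k over the regular ring Q
P_Q_R = 1 + 3*t + 2*t**2     # ranks 1, 3, 2 of the minimal Q-free resolution of R
print(series(P_Q_k / (1 - t * (P_Q_R - 1)), t, 0, 8))   # 1 + 2t + 4t^2 + ...
```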
### Rings of small codepth Recall that for a local ring \(R\) with maximal ideal \(\mathfrak{m}_{R}\) and residue field \(k\), its codepth is \[\operatorname{codepth}(R):=\operatorname{rank}_{k}(\mathfrak{m}_{R}/ \mathfrak{m}_{R}^{2})-\operatorname{depth}(R)\,.\] This value is a measure of the singularity of \(R\) in the sense that \(\operatorname{codepth}(R)=0\) if and only if \(R\) is regular. The next result illustrates that a local ring of small codepth is almost always Cohen Koszul. First we remind the reader of the structure theorem on Koszul homology for rings having codepth three. **3.9**.: Assume \(R\) has codepth three and fix a minimal Cohen presentation \(\varphi\colon Q\to\widehat{R}\). The minimal \(Q\)-free resolution \(A\) of \(\widehat{R}\) supports a dg algebra structure; see, for example, [1]. Hence, \(\widehat{R}\otimes_{Q}^{\mathsf{L}}k\) is formal and the algebra structure of its homology \(T=\operatorname{Tor}^{Q}(\widehat{R},k)=\operatorname{H}(K^{R})\) has been classified as follows. Fix bases \(\{e_{1},\ldots,e_{\ell}\}\), \(\{f_{1},\ldots,f_{m}\}\) and \(\{g_{1},\ldots,g_{n}\}\) for \(T_{1}\), \(T_{2}\) and \(T_{3}\), respectively. By [1], there are non-negative integer parameters \(p,q,r\), satisfying \[p\leqslant\ell-1,\quad q\leqslant m-p,\quad r\leqslant\min\{\ell,m\}\,,\] such that \(T\) is one of the graded-commutative algebras determined below, where products between the basis elements not listed are zero: * **CI**: \(e_{1}e_{2}=f_{3}\), \(e_{1}e_{3}=f_{2}\), \(e_{2}e_{3}=f_{1}\), \(e_{i}f_{i}=g_{1}\) for \(i=1,2,3\). * **TE**: \(e_{1}e_{2}=f_{3}\), \(e_{1}e_{3}=f_{2}\), \(e_{2}e_{3}=f_{1}\). * **B**: \(e_{1}e_{2}=f_{3}\), \(e_{1}f_{1}=g_{1}\), \(e_{2}f_{2}=g_{1}\). * **G**\((r)\): \(e_{i}f_{i}=g_{1}\) for \(i=1,\ldots,r\), where \(r\geqslant 2\). * **H**\((p,q)\): \(e_{p+1}e_{i}=f_{i}\) for \(i=1,\ldots,p\), and \(e_{p+1}f_{p+i}=g_{i}\) for \(i=1,\ldots,q\). In each case, let \(T^{\prime}\) denote the corresponding subalgebra on the basis elements appearing in the multiplication table above. If \(U\) is the \(k\)-space spanned by the basis elements of \(T\) not recorded in the same multiplication table, then note that there is an isomorphism of graded \(k\)-algebras \[T\cong T^{\prime}\ltimes U\,. \tag{3.9.1}\] Finally, as a matter of terminology, we say a local ring \(R\) belongs to one of these classes if \(T=\operatorname{Tor}^{Q}(\widehat{R},k)\) has the corresponding algebra structure. **Theorem 3.10**.: _A local ring of codepth two or less is Cohen Koszul. A local ring of codepth three is Cohen Koszul if and only if it belongs to_ **CI**_,_ **B**_,_ **G**\((r)\)_, or_ **H**\((p,q)\)_._ Proof.: Let \(R\) be a local ring of codepth \(c\) with residue field \(k\). Fix a minimal Cohen presentation \(\varphi\colon Q\to\widehat{R}\) and set \(T=\operatorname{Tor}^{Q}(\widehat{R},k)\). If \(c\leqslant 2\), then \(R\) must be complete intersection or Golod; that is, \(\varphi\) is a complete intersection or Golod homomorphism. This follows from the Hilbert-Burch theorem (see [10], as well as [1, Theorem 1.4.17]) combined with a result of [13, Theorem 2.3]; see also [11, Proposition 5.3.4]. Therefore by Examples 3.2 and 3.5, \(R\) is Cohen Koszul in either case. Now assume \(c=3\). The simplest case is when \(R\) belongs to \(\mathbf{CI}\), since in this case \(R\) is complete intersection and so \(R\) is Cohen Koszul; cf. Example 3.2. For the remainder of the proof we adopt the notation from 3.9 and analyze the graded algebra structure of \(T\).
By Example 3.4 and (3.9.1), \(T\) is Koszul if and only if \(T^{\prime}\) is, and so we replace \(T\) by \(T^{\prime}\) in what follows. If \(T\) is \(\mathbf{H}(p,q)\), then it is the tensor product of a trivial extension algebra on \(e_{1},\dots,e_{p},f_{p+1},\dots,f_{p+q}\) and the exterior algebra on \(e_{p+1}\), where each of these has weight \(1\). Hence, \(T\) is a tensor product of Koszul \(k\)-algebras and so it is Koszul. If \(T\) is \(\mathbf{G}(r)\), we give \(\{e_{i}\}\) and \(\{f_{i}\}\) weight one and \(g_{1}\) weight two. Then \(T\) is a short Gorenstein \(k\)-algebra and hence Koszul; see Example 3.7. If \(T\) is \(\mathbf{B}\), then we give \(T\) weight grading \[T_{(w)}=\begin{cases}k&w=0\\ ke_{1}\oplus ke_{2}\oplus kf_{1}\oplus kf_{2}&w=1\\ kf_{3}\oplus kg_{1}&w=2\\ 0&w\geqslant 3\,.\end{cases}\] As an algebra \(T\) is the quotient of the exterior algebra \[T\cong\Lambda_{k}(e_{1},e_{2},f_{1},f_{2})/(e_{1}f_{2},e_{2}f_{1},f_{1}f_{2}, e_{1}f_{1}-e_{2}f_{2})\,.\] The defining ideal of the graded \(k\)-algebra \(T\) has a quadratic Grobner basis, and so \(T\) is Koszul by [14]. Finally, if \(T\) is type \(\mathbf{TE}\), then \(T\) is not quadratic with respect to any weight grading. Indeed, the products in 3.9 force \(e_{1},e_{2},e_{3}\) to all have weight one, as well as the minimal relation \(e_{1}e_{2}e_{3}=0\). Hence, \(T\) is not Koszul. _Remark 3.11_.: Theorem 3.10 deals with the 'absolute' case, that is, with the question of when a local ring is Cohen Koszul. However, given a surjective map of local rings \(\varphi\colon Q\to R\), the map \(\varphi\) is always Koszul when \(\operatorname{proj}\dim_{Q}(R)\leqslant 2\). Indeed, in this case \(\varphi\) is either a complete intersection homomorphism, or a Golod homomorphism. For the case \(\operatorname{proj}\dim_{Q}(R)=3\), the structure theorem on \(T=\operatorname{Tor}^{Q}(R,k)\) discussed in 3.9 can be applied assuming that \(\ker(\varphi)\) is a perfect ideal. In this case \(\varphi\) is Koszul except in the case that \(T\) belongs to \(\mathbf{TE}\). A surjective local homomorphism \(\varphi\colon Q\to R\) of finite projective dimension is _Gorenstein of projective dimension \(d\)_ if \[\operatorname{Ext}^{i}_{Q}(R,Q)=\begin{cases}R&i=d\\ 0&i\neq d\,.\end{cases} \tag{3.11.1}\] For example, Gorenstein rings of codimension \(d\) are exactly those whose minimal Cohen presentations are Gorenstein of projective dimension \(d\). If \(d=3\) then \(\operatorname{Tor}^{Q}(R,k)\) belongs to \(\mathbf{G}(r)\) and the dg algebra structure on the minimal resolution of \(R\) over \(Q\) can be described explicitly; one can hence verify directly, as is done below, that such maps are Koszul. **Example 3.12** (Gorenstein homomorphisms of projective dimension \(3\)).: Assume that \(\varphi\) is Gorenstein of projective dimension \(3\). Buchsbaum and Eisenbud constructed the minimal free resolution of \(R\) over \(Q\) in [1, Theorem 2.1 & 4.1]: \[A=0\to Q\to Q^{r}\xrightarrow{\psi}Q^{r}\to Q\to 0\] where \(r\geqslant 3\) is odd and the first and third differential of \(A\) depend on Pfaffians of submatrices of the alternating matrix \(\psi\). Furthermore \(A\) is a graded-commutative dg algebra with the following multiplication: We fix bases \(\{e_{i}\}_{i=1}^{r}\), \(\{f_{i}\}_{i=1}^{r}\), and \(\{g\}\) for \(A_{1}\), \(A_{2}\), and \(A_{3}\), respectively.
The multiplication is determined by \[e_{i}e_{j}:=\sum_{\ell=1}^{r}(\pm 1)\operatorname{pf}(\psi_{ij\ell})f_{\ell} \quad\text{for $i<j$}\,,\quad e_{i}f_{j}:=\delta_{ij}g\quad\text{and}\quad f_{i}f_{j}=0\] where \(\psi_{ij\ell}\) is the submatrix of \(\psi\) obtained by deleting the \(i\)th, \(j\)th and \(\ell\)th row and column, and \(\delta_{ij}\) is the Kronecker delta function. The exact description is not important for the sequel, see [1, Example 2.1.3] for details. When \(r=3\), it follows that \(A\) is a Koszul complex on three elements and so \(\varphi\) is a surjective complete intersection map, hence \(\varphi\) is Koszul by Example 3.2. When \(r\geqslant 5\), we have that \(\operatorname{pf}(\psi_{ij\ell})\in\mathfrak{m}_{Q}\) for any \(i\), \(j\) and \(\ell\). Hence the only non-zero products in the graded algebra \(A\otimes_{Q}k\) are \[e_{i}f_{i}=f_{i}e_{i}=g\quad\text{for $1\leqslant i\leqslant r$}\,.\] Giving \(A\otimes_{Q}k\) the weight grading \[(A\otimes_{Q}k)_{(w)}=\begin{cases}A_{0}\otimes_{Q}k&w=0\\ (A_{1}\otimes_{Q}k)\oplus(A_{2}\otimes_{Q}k)&w=1\\ A_{3}\otimes_{Q}k&w=2\\ 0&\text{else}\,,\end{cases}\] it belongs to \(\mathbf{G}(r)\) in 3.9. The fact that \(A\otimes_{Q}k\) is a Koszul \(k\)-algebra was established in the proof of Theorem 3.10. Thus \(\varphi\) is a Koszul homomorphism. ### Almost Golod Gorenstein rings We discuss here a large class of local Gorenstein rings displaying interesting homological behavior, which has been studied before in [14, Section 6]. **Definition 3.14**.: We say that an artinian local ring \(R\) is _almost Golod_ if the socle quotient \(R/\operatorname{soc}(R)\) is Golod. A general local ring is _almost Golod_ if it is Cohen-Macaulay and \(R/(\boldsymbol{x})\) is an almost Golod artinian ring, where \(\boldsymbol{x}\) is a maximal regular sequence that is part of a minimal generating set for \(\mathfrak{m}_{R}\). **Example 3.15** (Almost Golod Gorenstein rings).: Let \(R\) be an almost Golod local ring that is also Gorenstein of codepth \(d\). Fix a minimal Cohen presentation \(\varphi\colon Q\to\widehat{R}\) and set \(T=\operatorname{Tor}^{Q}(\widehat{R},k)\). Since \(Q\) is regular and \(R\) is Gorenstein, \(T\) is a Poincare duality algebra by [1]. That is to say, for each \(0\leqslant i\leqslant d\) the multiplication maps \[T_{i}\times T_{d-i}\to T_{d}\cong k\] are perfect pairings. Furthermore, by [14, Theorem 1] the quotient \(T/T_{d}\) is a subalgebra of the trivial extension algebra \(\operatorname{Tor}^{Q}(R/\operatorname{soc}(R),k)\), and hence is itself a trivial extension algebra. It follows that \(T\) is a short Gorenstein \(k\)-algebra. Moreover, endowing \(T\) with the following weight grading \[T_{(0)}=T_{0},\quad T_{(1)}=\bigoplus_{i=1}^{d-1}T_{i},\;\;\text{and}\quad T_{ (2)}=T_{d}\] makes \(T\) a Koszul \(k\)-algebra, with the multiplication of \(T\) being equivalent to a perfect pairing on \(T_{(1)}\); see the proof of Theorem 3.10. We prove that these rings are Cohen Koszul under the assumption that \(R\) contains a field of characteristic zero and \(d\) is odd, by giving a characterization analogous to Avramov's characterization of Golod rings [1]. We do not know whether the assumption on the characteristic or on \(d\) is necessary. **Theorem 3.16**.: _Let \(R\) be a local ring with a minimal Cohen presentation \(Q\to\widehat{R}\).
If there is a quasi-isomorphism of dg \(k\)-algebras \(\widehat{R}\otimes_{Q}^{\mathbb{L}}k\simeq T\), where \(T\) is a short Gorenstein graded \(k\)-algebra, then \(R\) is almost Golod Gorenstein. Assuming that \(R\) contains a field of characteristic zero, and that \(\operatorname{codepth}(R)\) is odd, the converse holds as well. In particular, almost Golod Gorenstein rings (of characterstic zero and odd codepth) are Cohen Koszul._ Proof.: If \(R\) is almost Golod Gorenstein, we have already seen that \(T=\operatorname{Tor}^{Q}(\widehat{R},k)\) is a short Gorenstein algebra, and in particular Koszul. The proof that \(\widehat{R}\otimes_{Q}^{\mathbb{L}}k\) is formal under the stated assumptions will be given in Theorem 5.7 and Lemma 8.17. Conversely assume that \(\widehat{R}\otimes_{Q}^{\mathbb{L}}k\) is quasi-isomorphic to a short Gorenstein algebra \(T\), and write \(e\) for the embedding dimension of \(R\). By Proposition 2.11, \[\operatorname{P}_{k}^{R}(t)=\frac{(1+t)^{e}}{1-(\operatorname{H}_{T}(t)-1-t^{ e})t+t^{e+2}}\,,\] and hence as a consequence of [14, Proposition 6.2], \(R\) is almost Golod. _Remark 3.17_.: The prototypical example of an almost Golod Gorenstein ring is a short Gorenstein local ring \(R\). In this case \(R/\operatorname{soc}(R)=R/\mathfrak{m}_{R}^{2}\) is Golod by [10]. Among the complete intersection local rings, the almost Golod Gorenstein rings are exactly those having codimension two or less; see Theorem 3.16. If \(R\) is a Gorenstein local ring of codimension \(3\) that is not complete intersection, then \(R\) is almost Golod Gorenstein by Example 3.12 and Theorem 3.16. By [14, Proposition 6.3] every Gorenstein compressed local ring of socle degree at least \(4\) is almost Golod Gorenstein. Moreover for fixed emdedding dimension and socle degree, the generic Gorenstein local \(k\)-algebra is compressed by [13, Theorem I]. Hence the almost Golod Gorenstein condition is extremely common. If \(Q\) is a standard graded polynomial algebra, a homogeneous quotient \(R=Q/I\) is said to have an _almost linear_ resolution over \(Q\) if the ideal \(I\) is generated by forms of degree \(d\), and for all \(0<i<\operatorname{proj}\dim_{Q}(R)\) we have \(\operatorname{Tor}_{i}^{Q}(R,k)_{j}=0\) unless \(j-i=d-1\)[1]. This yields another large class of examples: One can show that graded Gorenstein rings with almost linear resolutions are almost Golod Gorenstein using techniques from later in the paper (the \(\operatorname{A}_{\infty}\)-structure on \(\operatorname{Tor}^{Q}(R,k)\) will satisfy Theorem 7.1 for degree reasons, and then Theorem 3.16 can be applied; we leave these details to the interested reader). _Remark 3.18_.: In rational homotopy theory, Golod rings correspond to spaces that are (rationally) homotopy equivalent to a wedge of spheres, while Gorenstein rings are analogous to manifolds, or more generally Poincare duality spaces; see the looking glass [1] for more information. To be more precise, if \(M\) is a simply connected manifold and the punctured space \(M\smallsetminus\{\operatorname{pt}\}\) is rationally homotopy equivalent to a wedge of spheres, then the cohomology ring \(\operatorname{H}^{*}(M;\mathbb{Q})\) is an almost Golod Gorenstein ring. In [16], Stasheff proved such spaces are formal, and therefore they are Koszul in the sense of Berglund [1]. A well studied class of manifolds satisfying this property are the _highly connected manifolds_, that is, those \(M\) with \(\operatorname{H}^{i}(M;\mathbb{Q})=0\) when \(0<i<\lfloor\dim(M)/2\rfloor\). 
Since Gorenstein rings that are not regular or hypersurfaces are never Golod, the Serre bound (3.4.1) must be strict for such rings. However, a tighter bound can be established for Gorenstein local rings, as we show now. The case of equality below is equivalent (when \(d=0\)) to the formula for \(\operatorname{P}^{R}_{k}(t)\) given in [14, Proposition 6.2], and our proof is essentially equivalent to that of _loc. cit_. **Proposition 3.19**.: _Let \(R\) be a local ring having dimension \(d\) and embedding dimension \(e\), with residue field \(k\) and Koszul complex \(K^{R}\). If \(R\) is Gorenstein but not regular or a hypersurface, then there is a coefficientwise inequality_ \[\frac{\operatorname{P}^{R}_{k}(t)}{(1+t)^{d}-t^{2}\operatorname{P}^{R}_{k}(t) }\preccurlyeq\frac{(1+t)^{e-d}}{1-t^{2}(1+t)^{e-d}+t^{e-d+2}-\sum_{i=1}^{e-d-1 }\operatorname{rank}_{k}\operatorname{H}_{i}(K^{R})t^{i+1}}\] _and equality holds if and only if \(R\) is almost Golod._ While the left-hand side is not equal to \(\operatorname{P}^{R}_{k}(t)\), it increases monotonically with \(\operatorname{P}^{R}_{k}(t)\), and so it directly measures the growth of the resolution of \(k\). Therefore within the class of Gorenstein local rings, almost Golod rings display extremal behavior analogous to Golod rings. Proof.: As \(R\) is Gorenstein, prime avoidance yields a regular sequence \(\boldsymbol{x}=x_{1},\ldots,x_{d}\) that is part of a minimal generating set for \(\mathfrak{m}_{R}\), so that \(\bar{R}=R/(\boldsymbol{x})\) is artinian Gorenstein and of embedding dimension \(e-d\). By Nagata's theorem [13, Section 27] we have \(\operatorname{P}^{R}_{k}(t)=\operatorname{P}^{\bar{R}}_{k}(t)(1+t)^{d}\) and so the first equality below holds: \[\frac{\operatorname{P}^{R}_{k}(t)}{(1+t)^{d}-t^{2}\operatorname{P }^{R}_{k}(t)} =\frac{\operatorname{P}^{\bar{R}}_{k}(t)}{1-t^{2}\operatorname{P }^{\bar{R}}_{k}(t)}\] \[=\operatorname{P}^{\bar{R}/\operatorname{soc}(\bar{R})}_{k}(t)\] \[\preccurlyeq\frac{\operatorname{P}^{Q}_{k}(t)}{1-t(\operatorname{ P}^{Q}_{\bar{R}/\operatorname{soc}(\bar{R})}(t)-1)}\] \[=\frac{\operatorname{P}^{Q}_{k}(t)}{1-t(\operatorname{P}^{Q}_{ \bar{R}}(t)-t^{e-d}+t\operatorname{P}^{Q}_{k}(t)-t^{e-d+1}-1)}\,.\] The second equality holds using [1, Theorem 2], the coefficientwise inequality is the Serre bound (3.4.1) for \(\bar{R}/\operatorname{soc}(\bar{R})\), and the last equality holds using [1, Theorem 1]. Since \(\operatorname{P}^{Q}_{k}(t)=(1+t)^{e-d}\) and \(\operatorname{P}^{Q}_{\bar{R}}(t)=\sum_{i=0}^{i=e-d}\operatorname{rank}_{k} \operatorname{H}_{i}(K^{R})t^{i}\), we obtain the claimed inequality. It remains to note that \(R\) is almost Golod if and only if \(\bar{R}/\operatorname{soc}(\bar{R})\) is Golod, if and only if equality holds in the third line above. Later, in Section 8.15, we explicitly construct resolutions over almost Golod Gorenstein rings that achieve the bound of Proposition 3.19. ### Monomial rings In this subsection we consider rings of the form \(R=Q/I\), where \(Q=k[x_{1},\dots,x_{n}]\) and \(I\) is generated by monomials \(m_{1},\dots,m_{r}\). By Froberg's theorem [10], \(R\) is Koszul as a \(k\)-algebra if and only if each \(m_{i}\) is quadratic. However, the condition that \(R\) is Cohen Koszul is more common and more subtle. **Example 3.21** (Almost linear resolutions).: Let \(d\geqslant 3\) be an integer and let \(m\) be a monomial in \(Q\) that is divisible by all monomials of degree \(\deg(m)-d+1\). 
Let \(R=Q/I\), where \(I\) is the ideal generated by all monomials of degree \(d\) not dividing \(m\). Then by [10, Theorem 4.2] the ring \(R\) is Gorenstein and has an almost linear resolution over \(Q\), and moreover every zero dimensional monomial Gorenstein ring having an almost linear resolution is of this form. These rings are almost Golod Gorenstein, and in particular Cohen Koszul, by Remark 3.17. An exact combinatorial characterization of which monomial rings are Cohen Koszul would be very interesting; this seems possible but likely non-trivial. We describe a special case that produces a large number of explicit examples. **Proposition 3.22**.: _Let \(Q=k[x_{1},\dots,x_{n}]\) be a polynomial ring over a field \(k\), and let \(I=(m_{1},\dots,m_{r})\) a monomial ideal such that each \(m_{i}\) contains a variable not in any other \(m_{j}\). Then \(R=Q/I\) is Cohen Koszul._ Proof.: The Taylor resolution \(A\) of \(R\) over \(Q\) has a basis \(\{e_{I}\}\) indexed by subsets \(I\subseteq\{1,\dots,n\}\), with \(e_{I}\) in homological degree \(|I|\), and the differential is defined by \[\partial(e_{I})=\sum_{i\in I}\pm\frac{m_{I}}{m_{I\smallsetminus\{i\}}}e_{I \smallsetminus\{i\}}\,,\] where \(m_{I}=\operatorname{lcm}\left\{m_{i}\,|\,i\in I\right\}\); see [11] for details and signs. The hypothesis of the proposition exactly guarantees that the Taylor resolution is minimal. Gemeda [1] proved that the Taylor resolution has a dg algebra structure with product \[e_{I}e_{J}=\pm\frac{m_{I}m_{J}}{m_{I\cup J}}e_{I\cup J}\] when \(I\cap J=\varnothing\), and with \(e_{I}e_{J}=0\) otherwise. By Remark 2.7 it follows that \(R\otimes_{Q}^{\perp}k=A\otimes_{Q}k\) is formal. It remains to show that \(A\otimes_{Q}k\) is a Koszul \(k\)-algebra. Let \(M\) be the graph with vertices \(\{1,\dots,r\}\) and an edge connecting \(i\) and \(j\) if and only if \(\gcd(m_{i},m_{j})\neq 1\). From the description of \(A\) above it follows that \[A\otimes_{Q}k=\frac{k[e_{I}\mid I\subseteq M\text{ connected}]}{(e_{I}e_{J} \mid\gcd(m_{I},m_{J})\neq 1)}\,,\] where \(k[e_{I}]\) is the free graded-commutative algebra on the indicated \(e_{I}\); compare this with [1, 6.2]. Assigning each \(e_{I}\) weight \(1\), we are done because quadratic monomial quotients of free graded-commutative algebras are Koszul by Froberg's theorem [10] (such rings belong to class B in [10, Section 3], and Froberg constructs linear resolutions of the residue field for all rings of class B). _Remark 3.23_.: One can readily exhibit monomial rings satisfying the hypothesis of the proposition, and not falling into the other classes described above. For example, \(R=k[\![a,b,c,d,e,f]\!]/(abc,cd,ae,acf)\). The next examples are \(k\)-algebras that fail to be Cohen Koszul; the first is a Koszul \(k\)-algebra and the second has \(\operatorname{H}(K^{R})\) a Koszul \(k\)-algebra. Both examples fail to be Cohen Koszul since in each case \(K^{R}\) admits a nonzero triple Massey product, and hence is not formal; cf. [11] for more details on Massey products. **Example 3.24**.: The \(k\)-algebra \(R=k[\![a,b,c,d]\!]/(a^{2},ab,bc,cd,d^{2})\) is the completion of a Koszul \(k\)-algebra (in the classical sense) by [10, Corollary 1]. However, the map \(k[\![a,b,c,d]\!]\to R\) is not Koszul. Indeed, by [1, Example 5.1.4], \(K^{R}\) has a nonzero triple Massey product, and so \(K^{R}\) is not formal. 
**Example 3.25**.: Let \(Q=k[\![a,b,c,d,e]\!]\) and consider the quotient map \[\varphi\colon Q\to R:=Q/(ab^{2},cd^{2},e^{3},abcd,d^{2}e^{2},b^{2}e^{2},ace,b^{ 2}d^{2}e)\,.\] In [16, Theorem 3.1], it is shown that \(\operatorname{H}(K^{R})\) is a trivial extension that admits a nonzero triple Massey product; the latter is an obstruction to the formality of \(K^{R}\), while the former justifies that \(\operatorname{H}(K^{R})\) is a Koszul \(k\)-algebra. _Remark 3.26_.: To any monomial ideal \(I\subseteq Q\) one may associate a square-free monomial ideal \(I^{\circ}\) in a larger polynomial ring \(Q^{\circ}\), known as the _polarization_ of \(I\). Froberg [10] proved that there is a regular sequence of linear forms \(y_{1},\dots,y_{t}\) in the quotient \(R^{\circ}=Q^{\circ}/I^{\circ}\) such that \(R=R^{\circ}/(y_{1},\dots,y_{t})\). From Proposition 2.17 it follows that \(R\) is Cohen Koszul if and only if \(R^{\circ}\) is Cohen Koszul. A _simplicial complex_\(\Delta\) on \([n]=\{1,\dots,n\}\) is a nonempty family of subsets of \([n]\), closed under taking subsets. The _Stanley-Reisner ring_ associated to \(\Delta\), denoted \(k[\Delta]\), is the quotient of \(k[x_{1},\dots,x_{n}]\) by the ideal generated by monomials \(x_{i_{1}}\cdots x_{i_{t}}\) such that \(\{i_{1},\dots,i_{t}\}\notin\Delta\). Every square free monomial ring is the Stanley-Reisner ring of some simplical complex, and so by Remark 3.26 we may restrict to such monomials rings. _Remark 3.27_.: We make some remarks about the connections to toric topology; for precise definitions and background on this area the reader may consult [1]. To a simplical complex \(\Delta\) one also associates the moment angle complex \(\mathcal{Z}_{\Delta}\), a finite CW-complex with an action of the torus \((S^{1})^{n}\). The homotopy quotient \(\mathcal{DJ}_{\Delta}=\mathcal{Z}_{\Delta}/\!/(S^{1})^{n}\) is known as the Davis-Januszkiewicz space of \(\Delta\). By [1, Theorem 4.8] and [14, Theorem 4.8] the cochain algebra of this space is quasi-isomorphic to the Stanley-Reisner ring: \[C^{*}(\mathcal{DJ}_{\Delta};k)\simeq k[\Delta]\,,\] where the variables \(x_{i}\) are given cohomological degree \(2\). From this it follows that \(k[\Delta]\) is a Koszul \(k\)-algebra if and only if \(\mathcal{DJ}_{\Delta}\) is a Koszul space in the sense of [1]. As remarked in [1, Example 5.8], this happens exactly when \(k[\Delta]\) is a quadratic algebra, or equivalently if \(\Delta\) is a _flag complex_, that is, the minimal faces not belonging to \(\Delta\) are all edges. The question of when \(\mathcal{Z}_{\Delta}\) is a Koszul space seems to be more interesting. By [1, Lemma 3.1] there is a quasi-isomorphism of dg \(k\)-algebras \[C^{*}(\mathcal{Z}_{\Delta};k)\simeq k[\Delta]\otimes_{Q}^{\mathsf{L}}k\,.\] Thus \(k[\Delta]\) is Cohen Koszul if and only if \(\mathcal{Z}_{\Delta}\) is a Koszul space. The related condition that \(\mathcal{Z}_{\Delta}\) is formal has been investigated in [1, 15]. The almost Golod condition is also connected with the minimally non-Golod condition for simplicial complexes introduced in [1]. Indeed, the proof of [1, Theorem 1.1] shows that if \(M=\mathcal{Z}_{\Delta}\) is a moment angle manifold, and if \(M\smallsetminus\{\operatorname{pt}\}\) is rationally homotopy equivalent to a wedge of spheres, then \(\Delta\) is minimally non-Golod (over \(\mathbb{Q}\)). ## 4. 
Background on \(\mathrm{A}_{\infty}\)-algebras and coalgebras Stasheff introduced \(\mathrm{A}_{\infty}\)-algebras in topology to characterize loop spaces [14, 15], and they have since proven a powerful tool in algebra as a flexible generalization of dg algebras; for an overview see [13]. In our context, the minimal \(Q\)-free resolution of a finite \(Q\)-algebra \(R\) can be equipped with an \(\mathrm{A}_{\infty}\)-algebra structure (see Section 5), and this will be leveraged to characterize Koszul homomorphisms in terms of presentations similar to the quadratic presentations for classical Koszul algebras (see Section 7). From now on \(Q\) is always a local ring with maximal ideal \(\mathfrak{m}_{Q}\) and residue field \(k\), and unadorned tensor products and \(\mathrm{Hom}\) sets are taken over \(Q\). **4.1**.: An _\(\mathrm{A}_{\infty}\)-algebra_ is a graded \(Q\)-module \(A\) equipped with \(Q\)-linear maps \[m_{n}\colon A^{\otimes n}\to A\quad\text{for $n\geqslant 1$}\] of degree \((n-2)\) satisfying the _Stasheff identities_ \[\sum_{\begin{subarray}{c}r+s+t=n\\ r,t\geqslant 0,s\geqslant 1\end{subarray}}(-1)^{r+st}m_{r+1+t}\left(\mathrm{id}^{ \otimes r}\otimes m_{s}\otimes\mathrm{id}^{\otimes t}\right)=0\,. \tag{4.1.1}\] Taking \(n=1\) this says that \(m_{1}\) is a degree \(-1\) square zero endomorphism of \(A\), so we can (and will) make \(A\) a complex with \(\partial=m_{1}\). Taking \(n=2\) yields a product satisfying the Leibniz rule \(\partial m_{2}=m_{2}(\partial\otimes\mathrm{id}+\mathrm{id}\otimes\partial)\). The next Stasheff identity, for \(n=3\), can be interpreted as saying that \(m_{2}\) is associative up to a homotopy given by \(m_{3}\), that is, \[m_{2}(\mathrm{id}\otimes m_{2}-m_{2}\otimes\mathrm{id})=\partial m_{3}+m_{3}( \partial\otimes\mathrm{id}\otimes\mathrm{id}+\mathrm{id}\otimes\partial\otimes \mathrm{id}+\mathrm{id}\otimes\mathrm{id}\otimes\partial)\,.\] If for some \(n\) the Stasheff identity (4.1.1) holds for every integer less than \(n\), then the obstruction \[\mathrm{obs}_{n}^{A}:=\sum_{\begin{subarray}{c}r+s+t=n\\ r,t\geqslant 0,n>s>1\end{subarray}}(-1)^{r+st}m_{r+1+t}\left(\mathrm{id}^{ \otimes r}\otimes m_{s}\otimes\mathrm{id}^{\otimes t}\right) \tag{4.1.2}\] is a chain map \(A^{\otimes n}\to A\); see [10, Corollaire B.1.2]. A _morphism of \(\mathrm{A}_{\infty}\)-algebras \(\varphi\colon A\to B\)_ consists of \(Q\)-linear maps \[\varphi_{n}\colon A^{\otimes n}\to B\quad\text{for $n\geqslant 1$}\] of degree \((n-1)\) satisfying \[\sum_{\begin{subarray}{c}r+s+t=n\\ r,t\geqslant 0,s\geqslant 1\end{subarray}}(-1)^{r+st}\varphi_{r+1+t}\left( \mathrm{id}^{\otimes r}\otimes m_{s}^{A}\otimes\mathrm{id}^{\otimes t}\right)\] \[=\sum_{p=1}^{n}\sum_{\begin{subarray}{c}\boldsymbol{\alpha}\in \mathbb{N}^{p}\\ |\boldsymbol{\alpha}|=n\end{subarray}}(-1)^{v(\boldsymbol{\alpha})}m_{p}^{B}( \varphi_{\alpha_{1}}\otimes\ldots\otimes\varphi_{\alpha_{p}}) \tag{4.1.3}\] where \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{p})\) and \(|\boldsymbol{\alpha}|=\sum_{k=1}^{p}\alpha_{k}\), with \(v(\boldsymbol{\alpha})=\sum_{k=1}^{p}(p-k)(\alpha_{k}-1)\). 
If for some \(n\) the Stasheff identity (4.1.3) holds for every integer less than \(n\), then we define \[\operatorname{obs}_{n}^{\varphi}:=\sum_{\begin{subarray}{c}r+s+t=n \\ r,t\geqslant 0,s\geqslant 2\end{subarray}}(-1)^{r+st}\varphi_{r+1+t}\left( \operatorname{id}^{\otimes r}\otimes m_{s}^{A}\otimes\operatorname{id}^{ \otimes t}\right)\\ -\sum_{p=1}^{n-1}\sum_{\begin{subarray}{c}\boldsymbol{\alpha} \in\mathbb{N}^{p}\\ |\boldsymbol{\alpha}|=n\end{subarray}}(-1)^{v(\boldsymbol{\alpha})}m_{p}^{B}( \varphi_{\alpha_{1}}\otimes\ldots\otimes\varphi_{\alpha_{p}})\,.\] Then the Stasheff identity (4.1.3) holds if and only if \[\operatorname{obs}_{n}^{\varphi}=m_{1}\varphi_{n}+(-1)^{n}\varphi_{n}(m_{1} \otimes\operatorname{id}^{\otimes(n-1)}+\cdots+\operatorname{id}^{\otimes(n -1)}\otimes m_{1})\,.\] A morphism \(\varphi\) of \(\operatorname{A}_{\infty}\)-algebras is a _quasi-isomorphism_ if the chain map \(\varphi_{1}\) is a quasi-isomorphism of complexes. The morphism \(\varphi\) is _strict_ if \(\varphi_{n}=0\) for \(n>1\). The composition of morphisms \(\varphi\colon A\to B\) and \(\psi\colon B\to C\) is defined by \[(\psi\circ\varphi)_{n}:=\sum_{p=1}^{n}\sum_{\begin{subarray}{c}\boldsymbol{ \alpha}\in\mathbb{N}^{p}\\ |\boldsymbol{\alpha}|=n\end{subarray}}(-1)^{v(\boldsymbol{\alpha})}\psi_{p}( \varphi_{\alpha_{1}}\otimes\ldots\otimes\varphi_{\alpha_{p}})\,.\] An \(\operatorname{A}_{\infty}\)-algebra \(A\) is _strictly unital_ if there exists \(1_{A}\in A_{0}\) such that \[\begin{split} m_{2}(1_{A}\otimes a)=a=m_{2}(a\otimes 1_{A}) \quad\text{for all $a\in A$}\quad\text{and}\\ m_{n}(a_{1}\otimes\ldots\otimes a_{i-1}\otimes 1_{A}\otimes a_{i+1} \otimes\ldots\otimes a_{n})=0\quad\text{for all $1\leqslant i\leqslant n$}\end{split} \tag{4.2.1}\] for any \(a_{1},\ldots,a_{n}\in A\) and \(n>2\). A morphism of strictly unital \(\operatorname{A}_{\infty}\)-algebras \(\varphi\colon A\to B\) is a morphism of \(\operatorname{A}_{\infty}\)-algebras such that \[\varphi_{1}(1_{A})=1_{B}\quad\text{and}\] \[\varphi_{n}(a_{1}\otimes\ldots\otimes a_{i-1}\otimes 1_{A} \otimes a_{i+1}\otimes\ldots\otimes a_{n})=0\quad\text{for all $1\leqslant i \leqslant n$} \tag{4.2.2}\] for any \(a_{j}\in A\) and \(n>1\). An \(\operatorname{A}_{\infty}\)-algebra is _connective_ if it is concentrated in non-negative degrees. If \(A=Q\oplus\bar{A}\) is a graded module and \(1_{A}\) a free generator of the direct summand \(Q\), then \(A\) is an \(\operatorname{A}_{\infty}\)_-algebra with a split unit_. 
A split unital \(\operatorname{A}_{\infty}\)-algebra structure on a graded module concentrated in non-negative degrees is equivalent to the existence of \(Q\)-linear maps \[\bar{m}_{n}\colon\bar{A}^{\otimes n}\to\bar{A}\quad\text{for $n\geqslant 1$}\] of degree \((n-2)\) and \(Q\)-linear maps \[h_{1}\colon\bar{A}\to Q\quad\text{and}\quad h_{2}\colon\bar{A}^{\otimes 2}\to Q\] of degrees \(-1\) and \(0\), respectively, such that for \(n\neq 2,3\) the Stasheff identities (4.1.1) hold when replacing \(m_{i}\) by \(\bar{m}_{i}\), and for \(n=2,3\) \[\sum_{\begin{subarray}{c}r+s+t=n\\ r,t\geqslant 0,s\geqslant 1\end{subarray}}(-1)^{r+st}\bar{m}_{r+1+t}(\operatorname{id }^{\otimes r}\otimes\bar{m}_{s}\otimes\operatorname{id}^{\otimes t})+(h_{n-1} \otimes\operatorname{id}-\operatorname{id}\otimes h_{n-1})=0\,,\] and additionally \[h_{1}\bar{m}_{1}=0\,,\quad h_{1}\bar{m}_{2}-h_{2}(\bar{m}_{1} \otimes\operatorname{id}+\operatorname{id}\otimes\bar{m}_{1})=0\quad\text{and}\] \[h_{1}\bar{m}_{3}+h_{2}(\bar{m}_{2}\otimes\operatorname{id}- \operatorname{id}\otimes\bar{m}_{2})=0\,;\] the former replaces the second and third Stasheff identity and the latter supplements the first three Stasheff identities. In particular, for \(n>2\) the maps \(\bar{m}_{n}\) are the appropriate restrictions of \(m_{n}\). For \(n=2\), we obtain \(m_{2}\) by \(\bar{m}_{2}+h_{2}\) and additionally enforcing (4.2.1). For \(n=1\), we have \(m_{1}=\bar{m}_{1}+h_{1}\). This treatment is similar to [1, Section 3], but is slightly more general since we allow \(\bar{A}_{0}\neq 0\) and hence need \(h_{2}\) as well as \(h_{1}\). If \(A\) were not connective then we would also need maps \(h_{n}\) for \(n\geqslant 3\). Taken together the \(h_{n}\) will correspond to the curvature term on the bar construction of \(A\); see 4.7. _Remark 4.4_.: Let \(A\) be a connective \(\mathrm{A}_{\infty}\)-algebra with a split unit. Then the projection \(A\to Q\) onto the free summand containing the unit need not be a morphism of strictly unital \(\mathrm{A}_{\infty}\)-algebras. In fact, this happens if and only if \(h_{1}=0\) and \(h_{2}=0\). Such \(\mathrm{A}_{\infty}\)-algebras are called _augmented_. **4.5**.: Fix a graded coalgebra \((C,\Delta)\). Recall \(C\) is _counital_ if there exists a counit map \(\varepsilon\colon C\to Q\) such that \[(\mathrm{id}\otimes\!\varepsilon)\Delta=\mathrm{id}=(\varepsilon\otimes \mathrm{id})\Delta\,.\] We say \(C\) is a _curved dg coalgebra_ it is equipped with a coderivation \(\partial\) of degree \(-1\) and a curvature term \(h\colon C\to Q\) of degree \(-2\) such that \[\partial^{2}=(h\otimes\mathrm{id}-\mathrm{id}\otimes\!h)\Delta\quad\text{and} \quad h\partial=0\,.\] A curved dg coalgebra \(C\) is _connected_ if it is non-negatively graded, counital and \(C_{0}=Q\). In this setting, we write \(C=Q\oplus\bar{C}\) for \(\bar{C}=\ker(\varepsilon)\) and set \[\bar{\Delta}:=\left(\bar{C}\to C\xrightarrow{\Delta}C\otimes C \to\bar{C}\otimes\bar{C}\right)\,,\] \[\bar{\partial}:=\left(\bar{C}\to C\xrightarrow{\partial}C\to \bar{C}\right)\quad\text{and}\quad\bar{h}:=\left(\bar{C}\to C\xrightarrow{h}Q\right)\] for the restrictions to \(\bar{C}\). These maps satisfy the same relations as \(\Delta\), \(\partial\) and \(h\). 
**4.6**.: The tensor algebra \(\mathrm{T}^{a}(V)\) on a graded \(Q\)-module \(V\) has underlying graded module \(\mathrm{T}(V):=\bigoplus_{n\geqslant 0}V^{\otimes n}\), and the multiplication \[\mu((v_{1}\otimes\cdots\otimes v_{k})\otimes(v_{1}^{\prime}\otimes\cdots \otimes v_{\ell}^{\prime})):=v_{1}\otimes\cdots\otimes v_{k}\otimes v_{1}^{ \prime}\otimes\cdots\otimes v_{\ell}^{\prime}\,.\] The tensor algebra is bigraded by \(\mathrm{T}^{a}_{(n)}(V)_{i}=(V^{\otimes n})_{i}\). The tensor coalgebra \(\mathrm{T}^{c}(V)\) on a graded \(Q\)-module \(V\) has underlying graded module \(\mathrm{T}(V)\), and the comultiplication \[\Delta(v_{1}\otimes\cdots\otimes v_{n}):=\sum_{i=0}^{n}(v_{1}\otimes\cdots \otimes v_{i})\otimes(v_{i+1}\otimes\cdots\otimes v_{n})\,.\] The tensor coalgebra is bigraded by \(\mathrm{T}^{c}_{(n)}(V)_{i}=(V^{\otimes n})_{i}\). The data of an \(\mathrm{A}_{\infty}\)-algebra can equivalently be encoded as a differential on a tensor coalgebra, as we see next. **4.7**.: Let \(A\) be a split unital connective \(\mathrm{A}_{\infty}\)-algebra. Then the tensor coalgebra \(\mathrm{T}^{c}(\Sigma\bar{A})\) has an induced curved dg coalgebra structure. The curvature term has components \[h_{1}\upx^{-1}\colon\mathrm{T}^{c}_{(1)}(\Sigma\bar{A})\to Q\quad\text{and} \quad h_{2}(\upx^{-1})^{\otimes 2}\colon\mathrm{T}^{c}_{(2)}(\Sigma\bar{A})\to Q\] and zero otherwise. The coderivation \(\partial\) has components \[(-1)^{\frac{k(k+1)}{2}}\sum_{\begin{subarray}{c}i+j=n-k\\ i,j\geqslant 0\end{subarray}}(\mathrm{id}^{\otimes i}\otimes\!\upx\bar{m}_{k}( \upx^{-1})^{\otimes k}\otimes\mathrm{id}^{\otimes j})\colon\mathrm{T}^{c}_{(n) }(\Sigma\bar{A})\to\mathrm{T}^{c}_{(n-k)}(\Sigma\bar{A})\] for \(k\geqslant 1\), and zero otherwise. The map \(\partial\) is well-defined since \(A\) is concentrated in non-negative homological degree. With this structure \(\mathrm{T}^{c}(\Sigma\bar{A})\) is a connected curved dg coalgebra, and we call \[\mathsf{B}_{(*)}(A)_{\bullet}:=\left(\mathrm{T}^{c}_{(*)}(\Sigma\bar{A})_{ \bullet},h,\partial,\Delta\right)\] the _bar construction of \(A\)_. For \(a_{1},\ldots,a_{n}\in\bar{A}\) we write \[[a_{1}|\ldots|a_{n}]:=(\mathfrak{x}a_{1}\otimes\ldots\otimes\mathfrak{x}a_{n}) \in\mathsf{B}_{(n)}(A)\,.\] For a split unital connective \(\mathrm{A}_{\infty}\)-algebra the canonical projection and inclusion induce a degree \(-1\) map of graded modules \[\mathsf{B}(A)\twoheadrightarrow\Sigma\bar{A}\hookrightarrow\Sigma A\to A\,.\] Let \(C\) be a connected curved dg coalgebra. Then the algebra \(\mathrm{T}^{a}(\Sigma^{-1}\bar{C})\) has an induced dg algebra structure. The differential \(m_{1}\) has components \[-\bar{h}_{\mathfrak{x}}\colon\mathrm{T}^{a}_{(1)}(\Sigma^{-1} \bar{C})\to\mathrm{T}^{a}_{(0)}(\Sigma^{-1}\bar{C})\,,\quad-\Sigma^{-1}\bar{ \partial}_{\mathfrak{x}}\colon\mathrm{T}^{a}_{(1)}(\Sigma^{-1}\bar{C})\to \mathrm{T}^{a}_{(1)}(\Sigma^{-1}\bar{C})\] \[\text{and}\quad(\mathfrak{x}^{-1})^{\otimes 2}\bar{\Delta} \mathfrak{x}\colon\mathrm{T}^{a}_{(1)}(\Sigma^{-1}\bar{C})\to\mathrm{T}^{a}_{ (2)}(\Sigma^{-1}\bar{C})\,;\] and zero otherwise. With this structure \(\mathrm{T}^{a}(\Sigma^{-1}\bar{C})\) is a split unital connective dg algebra, and we call \[\Omega_{(*)}(C)_{\bullet}:=\left(\mathrm{T}^{a}_{(*)}(\Sigma^{-1}\bar{C})_{ \bullet},m_{1},m_{2}\right)\] the _cobar construction of \(C\)_. 
For \(c_{1},\ldots,c_{n}\in\bar{C}\) we write \[\langle c_{1}|\ldots|c_{n}\rangle:=(\mathfrak{x}^{-1}c_{1}\otimes\ldots \otimes\mathfrak{x}^{-1}c_{n})\in\Omega_{(n)}(C)\,.\] For a connected curved dg coalgebra the canonical inclusion and projection maps induces a degree \(-1\) map of graded modules \[C\to\Sigma^{-1}C\twoheadrightarrow\Sigma^{-1}\bar{C}\hookrightarrow\Omega(C )\,.\] _Remark 4.8_.: The bar and cobar constructions define an adjoint pair of functors when restricted to split unital connective dg algebras and connected curved dg coalgebras; see [13, Section 3]. It remains an adjunction when restricted to augmented connective dg algebras and connected dg coalgebras. **4.9**.: A morphism \(\varphi\colon(C,\Delta,\varepsilon,\partial,h)\to(C^{\prime},\Delta^{\prime}, \varepsilon^{\prime},\partial^{\prime},h^{\prime})\) of connected curved dg coalgebras consists of \(Q\)-linear maps \[\varphi_{0}\colon C\to Q\quad\text{and}\quad\varphi_{1}\colon C\to C^{\prime}\] of degree \(-1\) and \(0\), respectively, satisfying \[\varepsilon=\varepsilon^{\prime}\varphi_{1}\,,\quad h^{\prime} \varphi_{1}=h-\varphi_{0}\partial+(\varphi_{0}\otimes\varphi_{0})\Delta\,,\] \[\partial^{\prime}\varphi_{1}=\varphi_{1}\partial+(\varphi_{0} \otimes\varphi_{1}-\varphi_{1}\otimes\varphi_{0})\Delta\quad\text{and}\quad \Delta^{\prime}\varphi_{1}=(\varphi_{1}\otimes\varphi_{1})\Delta\,;\] see [13, Chapter 4]. This induces a map of dg algebras \(\Omega(\varphi)\colon\Omega(C)\to\Omega(C^{\prime})\), and we say \(\varphi\) is a _weak equivalence_ if \(\Omega(\varphi)\) is a quasi-isomorphism. **4.10**.: Let \(A\) be an \(\mathrm{A}_{\infty}\)-algebra. An _\(\mathrm{A}_{\infty}\)-module over \(A\)_ is a graded module \(M\) equipped with maps \[m_{n}^{M}\colon A^{\otimes(n-1)}\otimes M\to M\quad\text{for }n\geqslant 1\] of degree \((n-2)\), satisfying \[\sum_{\begin{subarray}{c}r+s+t=n\\ r\geqslant 0,s,t\geqslant 1\end{subarray}}(-1)^{r+st}m_{r+1+t}^{M}\left(\operatorname{id }^{\otimes r}\otimes m_{s}\otimes\operatorname{id}^{\otimes t}\right)+\sum_{ \begin{subarray}{c}r+t=n\\ r\geqslant 0,s\geqslant 1\end{subarray}}(-1)^{r}m_{r+1}^{M}\left(\operatorname{id }^{\otimes r}\otimes m_{s}^{M}\right)=0\,.\] If \(A\) is strictly unital, we say an \(\operatorname{A_{\infty}}\)-module \(M\) over \(A\) is _strictly unital_ if \[m_{2}(1_{A}\otimes m)=m\quad\text{for all }m\in M\quad\text{and}\] \[m_{n}(a_{1}\otimes\ldots\otimes a_{i-1}\otimes 1_{A}\otimes a_{i+1 }\otimes\ldots\otimes a_{n-1}\otimes m)=0\quad\text{for all }1\leqslant i \leqslant n-1\] for any \(a_{1},\ldots,a_{n-1}\in A\) and \(m\in M\) with \(n\neq 2\). If \(A\) is connective and has a split unit, then a strictly unital \(\operatorname{A_{\infty}}\)-module structure over \(A\) on \(M\) is equivalent to the existence of maps \[\bar{m}_{n}^{M}\colon\bar{A}^{\otimes(n-1)}\otimes M\to M\quad\text{for }n\geqslant 1\] of degree \((n-2)\) such that for \(n\neq 2,3\) the Stasheff identities hold when replacing \(m\) by \(\bar{m}\), and for \(n=2,3\) there is an extra curvature term \(h_{n-1}\otimes\operatorname{id}\) similar to 4.3. **4.11**.: Let \(A\) be a split unital connective \(\operatorname{A_{\infty}}\)-algebra. The data of a strictly unital \(\operatorname{A_{\infty}}\)-module structure over \(A\) is equivalent to that of a strictly unital dg module structure over \(\Omega(\operatorname{\mathsf{B}}(A))\). 
Explicitly, if \(\{\bar{m}_{n}^{M}\}\) is a strictly unital \(\operatorname{A_{\infty}}\)-module structure on a graded module \(M\), then the dg module structure on \(M\) is given by the same differential \(m_{1}^{M}\), and the multiplication \(\Omega(\operatorname{\mathsf{B}}(A))\otimes M\to M\) induced by \[-(-1)^{\frac{n(n-1)}{2}}m_{n+1}^{M}((\operatorname{\Sigma}^{-1})^{\otimes n} \operatorname{\Sigma}\otimes\operatorname{id}_{M})\colon\Sigma^{-1}\bar{ \operatorname{\mathsf{B}}}_{(n)}(A)\otimes M\to M\,.\] Moreover, this construction is natural in \(A\) and \(M\), and any quasi-isomorphism of \(\operatorname{A_{\infty}}\)-modules over \(A\) yields a quasi-isomorphism of dg modules over \(\Omega(\operatorname{\mathsf{B}}(A))\). ## 5. Transfer of \(\operatorname{A_{\infty}}\)-algebra structures In this section, as above, \(Q\) is a local ring with maximal ideal \(\mathfrak{m}_{Q}\) and residue field \(k\). Let \(R\) be an \(\operatorname{A_{\infty}}\)-algebra over \(Q\), and let \(A\to R\) be a quasi-isomorphism of complexes over \(Q\). We would like to know whether the \(\operatorname{A_{\infty}}\)-algebra structure on \(R\) induces an \(\operatorname{A_{\infty}}\)-algebra structure on \(A\). This is well-understood in the case that \(Q\) is field, so that \(A\to R\) is a homotopy equivalence; the first result is due to Kadeishvili [10] when \(A=\operatorname{H}(R)\). For general homotopy equivalences this was studied, for example, in [11]. Burke has shown that if \(R\) is a quotient of \(Q\) and \(A\) is a \(Q\)-free resolution of \(R\), then the product on \(R\) lifts to an \(\operatorname{A_{\infty}}\)-structure on \(A\)[1, Proposition 3.6]. We give a proof in a more general situation. **Proposition 5.1**.: _Let \(R\) be a strictly unital connective \(\operatorname{A_{\infty}}\)-algebra and \(\varepsilon\colon A\to R\) a surjective quasi-isomorphism of complexes over \(Q\), with \(A\) degree-wise free and concentrated in non-negative degrees. Then there exists an \(\operatorname{A_{\infty}}\)-algebra structure with a split unit on \(A\) such that \(\varepsilon\) is a strict quasi-isomorphism of \(\operatorname{A_{\infty}}\)-algebras._ Proof.: Since \(\varepsilon\) is surjective we may choose a splitting \(A=\bar{A}\oplus Q\) such that \(\varepsilon\) maps the free generator of \(Q\) to the unit of \(R\). We inductively construct higher multiplication maps \(m_{n}\) on \(A\) satisfying the \(n\)th Stasheff identity. To begin with we set \(m_{1}:=\partial\) where \(\partial\) is the differential of \(A\). For \(n=2\) we consider the diagram Since the right vertical arrow is a surjective quasi-isomorphism and the left vertical arrow is injective in each degree and the cokernel in each degree is projective, there exists a lift \(m_{2}^{A}\colon A^{\otimes 2}\to A\) such that the diagram commutes; see for example [11, Section 7]. This morphism \(m_{2}^{A}\) satisfies the desired properties by construction. For \(n>2\), the obstruction \(\operatorname{obs}_{n}^{A}\) from (4.1.2) is a chain map. We have a short exact sequence of complexes \[0\to\sum_{i+j=n-1}A^{\otimes i}\otimes Q\otimes A^{\otimes j}\xrightarrow{ \eta_{n}}A^{\otimes n}\to\bar{A}^{\otimes n}\to 0\,.\] By direct computation we obtain \(\operatorname{obs}_{n}^{A}\eta_{n}=0\). So the obstruction \(\operatorname{obs}_{n}^{A}\) factors through \(\bar{A}^{\otimes n}\). 
We consider the diagram Since \(\bar{A}^{\otimes n}\) is, as graded modules, a direct summand of \(A^{\otimes n}\), and the higher multiplications \(m_{i}^{A}\) for \(i<n\) commute with \(\varepsilon\), the right triangle commutes up to homotopy \(m_{n}^{R}\varepsilon^{\otimes n}\). Then there exists a chain map \(\alpha\) such that the left triangle commutes up to homotopy \(\sigma\). Since \(\varepsilon\) is surjective, we may assume \(m_{n}^{R}\varepsilon^{\otimes n}=\varepsilon\sigma\). That is \[m_{n}^{A}:=\left(A^{\otimes n}\to\bar{A}^{\otimes n}\xrightarrow{\sigma}A\right)\] satisfies the \(n\)th Stasheff identity. **5.2**.: In the setup of Proposition 5.1 we can also transfer \(\operatorname{A_{\infty}}\)-module structures: If \(M\) is a strictly unital \(\operatorname{A_{\infty}}\)-module over \(R\) and with semifree resolution \(\gamma\colon G\to M\) over \(Q\), in the sense discussed later in 6.2, then there exists a strictly unital \(\operatorname{A_{\infty}}\)-module structure on \(G\) over \(A\) and \(\gamma\) is a strict morphism of \(\operatorname{A_{\infty}}\)-modules; compare with [1]. When the homology of \(M\) is bounded below (for example, if \(M\) is an honest module), one can take \(G\) to be a bounded below complex of free \(Q\)-modules. **Proposition 5.3**.: _Let \(\varepsilon\colon R\to S\) be a surjective strict quasi-isomorphism of strictly unital \(\operatorname{A_{\infty}}\)-algebras over \(Q\). Further let \(A\) be a split unital, connective, degree-wise free \(\operatorname{A_{\infty}}\)-algebra and \(\varphi\colon A\to S\) a morphism of strictly unital \(\operatorname{A_{\infty}}\)-algebras. Then there exists a morphism of strictly unital \(\operatorname{A_{\infty}}\)-algebras \(\psi\colon A\to R\) such that \(\varphi=\varepsilon\psi\)._ Proof.: The unit \(Q\to S\) factors through \(\varepsilon\) and \(\varphi_{1}\), so by [11, Section 7], there is a chain map \(\psi_{1}\colon A\to R\) such that \(\varphi_{1}=\varepsilon\psi_{1}\) and \(\psi_{1}(1_{A})=1_{R}\). Let \(n\geqslant 2\) and assume that for \(i<n\) the chain maps \(\psi_{i}\colon A^{\otimes i}\to R\) exist, the \(i\)th Stasheff identities (4.1.3) and (4.2.2) hold, and \(\varepsilon\psi_{i}=\varphi_{i}\). A computation shows that \(\operatorname{obs}_{n}^{\psi}\) and \(\operatorname{obs}_{n}^{\varphi}\) vanish when any of its inputs is \(1_{A}\). Hence we can view \(\operatorname{obs}_{n}^{\psi}\) and \(\operatorname{obs}_{n}^{\varphi}\) as maps on \(\bar{A}^{\otimes n}\). Taking homology classes in \(\operatorname{Hom}(\bar{A}^{\otimes n},S)\) we have \[\varepsilon[\operatorname{obs}_{n}^{\psi}]=[\operatorname{obs}_{n}^{\varphi} ]=0\,,\] since \(\varepsilon\operatorname{obs}_{n}^{\psi}=\operatorname{obs}_{n}^{\varphi}\) and \(\varphi\) is a morphism of strictly unital \(\operatorname{A}_{\infty}\)-algebras; cf. 4.1. Since \(\varepsilon\) is a surjective quasi-isomorphism, and using the assumptions on \(A\), the induced map \(\operatorname{Hom}(\bar{A}^{\otimes n},R)\to\operatorname{Hom}(\bar{A}^{ \otimes n},S)\) is a quasi-isomorphism and hence \(\operatorname{obs}_{n}^{\psi}\) is a boundary in \(\operatorname{Hom}(\bar{A}^{\otimes n},R)\). 
That is, there is \(\bar{\psi}_{n}\colon\bar{A}^{\otimes n}\to R\) such that \[\operatorname{obs}_{n}^{\psi}=m_{1}^{R}\bar{\psi}_{n}+(-1)^{n}\bar{\psi}_{n}( \bar{m}_{1}^{A}\otimes\operatorname{id}^{\otimes n}+\dots+\operatorname{id}^{ \otimes n}\otimes\bar{m}_{1}^{A})\,.\] Setting \(\psi_{n}:=(A^{\otimes n}\to\bar{A}^{\otimes n}\xrightarrow{\bar{\psi}_{n}}R)\), we now have \(\psi_{1},\dots,\psi_{n}\) satisfying the required identities (4.1.3) and (4.2.2), completing the induction. It is well-known that \(\operatorname{A}_{\infty}\)-algebras can be used to characterize formality of dg algebras over fields [10]. We record the following generalization in local algebra. **Proposition 5.4**.: _Let \(\varphi\colon Q\to R\) be a finite local homomorphism. The fiber \(R\otimes^{\mathbb{L}}_{Q}k\) is formal as a dg \(k\)-algebra if and only if the minimal \(Q\)-free resolution \(\varepsilon\colon A\to R\) admits an \(\operatorname{A}_{\infty}\)-structure \(\{m_{n}\}\) making \(\varepsilon\) a strict quasi-isomorphism of \(\operatorname{A}_{\infty}\)-algebras, and satisfying \(m_{n}\otimes_{Q}k=0\) for \(n\geqslant 3\)._ Proof.: Suppose that the minimal \(Q\)-free resolution \(A\) of \(R\) has an \(\operatorname{A}_{\infty}\)-structure \(\{m_{n}\}\) with the stated property. If \(A^{\prime}\) is a \(Q\)-free dg algebra resolution of \(R\), then it follows from Proposition 5.3 that \(A\) and \(A^{\prime}\) are quasi-isomorphic as \(\operatorname{A}_{\infty}\)-algebras over \(Q\). Therefore \(A\otimes_{Q}k\) and \(R\otimes^{\mathbb{L}}_{Q}k=A^{\prime}\otimes_{Q}k\) are quasi-isomorphic as \(\operatorname{A}_{\infty}\)-algebras over \(k\). By assumption \(m_{n}\otimes_{Q}k=0\) for \(n\geqslant 3\), so \(A\otimes_{Q}k\) is a graded algebra, canonically isomorphic to \(\operatorname{Tor}^{Q}(R,k)\). Two dg \(k\)-algebras are quasi-isomorphic as dg algebras if and only if they are quasi-isomorphic as \(\operatorname{A}_{\infty}\)-algebras [10], and we can conclude that \(R\otimes^{\mathbb{L}}_{Q}k\) is formal. Suppose conversely that \(R\otimes^{\mathbb{L}}_{Q}k\) is formal. By Proposition 5.1 the minimal \(Q\)-free resolution \(A\) of \(R\) admits an \(\operatorname{A}_{\infty}\)-structure \(\{m^{\prime}_{n}\}\). Using the same reasoning as above, since \(R\otimes^{\mathbb{L}}_{Q}k\) is formal \(A\otimes_{Q}k\) and \(\operatorname{Tor}^{Q}(R,k)\) are quasi-isomorphic as \(\operatorname{A}_{\infty}\)-algebras over \(k\). By the uniqueness of minimal models (that is, \(\operatorname{A}_{\infty}\)-algebras over a field having zero differential; see [10]) there is an isomorphism of \(\operatorname{A}_{\infty}\)-algebras \[\psi\colon(\operatorname{Tor}^{Q}(R,k),0,\mu,0,\dots)\xrightarrow{\cong}(A \otimes_{Q}k,0,m^{\prime}_{2}\otimes k,m^{\prime}_{3}\otimes k,\dots)\,,\] where \(\mu\) is the ordinary product on \(\operatorname{Tor}^{Q}(R,k)\). We may make the identification \(A\otimes_{Q}k=\operatorname{Tor}^{Q}(R,k)\) and choose lifts \(\Psi_{i}\colon A^{\otimes i}\to A\) with \(\Psi_{i}\otimes_{Q}k=\psi_{i}\). 
By Nakayama's lemma \(\Psi_{1}\) is an isomorphism and we can inductively define operations \(m_{n}\colon A^{\otimes n}\to A\) by the formula \(m_{n}:=\) \[\Psi_{1}^{-1}\Big{(}-\sum_{\begin{subarray}{c}r+s+t=n\\ r,t\geqslant 0,s\geqslant 1\end{subarray}}(-1)^{r+st}\Psi_{r+1+t}\left( \operatorname{id}^{\otimes r}\otimes m_{s}\otimes\operatorname{id}^{\otimes t }\right)+\sum_{\begin{subarray}{c}p,\boldsymbol{\alpha}\in\mathbb{N}^{p}\\ |\boldsymbol{\alpha}|=n\end{subarray}}(-1)^{v(\boldsymbol{\alpha})}m^{\prime}_ {p}\boldsymbol{\Psi}^{\otimes\boldsymbol{\alpha}}\Big{)}\,.\] By construction the map \(\Psi\colon(A,\{m^{\prime}_{n}\})\to(A,\{m_{n}\})\) now satisfies the Stasheff morphism identities (4.1.3), and it follows that \((A,\{m_{n}\})\) is an \(\operatorname{A}_{\infty}\)-algebra, isomorphic to \((A,\{m^{\prime}_{n}\})\). Finally, from \(\Psi\otimes_{Q}k=\psi\) it follows that \(m_{2}\otimes_{Q}k=\mu\) and \(m_{n}\otimes_{Q}k=0\) for \(n\geqslant 3\), as stated in the proposition. The following technical lemma will be used later to help generate examples, by showing that certain \(\operatorname{A}_{\infty}\)-operations are minimal. **Lemma 5.5**.: _Let \(\varphi\colon A\to T\) be a map of connective split unital \(\operatorname{A}_{\infty}\)-algebras, where \(T\) is a trivial algebra. If for some \(N\) the map \((\varphi_{1})_{<N}\colon A_{<N}\to T_{<N}\) is injective, _then the \(\mathrm{A}_{\infty}\)-structure of \(A\) vanishes in degrees less than \(N\), in the sense that \((\bar{m}_{n}(\bar{A}^{\otimes n}))_{i}=0\) for all \(n\geqslant 1\) and \(i<N\)._ Proof.: We prove this by induction on \(n\). It is clear for \(n=1\) since \(\varphi_{1}\) is a chain map. For \(n\geqslant 2\), since \(\bar{m}_{s}^{T}=0\) for all \(s\), we can rearrange the Stasheff morphism identities (4.1.3): \[\bar{\varphi}_{1}\bar{m}_{n}=-\sum_{\begin{subarray}{c}r+s+t=n\\ r,t\geqslant 0,s\geqslant 1\end{subarray}}(-1)^{r+st}\bar{\varphi}_{r+1+t} \left(\mathrm{id}^{\otimes r}\otimes\bar{m}_{s}\otimes\mathrm{id}^{\otimes t} \right)\,.\] We can assume by induction that \((\bar{m}_{s}(\bar{A}^{\otimes s}))_{<N}=0\) for \(s<n\). Since each \(\varphi_{r}\) increases degree by \(r-1\), this implies that the the right-hand side above is zero in degrees \(i<N\). Since \((\bar{\varphi}_{1})_{<N}\) is injective, it follows that \(\bar{m}_{n}(\bar{A}^{\otimes n}))_{<N}=0\). ### Cyclic \(\mathrm{A}_{\infty}\)-algebras For Gorenstein algebras the minimal resolution satisfies a Poincare duality property that allows us, in favorable situations, to construct \(\mathrm{A}_{\infty}\)-resolutions with additional duality properties. A _cyclic \(\mathrm{A}_{\infty}\)-algebra_ of degree \(d\) over \(Q\) is a complex \(A\) of finitely generated free \(Q\)-modules with a perfect, \(Q\)-bilinear pairing \[\langle-,-\rangle\colon A\otimes A\to\Sigma^{d}Q\,,\] and an \(\mathrm{A}_{\infty}\)-structure \(\{m_{n}\}\) on \(A\) such that for each \(n\) \[\langle m_{n}(a_{1},\ldots,a_{n}),a_{n+1}\rangle=(-1)^{n+|a_{1}|(|a_{2}|+ \cdots+|a_{n+1}|)}\langle m_{n}(a_{2},\ldots,a_{n+1}),a_{1}\rangle\,;\] see [10]. 
There is for each \(n\) an isomorphism of complexes \[\operatorname{cyc}\colon\operatorname{Hom}(A^{\otimes n},A)\xrightarrow{ \cong}\operatorname{Hom}(A^{\otimes(n+1)},\Sigma^{d}Q)\,,\quad\operatorname{ cyc}(f)=\langle f(-),-\rangle\,.\] We give \(A^{\otimes(n+1)}\) the action of the cyclic group \(C_{n+1}=\langle c\rangle\) with generator acting by \(c\cdot(a_{1}\otimes\cdots\otimes a_{n+1})=(-1)^{|a_{1}|(|a_{2}|+\cdots+|a_{n+ 1}|)}(a_{2}\otimes\cdots\otimes a_{n+1}\otimes a_{1})\). From this perspective, an \(\mathrm{A}_{\infty}\)-structure \(\{m_{n}\}\) is cyclic if and only if \[\operatorname{cyc}(m_{n})\cdot c=(-1)^{n}\operatorname{cyc}(m_{n})\quad \text{for all }n\,.\] Let \(\varphi\colon Q\to R\) be a surjective local Gorenstein homomorphism of projective dimension \(d\), and let \(A\) be the minimal resolution of \(R\) over \(Q\). Let \(\mu\colon A^{\otimes 2}\to A\) be a chain map lifting the product on \(R\); we can assume that \(\mu\) is unital and graded-commutative by [1, 3.4.3]. The Gorenstein condition (3.11.1) guarantees that \(A\otimes_{Q}k=\operatorname{Tor}^{Q}(R,k)\) is a Poincare duality algebra with the product induced from \(\mu\). It follows from Nakayama's lemma that \(A_{d}\cong Q\) and we obtain a perfect pairing \[\langle-,-\rangle\colon A\otimes A\xrightarrow{\mu}A\twoheadrightarrow\Sigma ^{d}A_{d}=\Sigma^{d}Q\,. \tag{5.6.1}\] **Theorem 5.7**.: _Let \(Q\to R\) be a surjective local Gorenstein homomorphism of odd projective dimension \(d\). Assume that \(Q\) contains a field of characteristic zero. The minimal resolution \(A\) of \(R\) over \(Q\) admits the structure of a split unital, cyclic \(\mathrm{A}_{\infty}\)-algebra of degree \(d\), making the map \(A\to R\) a strict \(\mathrm{A}_{\infty}\)-algebra quasi-isomorphism._ We first need a lemma about projective resolutions. **Lemma 5.8**.: _Let \(M\) be a finitely generated \(Q\)-module of projective dimension \(d>0\), with a projective resolution \(A\to M\), and set \(V=A_{<d}/A_{0}\). Then for any \(n\) we have \(\mathrm{H}_{i}(V^{\otimes n+1})=0\) whenever \(i>n(d-1)+1\) and \(i\neq(n+1)(d-1)\)._ Proof.: We show this by inducing on \(n\). Since \(A\) is a projective resolution of \(R\), the homology of \(V\) is concentrated in degrees \(1\) and \(d-1\), and there is an exact triangle \[\Sigma^{d-1}Q^{s}\longrightarrow V\longrightarrow\Sigma N\,,\] of complexes of \(Q\)-modules, where \(N=\ker(A_{0}\to M)\) and \(A_{d}=Q^{\oplus s}\). This justifies the case \(n=0\), and for each \(n\geqslant 1\) yields another exact triangle \[\Sigma^{d-1}(V^{\otimes n})^{\oplus s}\longrightarrow V^{\otimes n+1} \longrightarrow\Sigma V^{\otimes n}\otimes N\,.\] Clearly \(\operatorname{H}_{i}(\Sigma V^{\otimes n}\otimes N)=0\) for \(i>n(d-1)+1\), so by the long exact sequence in homology the map \(\operatorname{H}_{i}(\Sigma^{d-1}V^{\otimes(n-1)})^{\oplus s}\to\operatorname {H}_{i}(V^{\otimes n})\) is surjective for \(i>n(d-1)+1\). By the induction hypothesis \(\operatorname{H}_{i}(\Sigma^{d-1}V^{\otimes(n-1)})=0\) if \(i\neq n(d-1)\) and \(i>(n-1)(d-1)+1\). From this we conclude that the lemma holds for \(n\). Proof of Theorem 5.7.: Recall from (5.6.1) that the pairing on \(A\) was defined from a unital and graded-commutative product \(\mu\colon A^{\otimes 2}\to A\). This restricts to a perfect pairing on \(V=A_{<d}/A_{0}\), and we start by constructing operations \(m_{n}^{V}\colon V^{\otimes n}\to V\). 
If \(|a|+|b|=d+1\) then \(\mu(a\otimes b)=0\) in \(A\), so \[\mu(\partial(a)\otimes b)+(-1)^{|a|}\mu(a\otimes\partial(b))=\partial(\mu(a \otimes b))=0\,.\] Using commutativity of \(\mu\), this is equivalent to the cyclic identity \[\langle m_{1}^{V}(a),b\rangle=(-1)^{1+|a||b|}\langle m_{1}^{V}(b),a\rangle\,,\] where \(m_{1}^{V}:=\partial\). Next, we truncate \(\mu\) to obtain \(\mu^{V}\colon V^{\otimes 2}\to V\), and we define \(m_{2}^{V}\) by symmetrizing \(\mu^{V}\) with respect to the \(C_{3}\)-action: \[\operatorname{cyc}(m_{2}^{V}):=\operatorname{cyc}(\mu^{V})\cdot\frac{1}{3}(1 +c+c^{2})\,.\] The obtained \(m_{2}^{V}\) satisfies the required cyclic property by construction. However, the Stasheff identity (4.1.1) does not hold for \(n=2\), and instead \[\partial(m_{2}^{V}(a,b))-m_{2}^{V}(\partial(a),b)-(-1)^{|a|}m_{2}^{V}(a, \partial(b))=\langle a,b\rangle\partial(\omega)\,, \tag{5.8.1}\] where \(\omega\in A_{d}\) is the generator with \(\langle\omega,1\rangle=1\). Nonetheless, since \(\langle-,-\rangle\) is a chain map, the same computation as in (4.1.2) shows that the obstruction \(\operatorname{obs}_{3}^{V}\) is a chain map, that is, a cycle in \(\operatorname{Hom}(V^{\otimes 3},V)\). We proceed to construct \(m_{n}^{V}\) for \(n\geqslant 3\) by induction, satisfying the Stasheff identities (4.1.1) for \(n\geqslant 3\), and all satisfying \(\operatorname{cyc}(m_{n}^{V})\cdot c=(-1)^{n}\operatorname{cyc}(m_{n}^{V})\). The argument is similar to the proof of Proposition 5.1. If \(m_{i}^{V}\) have been constructed for \(i<n\) with required cyclic symmetry, a computation shows that the obstruction \(\operatorname{obs}_{n}^{V}\) from (4.1.2) is cyclic as well: \[\operatorname{cyc}(\operatorname{obs}_{n}^{V})=(-1)^{n}\operatorname{cyc}( \operatorname{obs}_{n}^{V})\cdot c\,.\] Since \(\operatorname{Hom}(V^{\otimes n},V)\cong\Sigma^{-nd}V^{\otimes(n+1)}\) we can use Lemma 5.8 with \(M=R\) to conclude that \[\operatorname{H}_{i}\bigl{(}\operatorname{Hom}(V^{\otimes n},V)\bigr{)}=0\ \text{ for }i>1-n\text{ and }i\neq d-n-1\,.\] Since \(d\) is odd, it is impossible to have \(|\operatorname{obs}_{n}^{V}|=n-3=d-n-1\), hence the complex \(\operatorname{Hom}(V^{\otimes n},V)\) is acyclic in degree \(n-3\), and the class \([\operatorname{obs}_{n}^{V}]\) vanishes. This shows that there is an operation \(\tilde{m}_{n}^{V}\) in \(\operatorname{Hom}(V^{\otimes n},V)_{n-2}\) such that \(\partial(\tilde{m}_{n}^{V})=\operatorname{obs}_{n}^{V}\), and we symmetrize this to define \(m_{n}\): \[\operatorname{cyc}(m_{n}^{V}):=\operatorname{cyc}(\tilde{m}_{n}^{V})\cdot \sum_{i=0}^{n}\tfrac{(-1)^{in}c^{i}}{n+1}\,.\] By construction \(m_{n}^{V}\) has the required cyclic symmetry. We note that \[\partial(\operatorname{cyc}(m_{n}^{V}))=\operatorname{cyc}(\operatorname{ obs}_{n}^{V})\cdot\sum_{i=0}^{n}\tfrac{(-1)^{in}c^{i}}{n+1}=\operatorname{cyc}( \operatorname{obs}_{n}^{V})\,.\] Therefore \(\partial(m_{n}^{V})=\operatorname{obs}_{n}^{V}\) and the operations \(\{m_{n}^{V}\}\) satisfy the \(n\)th Stasheff identity. This concludes the induction. 
To finish the proof we define the following operations on \(A\): \[m_{2}(a_{1},a_{2}):=\begin{cases}m_{2}^{V}(a_{1},a_{2})&\text{if $|a_{1}|,|a_{2}|>0$ and $|a_{1}|+|a_{2}|<d$},\\ \langle a_{1},a_{2}\rangle\omega&\text{if $|a_{1}|+|a_{2}|=d$},\\ a_{1}a_{2}&\text{if $|a_{1}|=0$ or $|a_{2}|=0$},\end{cases}\] and for \(n\geqslant 3\) \[m_{n}(a_{1},\dots,a_{n}):=\begin{cases}m_{n}^{V}(a_{1},\dots,a_{n})&\text{if all $|a_{i}|>0$ and $|a_{1}|+\dots+|a_{2}|<d$},\\ 0&\text{if $|a_{1}|+\dots+|a_{2}|=d$ or any $|a_{i}|=0$}.\end{cases}\] The \(n=2\) Stasheff identity for \(A\) is equivalent to the identity (5.8.1) above. To verify the \(n\)th Stasheff identity, with \(n\geqslant 3\), we need to divide into cases depending on the inputs: when any of the inputs have degree zero; when the output has degree less than \(d\); and when the output has degree \(d\). The first two of these cases follow easily from the Stasheff identities for \(\{m_{n}^{V}\}\). To check the third case, we suppose that \(|a_{1}|+\dots+|a_{n}|+n-3=d\), and we compute \[\sum_{r+s+t=n}(-1)^{r+st}m_{r+1+t}\left(\operatorname{id}^{ \otimes r}\otimes m_{s}\otimes\operatorname{id}^{\otimes t}\right)(a_{1},\dots,a_{n})=\] \[(-1)^{|a_{1}|(n-1)+1}\langle a_{1},m_{n-1}^{V}(a_{2},\dots,a_{n}) \rangle+(-1)^{n-1}\langle m_{n-1}^{V}(a_{1},\dots,a_{n-1}),a_{n}\rangle\,;\] and this vanishes by the cyclic symmetry condition for \(m_{n-1}^{V}\). It follows that \(A\) is an \(\operatorname{A_{\infty}}\)-algebra with the operations \(\{m_{n}\}\). Finally, the cyclic symmetry condition on \(\{m_{n}^{V}\}\) implies \(A\) is a cyclic \(\operatorname{A_{\infty}}\)-algebra. _Remark 5.9_.: The construction in the proof yields a bijection between unital cyclic \(\operatorname{A_{\infty}}\)-algebra structures on \(A\) and nonunital cyclic \(\operatorname{A_{\infty}}\)-algebra structures on \(V\), but with a modified version of the second Stasheff identity in the latter case. The cyclic condition is necessary to make this correspondence work. _Remark 5.10_.: We suspect that the restriction to odd \(d\) is not necessary in Theorem 5.7. More cases can be established by analyzing the proof; for example the case \(d\equiv 2\) modulo \(4\). ## 6. Twisted tensor products Twisted tensor products are an important tool in homological algebra, especially in the construction of resolutions. In this section we develop their theory over a commutative ring, producing universal resolutions that will be applied to Koszul homomorphisms in later sections. However, our method of construction is new even in when the base ring \(Q\) is a field. Similar results have been obtained using different methods in unpublished work of Burke [Bur]. We do not explicitly use the language of twisting cochains, but these objects are present implicitly; and the reader may consult [LV12, Section 2.1] for more information on twisted tensor products and twisting cochains. Let \(C\) be a connected curved differential graded (dg) coalgebra over \(Q\) that is free as a graded module. For a right dg module \(M\) and a left dg module \(N\) over \(\Omega(C)\), we construct a complex \(M\otimes^{\tau}C\otimes^{\tau}N\), with a "twisted" differential; we call this complex a _twisted tensor product_. First, we define a dg bimodule \(\Omega(C)\otimes^{\tau}C\otimes^{\tau}\Omega(C)\) over \(\Omega(C)\). 
_Construction 6.1_.: Ignoring differentials for now, there is a well-known exact sequence of graded \(\Omega(C)\)-bimodules \[0\to\Omega(C)\otimes\Sigma^{-1}\bar{C}\otimes\Omega(C)\xrightarrow{\iota} \Omega(C)\otimes Q\otimes\Omega(C)\xrightarrow{\mu}\Omega(C)\to 0\,, \tag{6.1.1}\] where \(\iota(x\otimes\langle c\rangle\otimes y)=\mu(x,\langle c\rangle)\otimes 1 \otimes y-x\otimes 1\otimes\mu(\langle c\rangle,y)\) for \(c\in\bar{C}\) and \(x,y\in\Omega(C)\), and \(\mu\) the multiplication map. We give \(\Omega(C)\otimes\Sigma^{-1}\bar{C}\otimes\Omega(C)\) the unique differential \(\partial^{\iota}\) making \(\iota\) a chain map, and we set \[\Omega(C)\otimes^{\tau}C\otimes^{\tau}\Omega(C):=\operatorname{cone}(\Omega(C )\otimes\Sigma^{-1}\bar{C}\otimes\Omega(C)\xrightarrow{\iota}\Omega(C)\otimes Q \otimes\Omega(C))\,.\] We write \(\partial^{\tau}\) for the differential on \(\Omega(C)\otimes^{\tau}C\otimes^{\tau}\Omega(C)\), this is a dg \(\Omega(C)\)-bimodule whose underlying graded bimodule is \(\Omega(C)\otimes C\otimes\Omega(C)\), using the evident multiplication by \(\Omega(C)\) on either side. For a right dg \(\Omega(C)\)-module \(F\) and a left dg \(\Omega(C)\)-module \(G\), we define \[F\otimes^{\tau}C\otimes^{\tau}G:=F\otimes_{\Omega(C)}\Omega(C)\otimes^{\tau}C \otimes^{\tau}\Omega(C)\otimes_{\Omega(C)}G\,.\] Explicitly its underlying graded module is \(F\otimes C\otimes G\) and the differential is \[\partial^{\tau} =\partial^{F}\otimes\operatorname{id}_{C}\otimes\operatorname{id }_{G}+\operatorname{id}_{F}\otimes\partial^{C}\otimes\operatorname{id}_{G}+ \operatorname{id}_{F}\otimes\operatorname{id}_{C}\otimes\partial^{G}\] \[\quad+\left(\mu(\operatorname{id}_{F}\otimes\Sigma^{-1}p)\otimes \operatorname{id}_{G}\otimes\operatorname{id}_{G}-\operatorname{id}_{F} \otimes\operatorname{id}_{C}\otimes\mu(\Sigma^{-1}p\otimes\operatorname{id} _{G})\right)(\operatorname{id}_{F}\otimes\Delta\otimes\operatorname{id}_{G})\] where \(p\colon C\to\bar{C}\) is the natural projection and we use \(\mu\) for the right action of \(\Omega(C)\) on \(F\) and the left action of \(\Omega(C)\) on \(G\). **6.2**.: Given a dg algebra \(A\), recall that a dg \(A\)-module \(F\) is _semifree_ if it admits an exhaustive filtration \[0=F(-1)\subseteq F(0)\subseteq F(1)\subseteq\ldots\subseteq F\] where each subquotient \(F(i)/F(i-1)\) is a sum of shifts of \(A\). As a matter of terminology, a semifree dg \(A\)-bimodule is a semifree dg module over \(A\otimes A^{\operatorname{op}}\). Every dg \(A\)-module \(M\) admits a semifree resolution in the sense that there exists a surjective quasi-isomorphism \(F\xrightarrow{\alpha}M\), with \(F\) a semifree dg \(A\)-module. Such resolutions are unique up to homotopy; see [FHT01, Chapter 6] for this fact, as well as other details regarding semifree dg modules. **Lemma 6.3**.: _The map \(\Omega(C)\otimes^{\tau}C\otimes^{\tau}\Omega(C)\longrightarrow\Omega(C)\) induced from (6.1.1) is a semifree resolution of \(\Omega(C)\) as a dg \(\Omega(C)\)-bimodule._ Proof.: By construction, (6.1.1) is a short exact sequence of dg \(\Omega(C)\)-bimodules, and so it induces an exact triangle in the derived category of dg \(\Omega(C)\)-bimodules. We obtain the quasi-isomorphism by comparing this triangle to the triangle associated to the cone construction for \(\Omega(C)\otimes^{\tau}C\otimes^{\tau}\Omega(C)\). The dg module \(\Omega(C)\otimes^{\tau}C\otimes^{\tau}\Omega(C)\) is semifree as a dg \(\Omega(C)\)-bimodule since \(C\) is free as a module over \(Q\) and non-negatively graded. 
**6.4**.: Let \(\varphi\colon Q\to R\) be a finite local homomorphism with \(A\to R\) a free resolution of \(R\) over \(Q\). Fix an \(R\)-complex \(M\) and a semifree resolution \(\gamma\colon G\to M\) over \(Q\). By Proposition 5.1, there exists a split unital \(\mathrm{A}_{\infty}\)-algebra structure \(\{m_{n}\}\) on \(A\) and a strictly unital \(\mathrm{A}_{\infty}\)-module structure \(\{m_{n}^{G}\}\) over \(A\) on \(G\). Then by 4.11, this induces a dg module structure over \(\Omega(\mathsf{B}(A))\) on \(G\). Suppose further that \(C\) is a connected curved dg coalgebra with counit \(\varepsilon\colon C\to Q\), equipped with a weak equivalence of connected curved dg coalgebras \(C\to\mathsf{B}(A)\); cf. 4.9. Then \(R\) and \(G\) each have an induced dg module structure over \(\Omega(C)\). **Theorem 6.5**.: _In the setting of 6.4, the map_ \[R\otimes^{\tau}C\otimes^{\tau}G\longrightarrow M\quad\text{given by}\quad r \otimes c\otimes g\mapsto r\varepsilon(c)\gamma(g)\] _is a semifree resolution of \(M\) over \(R\)._ Proof.: The map \(G\to M\) is a quasi-isomorphism of \(\mathrm{A}_{\infty}\)-modules over \(A\), and by 4.11, it is a quasi-isomorphism of dg modules over \(\Omega(\mathsf{B}(A))\), and thus over \(\Omega(C)\). From Lemma 6.3, we obtain the quasi-isomorphism of (left) dg \(\Omega(C)\)-modules \[\beta\colon\Omega(C)\otimes^{\tau}C\otimes^{\tau}G=\Omega(C)\otimes^{\tau}C \otimes^{\tau}\Omega(C)\otimes_{\Omega(C)}G\xrightarrow{\simeq}\Omega(C) \otimes_{\Omega(C)}G\cong G\,.\] Since \(G\) is semifree over \(Q\), it follows that \(\Omega(C)\otimes^{\tau}C\otimes^{\tau}G\) is semifree as a left dg module over \(\Omega(C)\). The claimed quasi-isomorphism fits into the commutative diagram below where \(\alpha\) is the map induced by the quasi-isomorphism of dg algebras \[\Omega(C)\xrightarrow{\simeq}\Omega(\mathsf{B}(A))\xrightarrow{\simeq}R\,.\] Since the latter is a quasi-isomorphism and \(\Omega(C)\otimes^{\tau}C\otimes^{\tau}G\) is a semifree resolution of \(M\) over \(\Omega(C)\), it follows that \(\alpha\) is a quasi-isomorphism. We have already justified that \(\beta\) and \(\gamma\) are quasi-isomorphisms, accounting for the downward arrow on the right. Therefore the horizontal map is a quasi-isomorphism, as claimed. ## 7. \(\mathrm{A}_{\infty}\)-algebra presentations for Koszul homomorphisms We now have the machinery to show that Koszul homomorphisms admit presentations analogous to those of classical Koszul \(k\)-algebras. The next result lifts these classical quadratic presentations to local algebra, and explains how one may think of Koszul homomorphisms as \(\mathrm{A}_{\infty}\)-deformations of Koszul algebras over fields. **Theorem 7.1**.: _A finite local homomorphism \(\varphi\colon Q\to R\) is Koszul if and only if there is_ 1. _a non-negatively graded, degreewise finite rank free_ \(Q\)_-module_ \(V\)_, and a direct summand_ \(W\subseteq V\otimes V\)_,_ 2. _an_ \(\mathrm{A}_{\infty}\)_-structure_ \(\{m_{n}\}\) _on the_ \(Q\)_-module_ \(\mathrm{T}(V)/(W)\) _with grading induced from the grading of_ \(V\)_, and_ 3. \(a\) \(Q\)_-linear map_ \(V_{0}\to R\)_,_ _such that_ 1. _the_ \(k\)_-algebra_ \(\mathrm{T}^{a}(V\otimes k)/(W\otimes k)\) _is Koszul with respect to the tensor algebra weight grading,_ 2. 
_\(\{m_{n}\}\) agrees with the algebra structure on \(\mathrm{T}^{a}(V)/(W)\) modulo \(\mathfrak{m}_{Q}\); that is,_
\[m_{2}\otimes k=\mu\otimes k\quad\text{and}\quad m_{n}\otimes k=0\ \ \text{for}\ \ n\neq 2\,,\]
_where \(\mu\) is the usual product on the quotient of a tensor algebra, and_

(iii) _with the structure \(\{m_{n}\}\), the induced map \(\mathrm{T}(V)/(W)\to R\) is a strict \(\mathrm{A}_{\infty}\)-algebra quasi-isomorphism._

_Moreover, \(V\) can be taken to be finite rank whenever \(R\) has finite projective dimension over \(Q\)._

Proof.: If the stated conditions hold then \(R\otimes_{Q}^{\mathsf{L}}k\) is formal by Proposition 5.4, and \(\operatorname{Tor}^{Q}(R,k)\cong\mathrm{T}^{a}(V\otimes k)/(W\otimes k)\) is Koszul; therefore \(\varphi\) is Koszul by definition.

Assume, conversely, that \(\varphi\) is Koszul. Since \(\operatorname{Tor}^{Q}(R,k)\) is Koszul, it admits a compatible weight grading making it quadratic. That is, we have an isomorphism
\[\operatorname{Tor}^{Q}_{i}(R,k)_{(w)}\cong\big(\mathrm{T}^{a}_{(w)}(\bar{V})/(\bar{W})\big)_{i}\]
identifying the product on \(\operatorname{Tor}\) with the product on the quotient of the tensor algebra, where \(\bar{V}=\operatorname{Tor}^{Q}(R,k)_{(1)}\) and \(\bar{W}\subseteq\bar{V}\otimes_{k}\bar{V}\) is a graded subspace. Let \(V\) be a free graded \(Q\)-module such that \(V\otimes k=\bar{V}\), and choose a direct summand \(W\subseteq V\otimes V\) such that \(W\otimes k=\bar{W}\). If we define \(A:=\mathrm{T}(V)/(W)\), then \(A\) is a free, bigraded \(Q\)-module and
\[A\otimes_{Q}k=\mathrm{T}(V)/(W)\otimes k=\mathrm{T}(\bar{V})/(\bar{W})\cong\operatorname{Tor}^{Q}(R,k)\,.\]
Therefore we may equip \(A\) with a differential making it the minimal \(Q\)-resolution of \(R\). We have constructed (1) and (3) satisfying condition (i). Since \(R\otimes_{Q}^{\mathsf{L}}k\) is formal, by Proposition 5.4 there is an \(\mathrm{A}_{\infty}\)-structure on \(A\) as required for (2), inducing the algebra structure on \(\operatorname{Tor}^{Q}(R,k)\) and satisfying the conditions (ii) and (iii). 

In Section 8 we illustrate Theorem 7.1 in detail using the examples in Section 3.

### Strictly Koszul presentations

For a local homomorphism \(\varphi\colon Q\to R\), Theorem 6.5 allowed us to obtain free resolutions over \(R\) starting from free resolutions over \(Q\). The main input to this theorem was a curved dg coalgebra \(C\) over \(Q\) with a quasi-isomorphism \(\Omega(C)\to R\). Our philosophy is that when \(\varphi\) is Koszul, \(C\) should have a simple description. In this section we introduce additional technical assumptions that will allow us to explicitly construct \(C\), mimicking a classical construction of Priddy.

**Definition 7.3**.: Let \(\varphi\colon Q\to R\) be Koszul. Recall from Theorem 7.1 that \(R\) admits an \(\mathrm{A}_{\infty}\)-algebra resolution \(A\) over \(Q\) with a quadratic presentation \(A\cong\mathrm{T}(V)/(W)\) satisfying the conditions (i)-(iii). The data \((A,V,W)\) is called a _strictly Koszul presentation_ for \(\varphi\) if, in addition to these conditions,
\[\bar{m}_{1}(V)\subseteq V\quad\text{and}\quad\bar{m}_{n}\big(\bigcap_{i+2+j=n}V^{\otimes i}\otimes W\otimes V^{\otimes j}\big)\subseteq V\quad\text{for }n\geqslant 2\,, \tag{7.3.1}\]
where we have used the inclusion \(\bigcap_{i+2+j=n}V^{\otimes i}\otimes W\otimes V^{\otimes j}\subseteq V^{\otimes n}\subseteq\bar{A}^{\otimes n}\) to apply the \(\mathrm{A}_{\infty}\)-operations \(\bar{m}_{n}\) of \(\bar{A}\). 
If the homomorphism \(\varphi\) admits a strictly Koszul presentation, then \(\varphi\) is called _strictly Koszul_. In this setting, we define
\[\mathsf{C}_{(n)}(V,W):=\bigcap_{i+2+j=n}(\Sigma V)^{\otimes i}\otimes\Sigma^{2}W\otimes(\Sigma V)^{\otimes j}\subseteq\mathsf{B}_{(n)}(A)\,.\]
By definition, the curvature term, the coderivation and the comultiplication on \(\mathsf{B}(A)\) restrict to maps on \(\mathsf{C}(V,W)\), and hence \(\mathsf{C}(V,W)\) is a counital curved dg coalgebra. We call \(\mathsf{C}(V,W)\) the _Priddy coalgebra associated to \((A,V,W)\)_. If the presentation is clear from the context, we say it is the _Priddy coalgebra of \(\varphi\)_, and we write
\[\mathsf{C}(\varphi):=\mathsf{C}(V,W)\,.\]
What we call the Priddy coalgebra first appeared, for algebras over a field, in the work of Priddy [20, Section 3], where it is called the Koszul complex. See also [1, Section 2.6] and [13, Chapter 3] (where our notation is taken from). In Section 8 we show that complete intersection and Golod homomorphisms are strictly Koszul, as are Cohen presentations of almost Golod Gorenstein local rings. In fact, we have not been able to construct a surjective Koszul homomorphism that is not strictly Koszul; we therefore ask:

_Question 7.4_.: For a surjective Koszul homomorphism \(\varphi\colon Q\to R\), is it always possible to construct a strictly Koszul presentation \((A,V,W)\) as in Definition 7.3?

We think of \(R\) and \(\mathsf{C}(\varphi)\) as being Koszul dual to each other relative to \(Q\). The next result justifies this, and in particular it says that this duality specializes, at the maximal ideal of \(Q\), to classical Koszul duality over \(k\).

**Theorem 7.5**.: _Let \(\varphi\colon Q\to R\) be a strictly Koszul homomorphism. Then \(\mathsf{C}(\varphi)\) is minimal in the sense that \(\partial(\mathsf{C}(\varphi))\subseteq\mathfrak{m}_{Q}\mathsf{C}(\varphi)\), and the inclusion \(\mathsf{C}(\varphi)\to\mathsf{B}(A)\) is a weak equivalence of connected curved dg coalgebras. Moreover, both \(T=\operatorname{Tor}^{Q}(R,k)\) and \(E=\operatorname{Hom}(\mathsf{C}(\varphi),k)\) are Koszul \(k\)-algebras with_
\[\operatorname{Ext}_{T}(k,k)\cong E\quad\text{and}\quad\operatorname{Ext}_{E}(k,k)\cong T\,.\]

Proof.: We fix a strictly Koszul presentation, so that \(\mathsf{C}(\varphi)=\mathsf{C}(V,W)\). By Theorem 7.1 the \(\mathrm{A}_{\infty}\)-structure on \(A\) satisfies \(m_{n}\otimes k=0\) for \(n\neq 2\), and \(A\otimes k=\mathrm{T}^{a}(V\otimes k)/(W\otimes k)\) is a quadratic algebra. It then follows from [13, Proposition 3.3.2] that the differential of \(\mathsf{C}(V,W)\otimes k=\mathsf{C}(V\otimes k,W\otimes k)\subseteq\mathsf{B}(A\otimes k)\) is zero. Therefore \(\mathsf{C}(V,W)\) is minimal.

Since \(T=A\otimes k\) is Koszul by assumption, \(\mathsf{C}(V\otimes k,W\otimes k)\to\mathsf{B}(A\otimes k)\) is a weak equivalence by [13, Theorem 3.4.6]. Since \(V\) is free and \(W\subseteq V\otimes V\) is a summand, \(\Omega(\mathsf{C}(V\otimes k,W\otimes k))=\Omega(\mathsf{C}(V,W))\otimes k\) and \(\Omega(\mathsf{B}(A\otimes k))=\Omega(\mathsf{B}(A))\otimes k\), so it follows from the derived version of Nakayama's lemma that \(\mathsf{C}(V,W)\to\mathsf{B}(A)\) is a weak equivalence as well.

Since \(\mathsf{C}(\varphi)\) is minimal, the coproduct induces the structure of a graded \(k\)-algebra on \(\operatorname{Hom}(\mathsf{C}(\varphi),k)\), with zero differential. 
Writing \((-)^{\vee}=\operatorname{Hom}(-,Q)\), we can directly compute from the definition of the Priddy coalgebra:
\[\mathsf{C}(\varphi)^{\vee}=\mathrm{T}^{a}(\Sigma^{-1}V^{\vee})/(\Sigma^{-2}W^{\perp}) \tag{7.5.1}\]
where \(W^{\perp}=\{f\in(V\otimes V)^{\vee}\,|\,f(W)=0\}\) (this uses that \(V\) is free and \(W\subseteq V\otimes V\) is a summand). It follows that
\[E=\mathsf{C}(\varphi)^{\vee}\otimes k=\mathrm{T}^{a}(\Sigma^{-1}V^{\vee}\otimes k)/(\Sigma^{-2}W^{\perp}\otimes k)\,.\]
Therefore \(E\) is the quadratic dual of \(T=\mathrm{T}^{a}(V\otimes k)/(W\otimes k)\), and the final statement follows from [1, 2.10]. 

### The Priddy resolution

We have arrived at one of the main applications of our techniques. The next result provides explicit "universal resolutions" for modules over the target of a strictly Koszul homomorphism. It recovers the Shamash resolution in the case of complete intersection homomorphisms, and the bar resolution of Iyengar and Burke in the case of Golod homomorphisms. We present these and other examples in the next section.

**Theorem 7.7**.: _Let \(\varphi\colon Q\to R\) be a Koszul homomorphism with a strictly Koszul presentation \((A,V,W)\). Assume that \(M\) is an \(R\)-complex with a semifree resolution \(G\to M\) over \(Q\) and that \(G\) has a strictly unital \(\mathrm{A}_{\infty}\)-module structure over \(A\). Then_
\[R\otimes^{\tau}\mathsf{C}(V,W)\otimes^{\tau}G\longrightarrow M\]
_is a semifree resolution over \(R\), with differential given by_
\[\partial^{\tau}=\sum_{\begin{subarray}{c}r+s+t=n\\ r,t\geqslant 0,\,s\geqslant 1\end{subarray}}(-1)^{\frac{s(s+1)}{2}}\operatorname{id}_{R}\otimes\big(\operatorname{id}^{\otimes r}\otimes\Sigma\bar{m}_{s}(\Sigma^{-1})^{\otimes s}\otimes\operatorname{id}^{\otimes t}\big)\otimes\operatorname{id}_{G}+\sum_{\begin{subarray}{c}i+j=n+1\\ i\geqslant 0,\,j\geqslant 1\end{subarray}}(-1)^{\frac{(j-1)(j-2)}{2}}\operatorname{id}_{R}\otimes\operatorname{id}^{\otimes i}\otimes\bar{m}_{j}^{G}\big((\Sigma^{-1})^{\otimes(j-1)}\otimes\operatorname{id}\big)\,.\]

Proof.: By Theorem 7.5 we can apply Theorem 6.5 to obtain the result. 

We call \(R\otimes^{\tau}\mathsf{C}(V,W)\otimes^{\tau}G\) the _Priddy resolution of \(M\)_ associated to the strictly Koszul presentation \((A,V,W)\). We emphasize that (as long as \(M\) and \(R\) have finite projective dimension over \(Q\)) there is _only a finite amount of data needed to construct the Priddy resolution_. Therefore, it would be especially interesting to give an effectively computable answer to Question 7.4.

**7.8**.: For any surjective map \(\varphi\colon Q\to R\) of local rings with common residue field \(k\), and any finitely generated \(R\)-module \(M\), Lescot [10] established the coefficient-wise inequality
\[\operatorname{P}_{M}^{R}(t)\cdot\operatorname{P}_{k}^{Q}(t)\preccurlyeq\operatorname{P}_{M}^{Q}(t)\cdot\operatorname{P}_{k}^{R}(t)\,.\]
If equality holds, \(M\) is said to be _inert_ by \(\varphi\).

**7.9**.: A surjective map \(\varphi\colon Q\to R\) of local rings with common residue field \(k\) is called _small_ if the induced map \(\operatorname{Tor}^{Q}(k,k)\to\operatorname{Tor}^{R}(k,k)\) is injective [11]. For example, any minimal Cohen presentation is small. When \(\varphi\) is small, there is an equality \(\operatorname{P}_{k}^{Q}(t)\cdot\operatorname{P}_{k}^{R\otimes_{Q}^{\mathsf{L}}k}(t)=\operatorname{P}_{k}^{R}(t)\) by [11, Corollary 5.3]. 
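To see the small hypothesis at work in the simplest case (our illustration, not from the original text): for the minimal Cohen presentation \(Q=k[\![x]\!]\to R=k[\![x]\!]/(x^{2})\) one has \(\operatorname{P}_{k}^{Q}(t)=1+t\) and, by the standard \(2\)-periodic resolution over a hypersurface, \(\operatorname{P}_{k}^{R}(t)=1/(1-t)\). The equality of 7.9 then forces
\[\operatorname{P}_{k}^{R\otimes_{Q}^{\mathsf{L}}k}(t)=\frac{1}{(1+t)(1-t)}=\frac{1}{1-t^{2}}=\sum_{i\geqslant 0}t^{2i}\,,\]
which matches the Priddy coalgebra \(\Gamma^{c}(\Sigma^{2}Q)\) of a hypersurface (cf. Theorem 8.12(1) below), of rank one in each even degree.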
The next result addresses the (non-)minimality of the Priddy resolution.

**Theorem 7.10**.: _Let \(\varphi\colon Q\to R\) be a surjective map of local rings with common residue field \(k\). If \(\varphi\) is small and strictly Koszul with Priddy coalgebra \(\mathsf{C}(\varphi)\), then_
\[\sum_{i}\operatorname{rank}_{Q}(\mathsf{C}(\varphi)_{i})t^{i}=\frac{\operatorname{P}_{k}^{R}(t)}{\operatorname{P}_{k}^{Q}(t)}\,.\]
_Moreover, for any finitely generated \(R\)-module \(M\) there is a coefficientwise inequality_
\[\operatorname{P}_{M}^{R}(t)\preccurlyeq\frac{\operatorname{P}_{M}^{Q}(t)\cdot\operatorname{P}_{k}^{R}(t)}{\operatorname{P}_{k}^{Q}(t)}\,.\]
_Equality holds if and only if \(M\) is inert by \(\varphi\), if and only if its Priddy resolution with respect to \(\varphi\) is a minimal resolution. In particular, the Priddy resolution of the residue field \(k\) is minimal._

Proof.: We may compute \(\operatorname{H}_{\mathsf{C}}(t):=\sum_{i}\operatorname{rank}_{Q}(\mathsf{C}(\varphi)_{i})t^{i}\) as follows:
\[\operatorname{H}_{\mathsf{C}}(t)=\sum_{i}\operatorname{rank}_{k}(\operatorname{Hom}(\mathsf{C}(\varphi)_{i},k))t^{i}=\operatorname{P}_{k}^{\operatorname{Tor}^{Q}(R,k)}(t)=\operatorname{P}_{k}^{R\otimes_{Q}^{\mathsf{L}}k}(t)=\frac{\operatorname{P}_{k}^{R}(t)}{\operatorname{P}_{k}^{Q}(t)}\,;\]
the second equality follows from the last statement in Theorem 7.5; the third uses formality of \(R\otimes_{Q}^{\mathsf{L}}k\); and the last uses the small hypothesis, explained in 7.9. Theorem 7.7 directly yields the inequality
\[\operatorname{P}_{M}^{R}(t)\preccurlyeq\operatorname{P}_{M}^{Q}(t)\cdot\operatorname{H}_{\mathsf{C}}(t)\,, \tag{7.10.1}\]
with equality if and only if the Priddy resolution is minimal. At the same time, the computation of \(\operatorname{H}_{\mathsf{C}}(t)\) above transforms (7.10.1) into the inequality stated in the theorem, and equality holds there by definition when \(M\) is inert; see 7.8. 

_Remark 7.11_.: Theorem 7.10 recovers Lescot's bound in 7.8 for the homomorphisms considered. One cannot directly recover the former from the latter using manipulations of formal power series, as the coefficients of \(\operatorname{P}_{k}^{Q}(t)^{-1}\) can be negative.

## 8. Examples of strictly Koszul presentations

In this final section we will apply the theory developed above in a series of examples, obtaining explicit resolutions for modules over various classes of rings. We also survey how these constructions relate to known resolutions in the literature. We fix a local ring \(Q\) with residue field \(k\).

**Example 8.1** (Flat Koszul homomorphisms).: We begin with a presentation for a commutative Koszul \(k\)-algebra:
\[K=k[x_{1},\dots,x_{n}]/(f_{1},\dots,f_{m})\,,\]
where \(f_{1},\dots,f_{m}\) are quadratic polynomials. To deform this presentation, we consider the \(Q\)-algebra \(Q[x_{1},\dots,x_{n}]\), weight graded by polynomial degree. We choose elements \(F_{1},\dots,F_{m}\) such that, for each \(i\),
\[F_{i}=F_{i,(2)}+F_{i,(1)}+F_{i,(0)}\quad\text{with}\quad F_{i,(w)}\in Q[x_{1},\dots,x_{n}]_{(w)}\,,\]
and such that modulo \(\mathfrak{m}_{Q}\), in \(k[x_{1},\dots,x_{n}]\), we have
\[F_{i,(2)}=f_{i}\quad\text{and}\quad F_{i,(1)}=F_{i,(0)}=0\,.\]

Figure 1. Illustration of the weight and homological gradings of the \(\mathrm{A}_{\infty}\)-resolution \(A\) for various examples.

By construction, the homomorphism
\[\varphi\colon Q\longrightarrow R:=\frac{Q[x_{1},\ldots,x_{n}]}{(F_{1},\ldots,F_{m})}\]
is flat, and its fiber \(K=R\otimes_{Q}k\) is Koszul. 
Therefore \(\varphi\) is a Koszul homomorphism, as in Example 3.1. To show that \(\varphi\) is strictly Koszul, we take \(V=Q[x_{1},\ldots,x_{n}]_{(1)}\) and
\[W=\big\langle\{x_{i}\otimes x_{j}-x_{j}\otimes x_{i}\}_{ij},\ \widetilde{F}_{1,(2)},\ldots,\widetilde{F}_{m,(2)}\big\rangle\subseteq V\otimes V\,,\]
where \(\widetilde{F}_{i,(2)}\) are preimages of \(F_{i,(2)}\) in \(V\otimes V\). Then \(\mathrm{T}^{a}(V)/(W)\otimes k\cong K\), so we may choose a compatible isomorphism of \(Q\)-modules
\[R\cong\mathrm{T}^{a}(V)/(W)\]
that restricts to the identity on \(V\). We obtain a presentation satisfying the conditions of Theorem 7.1, using \(A=R\) with only \(m_{2}\) nonzero. To show that the presentation is strict, we note that since \(R\) is commutative
\[m_{2}(x_{i}\otimes x_{j}-x_{j}\otimes x_{i})=0\,,\]
and we note that since \(F_{i}=0\) in \(R\),
\[m_{2}(\widetilde{F}_{i,(2)})+F_{i,(1)}+F_{i,(0)}=m_{2}(\widetilde{F}_{i,(2)}+F_{i,(1)}\otimes 1+F_{i,(0)}\otimes 1)=0\,.\]
This shows that
\[\bar{m}_{2}(W)\subseteq\big\langle F_{1,(1)},\ldots,F_{m,(1)}\big\rangle\subseteq V\,.\]
We can conclude that \((R,V,W)\) is a strictly Koszul presentation for \(\varphi\). In Figure 1(a) we illustrate the grading of \(A\).

**Example 8.2** (Golod homomorphisms).: This is the primary example treated by Burke in [1], at least when \(Q\) is regular. Continuing Example 3.5, let \(\varphi\colon Q\to R\) be a surjective local Golod homomorphism, with a minimal resolution \(A\) of \(R\) over \(Q\). By [1, Theorem 6.13], for every \(\mathrm{A}_{\infty}\)-algebra structure \(\{m_{n}\}\) on \(A\) one has \(m_{n}\otimes_{Q}k=0\) for \(n\neq 2\). Then
\[R\otimes_{Q}^{\mathsf{L}}k=A\otimes_{Q}k=k\ltimes U=\mathrm{T}^{a}(U)/(U\otimes U)\]
where \(U\) is the graded \(k\)-vector space \(A_{\geqslant 1}\otimes k\). In particular, lifting this isomorphism to \(Q\) we obtain \(A\cong\mathrm{T}^{a}(V)/(W)\) with \(V=A_{\geqslant 1}\) and \(W=V\otimes V\). This presentation satisfies the conditions of Theorem 7.1. Further, the data \((A,V,W)\) is a strictly Koszul presentation, and the Priddy coalgebra of \(\varphi\) is the bar construction \(\mathsf{C}(V,W)=\mathsf{B}(A)\). In this case the Priddy resolution of a module \(M\) recovers the bar resolution \(R\otimes^{\tau}\mathsf{B}(A)\otimes^{\tau}G\) from [1, Theorem 3.13]. Comparing 7.8 with (3.4.1), the resolution is minimal if and only if \(M\) is inert with respect to \(\varphi\), if and only if \(M\) is a \(\varphi\)-Golod module (i.e. the Serre bound (3.4.1) is an equality), by Theorem 7.10.

_Remark 8.3_.: There are many examples of Golod (in particular, Koszul) homomorphisms \(\varphi\colon Q\to R\) such that the minimal resolution \(A\) of \(R\) over \(Q\) does not admit a dg \(Q\)-algebra structure. In fact, this behavior seems to be typical. One way to construct them is as follows. Let \(I\) be an ideal in a local ring \(P\), and consider the map \(\varphi\colon Q\to R\) with
\[Q=P[x]_{(\mathfrak{m}_{P},x)}\quad\text{and}\quad R=Q/(xI)\,.\]
By [13, Theorem 2.4] (see also [14]), \(\varphi\) is Golod. 
If \(B\) is the minimal resolution of \(P/I\) over \(P\), then the minimal resolution \(A\) of \(R\) over \(Q\) can be described as
\[A_{i}=B_{i}\otimes_{P}Q,\quad\text{with}\quad\partial_{1}^{A}=x\cdot\partial_{1}^{B}\otimes_{P}Q\quad\text{and}\quad\partial_{\geqslant 2}^{A}=\partial_{\geqslant 2}^{B}\otimes_{P}Q\,.\]
If \(A\) were to admit a dg \(Q\)-algebra structure, then localizing would produce a dg \(P(x)\)-algebra structure on
\[A\otimes_{Q}Q_{(\mathfrak{m}_{P}[x])}\cong B\otimes_{P}P(x)\,,\]
and this is a minimal resolution of \(P(x)/I(x)\) over \(P(x)\). However, we can start with examples of \(P\) and \(I\) such that this is impossible. To be concrete, Example 3.24 (replacing \(k\) with \(k(x)\)) shows that there is no such dg algebra structure when \(P(x)=k(x)[\![a,b,c,d]\!]\) and \(I(x)=(a^{2},ab,bc,cd,d^{2})\). It follows that the homomorphism
\[\varphi\colon k[\![a,b,c,d,x]\!]\longrightarrow\frac{k[\![a,b,c,d,x]\!]}{(a^{2}x,abx,bcx,cdx,d^{2}x)}\]
is Golod and there is no dg algebra structure on the minimal resolution of the target over the source.

**Example 8.4** (Gorenstein homomorphisms of projective dimension \(3\)).: Assume that \(\varphi\colon Q\to R\) is a surjective local Gorenstein map with \(\operatorname{proj\,dim}_{Q}(R)=3\). In Example 3.12 the dg algebra resolution \(A\) of \(R\) is described, with bases \(\{e_{i}\}\), \(\{f_{i}\}\) and \(\{g\}\) for \(A_{1}\), \(A_{2}\) and \(A_{3}\) respectively. The multiplication induces a perfect pairing
\[\langle-,-\rangle\colon A\otimes A\longrightarrow A_{3}\cong\Sigma^{3}Q\]
that makes \(A\) a cyclic \(\mathrm{A}_{\infty}\)-algebra. We take \(V=A_{1}\oplus A_{2}\) and \(W=\ker(\langle-,-\rangle|_{V\otimes V})\); alternatively, \(W\) is freely spanned as a graded \(Q\)-module by
\[\{e_{i}\otimes e_{j},f_{i}\otimes f_{j},e_{i}\otimes f_{i}-f_{j}\otimes e_{j}\}_{i,j}\cup\{e_{i}\otimes f_{j},f_{i}\otimes e_{j}\}_{i\neq j}\,.\]
The short Gorenstein description of \(A\otimes_{Q}k\) lifts to an isomorphism of graded \(Q\)-modules \(A\cong\mathrm{T}(V)/(W)\) satisfying the conditions of Theorem 7.1. Using the explicit description in Example 3.12,
\[\bar{m}_{1}(V)\subseteq A_{1}\subseteq V\,,\quad\bar{m}_{2}(W)\subseteq A_{2}\subseteq V\quad\text{and}\quad\bar{m}_{n}=0\quad\text{for }n\geqslant 3\,.\]
Therefore the homomorphism \(\varphi\) is strictly Koszul, using the presentation \((A,V,W)\). The corresponding Priddy coalgebra is given by
\[\mathsf{C}_{(n)}(V,W)=\Bigl\{\sum v_{1}\otimes\cdots\otimes v_{n}\;\Bigl|\;\sum v_{1}\otimes\cdots\otimes\langle v_{i},v_{i+1}\rangle v_{i+2}\otimes\cdots\otimes v_{n}=0,\;1\leqslant i<n\Bigr\}.\]
Alternatively, \(\mathsf{C}(V,W)\) can be described explicitly using the basis of \(W\) above. We also note that the Priddy coalgebra is dual to the non-commutative hypersurface
\[\mathsf{C}(V,W)^{\vee}\cong\mathrm{T}^{a}(V^{\vee})/(\rho)\,,\]
where \(\rho=e_{1}^{\vee}\otimes f_{1}^{\vee}+f_{1}^{\vee}\otimes e_{1}^{\vee}+\cdots+e_{r}^{\vee}\otimes f_{r}^{\vee}+f_{r}^{\vee}\otimes e_{r}^{\vee}\).

_Remark 8.5_.: In [1, Example 3.10], Burke examines the specific Gorenstein ring \(R=Q/I\), of codimension three, where
\[Q=k[\![x,y,z]\!]\quad\text{and}\quad I=(x^{2},yz,xy+z^{2},xz,y^{2})\,.\]
In particular, Burke explicitly computes the \(\mathrm{A}_{\infty}\) \(A\)-module structure on \(K^{Q}\) and uses this to obtain the (non-minimal) bar resolution \(R\otimes^{\tau}\mathsf{B}(A)\otimes^{\tau}K^{Q}\) of \(k\). 
In comparison, the Priddy resolution \(R\otimes^{\tau}\mathsf{C}(V,W)\otimes^{\tau}K^{Q}\) of \(k\) from Theorem 7.7, taken with respect to \((A,V,W)\) in Example 8.4, is minimal by Theorem 7.10.

_Remark 8.6_.: A similar argument to Example 8.4 shows that if \(\varphi\colon Q\to R\) is a minimal Cohen presentation for an almost Golod Gorenstein ring, and if the minimal \(Q\)-free resolution of \(R\) admits a dg algebra structure, then \(\varphi\) is strictly Koszul. The minimal resolution is known in the case of a compressed artinian Gorenstein ring [17], and it is suspected to carry a dg algebra structure. In Theorem 8.18 we will generalize this substantially, showing that it is only necessary for the minimal resolution to admit a cyclic \(\mathrm{A}_{\infty}\)-algebra structure.

### Complete intersection homomorphisms

Next we show that surjective complete intersection homomorphisms are strictly Koszul, and that the resulting Priddy resolution recovers a well-known construction of Eisenbud [10] and Shamash [11]. The latter uses _systems of higher homotopies_ to obtain free resolutions over the target of a surjective complete intersection homomorphism, starting from data over the source. We first recall this story, which provides context for some of the results in this subsection. We then proceed to verify that such maps are strictly Koszul, and conclude by laying out the connection between \(\mathrm{A}_{\infty}\)-structures and systems of higher homotopies.

In what follows, we return to the setting of Example 3.2. Namely, \(\varphi\colon Q\to R\) is a surjective, local homomorphism where \(\ker\varphi\) is generated by a \(Q\)-regular sequence \(\boldsymbol{f}=f_{1},\ldots,f_{c}\), and \(A=\operatorname{Kos}^{Q}(\boldsymbol{f})\).

**8.8**.: Let \(M\) be an \(R\)-module and \(G\to M\) a free resolution over \(Q\). A _system of higher homotopies_, corresponding to \(\boldsymbol{f}\), on \(G\) is a collection of maps \(\sigma^{(\boldsymbol{\alpha})}\colon G\to G\), one for each \(\boldsymbol{\alpha}\in\mathbb{N}_{0}^{c}\), of degree \(2|\boldsymbol{\alpha}|-1\) such that:

1. \(\sigma^{(\boldsymbol{0})}=\partial^{G}\) where \(\boldsymbol{0}=(0,\ldots,0)\);
2. \(\sigma^{(\boldsymbol{0})}\sigma^{(\mathbf{e}_{i})}+\sigma^{(\mathbf{e}_{i})}\sigma^{(\boldsymbol{0})}=f_{i}\operatorname{id}_{G}\) where \(\mathbf{e}_{i}=(0,\ldots,0,1,0,\ldots,0)\);
3. for any \(\boldsymbol{\alpha}\in\mathbb{N}_{0}^{c}\) with \(|\boldsymbol{\alpha}|>1\) one has \(\sum_{\boldsymbol{\alpha}=\boldsymbol{\beta}+\boldsymbol{\gamma}}\sigma^{(\boldsymbol{\beta})}\sigma^{(\boldsymbol{\gamma})}=0\).

Such a system of maps always exists by [10, Section 7]. The utility of this data is summarized in the following construction: if \(D\) denotes the graded \(Q\)-linear dual of \(Q[\chi_{1},\ldots,\chi_{c}]\), where each \(\chi_{i}\) has homological degree \(-2\), then the \(R\)-complex \(R\otimes D\otimes G\) with differential \(\sum_{\boldsymbol{\alpha}\in\mathbb{N}_{0}^{c}}1\otimes\boldsymbol{\chi}^{\boldsymbol{\alpha}}\otimes\sigma^{(\boldsymbol{\alpha})}\) is a free resolution of \(M\) over \(R\); see [10, Section 7] for more details. When \(M\) is an \(R\)-complex, one can take \(\varepsilon\colon G\to M\) to be a semifree resolution over \(Q\) and impose also the following condition to obtain analogous results (a toy verification of these conditions is sketched below):

4. \(\varepsilon\sigma^{(\boldsymbol{\alpha})}=0\) for \(|\boldsymbol{\alpha}|>0\).
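The following toy computation (our illustration, using sympy; not from the original text) checks conditions (1)-(3) of 8.8 in the smallest case \(c=1\): take \(Q=k[x,y]\), \(f_{1}=xy\), and \(G\colon 0\to Q\xrightarrow{x}Q\to 0\) the resolution of \(M=Q/(x)\), viewed as a module over \(R=Q/(xy)\).

```python
# Toy check (our illustration) of 8.8(1)-(3) for c = 1 and f_1 = xy.
import sympy as sp

x, y = sp.symbols('x y')
f1 = x * y

# G : 0 -> Q --x--> Q -> 0; every component G_i -> G_j is a 1x1 matrix,
# i.e. an element of Q.  sigma^{(0)} is the differential, sigma^{(e_1)} has
# degree 1 with single component G_0 -> G_1, and sigma^{(alpha)} = 0 for
# |alpha| > 1.
d = x            # sigma^{(0)}: G_1 -> G_0, multiplication by x
s = y            # sigma^{(e_1)}: G_0 -> G_1, multiplication by y

assert sp.expand(d * s - f1) == 0    # 8.8(2) on G_0
assert sp.expand(s * d - f1) == 0    # 8.8(2) on G_1
# 8.8(3) holds with sigma^{(k e_1)} = 0 for k >= 2, since s composed with s
# lands in G_2 = 0.  The resulting resolution of M over R = Q/(xy) is the
# familiar 2-periodic complex ... --y--> R --x--> R.
```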
Such a system of maps exists and can be used to transfer semifree resolutions over \(Q\) to ones over \(R\), by an argument similar to the classical one in [10]; this will be contained in future joint work of Grifo with the first and fourth author.

_Remark 8.9_.: Let \(M\) be an \(R\)-module and \(G\to M\) a free resolution over \(Q\). In [11], Burke notes that when \(c=1\), an \(\mathrm{A}_{\infty}\)-module structure on \(G\) over \(A\) is equivalent to a system of higher homotopies on \(G\). Furthermore, the bar resolution of \(M\) in Burke's paper agrees with the Priddy resolution of \(M\), introduced above. Such maps are also Golod, so we are likewise in the setting of Example 8.2. For arbitrary codimension \(c\) the bar resolution is not minimal. However, in unpublished work, Burke constructs an acyclic twisting cochain \(D\to A\) and uses this to transfer a semifree resolution of an \(R\)-complex \(M\) over \(Q\) to one over \(R\) that agrees with the construction of Eisenbud and Shamash; cf. 8.8 (see also [1, 17]). The connection between higher homotopies and \(\mathrm{A}_{\infty}\)-structures is also implicit in Burke's work. We will give an explicit description of how these structures relate in Theorem 8.12.

**8.10**.: The narrative above is subsumed by the one in this article. Specifically, the dg algebra resolution \(A=\operatorname{Kos}^{Q}(\boldsymbol{f})\) of \(R\) over \(Q\) has a quadratic presentation \(\mathrm{T}^{a}(V)/(W)\), with
\[V=A_{1}=\Sigma Q^{c}\quad\text{and}\quad W=\langle\{a\otimes a\}_{a\in A_{1}}\cup\{a\otimes b+b\otimes a\}_{a,b\in A_{1}}\rangle\subseteq V^{\otimes 2}\,.\]
The graded module \(V\) is concentrated in degree \(1\), and the weight and homological gradings agree. It is straightforward to check that this presentation satisfies the conditions of Theorem 7.1. By construction,
\[\bar{m}_{1}(V)=0\,,\quad\bar{m}_{2}(W)=0\quad\text{and}\quad\bar{m}_{n}=0\quad\text{for }n\geqslant 3\,. \tag{8.10.1}\]
Hence \(\varphi\) is strictly Koszul. To conclude that the constructions in 8.8 and Remark 8.9 are recovered by Theorem 7.7, we end this subsection with the following analysis.

**8.11**.: Let \(\operatorname{S}_{n}\) be the symmetric group. For \(\boldsymbol{\alpha}=(\alpha_{i})\in\mathbb{N}_{0}^{c}\) with \(|\boldsymbol{\alpha}|=n\) we let
\[\operatorname{S}_{\boldsymbol{\alpha}}:=\{\tau\in\operatorname{S}_{n}\,|\,\tau(\alpha_{1}+\cdots+\alpha_{i}+1)<\cdots<\tau(\alpha_{1}+\cdots+\alpha_{i+1})\text{ for }0\leqslant i\leqslant c-1\}\]
denote the set of \(\boldsymbol{\alpha}\)_-shuffles_ [1, Chapter IV, §5.3]. The symmetric group \(\operatorname{S}_{n}\) acts on \((\Sigma V)^{\otimes n}\) by permuting simple tensors; no signs appear since \(\Sigma V\) is concentrated in degree \(2\). The _module of symmetric tensors on \(\Sigma V\)_ is the graded module \(\Gamma(\Sigma V)=\bigoplus_{n\geqslant 0}\Gamma_{(n)}(\Sigma V)\), with
\[\Gamma_{(n)}(\Sigma V):=\mathrm{T}_{(n)}(\Sigma V)^{\operatorname{S}_{n}}\,.\]
The coalgebra structure on \(\mathrm{T}^{c}(\Sigma V)\) restricts to a coalgebra structure on \(\Gamma(\Sigma V)\). We call this the _coalgebra of symmetric tensors on \(\Sigma V\)_ and denote it by \(\Gamma^{c}(\Sigma V)\). We denote the basis of \(\Sigma V=\Sigma^{2}Q^{c}\) corresponding to \(f_{1},\ldots,f_{c}\) by \(y_{1},\ldots,y_{c}\). 
A basis of \(\Gamma(\Sigma V)\) is given by
\[y^{(\boldsymbol{\alpha})}:=\sum_{\tau\in\operatorname{S}_{\boldsymbol{\alpha}}}\tau\cdot(y_{1}^{\otimes\alpha_{1}}\otimes\cdots\otimes y_{c}^{\otimes\alpha_{c}})\in\Gamma^{c}_{(|\boldsymbol{\alpha}|)}(\Sigma V)\,.\]

**Theorem 8.12**.: _Let \(\varphi\colon Q\to R\) be a surjective complete intersection homomorphism with kernel generated by a \(Q\)-regular sequence \(\boldsymbol{f}=f_{1},\ldots,f_{c}\), and let \(M\) denote an \(R\)-complex._

1. \(\varphi\) _is strictly Koszul and its Priddy coalgebra is the curved coalgebra of symmetric tensors \(\Gamma^{c}(\Sigma^{2}Q^{c})\), with curvature term \((f_{1},\ldots,f_{c})\colon\Sigma^{2}Q^{c}\to Q\)._
2. _Given a semifree resolution \(G\to M\) over \(Q\) there exists a strictly unital \(\mathrm{A}_{\infty}\)-module structure \(\{m_{n}^{G}\}\) over \(A=\operatorname{Kos}^{Q}(\boldsymbol{f})\) making \(G\to M\) a strict morphism of \(\mathrm{A}_{\infty}\)-modules over \(A\), where \(A\) acts on \(M\) via restricting scalars along the dg algebra map \(A\to R\). Then setting_ \[\sigma^{(\boldsymbol{\alpha})}:=(-1)^{\frac{|\boldsymbol{\alpha}|(|\boldsymbol{\alpha}|-1)}{2}}\,m_{|\boldsymbol{\alpha}|+1}^{G}\big((\Sigma^{-1})^{\otimes|\boldsymbol{\alpha}|}y^{(\boldsymbol{\alpha})}\otimes\operatorname{id}_{G}\big)\] _yields a system of higher homotopies on \(G\), corresponding to \(\boldsymbol{f}\), satisfying the conditions 8.8(1)-(4)._
3. _The resulting isomorphism of graded \(R\)-modules \(\psi\colon R\otimes D\otimes G\to R\otimes\mathsf{C}(V,W)\otimes G\) identifies the Eisenbud-Shamash resolution of \(M\) constructed from these higher homotopies with the Priddy resolution \(R\otimes^{\tau}\mathsf{C}(V,W)\otimes^{\tau}G\) of Theorem 7.7._

Proof.: We saw in 8.10 that \(\varphi\) is strictly Koszul. Using the same notation, we first show that \(\mathsf{C}(V,W)=\Gamma(\Sigma V)\). Indeed, for the nontrivial element \(\tau\in\operatorname{S}_{2}\) one has
\[\Sigma^{2}W=\ker\left(\Sigma V\otimes\Sigma V\xrightarrow{\mathrm{id}-\tau}\Sigma V\otimes\Sigma V\right)=\Gamma_{(2)}(\Sigma V)\,,\]
and so using the transposition \(\tau_{i}=(i\ \ i+1)\in\operatorname{S}_{n}\) we obtain
\[(\Sigma V)^{\otimes i-1}\otimes\Sigma^{2}W\otimes(\Sigma V)^{\otimes n-i-1}=\ker\left((\Sigma V)^{\otimes n}\xrightarrow{\tau_{i}-\mathrm{id}}(\Sigma V)^{\otimes n}\right).\]
Since \(\operatorname{S}_{n}\) is generated by the transpositions \(\tau_{i}\), it follows that \(\mathsf{C}(V,W)=\Gamma(\Sigma V)\). 
The coalgebra structure on \(\mathsf{C}(V,W)\) is inherited from \(\mathsf{B}(A)\), and this coincides with the coalgebra structure on \(\mathrm{T}^{c}(\Sigma V)\) because of the compatible inclusions
\[\mathsf{C}(V,W)=\Gamma^{c}(\Sigma V)\subseteq\mathrm{T}^{c}(\Sigma V)\subseteq\mathrm{T}^{c}(\Sigma\bar{A})=\mathsf{B}(A)\,.\]
The differential on \(\mathsf{C}(V,W)\) is zero by (8.10.1). It is straightforward to see that the curvature term on \(\mathsf{C}(V,W)\) is (up to a shift) the first differential of \(A\). This completes the proof of (1).

For (2), such an \(\mathrm{A}_{\infty}\)-module structure making \(G\to M\) a strict morphism exists by 5.2. Then 8.8(1) holds by definition. The fact that 8.8(2) holds follows from the second Stasheff identity from 4.10. Another computation using the Stasheff identities, the unital structure on \(G\), and the fact that \(A\) is graded-commutative shows that 8.8(3) holds. The verification of 8.8(3) for \(\boldsymbol{\alpha}=\mathbf{e}_{1}+\mathbf{e}_{2}\) is illustrative of the proof for the general case and so we sketch this case below.

Fix basis elements \(e_{i}\in A_{1}\) with \(\partial^{A}(e_{i})=f_{i}\), and note that
\[(\Sigma^{-1})^{\otimes 2}y^{(\boldsymbol{\alpha})}=e_{1}\otimes e_{2}+e_{2}\otimes e_{1}\,. \tag{8.12.1}\]
Observe that for \(\{i,j\}=\{1,2\}\) one has
\[m_{2}^{G}(e_{i}\otimes m_{2}^{G}(e_{j}\otimes\mathrm{id})-(e_{i}\cdot e_{j})\otimes\mathrm{id})=\partial^{G}m_{3}^{G}(e_{i}\otimes e_{j}\otimes\mathrm{id})+m_{3}^{G}(f_{i}\otimes e_{j}\otimes\mathrm{id}-e_{i}\otimes f_{j}\otimes\mathrm{id}+e_{i}\otimes e_{j}\otimes\partial^{G})=\partial^{G}m_{3}^{G}(e_{i}\otimes e_{j}\otimes\mathrm{id})+m_{3}^{G}(e_{i}\otimes e_{j}\otimes\mathrm{id})\partial^{G}\,,\]
where the first equality precomposes the third Stasheff identity from 4.10 with \(e_{i}\otimes e_{j}\otimes\mathrm{id}\), while the second equality uses that \(\{m_{n}^{G}\}\) is a strictly unital \(\mathrm{A}_{\infty}\)-module structure. It follows that
\[\partial^{G}m_{3}^{G}(e_{i}\otimes e_{j}\otimes\mathrm{id})-m_{2}^{G}(e_{i}\otimes m_{2}^{G}(e_{j}\otimes\mathrm{id}))+m_{2}^{G}(e_{i}\cdot e_{j}\otimes\mathrm{id})+m_{3}^{G}(e_{i}\otimes e_{j}\otimes\mathrm{id})\partial^{G}=0\,,\]
and so adding these expressions for \((i,j)=(1,2)\) and \((i,j)=(2,1)\), and recalling (8.12.1), we obtain
\[\sum_{\boldsymbol{\beta}+\boldsymbol{\gamma}=\boldsymbol{\alpha}}\sigma^{(\boldsymbol{\beta})}\sigma^{(\boldsymbol{\gamma})}+m_{2}^{G}(e_{1}\cdot e_{2}\otimes\mathrm{id})+m_{2}^{G}(e_{2}\cdot e_{1}\otimes\mathrm{id})=0\,.\]
It now remains to observe that since \(A\) is graded-commutative
\[m_{2}^{G}(e_{1}\cdot e_{2}\otimes\mathrm{id})+m_{2}^{G}(e_{2}\cdot e_{1}\otimes\mathrm{id})=m_{2}^{G}((e_{1}\cdot e_{2}+e_{2}\cdot e_{1})\otimes\mathrm{id})=0\,.\]
Thus 8.8(3) holds for \(\boldsymbol{\alpha}=\mathbf{e}_{1}+\mathbf{e}_{2}\). The condition 8.8(4) holds since \(\varepsilon\) is a strict morphism, and since \(M\) is a dg \(A\)-module where the \(e_{i}\)'s act trivially. This completes the proof of (2). It remains to show (3). 
By [1, Chapter IV, §5.11] we have a natural isomorphism of algebras
\[Q[\chi_{1},\ldots,\chi_{c}]\cong\Gamma^{c}(\Sigma^{2}Q^{c})^{\vee}=\mathsf{C}(V,W)^{\vee}\]
determined by \(\chi_{i}\mapsto y_{i}^{\vee}\); this correspondence can also be seen via (7.5.1):
\[\mathsf{C}(V,W)^{\vee}\cong\mathrm{T}^{a}(\Sigma^{-2}(Q^{c})^{\vee})/(\Sigma^{-2}W^{\perp})=\operatorname{Sym}(\Sigma^{-2}(Q^{c})^{\vee})\cong Q[\chi_{1},\dots,\chi_{c}]\]
where \(W^{\perp}=\left\{f\otimes g-g\otimes f\,\big{|}\,f,g\in\Sigma^{-1}(Q^{c})^{\vee}\right\}\), identifying \((y^{(\boldsymbol{\alpha})})^{\vee}\) with \(\boldsymbol{\chi}^{\boldsymbol{\alpha}}\) for each \(\boldsymbol{\alpha}\in\mathbb{N}_{0}^{c}\). As a consequence, dualizing the correspondence above yields an isomorphism of graded \(Q\)-modules \(D\cong\mathsf{C}(V,W)\) inducing an isomorphism of graded \(R\)-modules
\[\psi\colon R\otimes D\otimes G\stackrel{{\cong}}{{\longrightarrow}}R\otimes\mathsf{C}(V,W)\otimes G\,.\]
It remains to observe that
\[\psi\circ\sum_{|\boldsymbol{\alpha}|=n-1}\boldsymbol{\chi}^{\boldsymbol{\alpha}}\otimes\sigma^{(\boldsymbol{\alpha})}=(-1)^{\frac{(n-1)(n-2)}{2}}\;\bar{m}_{n}^{G}((\Sigma^{-1})^{\otimes(n-1)}\otimes\mathrm{id})\Big{|}_{\Gamma_{(n-1)}^{c}(\Sigma^{2}Q^{c})\otimes G}\;.\]
Therefore \(\psi\) is compatible with the differentials of its target and source, and so it is an isomorphism of \(R\)-complexes; cf. Theorem 7.7 and 8.8. 

_Remark 8.13_.: The higher homotopies \(\sigma^{(\boldsymbol{\alpha})}\) with \(|\boldsymbol{\alpha}|=n\) induce a \(\Gamma^{c}(\Sigma^{2}Q^{c})\)-comodule structure on \(\Gamma^{c}(\Sigma^{2}Q^{c})\otimes G\); in fact, conditions 8.8(1)-(4) are equivalent to this. On the other hand, an \(\mathrm{A}_{\infty}\)-module structure on \(G\) is equivalent to a \(\mathsf{B}(A)\)-comodule structure on \(\mathsf{B}(A)\otimes G\). Hence a system of higher homotopies on \(G\) captures the 'symmetric' part of an \(\mathrm{A}_{\infty}\)-module structure on \(G\) over \(A\).

_Remark 8.14_.: Moving beyond finite projective dimension, complete intersection homomorphisms fit into the well-studied class of quasi-complete intersection homomorphisms; cf. [1]. In residual characteristic zero and two, it is straightforward to check that such maps are strictly Koszul; this provides more examples of strictly Koszul homomorphisms of infinite projective dimension. In odd characteristic the presence of divided powers prevents \(\operatorname{Tor}^{Q}(R,k)\) from admitting a quadratic presentation.

### Almost Golod Gorenstein rings

To end the paper, we return to the class of almost Golod Gorenstein rings that we studied in Section 3.13. We show that these rings are strictly Cohen Koszul, i.e. every Cohen presentation is a strictly Koszul map, and we thereby obtain concrete free resolutions for all modules over such rings, using the machinery developed in Section 7.2. The next lemma is a general construction in the homological algebra of Gorenstein rings, building on work of Avramov and Levin [1].

**Lemma 8.16**.: _Let \(R\) be a zero dimensional Gorenstein ring of codimension \(d\), with a minimal Cohen presentation \(Q\to R\), and let \(A\stackrel{{\simeq}}{{\to}}R\) be the minimal \(Q\)-free resolution of \(R\). The inclusion of the socle lifts to a chain map \(K^{Q}\to A\), where \(K^{Q}\) is the Koszul complex of \(Q\)._
_The subcomplex_
\[A^{\prime}:=\operatorname{cone}(K^{Q}_{<d}\to A_{<d})\subseteq\operatorname{cone}(K^{Q}\to A)\]
_is then a minimal \(Q\)-free resolution of \(R/\operatorname{soc}(R)\)._

Proof.: By the long exact sequence of homology groups, \(\operatorname{H}_{*}(\operatorname{cone}(K^{Q}\to A))\) is isomorphic to \(R/\operatorname{soc}(R)\), concentrated in degree zero. The proof of [10, Theorem 1] shows that the map \(K^{Q}_{i}\otimes_{Q}k\to A_{i}\otimes_{Q}k\) is an isomorphism for \(i=d\) and zero for \(i<d\). The former fact implies that the inclusion \(A^{\prime}\subseteq\operatorname{cone}(K^{Q}\to A)\) is a quasi-isomorphism, and the latter implies that \(A^{\prime}\) is minimal as a complex. Altogether this shows that \(A^{\prime}\) is the minimal resolution of \(R/\operatorname{soc}(R)\). 

**Lemma 8.17**.: _Let \(R\) be an almost Golod Gorenstein ring of codimension \(d\) having a minimal Cohen presentation \(Q\to R\). Assume that the minimal \(Q\)-free resolution \(A\) of \(R\) is equipped with a cyclic \(\mathrm{A}_{\infty}\)-structure. Then \((\bar{m}_{n}(\bar{A}^{\otimes n}))_{i}\subseteq\mathfrak{m}_{Q}\bar{A}_{i}\) for all \(i<d\), and \((\bar{m}_{n}(\bar{A}^{\otimes n}))_{d}=0\) for \(n\geqslant 3\). In particular \(R\otimes_{Q}^{\mathsf{L}}k\) is formal and Koszul._

Proof.: We first address what happens in degree \(d\), and for this we use the fact that \(A\) is a cyclic \(\mathrm{A}_{\infty}\)-algebra. If \(n\geqslant 3\) and \(m_{n}(a_{1},\dots,a_{n})\) has degree \(d\), then
\[\langle m_{n}(a_{1},\dots,a_{n}),1\rangle=(-1)^{n}\langle m_{n}(1,a_{1},\dots,a_{n-1}),a_{n}\rangle=0\]
since \(A\) is strictly unital. But \(\langle-,1\rangle\) is the projection onto the (rank 1) degree \(d\) part of \(A\), so this implies \(m_{n}(a_{1},\dots,a_{n})=0\).

For the rest of the argument we need to reduce to the case that \(R\) has dimension zero. We may find a sequence \(\boldsymbol{x}\) that is part of a minimal generating set of \(\mathfrak{m}_{Q}\), and that maps to a maximal regular sequence in \(\mathfrak{m}_{R}\). All of the hypotheses, and the remaining assertions to prove, are unchanged if we replace \(Q\), \(R\) and \(A\) with \(Q/(\boldsymbol{x})\), \(R/(\boldsymbol{x}R)\) and \(A\otimes(Q/(\boldsymbol{x}))\) respectively, using Proposition 2.17 for the Koszul conclusion. Therefore we may assume that \(R\) has dimension zero.

We now use the notation and results of Lemma 8.16. Since \(A^{\prime}=\operatorname{cone}(K^{Q}_{<d}\to A_{<d})\) is the minimal \(Q\)-free resolution of \(R/\operatorname{soc}(R)\) there is a splitting
\[A^{\prime}\xleftarrow{\simeq}\operatorname{cone}(K^{Q}\to A),\]
and we define \(\varphi_{1}\) to be the composition \(A\to\operatorname{cone}(K^{Q}\to A)\to A^{\prime}\). By construction \((\varphi_{1})_{i}\colon A_{i}\to A^{\prime}_{i}\) is a split injection for \(i<d\). Since \(R/\operatorname{soc}(R)\) is Golod we may endow \(A^{\prime}\) with a strictly unital \(\mathrm{A}_{\infty}\)-structure \(\{m^{\prime}_{n}\}\) satisfying \(\bar{m}^{\prime}_{n}(\bar{A}^{\prime\otimes n})\subseteq\mathfrak{m}_{Q}\bar{A}^{\prime}\) for all \(n\geqslant 1\) by [14, Theorem 6.13]. Having done this, the chain map \(\varphi_{1}\) can be extended to a strictly unital map of \(\mathrm{A}_{\infty}\)-algebras using Proposition 5.3. 
We apply Lemma 5.5 to the morphism \(A\otimes_{Q}k\to A^{\prime}\otimes_{Q}k\) to deduce that the \(\mathrm{A}_{\infty}\)-structure of \(A\) satisfies \((\bar{m}_{n}(\bar{A}^{\otimes n}))_{i}\subseteq\mathfrak{m}_{Q}\bar{A}_{i}\) for all \(n\geqslant 1\) and all \(i<d\). Since the induced higher \(\mathrm{A}_{\infty}\)-structure on \(A\otimes_{Q}k\) vanishes, \(R\otimes_{Q}^{\mathsf{L}}k\simeq A\otimes_{Q}k\) is formal by Proposition 5.4. It also follows that \(A\otimes_{Q}k\) is a short Gorenstein ring, and in particular it is Koszul by Example 3.7. 

We are finally able to prove that almost Golod Gorenstein rings satisfying certain technical assumptions are strictly Cohen Koszul, as promised in the proof of Theorem 3.16, substantially generalizing the class of Gorenstein local rings of codimension three covered by Example 8.4.

**Theorem 8.18**.: _If \(R\) is an almost Golod Gorenstein ring of odd codimension \(d\), containing a field of characteristic zero, with a minimal Cohen presentation \(\varphi\colon Q\to R\), then \(\varphi\) is strictly Koszul. More precisely, if the minimal resolution \(A\) of \(R\) admits a cyclic \(\mathrm{A}_{\infty}\)-structure, then (regardless of \(d\) or the characteristic) \(A\cong\mathrm{T}(V)/(W)\) where_
\[V=\bigoplus_{i=1}^{d-1}A_{i}\quad\text{and}\quad W=\ker\left(\langle-,-\rangle\colon V\otimes V\to\Sigma^{d}Q\right),\]
_and \((A,V,W)\) is a strictly Koszul presentation for \(\varphi\)._

Proof.: Since \(d\) is odd and \(R\) is Gorenstein containing a field of characteristic zero, we may endow \(A\) with a cyclic \(\mathrm{A}_{\infty}\)-structure by Theorem 5.7. The pairing \(\langle-,-\rangle\) defined in Section 5.6 is nondegenerate, and this implies that \(W\) is a summand of \(V\otimes V\). We know that \(A\otimes k\cong\mathrm{T}^{a}(V\otimes k)/(W\otimes k)\) since \(A\otimes k\) is short Gorenstein. It follows from Nakayama's lemma that \(A\cong\mathrm{T}(V)/(W)\) as graded \(Q\)-modules. The assertion \((\bar{m}_{n}(\bar{A}^{\otimes n}))_{d}=0\) from Lemma 8.17 implies that \(m_{n}(V^{\otimes n})\subseteq V\) for all \(n\), and therefore the presentation \((A,V,W)\) is strict. 

Taking an almost Golod Gorenstein ring \(R\), with \(Q\) and \(A\) as in the theorem, we can describe the Priddy coalgebra explicitly:
\[\mathsf{C}_{(n)}(V,W)=\Bigl\{\sum v_{1}\otimes\cdots\otimes v_{n}\;\Bigl|\;\sum v_{1}\otimes\cdots\otimes\langle v_{i},v_{i+1}\rangle v_{i+2}\otimes\cdots\otimes v_{n}=0,\;1\leqslant i<n\Bigr\}.\]
This is also the dual of a noncommutative hypersurface, as in Example 8.4. If we let \(M\) be a bounded complex of finitely generated \(R\)-modules, then there is a finite free \(Q\)-resolution \(G\to M\), and \(G\) can be given a strictly unital \(\mathrm{A}_{\infty}\)-module structure over \(A\) by 5.2. All of this data can be constructed with finitely many computations, and it can be assembled into a resolution
\[R\otimes^{\tau}\mathsf{C}(V,W)\otimes^{\tau}G\stackrel{{\simeq}}{{\longrightarrow}}M\]
with an explicit differential given in Theorem 7.7 in terms of the \(\mathrm{A}_{\infty}\)-structures of \(A\) and \(G\). When \(M=k\) is the residue field and \(G=K^{Q}\) is the Koszul complex of \(Q\), the Priddy resolution is minimal by Theorem 7.10.
2302.08611
Computing the Characteristic Polynomial of Endomorphisms of a finite Drinfeld Module using Crystalline Cohomology
We present a new algorithm for computing the characteristic polynomial of an arbitrary endomorphism of a finite Drinfeld module using its associated crystalline cohomology. Our approach takes inspiration from Kedlaya's p-adic algorithm for computing the characteristic polynomial of the Frobenius endomorphism on a hyperelliptic curve using Monsky-Washnitzer cohomology. The method is specialized using a baby-step giant-step algorithm for the particular case of the Frobenius endomorphism, and in this case we include a complexity analysis that demonstrates asymptotic gains over previously existing approaches.
Yossef Musleh, Éric Schost
2023-02-16T22:33:12Z
http://arxiv.org/abs/2302.08611v1
Computing the Characteristic Polynomial of Endomorphisms of a finite Drinfeld Module using Crystalline Cohomology

###### Abstract.

We present a new algorithm for computing the characteristic polynomial of an arbitrary endomorphism of a finite Drinfeld module using its associated crystalline cohomology. Our approach takes inspiration from Kedlaya's \(p\)-adic algorithm for computing the characteristic polynomial of the Frobenius endomorphism on a hyperelliptic curve using Monsky-Washnitzer cohomology. The method is specialized using a baby-step giant-step algorithm for the particular case of the Frobenius endomorphism, and in this case we include a complexity analysis that demonstrates asymptotic gains over previously existing approaches.

Drinfeld module; algorithms; complexity

## 1. Introduction

Drinfeld modules were first introduced by Vladimir Drinfel'd in order to prove the Langlands conjecture for \(\operatorname{GL}_{2}\) over a global function field [11]. Since then, Drinfeld modules have attracted attention due to the well-established correspondence between elliptic curves and the rank two case. Moreover, the rank one case, often referred to as that of _Carlitz modules_, provides a function field analogue of cyclotomic extensions; the role played in class field theory over number fields by elliptic curves with complex multiplication shows strong parallels with that of Drinfeld modules of rank two in the function field setting. This has motivated efforts to translate constructions and algorithms from elliptic curves, including modular polynomials [6], isogenies [6], and endomorphism rings [13, 27]. Naturally, cryptographic applications of Drinfeld modules have also been explored [28], but isogeny-based public key cryptography built on them was long anticipated to be vulnerable [23, 36]. This question was finally put to rest by Wesolowski, who showed that isogenies between Drinfeld modules of any rank can be computed in polynomial time [38]. Drinfeld modules of rank \(r>2\) do not have such a clear parallel, although an analogy exists between abelian surfaces and so-called \(t\)-modules [1]. Owing to this discrepancy, rank two Drinfeld modules have been studied far more closely than the case of more general ranks.

The main goal of this work is to study a Drinfeld module analogue of \(p\)-adic techniques such as Kedlaya's algorithm [25] for computing the characteristic polynomial of the Frobenius endomorphism acting on an elliptic or hyperelliptic curve over a finite field. Algorithms for elliptic curves compute the action of the Frobenius on a basis of a particular subspace of the de Rham cohomology of a characteristic \(0\) lift of the curve, with coefficients in \(\mathbb{Q}_{p}\). Our approach follows a very similar outline, but turns out to be remarkably simpler to describe, resting crucially on a suitable version of crystalline cohomology for Drinfeld modules due to Gekeler and Angles [2]. More generally, the approach we present can be used to compute the characteristic polynomial of any Drinfeld module endomorphism.

## 2. Background and Main result

### Basic Preliminaries

Let \(R\) be any ring, \(r\in R\), and \(\sigma:R\to R^{\prime}\) a ring homomorphism. 
We will follow the notational convention that writes \(\sigma(r)=\sigma_{r}=r^{\sigma}\) throughout this work. If \(R\) is a polynomial ring and \(\sigma\) acts on its coefficient ring, \(r^{\sigma}\) denotes coefficient-wise application. Let \(q\) be a prime power, and let \(\mathbb{F}_{q}\) denote a finite field of order \(q\), fixed throughout. We also fix a field extension \(\mathbb{L}\) of \(\mathbb{F}_{q}\) such that \([\mathbb{L}:\mathbb{F}_{q}]=n\). Explicitly, \(\mathbb{L}\) is defined as \(\mathbb{L}=\mathbb{F}_{q}[t]/(\ell(t))\) for some degree \(n\) irreducible \(\ell(t)\in\mathbb{F}_{q}[t]\), so elements of \(\mathbb{L}\) are represented as polynomials in \(\mathbb{F}_{q}[t]\) of degree less than \(n\). We will discuss below an alternative representation, better suited for some computations.

### Drinfeld Modules

In general, Drinfeld modules can be defined over a ring \(A\) consisting of the functions of a projective curve over \(\mathbb{F}_{q}\) that are regular outside of a fixed place at infinity. For our purposes, we will restrict ourselves to the consideration of Drinfeld modules defined over the regular function ring of \(\mathbb{P}^{1}-\{\infty\}\); that is, \(A=\mathbb{F}_{q}[x]\). We fix a ring homomorphism \(\gamma:A\to\mathbb{L}\) and let \(\mathfrak{p}\in A\) be the monic irreducible generator of \(\ker\gamma\). Then \(\mathbb{F}_{\mathfrak{p}}=\mathbb{F}_{q}[x]/(\mathfrak{p})\) is isomorphic to a subfield of \(\mathbb{L}\); we let \(m=\deg(\mathfrak{p})\), so that \(m\) divides \(n\). This gives us an isomorphism \(\mathbb{L}\simeq\mathbb{F}_{q}[x,t]/(\mathfrak{p}(x),g(x,t))\), with \(g\) monic of degree \(n/m\) in \(t\). It will on occasion be convenient to switch from the representation of elements of \(\mathbb{L}\) as univariate polynomials in \(t\) to the corresponding bivariate representation in \(x,t\); in that case, for instance, \(\gamma_{x}\) is simply the residue class of \(x\) modulo \((\mathfrak{p}(x),g(x,t))\). We assume that \(\mathfrak{p}\) and \(g\) are given as part of the input.

To define Drinfeld modules, we also have to introduce the ring \(\mathbb{L}\{\tau\}\) of skew polynomials, namely
\[\mathbb{L}\{\tau\}=\{U=u_{0}+u_{1}\tau+\dots+u_{s}\tau^{s}\ \mid\ s\in\mathbb{N},u_{0},\dots,u_{s}\in\mathbb{L}\},\]
where multiplication is induced by the relation \(\tau u=u^{q}\tau\), for all \(u\) in \(\mathbb{L}\).

**Definition 1**: _A Drinfeld \(A\)-module of rank \(r\) over \((\mathbb{L},\gamma)\) is a ring homomorphism \(\phi:A\to\mathbb{L}\{\tau\}\) such that_
\[\phi_{x}=\gamma_{x}+\Delta_{1}\tau^{1}+\dots+\Delta_{r}\tau^{r}\]
_with \(\Delta_{i}\) in \(\mathbb{L}\) for all \(i\) and \(\Delta_{r}\neq 0\)._

For readers interested in the more general setting under which Drinfeld modules are typically defined, we recommend the survey by Deligne and Husemoller in [9]. A Drinfeld module is defined over the _prime field_ when \(\mathbb{L}\cong\mathbb{F}_{\mathfrak{p}}\) (that is, \(m=n\)). The prime field case tends to be algorithmically simpler, and we will often highlight the distinction with the more general case.

**Example 1**: _Let \(\mathbb{F}_{q}=\mathbb{Z}/5\mathbb{Z}\) and \(n=4\). Set \(\ell(t)=t^{4}+t^{3}+4t+2\) and \(\mathbb{L}=\mathbb{F}_{5}[t]/(\ell(t))\). Let \(\gamma_{x}=t\bmod\ell(t)\), in which case \(\mathbb{L}=\mathbb{F}_{\mathfrak{p}}\). 
A rank two Drinfeld module is given by \(\phi_{x}=\tau^{2}+\tau+t\)._

_We may instead take \(\gamma_{x}=t^{3}+t^{2}+t+3\bmod\ell(t)\), in which case \(\mathfrak{p}=x^{2}+4x+2\) and \(\mathbb{F}_{\mathfrak{p}}\cong\mathbb{F}_{25}\). The field \(\mathbb{L}\) admits the representations_
\[\mathbb{L}=\mathbb{F}_{5}[t]/(\ell(t))\simeq\mathbb{F}_{5}[x,t]/(\mathfrak{p}(x),g(x,t)),\]
_with \(g(x,t)=t^{2}+4tx+3t+x\). A rank three Drinfeld module is given by \(\phi_{x}=\tau^{3}+(t^{3}+1)\tau^{2}+tx\,\tau+t^{3}+t^{2}+t+3\)._
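To make the multiplication rule \(\tau u=u^{q}\tau\) concrete, the following plain-Python sketch implements arithmetic in \(\mathbb{L}\{\tau\}\) for \(\mathbb{L}=\mathbb{F}_{5}[t]/(\ell(t))\). It is our own illustration, not the authors' code: the modulus `ell` is the polynomial printed in Example 1, elements of \(\mathbb{L}\) are coefficient lists of length \(n\), and no attempt is made at the fast algorithms discussed later.

```python
# Minimal sketch (our illustration, no dependencies) of L{tau} arithmetic
# for L = F_5[t]/(ell(t)); elements of L are coefficient lists of length n.
q = 5
ell = [2, 4, 0, 1, 1]        # ell(t) = t^4 + t^3 + 4t + 2, as in Example 1
n = len(ell) - 1

def poly_mul_mod(a, b):
    """Multiply a, b in F_q[t] and reduce modulo the monic ell(t)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    for d in range(len(prod) - 1, n - 1, -1):   # kill degrees >= n
        c = prod[d]
        for k in range(n + 1):
            prod[d - n + k] = (prod[d - n + k] - c * ell[k]) % q
    return (prod + [0] * n)[:n]

def frob(a, i=1):
    """Apply the q-power Frobenius i times: a -> a^(q^i) in L."""
    for _ in range(i):
        b, base, e = [1] + [0] * (n - 1), a[:], q
        while e:                                 # square-and-multiply for a^q
            if e & 1:
                b = poly_mul_mod(b, base)
            base, e = poly_mul_mod(base, base), e >> 1
        a = b
    return a

def skew_mul(U, V):
    """Multiply U, V in L{tau} (U[i] is the coefficient of tau^i),
    using tau^i * v = v^(q^i) * tau^i."""
    W = [[0] * n for _ in range(len(U) + len(V) - 1)]
    for i, ui in enumerate(U):
        for j, vj in enumerate(V):
            term = poly_mul_mod(ui, frob(vj, i))
            W[i + j] = [(x + y) % q for x, y in zip(W[i + j], term)]
    return W

# Check the defining relation tau * u = u^q * tau for u = t:
one, zero, u = [1, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0]
assert skew_mul([zero, one], [u]) == skew_mul([frob(u)], [zero, one])
```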
Then, for \(u\in\operatorname{End}_{\mathbb{L}}(\phi)\), the characteristic polynomial \(\operatorname{CharPoly}_{A_{\mathbb{I}}}(u)\) of the induced endomorphism \(u\in\operatorname{End}_{A_{\mathbb{I}}}(T_{\mathbb{I}}(\phi))\) agrees with \(\operatorname{CharPoly}(u)\)[2, 17]. Example 2: Let \(\mathbb{F}_{q}\), \(\mathbb{L}\) be as in the context of example 1, and \(\gamma_{x}=t^{3}+4t^{2}+t+1\) mod \(\ell(t)\). A rank 5 Drinfeld module is given by \(\phi_{\mathbf{x}}=(4t^{3}+t^{2}+2)\tau^{5}+(t^{3}+3t^{2}+t+1)\tau^{4}+(4t+3) \tau^{3}+(3t^{2}+4t+4)\tau^{2}+(4t^{3}+4t^{2}+4t)\tau+\gamma_{X}\). The characteristic polynomial of \(\tau^{n}\) on \(\phi\) is \(Z^{5}+3Z^{4}+(x^{3}+4x^{2}+x)Z^{3}+(2x^{2}+4x+3)Z^{2}+(x^{3}+2x^{2}+4x+2)Z\)\(+2x^{4}+3x^{2}+4x+2\)_ The results in this paper are based on another interpretation of \(\operatorname{CharPoly}(u)\), as the characteristic polynomial of the endomorphism induced by \(u\) in a certain _crystalline cohomology_ module, due to Gekeler and Angles [2]. Our first main result is an algorithm for computing the characteristic polynomial of the Frobenius endomorphism. Here, \(\omega\) is a real number such that two \(s\times s\) matrices over a ring \(R\) can be multiplied in \(O(s^{\omega})\) ring operations in \(R\); the current best value is \(\omega\leq 2.372\)[12]. We will also let \(\lambda\) denote an exponent such that the characteristic polynomial of an \(s\times s\) matrix over a ring \(R\) can be computed in \(O(s^{\lambda})\) ring operations in \(R\). When \(R\) is a field, this can be done at the cost of matrix multiplication and therefore \(\lambda=\omega\)[32]. For more general rings, the best known value to date is \(\lambda\approx 2.7\)[24]. Theorem 1: _Let \(\phi\) be a rank \(r\) Drinfeld module over \((\mathbb{L},\gamma)\). There is a deterministic algorithm to compute the characteristic polynomial of the Frobenius endomorphism \(\tau^{n}\) with bit complexity_ * \((r^{\omega}n^{1.5}\log q+n\log^{2}q)^{1+\omega(1)}\) _for the prime field case (_\(m=n\) * \(((r^{\lambda}/m+r^{\alpha j}/\sqrt{m})n^{2}\log q+n\log^{2}q)^{1+\omega(1)}\) _for the general case_ \(m<n\)_._ When \(r\) and \(q\) are fixed, the runtime in the theorem is thus essentially linear in \(n^{2}/\sqrt{m}\), which is \(n^{1.5}\) in the prime field case and gets progressively closer to \(n^{2}\) as \(m\) decreases. The best prior results [30] were limited to the case \(r=2\), with runtimes essentially linear in \(n^{1.5}\) in the prime field case and \(n^{2}\) otherwise (for fixed \(q\)). This first algorithm builds upon techniques for linear recurrences originating from [10], which are so far limited to the particular case of the Frobenius endomorphism. We also obtain two algorithms that can be applied to any \(u\in\operatorname{End}_{\mathbb{L}}(\phi)\). The complexity in this case partly depends on that of multiplication and Euclidean division in \(\mathbb{L}\{\tau\}\), which we will denote \(\operatorname{\mathsf{SM}}(d,n,q)\) and which will be discussed in more detail in Section 3. 
Theorem 2: _With assumptions as in Theorem 1, there are deterministic algorithms to compute the characteristic polynomial of an endomorphism \(u\) of degree \(d\) with bit complexities_ * \(\bigl{(}\frac{\min(dr^{2},(d+r)r^{\omega-1})}{m}(d+m)n\log q+r^{\lambda}n(d+m)/m\log q+n\log^{2}q\bigr{)}^{1+o(1)}\) * \((r\operatorname{\mathsf{SM}}(d+r,n,q)+r^{\lambda}n(d+m)/m\log q+n\log^{2}q)^{1+o(1)}\)_._ Again, it is worth considering the situation with \(r\) and \(q\) fixed. In this case, the runtimes we obtain are, respectively, essentially linear in \(d(d+m)n/m\) and \(\operatorname{\mathsf{SM}}(d,n,q)\). In the next section, we review known values for \(\operatorname{\mathsf{SM}}\); for the best known value of \(\omega\), and fixed \(q\), it is \((d^{1.69}n)^{1+o(1)}\) for \(d\leq n^{0.76}\), and \((dn^{1.52})^{1+o(1)}\) otherwise. In the case \(d=\Theta(n)\), the runtimes are thus essentially linear in \(n^{3}/m\) and \(n^{2.53}\), respectively (so which is the better algorithm depends on the value of \(m\)). For \(u=\tau^{n}\), the algorithm in the previous theorem is of course superior. ## 3. Computational Preliminaries The key element in our complexity analyses is the cost of the following operations in \(\mathbb{L}\): addition/subtraction, multiplication, inverse and (iterated) Frobenius application. Some of the algorithms we use below (multiplication and Euclidean division in \(\mathbb{L}\{\tau\}\) from [7, 34]) assume that all these operations can be done using \(O^{*}(n)\) operations in \(\mathbb{F}_{q}\). For the representation of \(\mathbb{L}\) we use, this is however not known to be the case; Couveignes and Lercier proved the existence of "elliptic bases" that satisfy these requirements [8], but conversion to our representation does not appear to be obvious. This explains why in our main result, we do not count operations in \(\mathbb{F}_{q}\), but bit operations instead (our complexity model is a standard RAM); we explain below how this allows us to bypass the constraints above. Using FFT based algorithms, polynomials of degree at most \(n\) with coefficients in \(\mathbb{F}_{q}\) can be multiplied in boolean time \((n\log q)^{1+o(1)}\) [5, 20]. It follows that elementary field operations (addition, multiplication, inversion) in \(\mathbb{L}=\mathbb{F}_{q}[t]/(\ell(t))\) can be done with the same asymptotic cost. Conversions between univariate and bivariate representations for elements of \(\mathbb{L}\) take the same asymptotic runtime. Denote by \(\alpha\) the isomorphism \(\mathbb{L}=\mathbb{F}_{q}[t]/(\ell(t))\to\mathbb{F}_{q}[x,t]/(\mathfrak{p}(x),g(x,t))\); then, given \(f\) of degree less than \(n\) in \(\mathbb{F}_{q}[t]\), we can compute the image \(\alpha(f\bmod\ell(t))\) in \((n\log q)^{1+o(1)}\) bit operations; the same holds for \(\alpha^{-1}\) [22, 33]. The last important operation is the application of the \(q\)-power Frobenius in \(\mathbb{L}\). Recall that given polynomials \(f,g,h\in\mathbb{F}_{q}[x]\) of degree at most \(n\), _modular composition_ is the operation that computes \(f(g)\bmod h\). As shown in [15], for \(c\) in \(\mathbb{L}=\mathbb{F}_{q}[t]/(\ell(t))\), \(c^{q}\) can be computed in the same asymptotic time (up to logarithmic factors) as degree \(n\) modular composition, following a one-time precomputation that takes \((n\log^{2}q)^{1+o(1)}\) bit operations. This then extends to arbitrary powers (positive and negative) of the Frobenius.
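To make the preceding discussion concrete, the following minimal Python sketch computes the \(q\)-power Frobenius in \(\mathbb{L}=\mathbb{F}_{q}[t]/(\ell(t))\) as the modular composition \(c\mapsto c(\xi)\bmod\ell\), where \(\xi=t^{q}\bmod\ell\) is precomputed once. It is a correctness-level illustration only: it uses quadratic-time polynomial arithmetic and Horner evaluation rather than the Brent-Kung or Kedlaya-Umans composition algorithms that the complexity bounds above rely on, and the modulus \(\ell(t)=t^{3}+t+1\) over \(\mathbb{F}_{5}\) is a made-up example.

```python
# Minimal sketch (not the quasi-linear algorithm): Frobenius in
# L = F_q[t]/(ell(t)) as the modular composition c -> c(xi) mod ell,
# with xi = t^q mod ell precomputed once.
q = 5                      # a prime, so F_q = Z/qZ
ell = [1, 1, 0, 1]         # ell(t) = t^3 + t + 1, monic, assumed irreducible

def poly_mulmod(a, b, ell):
    """Product of a and b (coefficient lists over F_q) modulo monic ell."""
    n = len(ell) - 1
    res = [0] * max(len(a) + len(b) - 1, n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    for i in range(len(res) - 1, n - 1, -1):  # clear coefficients of degree >= n
        c = res[i]
        if c:
            for j in range(n + 1):
                res[i - n + j] = (res[i - n + j] - c * ell[j]) % q
    return res[:n]

def poly_pow(c, e, ell):
    """Naive e-th power modulo ell (used here only for setup/verification)."""
    res = [1] + [0] * (len(ell) - 2)
    for _ in range(e):
        res = poly_mulmod(res, c, ell)
    return res

def frobenius(c, xi, ell):
    """c^q mod ell as the modular composition c(xi) mod ell, by Horner."""
    res = [0] * (len(ell) - 1)
    for coeff in reversed(c):
        res = poly_mulmod(res, xi, ell)
        res[0] = (res[0] + coeff) % q
    return res

xi = poly_pow([0, 1], q, ell)   # one-time precomputation: t^q mod ell
c = [2, 3, 1]                   # c = 2 + 3t + t^2
assert frobenius(c, xi, ell) == poly_pow(c, q, ell)
```

The identity used here, \(c(t)^{q}\equiv c(t^{q})\bmod\ell(t)\), holds since the coefficients of \(c\) lie in \(\mathbb{F}_{q}\); the one-time cost in the text corresponds to computing \(\xi\).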
We should point out that modular composition techniques also underlie the algorithms for switching between the two representations of the elements in \(\mathbb{L}\) mentioned above. In [26], Kedlaya and Umans proved that modular composition in degree \(n\) can be computed in \((n\log q)^{1+o(1)}\) bit operations (see also the refinement due to van der Hoeven and Lecerf [22]), whence a similar cost for (iterated) Frobenius in \(\mathbb{L}\). Here, the fact that we work in a boolean model is crucial: Kedlaya and Umans' algorithm is not known to admit a description in terms of \(\mathbb{F}_{q}\)-operations. From this, we can directly adapt the cost analyses in [7, 34] to our boolean model. In particular, following the latter reference (which did so in an algebraic cost model), we let \(\mathsf{SM}(d,n,q)\) be a function such that * degree \(d\) multiplication and right Euclidean division in \(\mathbb{L}\{\tau\}\) can be done in \(O(\mathsf{SM}(d,n,q))\) bit operations * for \(n,q\) fixed, \(d\mapsto\mathsf{SM}(d,n,q)/d\) is non-decreasing. The latter condition is similar to the super-linearity of multiplication functions used in [14], and will allow us to streamline some cost analyses. Unfortunately, there is no simple expression for \(\mathsf{SM}(d,n,q)\): on the basis of the algorithms in [7, 34], the analysis done in [7] gives the following upper bounds: * for \(d\leq n^{(5-\omega)/2}\), we can take \(\mathsf{SM}(d,n,q)\) in \((d^{(\omega+1)/2}n\log q)^{1+o(1)}\) * else, we can take \(\mathsf{SM}(d,n,q)\) in \((dn^{4/(5-\omega)}\log q)^{1+o(1)}\) For instance, with \(d=n\), this is \((n^{(9-\omega)/(5-\omega)}\log q)^{1+o(1)}\). With \(\omega=2.37\), the cost is \((d^{1.69}n\log q)^{1+o(1)}\) for \(d\leq n^{0.76}\), and \((dn^{1.52}\log q)^{1+o(1)}\) otherwise; the exponent for \(d=n\) is \(2.53\). For completeness, we point out that these algorithms heavily rely on Frobenius applications, and as such, require spending the one-time cost \((n\log^{2}q)^{1+o(1)}\) mentioned previously. One should also keep in mind that these asymptotic cost analyses are not expected to reflect practical runtimes. To the authors' knowledge, software implementations of the Kedlaya-Umans algorithm achieving its theoretical complexity, or of matrix multiplication with exponent close to \(2.37\), do not currently exist. For practical purposes, implementations of modular composition use an algorithm due to Brent and Kung [4], with an algebraic complexity of \(O(n^{(\omega+1)/2})\) operations in \(\mathbb{F}_{q}\). Revisiting skew polynomial algorithms and their analyses on such a basis is work that remains to be done. ## 4. Prior Work The question of computing the characteristic polynomial, particularly of the Frobenius endomorphism, was studied in detail in [18] for the rank two case only. The most general approach constructs a linear system based on the degree constraints of the coefficients \(a_{i}=\sum_{j=0}^{n(r-i)/r}a_{i,j}x^{j}\). Evaluating the characteristic polynomial at the Frobenius element and equating coefficients gives a linear system based on \[\tau^{nr}+\sum_{i=0}^{r-1}\sum_{j=0}^{\frac{n(r-i)}{r}}\sum_{k=0}^{n(r-i)}a_{i,j}f_{j,k}\tau^{k+ni}=0, \tag{2}\] with \(f_{j,k}\) the coefficients of \(\phi_{x^{j}}\).
Letting \(\mathrm{MinPoly}(\tau^{n})\) denote the minimal polynomial of \(\tau^{n}\) (as an element of the division algebra \(\mathrm{End}_{\mathbb{L}}^{0}(\phi)\) over the field \(K=\mathbb{F}_{q}(x)\)), the solution of the preceding system is unique and yields the characteristic polynomial if and only if \(\mathrm{MinPoly}(\tau^{n})=\mathrm{CharPoly}(\tau^{n})\). Garai and Papikian gave an algorithm for computing the characteristic polynomial [13, §5.1] valid for the prime field case only. As with the previous approach, this relies on the explicit computation of \(\phi_{x^{j}}\), which is the dominant computational step. This can be done by \(O(n^{2})\) evaluations of the recurrence \[f_{i+1,j}=\gamma_{x}^{q^{j}}f_{i,j}+\sum_{t=1}^{r}\Delta_{t}^{q^{j-t}}f_{i,j-t}.\] Thus the bit complexity of computing all of \(\phi_{x},\phi_{x^{2}},\ldots,\phi_{x^{n}}\) is \((rn^{3}\log q)^{1+o(1)}\). Further study of algorithms for the specific case of the Frobenius endomorphism in rank \(r=2\) was done in [31] and [30]. The latter focused on the complexity of the algorithms and used the same computational model that will be used here. As we reported after Theorem 1, the best known runtime to date was quadratic in \(n\), except in the case where \(\mathrm{MinPoly}(\tau^{n})=\mathrm{CharPoly}(\tau^{n})\), or in the prime field case where a bit cost of \((n^{1.5}\log q+n\log^{2}q)^{1+o(1)}\) is possible [10]. To our knowledge, no previous analysis is available for an arbitrary endomorphism \(u\). In the context of elliptic curves, Kedlaya's algorithm [25] computes the characteristic polynomial of a matrix representation of the lift of the Frobenius action to a subspace of the Monsky-Washnitzer cohomology, up to some finite precision. Our algorithm follows the same high-level approach: we compute a matrix for the endomorphism acting on the crystalline cohomology with coefficients in a power series ring analogous to Witt vectors. The induced endomorphism turns out to be quite simple to describe in terms of skew-polynomial multiplication, which eliminates the need for a complicated lifting step. ## 5. Crystalline Cohomology In this section, we first review the construction of the crystalline cohomology of a Drinfeld module and its main properties; this can be found in [2], where the definition is credited to unpublished work of Gekeler. Then, we introduce truncated versions of these objects, which reduce the computation of characteristic polynomials of endomorphisms of a Drinfeld module to characteristic polynomial computations of matrices over truncated power series rings. ### Definition The contents of this subsection are from [2, 16]. The set of _derivations_ \(D(\phi,\mathbb{L})\) of a Drinfeld module \(\phi\) is the set of \(\mathbb{F}_{q}\)-linear maps \(\eta:A\to\mathbb{L}\{\tau\}\tau\) satisfying the relation \[\eta_{ab}=\gamma_{a}\eta_{b}+\eta_{a}\phi_{b},\quad a,b\in A.\] Let then \(y\) be a new variable. The set \(D(\phi,\mathbb{L})\) can be made into an \(\mathbb{L}[y]\)-module in the following manner.
Definition 3 ([2, Section 2]): _The set \(D(\phi,\mathbb{L})\) is an \(\mathbb{L}[y]\)-module under \((cy^{i}*\eta)_{a}=c\eta_{a}\phi_{x^{i}}\), for \(\eta\) in \(D(\phi,\mathbb{L})\), \(c\) in \(\mathbb{L}\), \(i\geq 0\) and \(a\) in \(A\)._ Let further \(I\) be the ideal of \(\mathbb{L}[y]\) generated by \(y-\gamma_{x}\); for \(k\geq 1\), we set \[W_{k}=\mathbb{L}[y]/I^{k}\] and \[W=\varprojlim W_{k}\cong\mathbb{L}[[y-\gamma_{x}]].\] Thus \(W\) comes equipped with projections \(\pi_{k}:W\to W_{k}\) obtained by truncation of a power series, written as a sum of powers of \((y-\gamma_{x})\), in degree \(k\). We have canonical ring homomorphisms \(u_{k}:A\to W_{k}\) given by \(u_{k}(x)=y\bmod I^{k}\). They lift to an inclusion \(\iota:A\to W\), simultaneously commuting with each \(\pi_{k}\), which represents elements of \(A\) via their \(I\)-adic expansion. The _crystalline cohomology_ \(H^{*}_{\text{crys}}(\phi,\mathbb{L})\) of \(\phi\) is the \(W\)-module \(W\otimes_{\mathbb{L}[y]}D(\phi,\mathbb{L})\), that is, the completion of \(D(\phi,\mathbb{L})\) at the ideal \(I=(y-\gamma_{x})\) of \(\mathbb{L}[y]\). Gekeler proved that \(D(\phi,\mathbb{L})\) is a projective, hence free, \(\mathbb{L}[y]\)-module of rank \(r\) [16], with canonical basis \(\hat{\eta}^{(i)}\) such that \(\hat{\eta}^{(i)}(x)=\tau^{i}\) for \(1\leq i\leq r\). From this, it follows that \(H^{*}_{\text{crys}}(\phi,\mathbb{L})\) is a free \(W\)-module of rank \(r\) as well, as pointed out in [2]. **Remark 1**.: _In that reference, \(A\) is not necessarily a polynomial ring, and \(\mathbb{L}[y]\) is replaced by \(A_{\mathbb{L}}:=\mathbb{L}\otimes_{\mathbb{F}_{q}}A\). In this case, \(D(\phi,\mathbb{L})\) is a projective \(A_{\mathbb{L}}\)-module of rank \(r\), the definition of the ideal \(I\) changes, but it remains maximal in \(A_{\mathbb{L}}\), so the completion \(W\) of \(A_{\mathbb{L}}\) at \(I\) is still a local ring and \(H^{*}_{\text{crys}}(\phi,\mathbb{L})\) is still free of rank \(r\) over \(W\)._ An endomorphism \(u\) of \(\phi\) induces an \(\mathbb{L}[y]\)-endomorphism \(u^{*}\) of \(D(\phi,\mathbb{L})\), defined as \((u^{*}(\eta))_{x}=\eta_{x}u\), for \(\eta\) in \(D(\phi,\mathbb{L})\); the same holds for the completion \(H^{*}_{\text{crys}}(\phi,\mathbb{L})\). Following [2], using the fact that \(H^{*}_{\text{crys}}(\phi,\mathbb{L})\) is free over \(W\), one can then define the characteristic polynomial \(\operatorname{CharPoly}_{W}(u^{*})\) in the usual manner. Recall now that \(\operatorname{CharPoly}(u)\) denotes the characteristic polynomial of \(u\), as defined in Section 2.3. The following theorem due to Angles [2, Thm. 3.2] relates this characteristic polynomial to that of the induced endomorphism on \(H^{*}_{\text{crys}}(\phi,\mathbb{L})\), where \(\iota\) below acts coefficient-wise. **Theorem 3**.: _For \(u\) in \(\operatorname{End}_{\mathbb{L}}(\phi)\), \(\operatorname{CharPoly}(u)^{\iota}=\operatorname{CharPoly}_{W}(u^{*})\)._ ### Truncated Cohomology Recall now that \(\mathfrak{p}\in A\) is the minimal polynomial of \(\gamma_{x}\in\mathbb{L}\) over \(\mathbb{F}_{q}\). For \(k\geq 1\), we are going to define an \(\mathbb{F}_{q}\)-linear homomorphism \(\chi_{k}:W_{k}\to\mathbb{F}_{q}[y]/(\mathfrak{p}(y)^{k})\) such that \(\chi_{k}\circ u_{k}\) equals the reduction map \(\theta_{k}:A\to\mathbb{F}_{q}[y]/(\mathfrak{p}(y)^{k})\) sending \(x\) to \(y\bmod\mathfrak{p}(y)^{k}\). There exists an isomorphism \[T_{k}:\mathbb{F}_{q}[x,y]/(\mathfrak{p}(x),(y-x)^{k})\to\mathbb{F}_{q}[y]/(\mathfrak{p}(y)^{k});\] see e.g. [29, Lemma 13].
On the other hand, recall that \(\mathbb{L}=\mathbb{F}_{q}[t]/(\ell(t))\) is isomorphic to \[\mathbb{F}_{q}[x,t]/(\mathfrak{p}(x),g(x,t)),\] for some \(g\) in \(\mathbb{F}_{q}[x,t]\), monic of degree \(n/m\) in \(t\); in this representation of \(\mathbb{L}\), \(\gamma_{x}\) is simply (the residue class of) \(x\). As a result, we get \[W_{k} =\mathbb{F}_{q}[t,y]/(\ell(t),(y-\gamma_{x})^{k})\] \[\simeq\mathbb{F}_{q}[x,t,y]/(\mathfrak{p}(x),g(x,t),(y-x)^{k})\] \[\simeq\mathbb{F}_{q}[y,t]/(\mathfrak{p}(y)^{k},G_{k}(y,t)), \tag{3}\] for a certain polynomial \(G_{k}\in\mathbb{F}_{q}[y,t]\), monic of degree \(n/m\) in \(t\). We can then define \(\chi_{k}:W_{k}\to\mathbb{F}_{q}[y]/(\mathfrak{p}(y)^{k})\) by \[\chi_{k}:\sum_{0\leq i<n/m}c_{i}t^{i}\mapsto c_{0},\] and we verify that it satisfies our claim. The details of how to compute this homomorphism are discussed in Section 6. For \(k\geq 1\), we further define the _precision \(k\)_ cohomology space \(H^{*}_{k}(\phi,\mathbb{L})\) as the \(W_{k}\)-module \[D(\phi,\mathbb{L})/I^{k}\,D(\phi,\mathbb{L})\simeq H^{*}_{\text{crys}}(\phi,\mathbb{L})/I^{k}\,H^{*}_{\text{crys}}(\phi,\mathbb{L}).\] It is thus free of rank \(r\), and an endomorphism \(u\) of \(\phi\) induces a \(W_{k}\)-linear endomorphism \(u^{*}_{k}\) of \(H^{*}_{k}(\phi,\mathbb{L})\). Remark 2: _In [16], Gekeler introduced de Rham cohomology of Drinfeld modules; this is the case \(k=1\) in this construction (in which case \(W_{k}=\mathbb{L}\))._ In the following claim, recall that for a polynomial \(P\) and for any map \(\chi\) acting on its coefficient ring, we let \(P^{\chi}\) denote the coefficient-wise application of \(\chi\) to \(P\). Corollary 4: _For \(u\) in \(\operatorname{End}_{\mathbb{L}}(\phi)\) and \(k\geq 1\), \(\operatorname{CharPoly}(u)^{\theta_{k}}=\operatorname{CharPoly}_{W_{k}}(u^{*}_{k})^{\chi_{k}}\)._ Proof.: Apply \(\chi_{k}\circ\pi_{k}\) coefficient-wise to the equality in Theorem 3. If \(u\) has degree \(d\) in \(\tau\), we know that all coefficients of \(\operatorname{CharPoly}(u)\) have degree at most \(d\), so they can be recovered from their reductions modulo \(\mathfrak{p}^{k}\) for \(k=\lceil\frac{d+1}{m}\rceil\in O((d+m)/m)\). In the prime field case, where \(m=n\), and for the special case \(u=\tau^{n}\), the above formula gives \(k=2\), but we can take \(k=1\) instead; this is discussed in Section 6.4. Note also that if we take \(k=d+1\), there is no need to consider the map \(\chi_{k}\): on the representation of \(W_{d+1}\) as \[W_{d+1}=\mathbb{F}_{q}[x,t,y]/(\mathfrak{p}(x),g(x,t),(y-x)^{d+1}),\] for \(f\) of degree up to \(d\), \(u_{k}(f)\) is simply the polynomial \(f(y)\), so we can recover \(f\) from \(u_{k}(f)\) for free. We will however refrain from doing so, as it causes \(k\) to increase. ## 6. Main Algorithms We will now see how the former discussion can be made more concrete, by rephrasing it in terms of skew polynomials only. The evaluation map \(\eta\mapsto\eta_{x}\) gives an additive bijection \(D(\phi,\mathbb{L})\to\mathbb{L}\{\tau\}\tau\). This allows us to transport the \(\mathbb{L}[y]\)-module structure on \(D(\phi,\mathbb{L})\) to \(\mathbb{L}\{\tau\}\tau\): one verifies that it is given by \((cy^{i}*\eta)=c\eta\phi_{x^{i}}\), for \(\eta\) in \(\mathbb{L}\{\tau\}\tau\), \(c\) in \(\mathbb{L}\) and \(i\geq 0\), and that \(\mathcal{B}=(\tau,\ldots,\tau^{r})\) is a basis of \(\mathbb{L}\{\tau\}\tau\) over \(\mathbb{L}[y]\).
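The following short, hand-rolled Python sketch (not the paper's implementation) illustrates this module structure in the simplest truncated setting \(k=1\) (the de Rham case of Remark 2), where \(y\) acts as \(\gamma_{x}\): commuting \(\tau^{t}\) across \(\phi_{x}\) (the relation formalized in Lemma 1 below) rewrites each \(\tau^{t+r}\) on the basis \(\mathcal{B}\). It uses the rank 4 module over \(\mathbb{F}_{2}[t]/(t^{3}+t+1)\) from the worked example at the end of Section 6, encoding field elements as 3-bit integers, and checks the result against the values \(\tau^{5},\tau^{6},\tau^{7}\) computed there.

```python
# k = 1 sketch over F_8 = F_2[t]/(t^3+t+1): rewrite tau^5, tau^6, tau^7
# on B = (tau, ..., tau^4) for phi_x = t*tau^4+(t^2+t)*tau^3+tau^2+t^2*tau+(t+1).
ELL = 0b1011  # t^3 + t + 1

def gf8_mul(a, b):
    """Multiplication in F_8, elements encoded as 3-bit integers."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= ELL
    return res

def gf8_inv(c):
    """c^(-1) = c^6, since the multiplicative group has order 7."""
    c2 = gf8_mul(c, c)
    return gf8_mul(c2, gf8_mul(c2, c2))

def frob(c, t):
    """t-fold Frobenius c -> c^(2^t): t squarings, since q = 2."""
    for _ in range(t):
        c = gf8_mul(c, c)
    return c

Delta = [0b011, 0b100, 0b001, 0b110, 0b010]   # Delta_0 = gamma_x = t + 1
gamma_x, r = Delta[0], 4
# kappa[t] = coefficients of tau^t on B; tau, ..., tau^4 are the basis itself
kappa = {t: [int(i == t - 1) for i in range(r)] for t in range(1, r + 1)}
for t in range(1, 4):
    inv_top = gf8_inv(frob(Delta[r], t))
    # in characteristic 2 the signs in Lambda_i = -Delta_i/Delta_r vanish
    row = [gf8_mul(frob(Delta[i], t), inv_top) for i in range(r)]
    row[0] ^= gf8_mul(gamma_x, inv_top)       # the y-term, with y = gamma_x
    kappa[t + r] = [0] * r
    for i in range(r):
        for j in range(r):
            kappa[t + r][j] ^= gf8_mul(row[i], kappa[t + i][j])

assert kappa[5] == [0b100, 0b100, 0b111, 0b101]   # tau^5, as in the example
assert kappa[6] == [0b001, 0b110, 0b101, 0b101]   # tau^6
assert kappa[7] == [0b001, 0b011, 0b010, 0b001]   # tau^7
```

For \(k>1\) the same loop would operate on truncated polynomials in \(y-\gamma_{x}\) instead of single field elements, which is exactly the role of \(W_{k}\) in the algorithms below.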
Further, an endomorphism \(u\in\operatorname{End}_{\mathbb{L}}(\phi)\) now induces an \(\mathbb{L}[y]\)-linear endomorphism \(u^{\star}:\mathbb{L}\{\tau\}\tau\to\mathbb{L}\{\tau\}\tau\) simply given by \(u^{\star}(v)=vu\) for \(v\) in \(\mathbb{L}\{\tau\}\tau\). Reducing modulo the ideal \(I^{k}\subset\mathbb{L}[y]\), we denote by \(u^{\star}_{k}\) the corresponding \(W_{k}\)-linear endomorphism on the quotient module \(\mathbb{L}\{\tau\}\tau/I^{k}\mathbb{L}\{\tau\}\tau\simeq H^{*}_{k}(\phi,\mathbb{L})\). We can then outline the algorithm referenced in Theorems 1 and 2; its correctness follows directly from Corollary 4 and the bound on \(k\) given previously. 1. Set \(k=\lceil\frac{d+1}{m}\rceil\), with \(d=\deg_{\tau}(u)\), except if \(n=m\) and \(u=\tau^{n}\) (in which case we can take \(k=1\)) 2. Compute the coefficients \(u_{i,1},\ldots,u_{i,r}\in W_{k}\) of \(\tau^{i}u\) mod \(I^{k}\) on the basis \(\mathcal{B}\), for \(i=1,\ldots,r\) 3. Using the coefficients computed in step 2, construct the matrix for \(u^{\star}_{k}\) acting on \(\mathbb{L}\{\tau\}\tau/I^{k}\mathbb{L}\{\tau\}\tau\) and compute its characteristic polynomial \(\operatorname{CharPoly}_{W_{k}}(u^{\star}_{k})\in W_{k}[Z]\) 4. Apply the map \(\chi_{k}\) to the coefficients of \(\operatorname{CharPoly}_{W_{k}}(u^{\star}_{k})\) to recover \(\operatorname{CharPoly}(u)^{\theta_{k}}\), and thus \(\operatorname{CharPoly}(u)\). In Subsections 6.1 to 6.3, we discuss how to complete Step 2: we give two solutions for the case of an arbitrary endomorphism \(u\), and a dedicated, more efficient one, for \(u=\tau^{n}\). We freely use the following notation: * for \(c\) in \(\mathbb{L}\) and \(t\in\mathbb{Z}\), let \(c^{[t]}\) denote the value of the \(t\)th power Frobenius applied to \(c\), that is, \(c^{[t]}=c^{q^{t}}\) * for \(f\) in \(\mathbb{L}[y]\), \(f^{[t]}\in\mathbb{L}[y]\) is obtained by applying the former operator coefficient-wise, so \(\deg(f)=\deg(f^{[t]})\) * for \(M=(m_{i,j})_{1\leq i\leq u,1\leq j\leq v}\) in \(\mathbb{L}[y]^{u\times v}\), \(M^{[t]}\) is the matrix with entries \((m_{i,j}^{[t]})_{1\leq i\leq u,1\leq j\leq v}\). Finally, we define \(\mu=(y-\gamma_{x})^{k}\in\mathbb{L}[y]\) (with the value of \(k\) defined above); it generates the ideal \(I^{k}\) in \(\mathbb{L}[y]\). ### Using a Recurrence Relation The following lemma is a generalization of a recurrence noted by Gekeler ([19, Section 5]) for \(r=2\). Recall that we write \(\phi_{x}=\gamma_{x}+\Delta_{1}\tau^{1}+\ldots+\Delta_{r}\tau^{r}\), with all \(\Delta_{i}\)'s in \(\mathbb{L}\); in the expressions below, we write \(\Delta_{0}=\gamma_{x}\). Lemma 1: _For any \(t\geq 1\), the following relation holds in the \(\mathbb{L}[y]\)-module \(\mathbb{L}\{\tau\}\):_ \[\sum_{i=0}^{r}\Delta_{i}^{[t]}\tau^{t+i}=y*\tau^{t}.
\tag{4}\] Proof.: This follows directly from the module action of \(\mathbb{L}[y]\) on \(\mathbb{L}\{\tau\}\), by commuting \(\tau^{t}\) across the defining coefficients \(\Delta_{i}\) of \(\phi\): \[y*\tau^{t}=\tau^{t}\phi_{x}=\tau^{t}\sum_{i=0}^{r}\Delta_{i}\tau^{i}=\sum_{i=0}^{r}\Delta_{i}^{[t]}\tau^{t+i}.\qed\] For \(i=0,\ldots,r-1\), let \(\Lambda_{i}=-\frac{\Delta_{i}}{\Delta_{r}}\) and define the order \(t\) companion matrix for the recurrence, \(\mathcal{A}_{t}\in\mathbb{L}[y]^{r\times r}\), as \[\mathcal{A}_{t}=\begin{bmatrix}\Lambda_{r-1}^{[t]}&\Lambda_{r-2}^{[t]}&\ldots&\Lambda_{1}^{[t]}&\Lambda_{0}^{[t]}+\frac{y}{\Delta_{r}^{[t]}}\\ 1&0&\ldots&0&0\\ 0&1&\ldots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\ldots&1&0\end{bmatrix} \tag{5}\] For \(t\geq 1\), let \(\kappa_{t}\in\mathbb{L}[y]^{1\times r}\) denote the coefficient vector of \(\tau^{t}\) with respect to the standard basis \(\mathcal{B}\). Then, we have the following relation between \(r\times r\) matrices over \(\mathbb{L}[y]\): \[\begin{bmatrix}\kappa_{t+r}\\ \kappa_{t+r-1}\\ \vdots\\ \kappa_{t+1}\end{bmatrix}=\mathcal{A}_{t}\begin{bmatrix}\kappa_{t+r-1}\\ \kappa_{t+r-2}\\ \vdots\\ \kappa_{t}\end{bmatrix} \tag{6}\] For \(k\geq 1\), these relations can be taken modulo \(\mu\), to give equalities over \(W_{k}=\mathbb{L}[y]/\mu\); below, we will write \(\bar{\kappa}_{t}=\kappa_{t}\bmod\mu\in W_{k}^{1\times r}\). Starting from \(\bar{\kappa}_{t},\ldots,\bar{\kappa}_{t+r-1}\), we obtain \(\bar{\kappa}_{t+r}\) using \(O(r)\) operations (divisions, Frobenius) in \(\mathbb{L}\) to obtain the coefficients appearing on the first row of \(\mathcal{A}_{t}\), followed by \(O(kr)\) operations in \(\mathbb{L}\) to deduce the entries of \(\bar{\kappa}_{t+r}\). Below, we will need \(\bar{\kappa}_{1},\ldots,\bar{\kappa}_{d+r}\). Altogether, computing them takes \(((d+r)krn\log q)^{1+o(1)}\) bit operations; with our chosen value of \(k\), this is also \[((d+r)(d+m)rn/m\log q)^{1+o(1)}.\] Let us then write \(u=u_{0}+\cdots+u_{d}\tau^{d}\). For \(i=1,\ldots,r\), we have \[\tau^{i}u=u_{0}^{[i]}\tau^{i}+\cdots+u_{d}^{[i]}\tau^{d+i},\] so the coefficient vector \([u_{i,1}\cdots u_{i,r}]\in W_{k}^{1\times r}\) of \(\tau^{i}u\bmod I^{k}\) on the basis \(\mathcal{B}\) is given by the product \[[u_{0}^{[i]}\ \cdots\ u_{d}^{[i]}]\begin{bmatrix}\bar{\kappa}_{i}\\ \bar{\kappa}_{i+1}\\ \vdots\\ \bar{\kappa}_{i+d}\end{bmatrix}\in W_{k}^{1\times r}.\] Each such operation takes \(O(dkr)\) operations in \(\mathbb{L}\), for a total of \((d(d+m)r^{2}n/m\log q)^{1+o(1)}\) bit operations if done independently of one another (this is the dominant cost in the algorithm). In cases when \(d\) is not small compared to \(r\), we can reduce the cost slightly using matrix arithmetic, since all coefficient vectors we want can be read off an \(r\times(d+r)\) by \((d+r)\times r\) matrix product, \[\begin{bmatrix}u_{0}^{[1]}&\cdots&u_{d}^{[1]}&0&\cdots&\cdots&0\\ 0&u_{0}^{[2]}&\cdots&u_{d}^{[2]}&0&\cdots&0\\ &&\ddots&&\ddots&&\\ 0&\cdots&\cdots&0&u_{0}^{[r]}&\cdots&u_{d}^{[r]}\end{bmatrix}\begin{bmatrix}\bar{\kappa}_{1}\\ \bar{\kappa}_{2}\\ \vdots\\ \bar{\kappa}_{d+r}\end{bmatrix}\in W_{k}^{r\times r}.\] This takes \(((d+r)(d+m)r^{\omega-1}n/m\log q)^{1+o(1)}\) bit operations. ### Using Euclidean Division This section describes an alternative approach to computing the coefficients of an endomorphism \(u\) on the canonical basis \(\mathcal{B}\).
Computations are done in \(\mathbb{L}[y]\) rather than \(W_{k}=\mathbb{L}[y]/\mu\) (we are not able to take reduction modulo \(\mu\) into account in the main recursive process). The algorithm is inspired by a well-known analogue for commutative polynomials [14, Section 9.2]: for a fixed \(a\in\mathbb{L}[y]\) of degree \(r\), we can rewrite any \(f\) in \(\mathbb{L}[y]\) as \(f=\sum_{i\geq 0}f_{i}\,a^{i}\), for some coefficients \(f_{i}\) of degree less than \(r\) in \(\mathbb{L}[y]\). This is done in a divide-and-conquer manner. This approach carries over to the non-commutative setting. We start by showing how \(f\) of degree \(d\) in \(\mathbb{L}\{\tau\}\) can be rewritten as \[f=\sum_{i}f_{i}\phi_{x}^{i},\] for some \(f_{i}\) of degree less than \(r\) in \(\mathbb{L}\{\tau\}\). If we let \(K\) be such that \(d<Kr\leq 2d\), with \(K\) a power of \(2\), index \(i\) in the sum above ranges from \(0\) to \(K-1\). If \(K=1\), we are done. Else set \(K^{\prime}=K/2\), and compute the quotient \(g\) and remainder \(h\) in the right Euclidean division of \(f\) by \(\phi_{x}^{K^{\prime}}\), so that \(f=g\phi_{x}^{K^{\prime}}+h\). Recursively, we compute \(g_{0},\ldots,g_{K^{\prime}-1}\) and \(h_{0},\ldots,h_{K^{\prime}-1}\), such that \[g=\sum_{0\leq i<K^{\prime}}g_{i}\phi_{x}^{i}\quad\text{and}\quad h=\sum_{0\leq i<K^{\prime}}h_{i}\phi_{x}^{i}.\] Then, we return \(h_{0},\ldots,h_{K^{\prime}-1},g_{0},\ldots,g_{K^{\prime}-1}\). The runtime of the whole procedure is \(O^{\ast}(\mathsf{SM}(d,n,q))\) bit operations, with \(\mathsf{SM}\) as defined in Section 3 (the analysis is the same as the one done in the commutative case in [14], and uses the super-linearity of \(\mathsf{SM}\) with respect to \(d\)). From there, we are able to compute the coefficients of \(f\in\mathbb{L}\{\tau\}\tau\) on the monomial basis \(\mathcal{B}\). This essentially boils down to using the procedure above, taking care of the fact that \(f\) is a multiple of \(\tau\). Factor \(\tau\) on the left, writing \(f\) as \(\tau g\): if \(f=F\tau\), \(g=F^{[-1]}\). Apply the previous procedure, to write \(g=\sum_{0\leq i\leq s}g_{i}\phi_{x}^{i}\), with all \(g_{i}\) of degree less than \(r\) and \(s\leq d/r\). This gives \(f=\tau g=\sum_{0\leq i\leq s}(g_{i}^{[1]}\tau)\phi_{x}^{i}\), with all coefficients \(g_{i}^{[1]}\tau\) supported on \(\tau,\ldots,\tau^{r}\). Extracting coefficients of \(\tau,\ldots,\tau^{r}\), we obtain polynomials \(G_{1},\ldots,G_{r}\) of degree at most \(s\) in \(\mathbb{L}[y]\) such that \(f=\sum_{1\leq i\leq r}G_{i}*\tau^{i}\). The cost of left-factoring \(\tau\) in \(f\), and of multiplying all coefficients of \(g\) back by \(\tau\), is \((dn\log q)^{1+o(1)}\), so the dominant cost is \(O^{*}(\operatorname{SM}(d,n,q))\) bit operations from the divide-and-conquer process. To obtain the matrix of an endomorphism \(u\) of degree \(d\), we apply this operation \(r\) times, to the terms \(\tau^{i}u\), \(i=1,\ldots,r\). The runtime is then dominated by \(O^{*}(r\operatorname{SM}(d+r,n,q))\). Finally, reducing the entries of the matrix modulo \(\mu=(y-\gamma_{x})^{k}\) takes softly linear time in the size of these entries, so can be neglected. ### Special Case of the Frobenius Endomorphism In the particular case where \(u=\tau^{n}\), we may speed up the computation using a baby-step giant-step procedure, based on the approach used in [10]. As a first remark, note that for \(u=\tau^{n}\), \(d=n\) and \(k\) in \(O(n/m)\).
In this case, it is enough to compute the vectors \(\bar{\kappa}_{n+1},\ldots,\bar{\kappa}_{n+r}\). They are given by \[\left[\begin{matrix}\bar{\kappa}_{n+r}\\ \bar{\kappa}_{n+r-1}\\ \vdots\\ \bar{\kappa}_{n+1}\end{matrix}\right]=\bar{\mathcal{A}}_{n}\ldots\bar{\mathcal{A}}_{1}, \tag{7}\] with \(\bar{\mathcal{A}}_{t}\) the image of \(\mathcal{A}_{t}\) modulo \(\mu=(y-\gamma_{x})^{k}\) for all \(t\). To compute the matrix product \(\bar{\mathcal{A}}=\bar{\mathcal{A}}_{n}\ldots\bar{\mathcal{A}}_{1}\), we slightly extend the approach used in [10] (which dealt with the case \(k=1\)). Consider the following element of \(\mathbb{L}[y]^{r\times r}\): \[\mathcal{B}=\left[\begin{matrix}\Lambda_{r-1}&\Lambda_{r-2}&\ldots&\Lambda_{1}&\Lambda_{0}\\ 1&0&\ldots&0&0\\ 0&1&\ldots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\ldots&1&0\end{matrix}\right]+\left[\begin{matrix}0&0&\ldots&\Delta_{r}^{-1}\\ 0&0&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&0\end{matrix}\right]y. \tag{8}\] It follows in particular that for \(t\geq 1\), \[\mathcal{A}_{t}=\mathcal{B}^{[t]}\quad\text{and}\quad\bar{\mathcal{A}}_{t}=\mathcal{B}^{[t]}\bmod\mu,\] with reduction applied coefficient-wise. Write \(n^{*}=\lceil\sqrt{nk}\rceil\in O(n/\sqrt{m})\), and let \(n\) be written as \(n=n^{*}n_{1}+n_{0}\) with \(0\leq n_{0}<n^{*}\), so that \(n_{1}\leq\sqrt{n/k}\). Setting \[\mathcal{C}=\mathcal{B}^{[n^{*}+n_{0}]}\ldots\mathcal{B}^{[n_{0}+1]}\] and \[\mathcal{C}_{0}=\mathcal{B}^{[n_{0}]}\ldots\mathcal{B}^{[1]},\] the matrix \(\mathcal{A}\) is the product \[\mathcal{A}=\mathcal{C}^{[(n_{1}-1)n^{*}]}\cdots\mathcal{C}^{[n^{*}]}\mathcal{C}\mathcal{C}_{0}.\] Our goal is to compute \(\bar{\mathcal{A}}=\mathcal{A}\bmod\mu\), without computing \(\mathcal{A}\) itself. Any Frobenius application (of positive or negative index) in \(\mathbb{L}\) takes \((n\log q)^{1+o(1)}\) bit operations. In particular, computing all matrices \(\mathcal{B}^{[i]}\) that arise in the definitions of \(\mathcal{C}\) and \(\mathcal{C}_{0}\) takes \((rn^{2}/\sqrt{m}\log q)^{1+o(1)}\) bit operations. Once they are known, the next stage of the algorithm computes \(\mathcal{C}\) and \(\mathcal{C}_{0}\) in \(\mathbb{L}[y]\). This is done using a matrix subproduct-tree algorithm [14, Chapter 10], using a number of operations in \(\mathbb{L}\) softly linear in \(r^{\omega}n^{*}\). This is \((r^{\omega}n^{2}/\sqrt{m}\log q)^{1+o(1)}\) bit operations. To deduce the shifted matrices \[\mathcal{C}^{[(n_{1}-1)n^{*}]}\bmod\mu,\ldots,\mathcal{C}^{[n^{*}]}\bmod\mu,\] we use the following lemma. Lemma 2: _For \(f\) in \(\mathbb{L}[y]\) and \(t\geq 0\),_ \[f^{[t]}\bmod\mu=(f\bmod\mu^{[-t]})^{[t]}\] Proof.: Let \(g=f\bmod\mu^{[-t]}\), so that we have an equality of the form \(f=a\mu^{[-t]}+g\) in \(\mathbb{L}[y]\). We raise this to the power \(q^{t}\) coefficient-wise; this gives \(f^{[t]}=a^{[t]}\mu+g^{[t]}\). Since \(g\), and thus \(g^{[t]}\), have degree less than \(k\), this shows that \(g^{[t]}=f^{[t]}\bmod\mu\). Applying this entry-wise, we compute \(\mathcal{C}^{[in^{*}]}\bmod\mu\) by reducing all entries of \(\mathcal{C}\) modulo \(\mu^{[-in^{*}]}\), then raising all coefficients in the result to the power \(q^{in^{*}}\), for \(i=1,\ldots,(n_{1}-1)\). Matrix \(\mathcal{C}\) has degree \(O(n/\sqrt{m})\), and the sum of the degrees of the moduli \(\mu^{[-t]}\) is \(kn_{1}\), which is \(O(n/\sqrt{m})\) as well.
Altogether, this takes \(O(r^{2}n/\sqrt{m})\) applications of Frobenius in \(\mathbb{L}\), together with \(O(r^{2}n/\sqrt{m})\) arithmetic operations in \(\mathbb{L}\) to perform all Euclidean divisions [14, Chapter 10]. Thus, the runtime is \((r^{2}n^{2}/\sqrt{m}\log q)^{1+o(1)}\) bit operations. Finally, we multiply all matrices \(\mathcal{C}^{[in^{*}]}\bmod\mu\) and \(\mathcal{C}_{0}\bmod\mu\). This takes \((r^{\omega}n^{2}/\sqrt{m}\log q)^{1+o(1)}\) bit operations. ### Other Operations Once the coefficients of the skew polynomials \(\tau^{i}u\) on the basis \(\mathcal{B}\) are known modulo \(\mu\), we compute the characteristic polynomial of the matrix formed from these coefficients. This can be done with a bit cost of \((r^{\lambda}kn\log q)^{1+o(1)}\) when the matrix has entries in \(W_{k}\), with \(\lambda\) the exponent defined in Section 2.3. At this stage, we have all coefficients of \(\operatorname{CharPoly}_{W_{k}}(u_{k}^{\star})\) in \(W_{k}\). It remains to apply the map \(\chi_{k}\) to each of them to recover \(\operatorname{CharPoly}(u)\). ### Example Let \(\mathbb{F}_{q}=\mathbb{Z}/2\mathbb{Z}\), \(n=3\) and set \(\ell(t)=t^{3}+t+1\) and \(\mathbb{L}=\mathbb{F}_{2}[t]/(\ell(t))\). Let \(\gamma_{x}=t+1\bmod\ell(t)\), so that \[\mathfrak{p}=x^{3}+x^{2}+1=\ell(x+1),\] and \(\mathbb{L}\cong\mathbb{F}_{\mathfrak{p}}=\mathbb{F}_{q}[x]/(\mathfrak{p}(x))\), with the isomorphism given by \(f(t)\mapsto f(x+1)\). Consider the rank \(4\) Drinfeld module \(\phi_{x}=t\tau^{4}+(t^{2}+t)\tau^{3}+\tau^{2}+t^{2}\tau+t+1\). We proceed to compute the characteristic polynomial using the de Rham cohomology, that is, crystalline cohomology truncated in degree \(k=1\). In other words, all computations are done over \(\mathbb{L}\). The recurrence of equation (4) becomes \(\tau^{k+4}=(t+1)^{2^{k}}\tau^{k+3}+(t^{2}+1)^{2^{k}}\tau^{k+2}+t^{2^{k}}\tau^{k+1}+(1+t^{1-2^{k}})\tau^{k}\). Running the recurrence for \(n=3\) iterations gives: * \(\tau^{5}=(t^{2}+1)\tau^{4}+(t^{2}+t+1)\tau^{3}+t^{2}\tau^{2}+t^{2}\tau^{1}\) * \(\tau^{6}=(t^{2}+1)\tau^{4}+(t^{2}+1)\tau^{3}+(t^{2}+t)\tau^{2}+\tau^{1}\) * \(\tau^{7}=\tau^{4}+t\tau^{3}+(t+1)\tau^{2}+\tau^{1}\) A matrix for the Frobenius endomorphism can be inferred to be \[\begin{bmatrix}1&t&t+1&1\\ t^{2}+1&t^{2}+1&t^{2}+t&1\\ t^{2}+1&t^{2}+t+1&t^{2}&t^{2}\\ 1&0&0&0\end{bmatrix}.\] It has characteristic polynomial \(Z^{4}+(t+1)Z^{2}+(t+1)Z\). Using the expression for \(a_{0}\) which is valid in the prime field case, the Frobenius norm can be inferred to be \(a_{0}=x^{3}+x^{2}+1\). To recover the final coefficients, observe that \(t\mapsto x+1\) gives the required map \(\chi_{1}:W_{1}=\mathbb{L}\to\mathbb{F}_{\mathfrak{p}}\). Finally, we conclude that the characteristic polynomial of \(\tau^{n}\) is \(Z^{4}+xZ^{2}+xZ+x^{3}+x^{2}+1\). ## 7. Experimental Results An implementation of the algorithm of Section 6.3 was created in Magma and is publicly available at [https://github.com/ymusleh/drinfeld-magma](https://github.com/ymusleh/drinfeld-magma); it was used to generate the experimental results included in this work. Our implementation differs from our theoretical version in a few ways. * The Kedlaya-Umans algorithm is most likely not used by Magma for computing Frobenius mappings of elements of \(\mathbb{L}\). * To compute the images of coefficients under the map \(\chi_{k}\), we leverage a simpler procedure using reduction modulo bivariate Gröbner bases, rather than the tangling map of van der Hoeven and Lecerf.
In any case, this does not impact the runtimes presented. ###### Acknowledgements. We thank Xavier Caruso, Antoine Leudière and Pierre-Jean Spaenlehauer for interesting discussions. Schost is supported by an NSERC Discovery Grant.
2306.14917
Towards Enriched Controllability for Educational Question Generation
Question Generation (QG) is a task within Natural Language Processing (NLP) that involves automatically generating questions given an input, typically composed of a text and a target answer. Recent work on QG aims to control the type of generated questions so that they meet educational needs. A remarkable example of controllability in educational QG is the generation of questions underlying certain narrative elements, e.g., causal relationship, outcome resolution, or prediction. This study aims to enrich controllability in QG by introducing a new guidance attribute: question explicitness. We propose to control the generation of explicit and implicit wh-questions from children-friendly stories. We show preliminary evidence of controlling QG via question explicitness alone and simultaneously with another target attribute: the question's narrative element. The code is publicly available at github.com/bernardoleite/question-generation-control.
Bernardo Leite, Henrique Lopes Cardoso
2023-06-21T11:21:08Z
http://arxiv.org/abs/2306.14917v1
# Towards Enriched Controllability for Educational Question Generation ###### Abstract _Question Generation_ (QG) is a task within Natural Language Processing (NLP) that involves automatically generating questions given an input, typically composed of a text and a target answer. Recent work on QG aims to control the type of generated questions so that they meet educational needs. A remarkable example of _controllability_ in educational QG is the generation of questions underlying certain _narrative elements_, e.g., causal relationship, outcome resolution, or prediction. This study aims to enrich controllability in QG by introducing a new guidance attribute: _question explicitness_. We propose to control the generation of explicit and implicit (_wh_)-questions from children-friendly stories. We show preliminary evidence of controlling QG via question explicitness alone and simultaneously with another target attribute: the question's narrative element. The code is publicly available at github.com/bernardoleite/question-generation-control. Keywords: Natural Language Processing · Question Generation · Controllability · Question Explicitness. ## 1 Introduction In the educational context, Question Generation (QG) can potentially automate and assist the teacher in what can be a time-consuming and effortful task. QG may also be helpful for the learner's formative assessment via self-study and engagement with computer-generated practice questions. However, automatic QG tools are not widely used in classrooms [2, 8], namely because generated questions are generally limited in types and difficulty levels [2]. As pointed out by Wang _et al._ [8], there is a strong desire for user control, where humans provide input to QG systems and can decide when to use their output. Inspired by this need, this study proposes a QG framework for controlling the generation of explicit and implicit questions, using question explicitness as a guidance attribute during the generation process. Generally, explicit questions center on a particular story fact, whereas implicit questions rely on summarizing and drawing inferences from implicit information in the text. As stated by Xu _et al._ [9], explicit and implicit questions are formally defined as follows: * **Explicit**_questions ask for answers that can be directly found in the stories. In other words, the answer source is a span of text._ * **Implicit**_questions ask for answers that cannot be directly found in the text. Answering the questions requires either reformulating language or making inferences. In other words, the answer source is "free-form", meaning that the answers can be any free-text, and there is no limit to where the answer comes from._ Notably, prior research [6, 11, 9] suggests that a combination of explicit and implicit questions contributes to a more balanced difficulty in the assessments. To achieve our goal, we use a recent dataset called FairytaleQA [9], which contains question-answering (QA) pairs derived from children-friendly stories. Each question is categorized as "explicit" or "implicit" by expert annotators. Some previous work has addressed controllability in educational QG. For instance, Ghanem _et al._ [1] control the reading comprehension skills required by the question, e.g., figurative language and summarization. Similarly, Zhao _et al._ [10] control the narrative elements underlying the generated questions, such as causal relationship, outcome resolution, or prediction.
They use the same dataset as this study, FairytaleQA, where each question, beyond explicitness, is also categorized according to the aforementioned narrative elements. ## 2 Generating Explicit and Implicit Questions In this study, we fine-tune the T5 pre-trained model [5] with the controllable mechanism for generating explicit and implicit questions. T5 is a text-to-text generation model which has achieved state-of-the-art results on multiple natural language generation benchmarks, including QA and summarization. We train the model to generate both questions and answers for a particular story text. To control the explicitness of the generated questions, we prepend a special token \(<\)ex\(>\) followed by the "explicit" or "implicit" attribute at the beginning of the input, before the story text. This attribute guides the system to generate a question of the desired type. Other special tokens (\(<\)section\(>\), \(<\)question\(>\) and \(<\)answer\(>\)) are used to delimit the input and output information of the model. This technique is based on a recent study [10] with the purpose of controlling QG conditioned on another target attribute: the question's narrative elements. We also investigate simultaneously controlling the question's explicitness along with that target attribute. To that end, beyond \(<\)ex\(>\), we prepend \(<\)nar\(>\) followed by the narrative attribute name. ## 3 Experimental Setup **Data**: We use FairytaleQA [9], in which educational experts have manually created 10,580 QA pairs from 278 children-friendly stories. Each question is annotated with an explicitness label, which can be "explicit" or "implicit". Also, each question is labeled with one of the following narrative elements4: "character", "setting", "action", "feeling", "causal relationship", "outcome resolution", or "prediction". Statistically, each story has \(\approx\)15 sections and each section (composed of multiple sentences) has \(\approx\)3 questions. Explicit questions represent \(\approx\)75% of all questions. We use the original train/val/test splits composed of 8,548/1,025/1,007 QA pairs. Footnote 4: Detailed information of each aspect is described in the FairytaleQA paper [9]. **Models**: From the original dataset, we have trained different models5: (A) question-section:answer; (B) answer-section:question; (C) section:question-answer; (D) ex-section:question-answer; (E) nar-section:question-answer; and (F) nar-ex-section:question-answer. Models A and B will serve as a baseline comparison with the QA and QG models from the FairytaleQA paper. Model C only contains the section text as input, so its purpose is to serve as a baseline to compare with models D-F, which include control attributes. Model D includes the question's explicitness attribute in the input. Model E includes the narrative attribute in the input. Model F has both control attributes included. Figure 1 shows an illustrative example of the models with controllability prompts. Footnote 5: A colon separates the input and output information used by the models. **_controlled test_ set**: For assessing the effectiveness of controllability across models D-F, we have prepared a reorganized version of the original _test_ set, which we call _controlled test_: each example includes a section and all ground-truth QA pairs regarding that section, where these QA pairs all belong to a single explicitness type (explicit or implicit) and narrative element. Also, for comparability between models C and D-F, each section only appears once.
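As an illustration, the Python sketch below shows one plausible way to assemble the (input, target) pairs for models C-F from the control tokens described above. It is a guess at the concrete formatting (token order, spacing, attribute spelling), which may differ from the released code at github.com/bernardoleite/question-generation-control; the story text is a made-up placeholder.

```python
# A guess at the (input, target) formatting for models C-F, based on the
# description above; the exact token order/spacing in the released code
# may differ. The story text below is a made-up placeholder.
def build_example(section, question, answer, explicitness=None, narrative=None):
    """Format one training pair for T5 (C: no prompts; D: <ex>; E: <nar>; F: both)."""
    prefix = ""
    if narrative is not None:      # models E and F
        prefix += f"<nar> {narrative} "
    if explicitness is not None:   # models D and F
        prefix += f"<ex> {explicitness} "
    source = f"{prefix}<section> {section}"
    target = f"<question> {question} <answer> {answer}"
    return source, target

# model F: both control attributes at once
src, tgt = build_example(
    section="Once upon a time, a hungry fox lived near the old mill...",
    question="Why did the fox go to the mill?",
    answer="Because it was hungry.",
    explicitness="implicit",
    narrative="causal relationship",
)
print(src)   # <nar> causal relationship <ex> implicit <section> Once upon ...
print(tgt)   # <question> Why did the fox go to the mill? <answer> Because ...
```

In an actual fine-tuning run, the bracketed control tokens would presumably be registered as additional special tokens of the _t5-base_ tokenizer so they are not split into subwords.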
**Implementation Details**: We use the _t5-base6_ model version. We have set 512 and 128 for the maximum token input and output, respectively. We train the models with a maximum of 10 epochs, early stopping with a patience of 2, and a batch size of 32. For inference, we use beam search with a beam width of 5. Footnote 6: [https://huggingface.co/t5-base](https://huggingface.co/t5-base) Figure 1: An illustrative example of question-answer pairs generated by different models. ## 4 Results **Baselines**: FairytaleQA authors have reported _n_-gram similarity ROUGE\({}_{L}\)-F1 [3] values of 0.536 (QA) and 0.527 (QG) on the _test_ set. Using our baseline models (A and B) we correspondingly obtained 0.559 (QA) and 0.529 (QG). This shows that our baseline models are quantitatively aligned with previously obtained results. **QA results by question explicitness**: Focusing further on baseline model A for QA, our ROUGE\({}_{L}\)-F1 QA results for explicit and implicit questions are 0.681 and 0.194, respectively. This notable difference is also observed by Xu _et al._ [9]. According to the authors, this situation is expected since the answers to explicit questions can be directly found in the text. In contrast, implicit questions call for in-depth inference and summarization. We use this rationale to evaluate the controllability of the question's explicitness. We hypothesize that the QA model obtained in setup A will perform significantly better on explicit than implicit questions generated from models D and F. **Controllability**: We look for evidence of the question's controllability by employing both QA and QG tasks. For QA, we use the ROUGE\({}_{L}\)-F1 metric and EXACT MATCH, which is a strict all-or-nothing score between two strings. For QG, we use _n_-gram similarity ROUGE\({}_{L}\)-F1 and BLEU-4 [4]. Also, we use BLEURT [7], which is a more recent text generation performance metric. Table 1 refers to the QA results, which have been obtained as follows. We use the QA model (obtained in setup A) for answering the generated questions from models D and F. Then, the answers obtained from the QA model are compared against the answers generated from models D and F, yielding the reported results. For both evaluation metrics, the QA model performs significantly better on explicit than implicit generated questions (confirming our hypothesis). Thus, we conclude that these scores indicate compelling evidence that it is possible to control the question's explicitness using the proposed controllable mechanism. Table 2 presents the obtained QG results. Here the traditional evaluation procedure in QG is employed, which is to directly compare the generated questions with the ground truth. We find no significant differences in the QG scores obtained by model D compared to C, which can be explained as follows: controlling the question's explicitness has more influence on the type of answer required to respond to the generated question than on the syntax of that generated question. Therefore, we consider the non-significant differences between models C and D in the QG results to be expected. In contrast, a significant improvement is observed in models E and F (which receive narrative controllability prompts) compared to model C. This can be explained as follows: controlling the question's narrative elements strongly influences the syntax of the generated questions.
For instance, we empirically observe that when requesting the model to generate questions about the "causal relationship" element, it generates (in many cases) questions starting with "Why did...?". As for "outcome resolution", the model generates "What happened...?" questions. As for "prediction", the model generates "How will...?" questions. Finally, it should be noted that model F (which receives both explicitness and narrative controllability prompts) is shown to be effective for _simultaneously_ controlling the question's explicitness and narrative elements. ## 5 Conclusion In this study, we work towards enriched controllability for educational QG. Through automatic evaluation, the results show preliminary evidence that it is possible to (1) control the question's explicitness and (2) _simultaneously_ control both the question's explicitness and question's narrative elements. We argue that the next developments in educational QG should involve enriching (even more) the controllability process with multiple guidance and educationally relevant attributes. Looking for additional effective control mechanisms is also an interesting route. For future work, we intend to perform a large-scale human evaluation focusing on QG controllability in an actual educational environment. #### Acknowledgments This work was financially supported by Base Funding - UIDB/00027/2020 of the Artificial Intelligence and Computer Science Laboratory - LIACC - funded by national funds through the FCT/MCTES (PIDDAC). Bernardo Leite is supported by a PhD studentship (with reference 2021.05432.BD), funded by Fundação para a Ciência e a Tecnologia (FCT). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \cline{2-7} \multicolumn{1}{c|}{} & \multicolumn{3}{|c|}{**ROUGE\({}_{L}\)-F1**} & \multicolumn{3}{|c|}{**EXACT MATCH**} \\ \hline **Models** & Overall & Explicit & Implicit & Overall & Explicit & Implicit \\ \hline ex-section:question-answer (D) & 0.656 & 0.741 & 0.431 & 0.434 & 0.483 & 0.306 \\ \hline nar-ex-section:question-answer (F) & 0.671 & 0.730 & 0.514 & 0.449 & 0.489 & 0.343 \\ \hline \end{tabular} \end{table} Table 1: QA results (0-1) for assessing the question’s controllability (_controlled test_). \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Models** & **ROUGE\({}_{L}\)-F1** & **BLEU-4** & **BLEURT** \\ \hline section:question-answer (C) & 0.305 & 0.099 & 0.370 \\ \hline ex-section:question-answer (D) & 0.303 & 0.104 & 0.369 \\ \hline nar-section:question-answer (E) & 0.432 & 0.189 & 0.432 \\ \hline nar-ex-section:question-answer (F) & 0.432 & 0.195 & 0.424 \\ \hline \end{tabular} \end{table} Table 2: QG results (0-1) for assessing the question’s controllability (_controlled test_).
2310.18613
Cobordism Obstructions to Complex Sections
A notion of vector field cobordism for oriented manifolds was defined by B\"okstedt and Svane. We extend this notion to define complex section cobordism for almost complex manifolds. We then determine the complex section cobordism groups and a relevant cobordism category. We describe an obstruction which tells us when a cobordism class contains a manifold, which can be equipped with $r$ linearly independent complex sections, in terms of the Chern classes. Finally, we show that this obstruction vanishes for certain multiplicative generators in the complex cobordism ring.
Dennis Nguyen
2023-10-28T06:42:47Z
http://arxiv.org/abs/2310.18613v4
# Cobordism obstructions to complex sections ###### Abstract. A notion of vector field cobordism for oriented manifolds was defined by Bokstedt and Svane. We extend this notion to define complex section cobordism for almost complex manifolds. We then determine the complex section cobordism groups and a relevant cobordism category. We describe an obstruction which tells us when a cobordism class contains a manifold, which can be equipped with \(r\) linearly independent complex sections, in terms of the Chern classes. Finally, we show that this obstruction vanishes for certain multiplicative generators in the complex cobordism ring. ## 1. Introduction ### Summary of Results There is a classical problem of determining whether a manifold admits \(r\) linearly independent tangent vector fields. In the case of \(r=1\) vector field, this problem was solved by Hopf and the obstruction is the Euler characteristic of the manifold. Bokstedt, Dupont and Svane [2] approached this problem by instead determining the obstruction to finding a cobordant manifold with \(r\) vector fields. For small \(r\), they were able to solve this problem in [2]. We extend their results by looking at obstructions to finding linearly independent complex sections of the tangent bundle of almost complex manifolds. In this case, we are able to describe the obstruction for almost complex manifolds when the number of complex sections is less than half the (complex) dimension of the manifold. This obstruction is given in terms of Chern characteristic numbers. Our main result is the following sufficient condition for finding \(r\) complex sections: **Theorem 1.1**.: _Let \(M^{2d}\) be a \(d\)-dimensional almost-complex manifold. There exists a complex cobordant manifold \(N^{2d}\) with \(r\) linearly independent complex sections on \(TN\) and \(r<d/2\) if \(s_{\omega}(M^{2d})=0\) for all \(\omega\) of length greater than or equal to \(d-r+1\) and for all \(\omega\) containing a number greater than \(d-r\)._ In the rational case, we can describe a condition which is both necessary and sufficient. **Theorem 1.2**.: _Let \(M^{2d}\) be a \(d\)-dimensional almost-complex manifold. Then there exists a constant \(c\) such that the cobordism class \(c[M^{2d}]\) contains a manifold \(N^{2d}\) with \(r\) complex sections on \(TN\) if and only if \(s_{\omega}(M^{2d})=0\) for all \(\omega\) of length greater than or equal to \(d-r+1\)._ The characteristic classes \(s_{\omega}\) and the proofs of Theorems 1.1 and 1.2 are given in section 4. Bokstedt and Svane defined the notion of vector field cobordism in [3]. We extend their definition to _complex section cobordism_, although we needed to split into even and odd dimensional cases. **Definition 1.1**.: _Two \(2d\) dimensional manifolds \(M\) and \(N\) with almost complex structure and \(r\) complex sections on \(TM\) and \(TN\) are defined to be complex section cobordant, if there is a cobordism \(W\) with boundary \(M\cup\overline{N}\) such that \(TW\oplus\mathbb{R}\) has complex structure and \(r\) linearly independent complex sections compatible with the structures on \(TM\oplus\mathbb{C}\) and \(TN\oplus\mathbb{C}\).
The even dimensional complex section cobordism groups are the equivalence classes under this relation._ **Definition 1.2**.: _Two \(2d-1\) dimensional manifolds \(M\) and \(N\) with complex structure and \(r\) linearly independent complex sections on \(TM\oplus\mathbb{R}\) and \(TN\oplus\mathbb{R}\) are defined to be complex section cobordant, if there is a cobordism \(W\) with boundary \(M\cup\overline{N}\) such that \(TW\) has complex structure and \(r\) linearly independent complex sections compatible with the structures on \(TM\oplus\mathbb{R}\) and \(TN\oplus\mathbb{R}\). The odd dimensional complex section cobordism groups are the equivalence classes under this relation._ Note that a priori, even for \(r=0\), these definitions are not the same as the definition of the classical complex cobordism groups, since the classical complex cobordism groups allow stabilization of arbitrarily high dimension. We can describe how many distinct ways we can equip the sections onto a cobordant manifold. This is equivalent to describing the kernel of the forgetful map from the complex section cobordism group to the complex cobordism groups. In section 4, we prove that this kernel is finite and is one of the homotopy groups of a particular spectrum. We note that this kernel is not necessarily finite in the oriented case. In particular, the kernel of the forgetful map from the Reinhart cobordism group [10] to the oriented cobordism group is \(\mathbb{Z}\) in even dimensions. **Theorem 1.3**.: _Let \([M]\in\Omega^{U}_{2d}\) be such that \(M\) can be equipped with \(r\) linearly independent complex sections on \(TM\). Then, there are only finitely many ways to equip a manifold \(N\in[M]\) with \(r\) linearly independent complex sections up to complex section cobordism._ The ways of equipping a manifold with \(r\) complex sections are indexed by a group which is described in section 4. This theorem is a consequence of Theorem 4.1. In odd dimensions, it is well known that the complex cobordism group is zero, thus the odd dimensional complex section cobordism group parameterizes all the ways of equipping a manifold in the unique cobordism class with \(r\) complex sections. We note the odd dimensional complex section cobordism group is in general more difficult to compute than the even dimensional complex section cobordism group, however we note that: **Theorem 1.4**.: _The odd dimensional complex section cobordism group is finite._ Our final question concerns finding multiplicative generators of the complex cobordism ring which can be equipped with \(r\) complex sections. While we were unable to give an integral result stating which dimensions will have a generator with the obstruction vanishing, we showed that they could be found in rational cobordism. **Theorem 1.5**.: _There exists a manifold \(M^{2d}\) in \(\Omega^{U}_{2d}\) which can be equipped with \(r\) linearly independent complex sections on \(TM\) and whose image in \(\Omega^{U}_{2d}\otimes\mathbb{Q}\) is a multiplicative generator._ ### Plan of the paper We start in section 2.1 by constructing the spectra \(\mathbf{MTU}(d)\) and \(\mathbf{MTU}(d,r)\). This construction closely mirrors work done in [2]. We present two constructions of the spectrum \(\mathbf{MTU}(d)\), one of which is useful for calculating the cohomology, while the other has clearer geometric features. Section 2.2 is devoted to computing the cohomology of those spaces and constructing the colimit spectrum \(\overline{\mathbf{MTU}}(d)\) based on a stabilization map.
Section 3 describes the complex section cobordism category and provides the geometric definitions of the complex section cobordism groups. We use the results of Galatius, Tillmann, Madsen and Weiss [5] to connect these groups to the homotopy groups of the spectrum \(\mathbf{MTU}(d)\). This section is divided into two parts, one describing the odd dimensional and one describing the even dimensional case. We prove the main results of the paper in section 4 using the homotopy exact sequences and the Adams-Novikov spectral sequence. ### Acknowledgments I would like to thank all the people who have helped me as I have completed this paper. First, I would like to thank the University of Oregon Department of Mathematics for supporting me during this work. I most especially thank my advisor, Boris Botvinnik, for the time he spent guiding and advising me throughout this project. I am also indebted to Soren Galatius for his valuable feedback on my results. Finally, I would like to thank the Young Topologists Meeting and the University of Copenhagen for giving me an opportunity to share an early version of this project. ## 2. Construction of Spectra ### The spectra MTU(d) and MTU(d,r) Here we introduce the spectra \(\mathbf{MTU}(d)\) and \(\mathbf{MTU}(d,r)\). The homotopy groups of these spectra will be the natural objects of study. The real case is described in [2] and [5]. It should be noted that there is some disagreement about the proper indexing of these spectra in [2] and [5]. We follow the convention given in Bokstedt, Dupont and Svane. Write \(G_{\mathbb{C}}(d,n)\) for the complex Grassmannian of \(d\) dimensional complex subspaces of \(\mathbb{C}^{d+n}\). Let \(U_{\mathbb{C},d,n}\to G_{\mathbb{C}}(d,n)\) be the tautological \(d\) dimensional complex vector bundle and let \(U_{\mathbb{C},d,n}^{\perp}\to G_{\mathbb{C}}(d,n)\) be the \(n\) dimensional orthogonal complement of \(U_{\mathbb{C},d,n}\). **Definition 2.1**.: _Define \(\mathbf{MTU}(d)\) to be the spectrum whose \(2n\)-th space is \(\mathbf{MTU}(d)_{2n}=Th(U_{\mathbb{C},d,n}^{\perp})\)._ There exists a canonical map \(G_{\mathbb{C}}(d,n)\to G_{\mathbb{C}}(d,n+1)\) defined using the composition: \[\mathbb{C}^{d}\rightarrow\mathbb{C}^{d+n}\hookrightarrow\mathbb{C}\oplus\mathbb{C}^{d+n}\cong\mathbb{C}^{d+n+1}\] The restriction of the bundle \(U_{\mathbb{C},d,n+1}^{\perp}\) to \(G_{\mathbb{C}}(d,n)\) under this map is \(U_{\mathbb{C},d,n}^{\perp}\oplus\mathbb{C}\). (Here \(\mathbb{C}\) is a one dimensional complex trivial bundle.) In other words, there is a bundle map \(U_{\mathbb{C},d,n}^{\perp}\oplus\mathbb{C}\to U_{\mathbb{C},d,n+1}^{\perp}\) covering the map \(G_{\mathbb{C}}(d,n)\to G_{\mathbb{C}}(d,n+1)\). This map induces a map of Thom spaces and the following composition gives the spectrum map \(\Sigma^{2}\mathbf{MTU}(d)_{2n}\rightarrow\mathbf{MTU}(d)_{2n+2}\): \[\Sigma^{2}(Th(U_{\mathbb{C},d,n}^{\perp}))\cong S^{2}\wedge Th(U_{\mathbb{C},d,n}^{\perp})\cong Th(\mathbb{C}\oplus U_{\mathbb{C},d,n}^{\perp})\to Th(U_{\mathbb{C},d,n+1}^{\perp})\] There is also a map \(G_{\mathbb{C}}(d-r,n)\to G_{\mathbb{C}}(d,n)\) which takes a \((d-r)\)-complex plane \(P\subset\mathbb{C}^{d-r+n}\) to the \(d\)-plane \(P\oplus\mathbb{C}^{r}\subset\mathbb{C}^{d-r+n}\oplus\mathbb{C}^{r}\). Under this map, the pullback of \(U_{\mathbb{C},d,n}^{\perp}\) is \(U_{\mathbb{C},d-r,n}^{\perp}\). Thus, there are maps of Thom spaces \(\mathbf{MTU}(d-r)_{2n}\rightarrow\mathbf{MTU}(d)_{2n}\).
Since this map commutes with the spectrum map, it defines a map of spectra \[\mathbf{MTU}(d-r)\rightarrow\mathbf{MTU}(d) \tag{1}\] **Definition 2.2**.: _Let \(\mathbf{MTU}(d,r)\) be the cofiber of the map (1)._ We note that, having defined \(\mathbf{MTU}(d,r)\), we immediately get, for \(k\leq d-r\), a cofibration \[\mathbf{MTU}(d-r,k)\to\mathbf{MTU}(d,r+k)\to\mathbf{MTU}(d,r) \tag{2}\] This cofibration reduces to the definition when \(k=d-r\). There is a second construction of the spectrum \(\mathbf{MTU}(d)\) due to [2], which will be used in section 3 to study the complex section cobordism category. For any \(d\)-dimensional complex fiber bundle \(E\to X\) equipped with a Hermitian inner product, there is a complex frame bundle \(V_{\mathbb{C},r}(E)\to X\) with fiber \(V_{\mathbb{C},d,r}\). The space \(V_{\mathbb{C},d,r}\) is the manifold of ordered complex \(r\)-frames in \(\mathbb{C}^{d}\). There is a related bundle \(W_{\mathbb{C},r}(E)\to X\), whose fiber is the space of ordered \(r\)-tuples in \(\mathbb{C}^{d}\) which are (Hermitian) orthogonal and of the same length, which may be anywhere between \(0\) and \(1\). The fiber \(W_{\mathbb{C},d,r}\) is the cone over \(V_{\mathbb{C},d,r}\) by construction. Now consider the specific case where the bundle is \(U_{\mathbb{C},d,n}\to G_{\mathbb{C}}(d,n)\). Elements of \(V_{\mathbb{C},r}(U_{\mathbb{C},d,n})\) consist of a complex \(d\) dimensional plane \(P\subset\mathbb{C}^{d+n}\) along with a complex \(r\)-frame in that plane. There is a map \[\eta^{r}:G_{\mathbb{C}}(d-r,n)\to V_{\mathbb{C},r}(U_{\mathbb{C},d,n})\] which takes a \((d-r)\)-plane \(P\subset\mathbb{C}^{d-r+n}\) to \(P\oplus\mathbb{C}^{r}\subset\mathbb{C}^{d-r+n}\oplus\mathbb{C}^{r}\) with the \(r\)-frame consisting of the standard basis vectors of \(\mathbb{C}^{r}\). We extend \(\eta^{r}\) to a section \(\eta:G_{\mathbb{C}}(d,n)\to W_{\mathbb{C},r}(U_{\mathbb{C},d,n})\) as follows. Let \(\varepsilon>0\) be small and let \(N\) be the open neighborhood of \(G_{\mathbb{C}}(d-r,n)\) consisting of planes in \(G_{\mathbb{C}}(d,n)\) which differ from a plane in \(G_{\mathbb{C}}(d-r,n)\) by a rotation by angle less than \(\varepsilon\). Let \(P\) be a \(d\)-dimensional complex plane in \(\overline{N}\), the closure of \(N\). If \(\varepsilon\) is sufficiently small, there is a unique shortest smooth rotation \(A\in SU(d+n)\) from \(P\) to a unique \(P_{0}\in G_{\mathbb{C}}(d-r,n)\). (By shortest, we mean smallest angle.) Then choose a frame by applying the rotation \(A^{-1}\) to \(\eta^{r}(P_{0})\). This rotates the standard frame in \(\mathbb{C}^{r}\) to a frame in \(P\). If \(\varepsilon\) is small, this map will be continuous. Define the section \(\eta\) on \(\overline{N}\) by taking a plane \(P\) and equipping it with the frame produced by the above rotation, scaling the frame by \((\varepsilon-\theta)/\varepsilon\), where \(\theta\) is the angle of the rotation. Note that if \(P\in G_{\mathbb{C}}(d-r,n)\), this process gives us the standard \(r\)-frame in \(\mathbb{C}^{r}\). Moreover, on the boundary of \(N\), all \(d\)-planes are equipped by this map with the zero frame. (The zero frame is the frame with all vectors being zero vectors; it is the cone point of the fiber.) Thus we can extend this map to all of \(G_{\mathbb{C}}(d,n)\) by mapping planes outside of \(N\) to the plane equipped with the zero frame. This section \(\eta:G_{\mathbb{C}}(d,n)\to W_{\mathbb{C},r}(U_{\mathbb{C},d,n})\) is the right vertical map in the left diagram below. 
The right commutative diagram lies over the first one, where the maps \(p_{V_{\mathbb{C},r}}:V_{\mathbb{C},r}(U_{\mathbb{C},d,n})\to G_{\mathbb{C}}(d,n)\) and \(p_{W_{\mathbb{C},r}}:W_{\mathbb{C},r}(U_{\mathbb{C},d,n})\to G_{\mathbb{C}}(d,n)\) are the corresponding projections. Then we have the Thom spaces \(Th(p_{V_{\mathbb{C},r}}^{*}U_{\mathbb{C},d,n}^{\perp})\) and \(Th(p_{W_{\mathbb{C},r}}^{*}U_{\mathbb{C},d,n}^{\perp})\), which we form into spectra \(\mathbf{MTU}(d)_{V_{r}}\) and \(\mathbf{MTU}(d)_{W_{r}}\) respectively. The maps \(\eta^{r}\) and \(\eta\) determine maps of spectra \(\mathbf{MTU}(d)_{V_{r}}\to\mathbf{MTU}(d-r)\) and \(\mathbf{MTU}(d)_{W_{r}}\to\mathbf{MTU}(d)\). The top maps in the above diagram induce a map of Thom spaces \(Th(p_{V_{\mathbb{C},r}}^{*}U^{\perp}_{\mathbb{C},d,n})\to Th(p_{W_{\mathbb{C},r}}^{*}U^{\perp}_{\mathbb{C},d,n})\) and a map of spectra \(\mathbf{MTU}(d)_{V_{r}}\to\mathbf{MTU}(d)_{W_{r}}\). From this map we form the cofiber, which we call \(\mathbf{MTU}^{\prime}(d,r)\). **Proposition 2.1**.: _In the below commutative diagram, all vertical maps are homotopy equivalences._ Proof.: We know the section \(\eta:G_{\mathbb{C}}(d,n)\to W_{\mathbb{C},r}(U_{\mathbb{C},d,n})\) is a homotopy inverse of \(p_{W_{\mathbb{C},r}}\) because \(W_{\mathbb{C},r}(U_{\mathbb{C},d,n})\) has contractible fibers. Next, there is a fiber bundle \[V_{\mathbb{C},r}(U_{\mathbb{C},d,n})\to V_{\mathbb{C},d+n,r}\] The total space \(V_{\mathbb{C},r}(U_{\mathbb{C},d,n})\) consists of a \(d\) dimensional plane in \(\mathbb{C}^{d+n}\) and an ordered \(r\)-frame in that plane. The projection forgets the plane, leaving only the frame. For a given frame in \(V_{\mathbb{C},d+n,r}\), the fiber consists of all \(d\) dimensional complex planes containing the frame. This is equivalent to choosing a \((d-r)\)-plane in the orthogonal complement of the frame. The orthogonal complement is \(\mathbb{C}^{d+n-r}\), so the fiber is \(G_{\mathbb{C}}(d-r,n)\). The fiber inclusion takes a complex plane \(P^{d-r}\subseteq\mathbb{C}^{n+d-r}\) to the plane \(P^{d-r}\oplus\mathbb{C}^{r}\subseteq\mathbb{C}^{n+d}\) equipped with the frame given by the \(r\) standard basis vectors in \(\mathbb{C}^{r}\). This is the map \(\eta^{r}\). We know the base of the fibration, \(V_{\mathbb{C},d+n,r}\), is \((2n+2d-2r-1)\) connected, and thus the pair \((V_{\mathbb{C},r}(U_{\mathbb{C},d,n}),G_{\mathbb{C}}(d-r,n))\) has the same connectivity. Furthermore, the Thom isomorphism theorem gives that the pair \((Th(p_{V_{\mathbb{C},r}}^{*}U^{\perp}_{\mathbb{C},d,n}),Th(U^{\perp}_{\mathbb{C},d-r,n}))\) is \((4n+2d-2r-1)\) connected. Letting \(n\to\infty\), we see that \(\eta^{r}:\mathbf{MTU}(d)_{V_{r}}\to\mathbf{MTU}(d-r)\) is a homotopy equivalence. By the five lemma, the right vertical map is also a homotopy equivalence. ### Cohomology of the spectra We will next compute the cohomology of the spectra \(\mathbf{MTU}(d)\) and \(\mathbf{MTU}(d,r)\). **Proposition 2.2**.: _There is an isomorphism:_ \[\phi:\mathbb{Z}[c_{1},c_{2},...,c_{d}]\cong H^{*}(BU(d);\mathbb{Z})\to H^{*}( \mathbf{MTU}(d);\mathbb{Z})\] _where \(H^{*}(\mathbf{MTU}(d);\mathbb{Z})\) is considered as an \(H^{*}(BU(d);\mathbb{Z})\) module._ Proof.: These statements are consequences of the Thom isomorphism theorem. In particular, we have a Thom class \(\overline{u}_{d,n}\in H^{2n}(Th(U^{\perp}_{\mathbb{C},d,n}))\) associated to the bundle \(U^{\perp}_{\mathbb{C},d,n}\). 
In the \(2n\)-th spaces, there is a Thom isomorphism \(H^{*}(G_{\mathbb{C}}(d,n))\to H^{*+2n}(Th(U^{\perp}_{\mathbb{C},d,n}))\). In the limit we get a stable class \(\overline{u}_{d}\in H^{0}(\mathbf{MTU}(d))\). The Thom isomorphism theorem states that \(H^{*}(\mathbf{MTU}(d))\) is the rank \(1\) free module over \(H^{*}(BU(d))\) generated by the Thom class. For the rest of this section, all cohomology will be with \(\mathbb{Z}\) coefficients. **Theorem 2.3**.: _The map \(H^{*}(\mathbf{MTU}(d,r))\to H^{*}(\mathbf{MTU}(d))\) is injective with image the \(H^{*}(BU(d))\) module generated by \(\phi(c_{d-r+1}),...,\phi(c_{d})\)._ Proof.: Observe the following commutative diagram. The horizontal maps come from the cofiber exact sequence in cohomology. As \(n\) goes to infinity, the bottom right map becomes \(\mathbb{Z}[c_{1},...,c_{d}]\cong H^{*}(BU(d))\to H^{*}(BU(d-r))\cong\mathbb{Z}[c_{1},...,c_{d-r}]\). This map is a surjection, mapping \(c_{i}\mapsto c_{i}\) for \(i\leq d-r\) and \(c_{i}\mapsto 0\) otherwise. So \(H^{*}(BU(d),BU(d-r))\) is the kernel, namely the \(H^{*}(BU(d))\) submodule generated by \(c_{d-r+1},...,c_{d}\). The vertical maps are the Thom isomorphisms. Thus we conclude that the map \[H^{*}(\mathbf{MTU}(d,r))\to H^{*}(\mathbf{MTU}(d))\] is injective with image the \(H^{*}(BU(d))\) submodule generated by \(\phi(c_{d-r+1}),...,\phi(c_{d})\). **Corollary 2.4**.: _The spectrum \(\mathbf{MTU}(d,r)\) is \((2(d-r)+1)\) connected._ We note an additional stabilization which will be useful in our computations. **Theorem 2.5**.: \(\pi_{q}(\mathbf{MTU}(d,r))\cong\pi_{q}(\mathbf{MTU}(d+k,r+k))\) _for \(q\leq 2d\)._ Proof.: There is a homotopy exact sequence: \[\pi_{q+1}(\mathbf{MTU}(d+k,k))\rightarrow\pi_{q}(\mathbf{MTU}(d,r))\to \pi_{q}(\mathbf{MTU}(d+k,r+k))\rightarrow\pi_{q}(\mathbf{MTU}(d+k,k))\] By Corollary 2.4 (which follows from Theorem 2.3 via the Hurewicz theorem), \(\mathbf{MTU}(d+k,k)\) is \((2d+1)\) connected. So \(\pi_{q}(\mathbf{MTU}(d,r))\cong\pi_{q}(\mathbf{MTU}(d+k,r+k))\) for \(q\leq 2d\). It will be useful to look at the sequence: \[\mathbf{MTU}(d)\rightarrow\mathbf{MTU}(d+1)\rightarrow\mathbf{MTU}(d+2) \rightarrow...\] The colimit of this sequence is homotopy equivalent to \(\mathbf{MU}\). The cohomology of the spectra computed above implies that the map \(\mathbf{MTU}(d)\rightarrow\mathbf{MU}\) is \(2d+1\) connected. (The map on cohomology is the quotient \(\mathbb{Z}[c_{1},c_{2},...]\rightarrow\mathbb{Z}[c_{1},...,c_{d}]\).) **Definition 2.3**.: _Let \(\overline{\mathbf{MTU}}(d)\) be the cofiber of the map \(\mathbf{MTU}(d)\rightarrow\mathbf{MU}\)._ **Proposition 2.6**.: _The map \(H^{*}(\overline{\mathbf{MTU}}(d);\mathbb{Z})\to H^{*}(\mathbf{MU};\mathbb{Z})\) is injective. In particular, \(H^{*}(\overline{\mathbf{MTU}}(d);\mathbb{Z})\) is torsion-free and has non-zero cohomology only in even degrees._ Proof.: There is an exact sequence: \[H^{q-1}(\mathbf{MU};\mathbb{Z})\to H^{q-1}(\mathbf{MTU}(d);\mathbb{Z})\to H^{q}(\overline{\mathbf{MTU}}(d);\mathbb{Z})\to H^{q}(\mathbf{MU};\mathbb{Z})\to H^{q}(\mathbf{MTU}(d);\mathbb{Z})\] If \(q\) is even, then \(H^{q-1}(\mathbf{MTU}(d);\mathbb{Z})\) is \(0\), so the map is injective. If \(q\) is odd, then the induced map \(H^{q-1}(\mathbf{MU};\mathbb{Z})\to H^{q-1}(\mathbf{MTU}(d);\mathbb{Z})\) coincides with \(\mathbb{Z}[c_{1},c_{2},...]\rightarrow\mathbb{Z}[c_{1},...,c_{d}]\) and is surjective. It immediately follows that \(H^{q}(\overline{\mathbf{MTU}}(d);\mathbb{Z})\to H^{q}(\mathbf{MU};\mathbb{Z})\) is injective. 
We can alternately think of \(\overline{\mathbf{MTU}}(d)\) as the colimit of the sequence of spectra \[\mathbf{MTU}(d+1,1)\rightarrow\mathbf{MTU}(d+2,2)\rightarrow...\] as follows from the following commutative diagram. As a corollary of Theorem 2.5, we obtain: **Corollary 2.7**.: _There is an isomorphism \(\pi_{2d+1}(\mathbf{MTU}(d+1,r+1))\cong\pi_{2d+1}(\overline{\mathbf{MTU}}(d-r))\) for all \(d\) and \(r\)._ This corollary will be helpful when we want to compute the homotopy groups \(\pi_{2d+1}(\overline{\mathbf{MTU}}(d-r))\). As a further consequence of Corollary 2.4, we conclude the following lemma. **Lemma 2.8**.: _There is an isomorphism \(\pi_{q}(\mathbf{MTU}(d))\cong\pi_{q}(\mathbf{MU})\) for \(q\leq 2d\)._ ## 3. The cobordism category In order to provide a geometric interpretation for these spectra, we discuss cobordism categories. These categories were originally introduced in [5] and [3]. Here we specify what happens in the complex case. In order to describe the complex section cobordism category, we consider the even and odd dimensional cases separately. We start with the general cobordism category with tangential structure \(\theta\). **Definition 3.1**.: _Let \(\theta:X\to BO(d)\) be a fibration. The cobordism category \(\mathcal{C}^{\theta}_{d,n+d}\) has objects which are \((d-1)\) dimensional manifolds without boundary \(M\subset(-1,1)^{n+d-1}\subset\mathbb{R}^{n+d-1}\) with \(M\) closed in \(\mathbb{R}^{n+d-1}\), along with a chosen lift of the classifying map \(\xi:M\to G(d-1,n)\to G(d,n)\) to \(\theta^{*}(G(d,n))\). The space of morphisms from \(M_{0}\) to \(M_{1}\) is the disjoint union of the identity morphism along with pairs \((W,a)\) with the following properties: \(a\) is a real number in \((0,\infty)\) and \(W\subseteq(-1,1)^{n+d-1}\times\mathbb{R}\subset\mathbb{R}^{n+d-1}\times\mathbb{R}\) is a manifold of dimension \(d\), which is closed in \(\mathbb{R}^{n+d}\) and such that for some \(\varepsilon>0\):_ \[W\cap(\mathbb{R}^{n+d-1}\times(-\infty,\varepsilon))=M_{0}\times(-\infty,\varepsilon)\] \[W\cap(\mathbb{R}^{n+d-1}\times(a-\varepsilon,\infty))=M_{1}\times(a- \varepsilon,\infty)\] \(W\) _is equipped with the data of a \(\theta\) structure, and the above equalities must preserve the \(\theta\) structure._ In this paper, we will take the limit as \(n\to\infty\) of the categories \(\mathcal{C}^{\theta}_{d,n+d}\) and write it as \(\mathcal{C}^{\theta}_{d}\). Its classifying space will be denoted by \(B\mathcal{C}^{\theta}_{d}\). In the complex case, \(X=BU(d)\) and \(\theta:BU(d)\to BO(2d)\). So the \(U(d)\) cobordism category has as objects \(2d-1\) dimensional manifolds \(M\) with \(U(d)\) structure on \(TM\oplus\mathbb{R}\) and as morphisms \(2d\) dimensional cobordisms with \(U(d)\) structure. ### The even dimensional complex section cobordism category We describe next the complex section cobordism category, which will be obtained by additionally requiring the objects and morphisms to be equipped with linearly independent complex sections. This corresponds to the tangential structure given by the map \(\theta_{\mathbb{C},r}:V_{\mathbb{C},r}(U_{\mathbb{C},d})\to BO(2d)\), which is defined by the following composition. The map \(V_{\mathbb{C},r}(U_{\mathbb{C},d})\to V_{r}(U_{2d})\) is the natural map which forgets the complex structure; note that this diagram is not a pullback square. 
The category \(\mathcal{C}^{\theta_{\mathbb{C},r}}_{2d}\) with this tangential structure has as objects \(2d-1\) dimensional manifolds \(M\) such that the bundle \(TM\oplus\mathbb{R}\) is given a complex structure and \(r\) linearly independent complex sections. A morphism \(W:M\to N\) is a \(2d\) dimensional almost complex cobordism equipped with \(r\) linearly independent complex sections such that the structures are compatible, as given in the definition below. We define two manifolds to be complex section cobordant if there is a morphism between them in this category. **Definition 3.2**.: _Two \(2d-1\) dimensional manifolds \(M\) and \(N\) with complex structure and \(r\) linearly independent complex sections on \(TM\oplus\mathbb{R}\) and \(TN\oplus\mathbb{R}\) are defined to be complex section cobordant, if there is a cobordism \(W\) with boundary \(M\cup\overline{N}\) such that \(TW\) has complex structure and \(r\) linearly independent complex sections compatible with the structures on \(TM\oplus\mathbb{R}\) and \(TN\oplus\mathbb{R}\)._ **Proposition 3.1**.: _The relation defined in Definition 3.2 is an equivalence relation._ Proof.: This relation is obviously reflexive and transitive. All that remains is to show that it is symmetric. Suppose \(M\) and \(N\) are \(2d-1\) dimensional manifolds with complex structure and \(r\) linearly independent complex sections on \(TM\oplus\mathbb{R}\) and \(TN\oplus\mathbb{R}\), and suppose that there is a cobordism \(W\) with boundary \(M\cup\overline{N}\) such that \(TW\) has complex structure and \(r\) linearly independent complex sections compatible with the structures on \(TM\oplus\mathbb{R}\) and \(TN\oplus\mathbb{R}\). We may reverse \(W\) and get a complex cobordism between \(N\) and \(M\). However, the complex sections will be reversed on the boundary. We will correct this reversal by showing there exists a cobordism reversing the sections. Let \(\nu:M\to V_{\mathbb{C},r}(TM\oplus\mathbb{R})\) be a map representing the complex sections and \(-\nu\) their reverse. Define a section \(\tilde{\nu}:M\times[0,1]\to V_{\mathbb{C},r}(T(M\times[0,1]))\cong V_{\mathbb{C},r}(TM\oplus\mathbb{R})\times[0,1]\) by \(\tilde{\nu}(x,t)=(e^{\pi it}\nu(x),t)\). This section restricts to \(\nu\) on the incoming boundary and to \(-\nu\) on the outgoing boundary. We may perform a similar construction on \(N\). Composing these cobordisms with \(W\), we get a cobordism from \(N\) to \(M\) with the correct complex section structure. **Definition 3.3**.: _The odd dimensional complex section cobordism groups are the equivalence classes under the above relation._ We can interpret the complex section cobordism groups as the connected components of the classifying space \(B\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}}\). **Proposition 3.2**.: _The connected components of \(B\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}}\) are the equivalence classes under complex section cobordism._ Proof.: If two manifolds are equivalent, then there is a morphism in \(\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}}\) connecting them, and so they are in the same connected component of \(B\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}}\). If two manifolds are in the same connected component of \(B\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}}\), then there is a zig-zag of morphisms connecting them by Theorem 3.4 of [3]. By Proposition 3.1, this means that the two manifolds are complex section cobordant. 
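For concreteness, the rotating family of frames used in the proof of Proposition 3.1 can be written out pointwise; the following is merely an unwinding of the formula above, not an additional construction. For \(x\in M\) and \(t\in[0,1]\), \[\tilde{\nu}(x,t)=\bigl(e^{\pi it}\nu(x),\,t\bigr),\qquad\tilde{\nu}(x,0)=(\nu(x),0),\qquad\tilde{\nu}(x,1)=(-\nu(x),1).\] Since multiplication by the unit scalar \(e^{\pi it}\) is complex-linear and unitary, each \(e^{\pi it}\nu(x)\) is again an orthogonal complex \(r\)-frame of the same length, so \(\tilde{\nu}\) is a genuine one-parameter family of complex \(r\)-frames interpolating between \(\nu\) and \(-\nu\). 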
**Remark 3.1**.: _Note that we index the cobordism category by the dimension of the cobordisms, but we index the cobordism groups by the dimension of the object manifolds._ Next we modify the results from [3], specializing to the complex case in order to connect these groups to the Thom spectra constructed in Section 2. For any structure \(\theta:X\to BO(d)\), we define \(\theta^{*}\mathbf{MTU}(d)\) to be the spectrum whose \(n\)-th space is given by \(Th(\theta^{*}U_{d,n}^{\perp})\). In particular, for \(\theta:BU(d)\to BO(2d)\), we observe that \(\theta^{*}(\mathbf{MTU}(2d))=\mathbf{MTU}(d)\). For the structure \(p_{V_{\mathbb{C},r}}:V_{\mathbb{C},r}(U_{\mathbb{C},d})\to BU(d)\), recall that \(p_{V_{\mathbb{C},r}}^{*}\mathbf{MTU}(d)\) was called \(\mathbf{MTU}(d)_{V_{r}}\). We proved in Proposition 2.1 that \(\mathbf{MTU}(d)_{V_{r}}\cong\mathbf{MTU}(d-r)\). Note that \(\theta_{\mathbb{C},r}=\theta\circ p_{V_{\mathbb{C},r}}\), where \(\theta_{\mathbb{C},r}:V_{\mathbb{C},r}(U_{\mathbb{C},d})\to BO(2d)\) is as above. The next theorem of [5] and its corollary will allow us to study and compute the complex section cobordism groups by studying the homotopy groups of the Thom spectrum \(\mathbf{MTU}(d)\). **Theorem 3.3**.: _The spaces \(B\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}}\) and \(\Omega^{\infty+2d-1}\mathbf{MTU}(d-r)\) are weakly homotopy equivalent._ Proof.: In [5], it was shown that there is a homotopy equivalence \[B\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}}\to\Omega^{\infty+2d-1}\theta_{\mathbb{C},r}^{*}\mathbf{MTU}(d)\cong\Omega^{\infty+2d-1}\mathbf{MTU}(d)_{V_{r}}\] Proposition 2.1 shows that \(\mathbf{MTU}(d)_{V_{r}}\) is homotopy equivalent to \(\mathbf{MTU}(d-r)\). The theorem follows immediately. **Corollary 3.4**.: _There is an isomorphism \(\pi_{2d+2r-1}(\mathbf{MTU}(d))\cong\pi_{0}B\mathcal{C}_{2(d+r)}^{\theta_{\mathbb{C},r}}\)._ The above corollary classifies all odd homotopy groups of \(\mathbf{MTU}(d)\). (The lower homotopy groups of \(\mathbf{MTU}(d)\) are classified by Lemma 2.8.) **Remark 3.2**.: _We note that we are not truly finding sections on \(M^{2d-1}\) but on \(M^{2d-1}\times I\), and our cobordisms are equivalences of these cylinders, not of the lower dimensional manifold._ ### The odd dimensional complex section cobordism category We define a complex cobordism theory with cobordisms of dimension \(2d+1\) and objects of dimension \(2d\). This is a specific case of Definition 3.1 with structure \(V_{\mathbb{C},r}(U_{\mathbb{C},d+1})\to BO(2d+2)\). **Definition 3.4**.: _The cobordism category \(\mathcal{C}^{U}_{2d+1,2n+2d+1}\) has objects which are \(2d\) dimensional manifolds without boundary \(M\subset(-1,1)^{2n+2d}\subset\mathbb{R}^{2n+2d}\) with \(M\) closed in \(\mathbb{R}^{2n+2d}\), along with a chosen lift of the classifying map \(\xi:M\to G(2d,2n)\to G(2d+2,2n)\) to \(V_{\mathbb{C},r}(U_{\mathbb{C},d+1,n})\). The space of morphisms from \(M_{0}\) to \(M_{1}\) is the disjoint union of the identity morphism along with pairs \((W,a)\) with the following properties: 
\(a\) is a real number in \((0,\infty)\) and \(W\subseteq(-1,1)^{2n+2d}\times\mathbb{R}\subset\mathbb{R}^{2n+2d}\times \mathbb{R}\) is a manifold of dimension \(2d+1\), which is closed in \(\mathbb{R}^{2n+2d+1}\) and such that for some \(\varepsilon>0\):_ \[W\cap(\mathbb{R}^{2n+2d}\times(-\infty,\varepsilon))=M_{0}\times(-\infty,\varepsilon)\] \[W\cap(\mathbb{R}^{2n+2d}\times(a-\varepsilon,\infty))=M_{1}\times(a- \varepsilon,\infty)\] _Additionally, \(W\) is equipped with a chosen lift of the classifying map \(W\to G(2d+1,2n)\to G(2d+2,2n)\) to \(V_{\mathbb{C},r}(U_{\mathbb{C},d+1,n})\) which is compatible with the structures on the cobordant manifolds._ For more details on the definition see [6] and [3]. We typically consider the limit as \(n\to\infty\) and abbreviate the notation for this category as \(\mathcal{C}^{U,r}_{2d+1}\). Let us consider in more detail what this category is. Objects are manifolds \(M^{2d}\) with complex structure and \(r\) linearly independent complex sections on \(TM\oplus\mathbb{R}^{2}\). Cobordisms \(W\) are \(2d+1\) dimensional manifolds with a complex structure and \(r\) linearly independent complex sections on \(TW\oplus\mathbb{R}\), compatible with the structure on the cobordant manifolds. We can define even dimensional complex section cobordism in a similar way to the odd case. **Definition 3.5**.: _Two \(2d\) dimensional manifolds \(M\) and \(N\) with almost complex structure and \(r\) linearly independent complex sections on \(TM\oplus\mathbb{R}^{2}\) and \(TN\oplus\mathbb{R}^{2}\) are defined to be complex section cobordant, if there is a cobordism \(W\) with boundary \(M\cup\overline{N}\) such that \(TW\oplus\mathbb{R}\) has complex structure and \(r\) linearly independent complex sections compatible with the structures on \(TM\oplus\mathbb{R}^{2}\) and \(TN\oplus\mathbb{R}^{2}\)._ Once again, this relation is equivalent to the existence of a morphism in \(\mathcal{C}^{U,r}_{2d+1}\). **Proposition 3.5**.: _The relation defined in Definition 3.5 is an equivalence relation._ **Theorem 3.6**.: _The connected components of \(B\mathcal{C}^{U,r}_{2d+1}\) are the equivalence classes of \(2d\) dimensional manifolds under complex section cobordism._ The proofs of these statements are identical to those in the odd case. Once again, we may use the classification of [5] to prove: **Theorem 3.7**.: _There is a homotopy equivalence_ \[B\mathcal{C}^{U,r}_{2d+1}\to\Omega^{\infty+2d}\mathbf{MTU}(d+1)_{V_{r}}\] **Corollary 3.8**.: _There is an isomorphism_ \[\pi_{0}B\mathcal{C}^{U,r}_{2d+1}\cong\pi_{2d}(\mathbf{MTU}(d+1-r))\] This corollary implies that \(\pi_{2d}(\mathbf{MTU}(d+1-r))\) is the even complex section cobordism group. Currently, the objects of the \(2d\) dimensional even complex section cobordism group are manifolds \(M^{2d}\) with complex structures and \(r\) linearly independent complex sections on \(TM\oplus\mathbb{R}^{2}\). However, by looking instead at the morphisms of the even dimensional complex section cobordism category, we observe that the elements of the group \(\pi_{1}(B\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}})\) can be represented by \(2d\) dimensional almost complex manifolds with \(r\) linearly independent complex tangent sections. This is an immediate consequence of Theorem 3.5 in [3] and Proposition 3.1. 
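Before proceeding, we record in one place the identifications of cobordism groups with homotopy groups obtained so far; this is only a restatement of Corollary 3.4 and Corollary 3.8. \[\pi_{0}B\mathcal{C}^{\theta_{\mathbb{C},r}}_{2(d+r)}\cong\pi_{2d+2r-1}(\mathbf{MTU}(d)),\qquad\pi_{0}B\mathcal{C}^{U,r}_{2d+1}\cong\pi_{2d}(\mathbf{MTU}(d+1-r)).\] The first isomorphism computes the odd dimensional complex section cobordism groups, and the second computes the even dimensional ones. 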
By the homotopy equivalences established above (Theorem 3.3 and Corollary 3.4), we arrive at the following: **Proposition 3.9**.: _There is an isomorphism_ \[\pi_{1}(B\mathcal{C}_{2d}^{\theta_{\mathbb{C},r}})\cong\pi_{2d}(\mathbf{MTU}(d-r))\] _Moreover, every class in \(\pi_{2d}(\mathbf{MTU}(d-r))\), the even dimensional complex section cobordism group, has a representative which is a \(2d\) dimensional manifold with \(r\) linearly independent complex sections on the tangent bundle itself._ This group is the main object of study in the next section. There is a geometric description of this isomorphism. Suppose we have an (almost complex) manifold \(M^{2d}\) embedded in \(\mathbb{R}^{2d+2n}\) equipped with \(r\) linearly independent complex sections. Then we get a Gauss map \(M^{2d}\to G(2d,2n)\) which lifts as below. If \(\nu\) is the normal bundle of the embedding \(M^{2d}\to\mathbb{R}^{2d+2n}\), then we can construct the following commutative square. If we add one point, we can consider an embedding \(M^{2d}\to S^{2d+2n}\). A small tubular neighborhood of \(M\) will be diffeomorphic to the total space of \(\nu\). If we collapse outside this tubular neighborhood, we get the Pontryagin-Thom map \(S^{2d+2n}\to Th(\nu)\). The above bundle maps induce maps \[S^{2d+2n}\to Th(\nu)\to Th(p^{*}_{V_{\mathbb{C},r}}(U^{\perp}_{\mathbb{C},d,n}))\] This composition represents the element \([M^{2d}]\) of \(\pi_{2d}(\mathbf{MTU}(d)_{V_{r}})\cong\pi_{2d}(\mathbf{MTU}(d-r))\). ## 4. The cobordism obstruction Corollaries 3.4 and 3.8 reduce the study of the complex section cobordism groups to the study of the homotopy groups of \(\mathbf{MTU}(d)\). We use the cofibration long exact sequence in homotopy to compute these groups and prove the main theorems. Consider the following segment: \[\rightarrow\pi_{2d+1}(\mathbf{MU})\rightarrow\pi_{2d+1}(\overline{\mathbf{MTU}}(d-r))\rightarrow\pi_{2d}(\mathbf{MTU}(d-r))\rightarrow\pi_{2d}(\mathbf{MU})\stackrel{{\gamma^{r}}}{{\longrightarrow}}\pi_{2d}(\overline{\mathbf{MTU}}(d-r))\rightarrow\pi_{2d-1}(\mathbf{MTU}(d-r))\rightarrow\pi_{2d-1}(\mathbf{MU})\] We already know several of the groups in this sequence: \(\pi_{2d+1}(\mathbf{MU})\cong\Omega^{U}_{2d+1}=0\) and \(\pi_{2d-1}(\mathbf{MU})\cong\Omega^{U}_{2d-1}=0\). So the two end groups vanish, leaving a five term exact sequence. We know \(\pi_{2d}(\mathbf{MU})\cong\Omega^{U}_{2d}\). So the above exact sequence reduces to \[0\rightarrow\pi_{2d+1}(\overline{\mathbf{MTU}}(d-r))\rightarrow\pi_{2d}( \mathbf{MTU}(d-r))\rightarrow\Omega^{U}_{2d}\stackrel{{\gamma^{ r}}}{{\longrightarrow}}\pi_{2d}(\overline{\mathbf{MTU}}(d-r))\rightarrow\pi_{2d-1}( \mathbf{MTU}(d-r))\to 0 \tag{3}\] For the rest of this section, let \(i_{d,r}\) be the rank of the group \(H^{2d}(\mathbf{MTU}(d-r);\mathbb{Q})\) and \(j_{d,r}\) the rank of \(H^{2d}(\overline{\mathbf{MTU}}(d-r);\mathbb{Q})\). A classical result of homotopy theory states that \(H^{*}(X;\mathbb{Q})\cong\pi_{*}(X)\otimes\mathbb{Q}\) for a spectrum \(X\) of finite type. Thus, we determine the ranks of the homotopy groups of \(\mathbf{MTU}(d)\) and \(\overline{\mathbf{MTU}}(d-r)\) by using Propositions 2.2 and 2.6. Specifically, \(i_{d,r}=\operatorname{Rank}(\pi_{2d}(\mathbf{MTU}(d-r)))=\operatorname{Rank}( \operatorname{Ker}(\gamma^{r}))\). Since \(\pi_{q}(\overline{\mathbf{MTU}}(d-r))\otimes\mathbb{Q}\) vanishes for odd \(q\), the group \(\pi_{q}(\overline{\mathbf{MTU}}(d-r))\) is finite for \(q\) odd. 
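As a small worked instance of these ranks (our own illustration), take \(d=3\) and \(r=1\), so that \(d-r=2\). The degree \(6\) part of \(\mathbb{Q}[c_{1},c_{2}]\) is spanned by the monomials \(c_{1}^{3}\) and \(c_{1}c_{2}\), corresponding to the partitions \(1+1+1\) and \(2+1\) of \(3\), so \[i_{3,1}=\operatorname{Rank}H^{6}(\mathbf{MTU}(2);\mathbb{Q})=2.\] Since all the cohomology in play is torsion-free and concentrated in even degrees, the cofiber sequence defining \(\overline{\mathbf{MTU}}(2)\) gives \(j_{3,1}=p(3)-i_{3,1}=3-2=1\), accounted for by the single remaining partition \(\{3\}\) (the monomial \(c_{3}\)). 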
It follows from the exact sequence (3) that the map \(\pi_{2d+1}(\overline{\mathbf{MTU}}(d-r))\rightarrow\pi_{2d}(\mathbf{MTU}(d-r))\) is an injection. So the forgetful map \(\pi_{2d}(\mathbf{MTU}(d-r))\rightarrow\Omega^{U}_{2d}\) must have kernel \(\pi_{2d+1}(\overline{\mathbf{MTU}}(d-r))\). Moreover, since \(\Omega^{U}_{2d}\) is free abelian, this kernel must be the entire torsion subgroup of \(\pi_{2d}(\mathbf{MTU}(d-r))\). Thus the group \(\pi_{2d}(\mathbf{MTU}(d-r))\) splits as \(\pi_{2d+1}(\overline{\mathbf{MTU}}(d-r))\oplus\mathbb{Z}^{\oplus i_{d,r}}\). By Corollary 2.7, we have the isomorphism \(\pi_{2d+1}(\overline{\mathbf{MTU}}(d-r))\cong\pi_{2d+1}(\mathbf{MTU}(d+1,r+1))\) induced by the spectrum maps. This proves the following theorem. **Theorem 4.1**.: _There is an isomorphism_ \[\pi_{2d}(\mathbf{MTU}(d-r))\cong\pi_{2d+1}(\mathbf{MTU}(d+1,r+1))\oplus \mathbb{Z}^{\oplus i_{d,r}}\] _where \(i_{d,r}\) is the rank of \(\mathbb{Z}[c_{1},...,c_{d-r}]\) in degree \(2d\)._ Corollary 3.8 shows that \(\pi_{2d}(\mathbf{MTU}(d-r))\) is the complex section cobordism group of \(2d\) dimensional manifolds \(M\) with \(r\) linearly independent complex sections. We conclude the following. **Corollary 4.2**.: _Let \([M^{2d}]\in\Omega^{U}_{2d}\) be such that \(M^{2d}\) is equipped with \(r\) linearly independent complex sections on \(TM\). Then, there are only finitely many pairs \((N^{2d},s)\), up to complex section cobordism, where the manifold \(N^{2d}\in[M^{2d}]\) and \(s:N\to V_{\mathbb{C},r}(TN)\) is the structure of \(r\) linearly independent complex sections on \(N^{2d}\). Moreover, such pairs are indexed by the group \(\pi_{2d+1}(\mathbf{MTU}(d+1,r+1))\)._ A similar result is also true for odd dimensional manifolds. **Proposition 4.3**.: _The odd dimensional complex section cobordism group is finite._ Proof.: The odd dimensional complex section cobordism group is \(\pi_{2d+2r-1}(\mathbf{MTU}(d))\). Since \(H^{2d+2r-1}(\mathbf{MTU}(d);\mathbb{Q})=0\), we have \(\pi_{2d+2r-1}(\mathbf{MTU}(d))\otimes\mathbb{Q}=0\). Thus \(\pi_{2d+2r-1}(\mathbf{MTU}(d))\) is finite. These provide the proofs of Theorems 1.3 and 1.4. Let \(\gamma^{r}:\Omega^{U}_{2d}\to\pi_{2d}(\overline{\mathbf{MTU}}(d-r))\) be the map in the exact sequence (3). By exactness, a class \([M]\in\Omega^{U}_{2d}\) lifts to \(\pi_{2d}(\mathbf{MTU}(d-r))\) if and only if \(\gamma^{r}([M])=0\). We conclude that: **Theorem 4.4**.: _A manifold \(M\) is complex cobordant to a manifold equipped with \(r\) linearly independent complex sections on \(TM\) if and only if \(\gamma^{r}(M)=0\)._ To describe the obstruction map \(\gamma^{r}\), we use the characteristic classes \(s_{\omega}\). These classes are defined in [12] and [8]. Let \(\omega=\{i_{1}\geq i_{2}\geq\cdots\geq i_{k}>0\}\) be a partition of \(d\) and define the length of \(\omega\) to be \(l(\omega)=k\). Let \(f_{\omega}(t_{1},\ldots,t_{d})\) be the smallest symmetric polynomial in the variables \(t_{1},\ldots,t_{d}\) with \(t_{1}^{i_{1}}\cdots t_{k}^{i_{k}}\) as a summand. Write this polynomial in terms of the elementary symmetric polynomials \(\sigma_{1}(t_{1},\ldots,t_{d}),\ldots,\sigma_{d}(t_{1},\ldots,t_{d})\). Replacing the elementary symmetric polynomials \(t_{1}+\cdots+t_{d},\ldots,t_{1}t_{2}\cdots t_{d}\) with \(c_{1},\ldots,c_{d}\) gives a polynomial which is called \(s_{\omega}(c_{1},\ldots,c_{d})\). Let \(M^{2d}\) be an almost complex manifold. 
Then the characteristic classes \(s_{\omega}(M^{2d})\) are defined as \[s_{\omega}(M^{2d})=\langle s_{\omega}(c_{1}(TM),...,c_{d}(TM)),[M^{2d}]\rangle\] A special example is \(s_{1,1,...,1}(M^{2d})\). The smallest symmetric polynomial in the variables \(t_{1},...,t_{d}\) with \(t_{1}...t_{d}\) as a summand is the elementary symmetric polynomial \(t_{1}t_{2}...t_{d}\) itself. So the polynomial \(s_{1,1,...,1}(c_{1},...,c_{d})=c_{d}\), and \(s_{1,1,...,1}(M^{2d})=\chi(M^{2d})\). We know by the theorem of Hopf that this is the only obstruction when \(r=1\). Recall the following property; see [8] or [12]. **Lemma 4.5**.: _If \(c_{k}=0\) for all \(k\geq N\), then \(s_{\omega}(c_{1},...,c_{d})=0\) for all \(\omega\) such that \(l(\omega)\geq N\)._ First we establish the following rational result in terms of characteristic classes. **Theorem 4.6**.: _Let \(M^{2d}\) be an almost complex manifold of complex dimension \(d\), and let \(r<d/2\). Let_ \[\gamma^{r}_{\mathbb{Q}}:=\gamma^{r}\otimes\mathbb{Q}:\Omega^{U}_{2d}\otimes \mathbb{Q}\to\pi_{2d}(\overline{\mathbf{MTU}}(d-r))\otimes\mathbb{Q}\] _Then \(\gamma^{r}_{\mathbb{Q}}(M^{2d})=0\) if and only if \(s_{\omega}(M^{2d})=0\) for all \(\omega\) of length greater than or equal to \(d-r+1\)._ **Corollary 4.7**.: _Let \(M^{2d}\) be an almost complex manifold of complex dimension \(d\). Then there exists a constant \(c\) such that the cobordism class \(c[M^{2d}]\) contains a manifold \(N^{2d}\) with \(r\) complex sections on \(TN\) if and only if \(s_{\omega}(M^{2d})=0\) for all \(\omega\) of length greater than or equal to \(d-r+1\)._ We need the following theorem of Stong [11] to determine \(\gamma^{r}_{\mathbb{Q}}\). **Theorem 4.8**.: _The set \(\{s_{k_{1},...,k_{r}}\mid k_{1}+...+k_{r}=d\}\) is a basis of \(\mathbf{Hom}_{\mathbb{Q}}(\Omega^{U}_{2d}\otimes\mathbb{Q},\mathbb{Q})\)._ Proof of Theorem 4.6.: We start by proving the forward direction. Suppose \(\gamma^{r}_{\mathbb{Q}}(M^{2d})=0\); then \(\gamma^{r}(M^{2d})\) is torsion, so after replacing \(M^{2d}\) by a suitable non-zero multiple (which does not change whether the classes \(s_{\omega}\) vanish), we may assume \(\gamma^{r}(M^{2d})=0\). By Theorem 4.4, there is a cobordant manifold \(N^{2d}\) such that \(TN^{2d}\) has \(r\) linearly independent complex sections. Thus \(TN^{2d}\) splits as \(E\oplus\mathbb{C}^{r}\), where \(E\) has complex dimension \(d-r\). Thus \(c_{k}(TN^{2d})=0\) for \(k\geq d-r+1\). By Lemma 4.5, \(s_{\omega}(N^{2d})=0\) for \(\omega\) of length greater than or equal to \(d-r+1\). Since cobordant manifolds have the same Chern numbers, one direction is proven. We know that \(\gamma^{r}_{\mathbb{Q}}:\Omega^{U}_{2d}\otimes\mathbb{Q}\to\pi_{2d}(\overline{ \mathbf{MTU}}(d-r))\otimes\mathbb{Q}\cong\mathbb{Q}^{j_{d,r}}\) has rank \(j_{d,r}\) and is surjective. Each coordinate of this map can be written as a linear combination of the \(s_{\omega}\) by Theorem 4.8. Let \(S^{\prime}\subseteq\mathbf{Hom}(\Omega^{U}_{2d}\otimes\mathbb{Q},\mathbb{Q})\) be the span of the \(s_{\omega}\) such that \(l(\omega)\geq d-r+1\). Let \(S\subseteq\Omega^{U}_{2d}\otimes\mathbb{Q}\) be the vector space of all elements \(x\) such that \(\rho(x)=0\) for every element \(\rho\in S^{\prime}\). Let \([M^{2d}]\notin S\). Then, for some \(\omega\) with \(l(\omega)\geq d-r+1\), we have \(s_{\omega}([M^{2d}])\neq 0\). So, by Lemma 4.5, every representative of \([M^{2d}]\) has some non-zero Chern class \(c_{k}\) with \(k\geq d-r+1\). So no manifold cobordant to \(M\) may have \(r\) sections, because it has a non-zero characteristic class in dimension greater than \(d-r\). Moreover, no manifold cobordant to a multiple of \(M\) may have \(r\) sections. Thus, every element \([M^{2d}]\) not in \(S\) must have \(\gamma^{r}_{\mathbb{Q}}([M^{2d}])\neq 0\) by Theorem 4.4. 
We conclude that \(\operatorname{Ker}(\gamma^{r}_{\mathbb{Q}})\subseteq S\). It remains to show that \(S\) has rank \(i_{d,r}\). We recall that \(i_{d,r}\) is the rank of \(H^{*}(\mathbf{MTU}(d-r);\mathbb{Q})=\mathbb{Q}[c_{1},...,c_{d-r}]\) in degree \(2d\). This is the number of partitions of \(d\) into parts of size at most \(d-r\). The rank of \(S\) is the rank of \(\Omega^{U}_{2d}\otimes\mathbb{Q}\) minus the rank of \(S^{\prime}\). Since the rank of \(\Omega^{U}_{2d}\otimes\mathbb{Q}\) is the number of partitions of \(d\) and the rank of \(S^{\prime}\) is the number of partitions of \(d\) of length greater than \(d-r\), we conclude that the rank of \(S\) is the number of partitions of length less than or equal to \(d-r\); since conjugation of partitions exchanges length and largest part, this equals the number of partitions with parts of size at most \(d-r\). Thus, both \(S\) and \(\operatorname{Ker}(\gamma^{r}_{\mathbb{Q}})\) have rank \(i_{d,r}\). Since \(\operatorname{Ker}(\gamma^{r}_{\mathbb{Q}})\subseteq S\) and both are vector spaces of the same finite rank, \(\operatorname{Ker}(\gamma^{r}_{\mathbb{Q}})=S\). It will be useful to discuss the multiplicative structure of the complex cobordism ring. The following result of Milnor [7] is used to verify whether certain manifolds are multiplicative generators. **Theorem 4.9**.: _The complex cobordism ring \(\Omega^{U}\) has multiplicative structure isomorphic to the polynomial ring \(\mathbb{Z}[b_{1},b_{2},...]\) with generators \(b_{i}\) in dimension \(2i\). If \(i\neq p^{q}-1\) for any prime \(p\), then a manifold \(M^{2i}\) can be taken to be the generator if and only if \(s_{i}([M^{2i}])=1\). If \(i=p^{q}-1\) for some prime \(p\), then \(M^{2i}\) can be taken to be the generator if and only if \(s_{i}([M^{2i}])=p\)._ As a corollary, when we look at the rational complex cobordism ring, the generators are identified by the following theorem: **Theorem 4.10**.: _The rational complex cobordism ring is \(\Omega^{U}\otimes\mathbb{Q}\cong\mathbb{Q}[b_{1},b_{2},...]\) with generators \(b_{i}\) in dimension \(2i\). A manifold \(M^{2i}\in\Omega_{2i}\otimes\mathbb{Q}\) can be taken to be the multiplicative generator if \(s_{i}([M^{2i}])\neq 0\)._ For larger values of \(r\), the situation becomes less clear; however, in the rational case, we can find a generator satisfying: **Theorem 4.11**.: _Let \(r<d/2\). There exists a manifold \(M^{2d}\) in \(\Omega^{U}_{2d}\) which can be equipped with \(r\) linearly independent complex sections on \(TM\) and whose image in \(\Omega^{U}_{2d}\otimes\mathbb{Q}\) is a generator._ Proof.: By [12], the maps \(s_{\omega}\) span the \(p(d)\)-dimensional vector space \(\mathbf{Hom}_{\mathbb{Q}}(\Omega^{U}_{2d}\otimes\mathbb{Q},\mathbb{Q})\). Since there are \(p(d)\) such characteristic classes, they must also be linearly independent. Moreover, for every \(\omega\), we can find a dual object \(M^{2d}_{\omega}\in\Omega^{U}_{2d}\otimes\mathbb{Q}\) such that \(s_{\omega}(M^{2d}_{\omega})=1\) and all other characteristic numbers are \(0\). If we choose \(\omega\) to be the partition \(\{d\}\), then we get a rational generator \(M^{2d}_{d}\) by Theorem 4.10. We can choose an integer \(c\) such that \(M^{2d}:=cM^{2d}_{d}\in\Omega^{U}_{2d}\). Then \(s_{\omega}(M^{2d})=0\) for \(\omega\neq\{d\}\), and Theorem 4.6 shows that some manifold \(\tilde{M}^{2d}\in[M^{2d}]\) has \(r\) linearly independent complex sections. Next, we deal with potential torsion in the image of \(\gamma^{r}\). We briefly recall the \(MU^{*}\) cohomology theory and the Adams-Novikov spectral sequence [9]. The \(MU^{*}\) cohomology theory is defined by \(MU^{k}(X)=[\Sigma^{-k}X,MU]\). 
In particular, \(MU^{*}(pt)\cong\Omega^{*}_{U}\) and \(MU^{*}(\mathbf{MU})=A^{U}\), the algebra of operations for the \(MU^{*}\) cohomology theory. **Theorem 4.12**.: _For connective spectra of finite type \(X,Y\), there is a spectral sequence \(\{E_{k}^{s,t},d_{k}\}\) with differentials \(d_{k}:E_{k}^{s,t}\to E_{k}^{s+k,t+k-1}\) such that:_ 1. \(E_{2}^{s,t}\cong\mathbf{Ext}_{A^{U}}^{s,t}(MU^{*}(X),MU^{*}(Y))\)__ 2. _There is a filtration_ \[[\Sigma^{t-s}Y,X]=F^{0,t-s}\supseteq...\supseteq F^{s,t}\supseteq F^{s+1,t+1} \supseteq...\] _such that_ \(E_{\infty}^{s,t}=F^{s,t}/F^{s+1,t+1}\)_._ We are only concerned with the case where \(Y\) is the sphere spectrum. It is easy to calculate the \(MU\) cohomology of the relevant spectra using the Atiyah-Hirzebruch spectral sequence [1]. **Proposition 4.13**.: _Let \(X\) be a spectrum with torsion-free cohomology concentrated in even degrees. Then \(MU^{*}(X)\cong\Omega_{U}^{*}\otimes_{\mathbb{Z}}H^{*}(X;\mathbb{Z})\), and it is torsion-free and concentrated in even degrees._ Proof.: The \(E_{2}\) page of the Atiyah-Hirzebruch spectral sequence for computing \(MU^{*}(X)\) is given by \[E_{2}^{p,q}=H^{p}(X;\Omega_{U}^{q})\] By the universal coefficient theorem, since \(H^{p}(X;\mathbb{Z})=0\) for \(p\) odd and \(\Omega_{U}^{q}\) is free abelian, the \(E_{2}\) page is \(E_{2}\cong H^{*}(X;\mathbb{Z})\otimes\Omega_{U}^{*}\). Moreover, \(E_{2}^{p,q}=0\) unless \(p\) and \(q\) are both even. The differentials \(d_{r}\) have degree \((r,-r+1)\), which implies that either the source or the target must be zero. Therefore all differentials are zero and \(MU^{*}(X)\cong\Omega_{U}^{*}\otimes_{\mathbb{Z}}H^{*}(X;\mathbb{Z})\). As a consequence, the algebra of operations \(A^{U}\) is torsion-free and concentrated in even degrees. We begin the proof of the main theorem with the following technical lemma. **Lemma 4.14**.: _Let \(1\leq n<d\) and suppose \([M]\in\Omega_{U}^{2d+2n}\) is such that \(s_{\omega}(M)=0\) for all \(\omega\) containing a number greater than \(d\). Then \([M]\in\mathbb{Z}[b_{1},...,b_{d}]\subseteq\Omega_{U}^{*}\)._ Proof.: Let \([M]\) satisfy the assumptions of the lemma. First note that \(s_{d+n}(M)=0\), so \(M\) is decomposable. Let \(M^{2i}\) be a manifold representing \(b_{i}\) for each \(i\). Let \(k\) be the maximal number such that \(M^{2(d+k)}\) appears in the decomposition of \(M\). Suppose \(k>0\). Then \(M=M^{2(d+k)}\tilde{M}+\sum_{i}\prod_{j}N_{ij}\), where each \(N_{ij}\) is a manifold of dimension less than \(2(d+k)\). By assumption, \(s_{d+k,\tilde{\omega}}(M)=s_{d+k}(M^{2(d+k)})s_{\tilde{\omega}}(\tilde{M})=0\) for any partition \(\tilde{\omega}\) of \(n-k\). Since \(s_{d+k}(M^{2(d+k)})\neq 0\), we conclude \(s_{\tilde{\omega}}(\tilde{M})=0\) for all \(\tilde{\omega}\). Thus \(\tilde{M}\) must be null-cobordant, and so \(M^{2(d+k)}\) cannot appear in the decomposition of \(M\) for \(k>0\). So \([M]\in\mathbb{Z}[b_{1},...,b_{d}]\). **Theorem 4.15**.: _Let \(M^{2d}\) be an almost complex manifold of complex dimension \(d\) and let \(r<d/2\). Then there exists a manifold \(N^{2d}\) cobordant to \(M^{2d}\) with \(r\) linearly independent complex sections on \(TN^{2d}\) if \(s_{\omega}(M^{2d})=0\) for all \(\omega\) of length greater than \(d-r\) and for all \(\omega\) containing a number greater than \(d-r\)._ Proof.: Consider the Adams-Novikov spectral sequence for \(\mathbf{MU}\) and \(\mathbf{MTU}(d)\). To compute the homotopy groups of a spectrum \(X\), the \(E_{2}\) page will be \(\mathbf{Ext}_{A^{U}}^{s,*}(MU^{*}(X),\Omega_{U}^{*})\). 
One choice of free resolution for \(MU^{*}(\mathbf{MU})\) over \(A^{U}\) is \[0\to A^{U}\to MU^{*}(\mathbf{MU})\to 0\] It quickly follows that \(\mathbf{Ext}_{A^{U}}^{s,*}(MU^{*}(\mathbf{MU}),\Omega_{U}^{*})\) is \(\Omega_{U}^{*}\) for \(s=0\) and \(0\) for \(s\neq 0\). Thus the \(E_{1}\) page is \(\Omega_{U}^{*}\) and all differentials must be \(0\). Now construct a free resolution for \(MU^{*}(\mathbf{MTU}(d))\) over \(A^{U}\): \[\to F_{2}\to F_{1}\to A^{U}\to MU^{*}(\mathbf{MTU}(d))\] The map \(MU^{*}(\mathbf{MU})\cong A^{U}\to MU^{*}(\mathbf{MTU}(d))\) is induced by the inclusion. We can conclude that \(E_{1}^{0,*}\) will be \(\mathbf{Hom}_{A^{U}}(A^{U},\Omega_{U}^{*})\cong\Omega_{U}^{*}\). By Lemma 2.8, \(\pi_{q}(\mathbf{MTU}(d))\cong\pi_{q}(\mathbf{MU})\cong\Omega_{U}^{-q}\) for \(q\leq 2d\). Thus in degrees less than or equal to \(2d\), \(E_{1}^{0,*}\) must survive to the \(E_{\infty}\) page. In particular, all differentials are zero in this range. (The elements that survive to the \(E_{\infty}\) page are exactly those in the kernel of all differentials.) Let \(1\leq n<d\) and suppose \([M]\in\Omega_{U}^{2d+2n}\) is such that \(s_{\omega}(M)=0\) for all \(\omega\) of length greater than \(d-r\) and for all \(\omega\) containing a number greater than \(d-r\). The previous Lemma 4.14 says that \([M]\in\mathbb{Z}[b_{1},...,b_{d}]\). There is a natural action of \(\Omega_{U}^{*}\) on the \(E_{2}\) and higher pages of the Adams-Novikov spectral sequence. Since \(d_{r}(b_{i})=0\) for \(i\leq d\), we observe that \(d_{r}(b_{i}\cdot x)=b_{i}\,d_{r}(x)\) for \(x\) on the \(E_{r}\) page. In particular, \(d_{r}(M)\) must be zero if \(d_{1}(M)=0\). By Theorem 4.6, some multiple \(cM\) must survive to the \(E_{\infty}\) page. Since \(d_{1}\) has image in free abelian groups, \(d_{1}(M)\) must be zero. Thus \([M]\) survives to the \(E_{\infty}\) page. Thus \([M]\in\pi_{2d+2n}(\mathbf{MTU}(d))\), and an element of the cobordism class can be equipped with \(n\) complex sections by Theorem 4.4. Reindexing completes the proof of the theorem. We conclude by noting that the assumption that \(s_{\omega}\) be zero for all \(\omega\) containing a number greater than \(d-r\) is likely not a necessary condition. We expect that: **Conjecture 1**.: _Let \(M^{2d}\) be an almost complex manifold of complex dimension \(d\) and let \(r<d/2\). Then there exists a manifold \(N^{2d}\) cobordant to \(M^{2d}\) with \(r\) linearly independent complex sections on \(TN^{2d}\) if and only if \(s_{\omega}(M^{2d})=0\) for all \(\omega\) of length greater than \(d-r\)._ To show this, it would be sufficient to show that all differentials are zero in the relevant range of the Adams-Novikov spectral sequence. This result is known to be true for \(r=d\) [4], and the complex section cobordism group is the framed cobordism group in this case. This gives hope that the result may be extended to \(r\geq d/2\).
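To illustrate the classes \(s_{\omega}\) and the generator criteria in the most classical example (a standard computation, included only for orientation): for \(\mathbb{CP}^{d}\) we have \(c(T\mathbb{CP}^{d})=(1+x)^{d+1}\), where \(x\) is the hyperplane class, so the stable Chern roots are \(d+1\) copies of \(x\) and \[s_{d}(\mathbb{CP}^{d})=\bigl\langle(d+1)x^{d},[\mathbb{CP}^{d}]\bigr\rangle=d+1.\] In particular \(s_{1}(\mathbb{CP}^{1})=2\) and \(s_{2}(\mathbb{CP}^{2})=3\); since \(1=2^{1}-1\) and \(2=3^{1}-1\), Theorem 4.9 allows \(\mathbb{CP}^{1}\) and \(\mathbb{CP}^{2}\) to be taken as the generators \(b_{1}\) and \(b_{2}\). Consistently with the identity \(s_{1,\ldots,1}=c_{d}\) noted above, one also finds \(s_{1,1}(\mathbb{CP}^{2})=\langle c_{2},[\mathbb{CP}^{2}]\rangle=\chi(\mathbb{CP}^{2})=3\). 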
2303.00164
A Mixed-Methods Approach to Understanding User Trust after Voice Assistant Failures
Despite huge gains in performance in natural language understanding via large language models in recent years, voice assistants still often fail to meet user expectations. In this study, we conducted a mixed-methods analysis of how voice assistant failures affect users' trust in their voice assistants. To illustrate how users have experienced these failures, we contribute a crowdsourced dataset of 199 voice assistant failures, categorized across 12 failure sources. Relying on interview and survey data, we find that certain failures, such as those due to overcapturing users' input, derail user trust more than others. We additionally examine how failures impact users' willingness to rely on voice assistants for future tasks. Users often stop using their voice assistants for specific tasks that result in failures for a short period of time before resuming similar usage. We demonstrate the importance of low stakes tasks, such as playing music, towards building trust after failures.
Amanda Baughan, Allison Mercurio, Ariel Liu, Xuezhi Wang, Jilin Chen, Xiao Ma
2023-03-01T01:35:16Z
http://arxiv.org/abs/2303.00164v2
# A Mixed-Methods Approach to Understanding User Trust after Voice Assistant Failures ###### Abstract. Despite huge gains in performance in natural language understanding via large language models in recent years, voice assistants still often fail to meet user expectations. In this study, we conducted a mixed-methods analysis of how voice assistant failures affect users' trust in their voice assistants. To illustrate how users have experienced these failures, we contribute a crowdsourced dataset of 199 voice assistant failures, categorized across 12 failure sources. Relying on interview and survey data, we find that certain failures, such as those due to overcapturing users' input, derail user trust more than others. We additionally examine how failures impact users' willingness to rely on voice assistants for future tasks. Users often stop using their voice assistants for specific tasks that result in failures for a short period of time before resuming similar usage. We demonstrate the importance of low stakes tasks, such as playing music, towards building trust after failures. voice assistants, trust, survey, interview, dataset + Footnote †: (c) 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-4921-5/23/04. 
What failures do users currently experience, and how do these failures affect user trust? A human-centered understanding of the types of NLP failures that occur and their impact on users' trust would allow technologists to prioritize and address critical failures and enable long-term adoption of voice assistants for a wider variety of use cases. 
Further, while research has started to categorize types of breakdowns in communication between users and NLP agents (Kang et al., 2018; Wang et al., 2019), little work has looked into how users perceive these failures and subsequently trust and use their voice assistants. We draw from and extend past research to make the following contributions: * **C1:** Iterating on the existing taxonomy of NLP failures, we crowdsource a dataset of 199 failures users have experienced across 12 different sources of failure. * **C2:** A qualitative and quantitative evaluation of how these different failures affect user trust, specifically along dimensions of ability, benevolence, and integrity. * **C3:** A qualitative and quantitative analysis of how trust impacts intended future use. To accomplish this, we developed a mixed-methods, human-centered investigation into voice assistant failures. We first executed interviews with 12 voice assistant users to understand what types of failures they have experienced and how this affected their trust and subsequent use of their assistant. We concurrently crowdsourced a dataset of failures from voice assistant users on Amazon Mechanical Turk. Finally, we executed a survey to quantify how different types of failures impact users' trust in their voice assistants and their willingness to use them for various tasks in the future. We found that different types of voice assistant failures have a differential impact on trust. Our interviews and survey revealed that participants are more forgiving of failures due to spurious triggers or ambiguity of their own request. In the case of spurious triggers, the voice assistant activates because it misheard the activation phrase when it was not said. Users forgave this more easily, as it did not hinder them from accomplishing a goal. Failures due to ambiguity occurred when there were multiple reasonable interpretations of a request, and the response was misaligned with what the user intended while still accurately answering the question. Users tended to blame themselves for these failures. However, failures due to overcapture more severely reduced users' trust: when the voice assistant continued listening without any additional input, users considered their use a waste of time. We additionally find that on many occasions, users would discontinue using their voice assistant for a specific task for a short period of time following a failure, and then resume again once trust had been rebuilt. Trust was often rebuilt by using the voice assistant for tasks they considered simple, such as playing music, or alternatively, using the voice assistant for the same general task but in a different use case. In addition to these findings, we release a dataset of 199 voice assistant failures, capturing user input, voice assistant response, and the context for the failure, so that researchers may use these failures for future research on how users respond to voice assistant failures. As voice assistants continue to perform increasingly complex and high stakes tasks across various industries (Kang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), we hope that this research will help technologists understand, prioritize, and address natural language failures to increase and maintain user trust in voice assistants. ## 2. 
Related Work Prior research across many fields has examined the interaction between users and voice assistants, including human-computer interaction, human-centered AI, human-robot interaction, science and technology studies (STS), computer-mediated communication (CMC), and social psychology. In addition, some work in natural language processing (NLP), especially NLP robustness, has approached technology failures in voice assistants and developed certain technical solutions to address them. Here, we provide an interdisciplinary review of research relevant to voice assistant failures during user interaction across these fields. The literature review is organized as follows: 1) literature on user expectations and trust in voice assistants; 2) human-computer interaction (HCI) approaches to understanding voice assistant failures and strategies for mitigation; 3) natural language processing (NLP) approaches to voice assistant failures, including disfluency and robustness. ### User Expectations and Trust in Voice Assistants Researchers have long tried to understand how people interact with automated agents, especially comparing and contrasting these experiences with human-to-human communication. When talking with other humans, conversations can broadly be understood as functional (also known as transactional or task-based) or social (interactional), and many conversations include a mix of both (Kang et al., 2018). Functional conversations serve the pursuit of a goal, and those who participate often have understood roles in the pursuit of that goal. In contrast, social conversations have a goal of building, strengthening, or maintaining a positive relationship with one of the participants. These social conversations can help build trust, rapport, and common ground (Kang et al., 2018). People generally expect to have functional conversations with voice assistants (Kang et al., 2018). The lack of social conversations may reduce users' ability to build trust in their voice assistants. Indeed, past research has shown that users trust embodied conversational agents more when they engage in small talk (Bowman et al., 2019), although this varies by user personality type and level of embodiment of the agent (Bowman et al., 2019). As it stands, people report not using voice assistants for a broad range of tasks, even though the assistants are technically capable of performing them (Kang et al., 2019). Prior work has illustrated the importance of trust for continued voice assistant use (Kang et al., 2018; Wang et al., 2019), as trust is pivotal to user adoption of voice assistants (Wang et al., 2019; Wang et al., 2019) and to willingness to broaden the scope of voice assistant tasks (Kang et al., 2019). It is especially important to support trust-building between users and voice assistants as researchers continue to imagine and develop new capabilities for them, including complex tasks such as supporting healthcare tasks (Wang et al., 2019; Wang et al., 2019), giving mental health advice (Wang et al., 2019; Wang et al., 2019), and other high stakes decision-making (Kang et al., 2019). This raises the question of how trust is built between users and voice assistants. Trust in machines is an increasingly important topic, as use of automated systems is widespread (Wang et al., 2019). Concretely, trust can be conceptualized as a combination of confidence in a system as well as willingness to act on its provided recommendations (Wang et al., 2019; Wang et al., 2019). 
Prior researchers have examined trust in machines in terms of people's confidence in a machine's _ability_ to perform as expected, its _benevolence_ (being well-meaning), and its _integrity_ in adhering to ethical standards (Saleem et al., 2017). Broadly, past research has evaluated how various factors such as accuracy and errors affect people's trust in algorithms (Hong et al., 2017; Ma et al., 2018). In the case of voice assistants, Nasirian et al. (Nai et al., 2018) and Lee et al. (Lee et al., 2018) studied how quality affects trust in and adoption of voice assistants, and found that information and system quality did not impact users' trust in a voice assistant, but interaction quality did. Interaction quality was captured based on a study by Ekinci and Dawes (Ekinci and Dawes, 2018), in which Likert-scale responses were captured regarding the competence, attitude, service manner, and responsiveness of the voice assistant. In addition, customizing a voice assistant's personality to the user can lead to higher trust (Bahdan et al., 2018), while gender does not impact users' trust in a voice assistant.

In the NLP literature, robustness research has shown that model performance can degrade under topic domain changes, for example across New York Times articles, Reddit posts, and Amazon product reviews. Noisy input can also harm model performance. Lee et al. (2019) showed speech recognition errors have a catastrophic impact on machine comprehension. Gupta et al. (2019) created a question answering dataset, Disflu-QA, where humans introduce contextual disfluencies, which also lead to model performance drops. Although these works do not directly focus on voice assistant failures, topic domain changes, speech recognition errors, and disfluencies are all very common during user interactions with voice assistants. Such similarities motivate us to draw parallels between the NLP robustness literature and HCI perspectives on system failures. By understanding how different types of failures affect trust in voice assistants overall, we can then try to pinpoint the underlying NLP components that are the root cause of the most critical failures that erode trust (Nakamura et al., 2018). Technical solutions can then be leveraged to improve the robustness of the most critical parts of the system in order to increase user trust and long-term engagement most efficiently.

## 3. Method Overview

Now that we have established the importance of understanding how voice assistant failures impact user trust, we proceed to conduct a mixed-method study. First, to prepare for the quantitative evaluation, we reviewed existing datasets in HCI and NLP to find failures that we could use as materials for our survey. Ultimately, the existing datasets were not sufficient for our needs. Therefore, we crowdsourced a dataset of failures from voice assistant users, which we also open-source as part of the contributions of this study. Concurrently, we conducted interviews with 12 voice assistant users to understand which types of failures they have experienced, and how this affected their trust in and subsequent use of the assistant. These interviews were designed to provide a broad understanding of the thoughts, feelings, and behaviors that users have with regard to voice assistant failures and to inform the quantitative survey design. Finally, we executed a survey to quantify how different types of failures impact user perceptions of trust in their voice assistants and their willingness to use them for various tasks in the future.
To report these findings, we first describe our process of collecting the crowdsourced dataset of failures and how we selected a subset to use in our survey. Next, we present the interviews and survey, first describing our data collection and analysis, and then presenting the results concurrently.

Figure 1. To analyze the impact of voice assistant failures on user trust, we used a mixed-methods approach, including interviews and a survey. As part of the materials for our survey, we crowdsourced 199 failures from 107 voice assistant users, and include this dataset as part of our contributions.

## 4. Crowdsourcing a Dataset of Voice Assistant Failures

The first goal in our investigation was to determine which types of failures users experience when using voice assistants. We first evaluated existing datasets for fit and breadth of failures. We determined they were not sufficient for our purposes, so we proceeded to crowdsource a dataset of failures, adapting a taxonomy from Hong et al. (2018) to guide our collection. Finally, we cleaned and open-sourced this dataset as a contribution of our work.

### A Review of Existing HCI and NLP Datasets

We first explored benchmark datasets in NLP, which contain a large number of either questions and answers (Zhu et al., 2018; Li et al., 2019) or conversational dialogue (Li et al., 2019). We found that existing NLP datasets do not cover the wide breadth of possible conversational failure cases due to their emphasis on correct data for training. Additionally, their focus on specific task performance, such as answering questions or dialogue generation, is narrower than the variety of use cases for voice assistants. As training data relies on accurate task completion, these datasets did not contain failures. While testing these models produces a small percentage of errors (roughly 10%), the types of failures could only fall in the response and understanding categories, as attention and perception failures are excluded from the context of training these types of models. This limited their usefulness for our purpose of understanding voice assistant failures that occur in use and their impact on user trust.

In addition to these benchmark datasets, we investigated datasets that incorporated spoken-word speech patterns, such as the Spoken SQuAD dataset (Solar et al., 2017) and the Disflu-QA dataset (Solar et al., 2018), as well as human-agent interaction datasets, such as the ACE dataset (Bauer et al., 2016), the Niki and Julie corpus (Bauer et al., 2016), and a video dataset of voice assistant failures (Kumar et al., 2017). In these cases, we found that the datasets were still restricted to only failures at the understanding and response level (Solar et al., 2018), or the context for the failures was very specific and did not necessarily capture the breadth of possible failures users experience (Bauer et al., 2016). The video dataset from Cuadra et al. (Cuadra et al., 2017) was the closest available fit for our needs, but we still found the use of in-lab question-answering too narrow for our purposes. Therefore, we decided to crowdsource a dataset of voice assistant failures from users, and to use these failures when conducting our quantitative survey on user trust.

### Dataset Collection

#### 4.2.1. Procedure

Crowd workers were asked to submit three failures they had experienced with a voice assistant.
They were asked about three specific types of failures out of a taxonomy of 12, which were randomly chosen and displayed in equal measure across all workers (one way to implement such balanced assignment is sketched after Table 1). The taxonomy we used to ask about specific types of failures was adapted from previous work by Hong et al. (Hong et al., 2018), and identifies failures due to attention, perception, understanding, and response, as shown in Table 1. Each question began by asking users if they could recall a time when their voice assistant had failed, based on the definitions in our taxonomy. For example, to capture missed trigger failures we asked "Has there ever been a time when you intended to activate a voice assistant, but it did not respond?" If so, we asked these workers to include 1. what they had said to the voice assistant, 2. how the voice assistant responded, 3. the context for the failure, including what happened in the environment, and 4. the frequency at which the failure occurred, from 1 (rarely when I use it) to 5 (every time I use it). These were all presented as text entry boxes except for the frequency question, which was multiple choice. Crowd workers were additionally asked to optionally share an additional failure that they had not had the chance to share already.

| Failure Type | Failure Source | Failure Scenario |
| --- | --- | --- |
| Attention | Missed Trigger | Users say something to trigger the voice assistant; it fails to respond. |
| Attention | Spurious Trigger | Users do not say something to trigger the voice assistant, but it activates anyways. |
| Attention | Delayed Trigger | The voice assistant experiences latency when activating, potentially responding too late to be useful. |
| Perception | Noisy Channel | Noise in the environment prevents the voice assistant from accurately capturing the user's input. |
| Perception | Overcapture | The voice assistant keeps listening beyond the point at which the user has finished their input. |
| Perception | Truncation | The voice assistant stops listening too early and acts on only part of the user's intended input. |
| Perception | Transcription | The voice assistant inaccurately transcribes the user's input. |
| Understanding | No Understanding | The voice assistant cannot map the captured input to any action or response. |
| Understanding | Misunderstanding | The voice assistant maps the input to an action that is partially, but not fully, accurate to the user's intent. |
| Understanding | Ambiguity | Several reasonable interpretations of the input exist, and the response is misaligned with the user's intent. |
| Response | Action Execution: No Action | The command is accurately understood, but the voice assistant takes no action. |
| Response | Action Execution: Incorrect Action | The command is accurately understood, but the voice assistant takes an incorrect action or gives incorrect information. |

Table 1. Taxonomy of voice assistant failure types and sources, adapted from Hong et al. (Hong et al., 2018).
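Since the paper does not specify the sampling algorithm, the following Python sketch is only one plausible implementation of this balanced random assignment: sources are dealt from reshuffled rounds of the full taxonomy, so every source appears close to equally often overall and never twice for the same worker.

```python
import random

FAILURE_SOURCES = [
    "missed trigger", "spurious trigger", "delayed trigger",
    "noisy channel", "overcapture", "truncation", "transcription",
    "no understanding", "misunderstanding", "ambiguity",
    "action execution: no action", "action execution: incorrect action",
]

def assign_sources(n_workers, per_worker=3, seed=0):
    """Deal `per_worker` distinct failure sources to each worker,
    drawing from reshuffled rounds of all 12 so that overall counts
    stay close to equal across sources."""
    rng = random.Random(seed)
    pool, assignments = [], []
    for _ in range(n_workers):
        worker, deferred = [], []
        while len(worker) < per_worker:
            if not pool:                    # start a new shuffled round
                pool = FAILURE_SOURCES[:]
                rng.shuffle(pool)
            candidate = pool.pop()
            if candidate in worker:
                deferred.append(candidate)  # save this copy for a later worker
            else:
                worker.append(candidate)
        pool.extend(deferred)               # return deferred copies to the pool
        assignments.append(worker)
    return assignments

# e.g., for the 107 crowd workers: 107 * 3 = 321 slots, ~27 per source
print(assign_sources(107)[:2])
```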
This was included to capture failures that did not fit any of the three categories they were presented with, and we then categorized these failures according to our taxonomy. Once we received these failures, we anonymized the type of voice assistant in the submitted examples, replacing activation words with "Voice Assistant" for consistency. We then edited grammatical and spelling errors for clarity. We also removed failures if they were not on-task, unclear, or exact repeats of other submitted failures. Finally, we noticed that some of the categories the users submitted the failures under were incorrect, so we re-categorized the failures according to the codebook we developed, as outlined in Table 1. Two raters iteratively coded 101 submitted failures, with a final coding session achieving an interrater agreement of 70%. One researcher then went back and coded the entire dataset. In total, our finalized dataset contains 199 failures across 12 categories, submitted by 107 unique crowd workers.

#### 4.2.2. Crowd Worker Characteristics

We used Amazon Mechanical Turk to recruit the crowd workers. In total, 107 crowd workers contributed to our dataset. We required workers to have the following qualifications: a HIT Approval Rate over 98%, over 1000 HITs approved, AMT Masters, from the United States, over the age of 18, and voice assistant users on at least a weekly basis. The plurality of workers were in the age range of 35-44 (\(n=46\)), followed by 25-34 (\(n=32\)) and 45-54 (\(n=16\)), with the rest falling in 55-64 (\(n=8\)) and 18-24 (\(n=1\)), and 1 preferring not to answer. Fifty-eight crowd workers were men, 44 were women, 1 preferred not to answer, and 1 identified as both a man and a woman. They used commercial voice assistants such as Amazon Alexa (\(n=59\)), Google Assistant (\(n=62\)), and Apple's Siri (\(n=40\)), with many using some combination of the three (\(n=47\)). 91 crowd workers were native English speakers, and 13 were not. The plurality identified as White (\(n=58\)), and 39 identified as Asian. Three crowd workers did not provide any demographic information. The task took 15-20 minutes to complete on average, and workers received $5.00 USD in compensation.

#### 4.2.3. Final Dataset

In total, our finalized dataset contained 199 failures from 107 users across 12 different types of failures according to the taxonomy based on Hong et al. (Hong et al., 2018), as updated in Table 1.
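Basic summaries of the released dataset, such as the per-source counts and frequency statistics reported next, can be reproduced with a few lines of pandas. The file and column names below are assumptions about the schema rather than documented fields:

```python
import pandas as pd

# Assumed file and column names; check the open-sourced dataset's actual schema.
df = pd.read_csv("voice_assistant_failures.csv")

# How many failures were submitted per failure source?
counts = df["failure_source"].value_counts()

# Mean and standard deviation of the 1-5 reported frequency per source:
freq = df.groupby("failure_source")["frequency"].agg(["mean", "std"])

print(counts)
print(freq.sort_values("mean", ascending=False))
```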
The failures we received most often were due to misunderstanding (\(n=38\)), missed trigger (\(n=25\)), and noisy channel (\(n=22\)). Users least often submitted failures for truncation (\(n=7\)), overcapture (\(n=7\)), and delayed triggers (\(n=8\)). Most crowd workers submitted failures saying that they happened "rarely when I use it" (\(n=87\)) or "sometimes when I use it" (\(n=84\)). Example failures across the 12 categories can be found in Table 2. On average, the highest frequency of failures occurred for no understanding (\(m=2.15\), sometimes when I use it, \(sd=0.67\)) and action execution: incorrect action (\(m=2.00\), sometimes when I use it, \(sd=0.88\)). The rest of the failure sources had an average reported frequency between 1.0 (rarely when I use it) and 2.0 (sometimes when I use it). The lowest frequency failures were due to delayed triggers (\(m=1.25\), \(sd=0.46\)) and ambiguity (\(m=1.39\), \(sd=0.78\)).

We then used 60 of the failures from our dataset in our survey to quantify the impact of different failures on user trust, as outlined in more detail in the following section. This dataset has been open-sourced1 for researchers to use in answering future research questions related to voice assistant failures.

Footnote 1: [https://www.kaggle.com/datasets/google/voice-assistant-failures](https://www.kaggle.com/datasets/google/voice-assistant-failures)

## 5. Interview and Survey Methods

Once we had gathered and categorized our dataset of voice assistant failures, we were ready to answer our research question: how do voice assistant failures impact user trust? To do so, we first conducted exploratory interviews with 12 people to gather their thoughts, feelings, and behaviors after experiencing voice assistant failures. We used these findings and the failures collected in the dataset to then design and execute a survey. This quantified how various voice assistant failures impact users' trust, as measured by their perceptions of the voice assistant's ability, benevolence, and integrity, and their willingness to use it for future tasks. Here, we describe the methods for both the interviews and survey, and we follow this by jointly presenting the results from both studies.

### Interview Methods

#### 5.1.1. Interview Procedure

Interviews began with questions about why the participants chose to start using voice assistants and what types of questions they frequently would ask of them. We asked for common times and places they would use their voice assistants to understand their general experience with voice assistants. Once these were established, we asked participants to tell us about a time they were using their voice assistant and it made a mistake, in as much detail as they could recall. We asked what they had been trying to do and why, if others were present, and if anything else was happening in their environment. We probed for users' feelings once the failure occurred, and their perceptions about the voice assistant's ability to understand them and give them accurate information. We asked participants what they did in the moment to respond to the failure. Finally, we asked questions about their use of the voice assistant in the aftermath, including how much they trusted it and if they changed any of their behaviors to mitigate future failures. All interviews were conducted remotely.
#### 5.1.2. Interview Participants

During recruitment, we asked participants to submit their demographic information, how frequently they used voice assistants, and on what types of devices. We additionally required participants to write a short (1-3 sentence) summary of a time they encountered a failure while using their voice assistant. We selected participants based on demographic distribution and the level of detail they included regarding the failure.

All of our 12 participants lived in the United States. They used voice assistants at least 1-3 times a week (\(n=2\)), with the majority reporting using a voice assistant every day (\(n=8\)), and the rest (\(n=2\)) using it 4-6 times a week. The majority of participants used a voice assistant on their mobile device (\(n=11\)), and five of these participants also used a voice assistant smart home device. One participant only used a voice assistant smart home device. Participants reported using common commercial voice assistants such as Amazon Alexa (\(n=2\)), Google Assistant (\(n=7\)), and Apple's Siri (\(n=8\)). Participants' ages ranged from 18 to 50, with the plurality (\(n=5\)) in the age range of 18-23. Three of our participants were 41-50, 2 were 31-40, and 2 were 24-30. Six of our participants identified as women, five participants identified as men, and one participant identified as non-binary. Three participants identified as Asian, three identified as White, three identified as Black or African American, two identified as Hispanic, Latino, or Spanish origin, and one identified as both White and Black or African American. All of our participants spoke English as a native language. Participants were compensated with a $50 gift card, and each interview lasted roughly 30 minutes.

#### 5.1.3. Interview Analysis

Interviews were transcribed in their entirety by an automated transcription service and analyzed via a deductive and inductive process (Hernandez et al., 2017). We used deductive analysis to assess which types of failures these participants experienced. To ground our deductive analysis, we used the same codebook as we did for the dataset, as demonstrated in Table 1. We first identified instances in which participants were discussing distinct failures, and then applied our codebook to these instances. We used cues such as what was happening in their environment and, when appropriate, users' own perceptions of why the failure occurred. We began by identifying which of the four failure types a failure belonged to: attention, perception, understanding, or response. First, to determine if there was an attention failure, we investigated if there was evidence that the voice assistant accurately responded to an activation phrase, as indicated by visual or auditory cues, or otherwise by the participant's narrative. Second, we evaluated if there was an error in perception, based on the participants' assumption of whether the voice assistant accurately parsed their input, our own assessment from their narrative, or other audio/visual cues. Next, assuming that the input was correctly parsed, we sought to understand if the voice assistant accurately understood the semantic meaning of the input (understanding failures), using the same process. Finally, assuming all else had been correctly understood, we assigned response failures, indicating that the voice assistant either did not take action or took the incorrect action in response to an accurately understood command.
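Read as pseudocode, this sequential guide reduces to a short decision procedure. The sketch below is our schematic rendering with hypothetical boolean evidence flags, not the authors' actual coding tool:

```python
def code_failure_type(activated_correctly: bool,
                      input_captured_correctly: bool,
                      meaning_understood_correctly: bool) -> str:
    """Apply the sequential coding guide: check attention first, then
    perception, then understanding; anything remaining is a response
    failure (no action or incorrect action)."""
    if not activated_correctly:
        return "attention"
    if not input_captured_correctly:
        return "perception"
    if not meaning_understood_correctly:
        return "understanding"
    return "response"

# e.g., the assistant woke up and heard the user correctly,
# but mapped the request to the wrong intent:
print(code_failure_type(True, True, False))  # -> "understanding"
```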
Once a failure type was determined, we then further specified the failure source, as noted in Table 1. We resolved disagreements both asynchronously and in meetings, through discussion and comparison, over the course of several weeks.

While conducting this analysis, we also inductively identified themes related to these failures' impact on future tasks and recovery strategies. To conduct this analysis, two researchers reviewed the twelve transcripts in their entirety, and one additional researcher reviewed five of these transcripts to further broaden and diversify themes. These researchers met over the course of several weeks to compare notes and themes, ultimately creating four different themes through inductive analysis. Of these themes, we report two due to their novelty, specifically as related to future task orientation and recovery strategies.

| Failure Source | Context | What the User Said | How the Voice Assistant Reacted |
| --- | --- | --- | --- |
| Missed Trigger | I tell her to set a timer for ten minutes; I was alone and no one was present at the moment. | Voice Assistant, set a timer for 10 minutes. | [No response.] |
| Spurious Trigger | I was having a conference call with my team, and I was calling my coworker Sherry. The voice assistant mistakenly got turned on. | [While talking to the coworker] "Can you share your screen?" | [Responding to the conversation with the coworker] "One moment, let me help you with that." |
| Delayed Trigger | It happened while I was driving a car. | Voice Assistant, show me the route to the national park. | [The voice assistant takes so much time to respond that before it can respond, you once again ask for the route.] |
| Noisy Channel | My children were playing in the background and the dog was barking, and I had to raise my voice and try several times to be heard by my phone even though it was inches from my face. | Voice Assistant, what's the weather? | [It didn't realize that my request had ended and kept spinning.] "I'm sorry, I didn't quite understand you." |
| Overcapture | I was telling it to turn off the lights. I was the only one there. | Voice Assistant, turn off the lights. | [It continues listening for so long that you turn them off yourself.] |
| Truncation | I asked the voice assistant to calculate a math question, but it cut me off. | Voice Assistant, can you multiply | "54 times 39 times 33 is 69,498." |
| Transcription | I asked for the weather conditions in the city I live in. No others were present except for me. | Voice Assistant, what is the temperature in Murrieta, CA today? | "The temperature in Marietta, Georgia today is 65 degrees Fahrenheit." |
| Ambiguity | I was at home, alone, watching UFC and asked how old a fighter was. | Voice Assistant, how old is Johnny Walker? | "Johnny Walker was founded in 1865." [It referred to the whiskey company instead of the fighter.] |
| Misunderstanding | I was trying to run a routine to wake up my kids. | Voice Assistant, wake up the twins. | [It plays a scary sounds soundtrack instead of the song.] |
| Action Execution: No Action | I asked when a movie was coming out in theaters, and it kept spinning its light over and over. | Voice Assistant, when does Shang-Chi come out in theatres? | [Pauses for a really long time, then turns its lights off and does not respond.] |
| Action Execution: Incorrect Action | I was at home, in my living room, alone. I was trying to find out how long Taco Bell was open. | Voice Assistant, when does the Taco Bell on Glenwood close? | "Taco Bell is open until 1am." [Upon driving to Taco Bell, I realized it closed at 11:30pm.] |

Table 2. Voice assistant failures users submitted, including the context for the failure, what the user said, and what the voice assistant said.
### Survey Methods

To quantify our findings from the interviews, we developed a survey to explore users' trust in voice assistants following each of the twelve different types of failures from our taxonomy, as well as their willingness to use voice assistants for a variety of tasks in the aftermath.

#### 5.2.1. Procedure

The survey contained a screener, the core task, and a demographic section. We required participants to be over 18 years old, use their voice assistant in English, and use a voice assistant with some regularity to participate. If participants passed the screener, they were required to review and agree to a digital consent form to continue. The core task stated, "_The following questions will ask you what you think about the abilities of a voice assistant, given that the voice assistant has made a mistake. Imagine these mistakes have been made by a voice assistant you have used before. Please consider each scenario as independent of any that come before or follow it. This survey will take approximately 20 minutes._" Participants were then presented with 12 different failure scenarios, and they were asked to rate their trust in two separate questions.

The first question measured trust in voice assistants as a confidence score across three dimensions: ability, benevolence, and integrity. These were selected because prior work on trust has determined these elements explain a large portion of trustworthiness (Srivastava et al., 2017). In the context of voice assistants, ability refers to how capable the voice assistant is of accurately responding to users' input. Benevolence refers to how well-meaning the product is. And finally, integrity represents that it will adhere to ethical standards. We asked participants to rate their confidence in voice assistants' ability, benevolence, and integrity as a percentage on a scale of 0-100, with steps of 10, to replicate how prior work has conceptualized trust (Srivastava et al., 2017). This was captured in response to the following statements:

* (Ability) This voice assistant is generally capable of accurately responding to commands.
* (Benevolence) This voice assistant is designed to satisfy the commands its users give.
* (Integrity) This voice assistant will not cause harm to its users.

The second question evaluated users' trust in the voice assistant to complete tasks that required high, medium, and low trust. To select these tasks, we ran a small survey on Mechanical Turk with 88 voice assistant users. We presented 12 different questions, each of which first gave an example voice assistant failure (one for each failure source) and then asked, "_How much would you trust this voice assistant to do the following tasks?_": give a weather forecast, play music, edit a shopping list, text a coworker, and send money. Users could choose that they would trust it completely, trust it somewhat, or not trust it at all.
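To make the analysis reported next concrete: the pairwise \(Z\)-test p-values were corrected with Holm's sequential Bonferroni procedure, a step that might look as follows in Python (the uncorrected p-values here are hypothetical placeholders):

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical uncorrected p-values for the pairwise task comparisons:
raw_p = [0.039, 0.170, 0.0004, 0.0001, 0.0020]

# Holm's method: reject flags and adjusted p-values at alpha = 0.05.
reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for p_raw, p_adj, rej in zip(raw_p, p_holm, reject):
    print(f"raw p={p_raw:.4f} -> Holm-adjusted p={p_adj:.4f}, reject={rej}")
```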
There was not a significant difference in how much people trusted the voice assistant to play music compared to forecast the weather (\(Z=2.06,p=0.078\)). There was also not a significant difference in how much people trusted the voice assistant to edit a shopping list or text a coworker (\(Z=1.39,p=0.21\)), as determined by pairwise comparisons using \(Z\)-tests, corrected with Holm's sequential Bonferroni procedure, on an ANOVA of an ordinal mixed model. We found that there were significant differences between playing music, texting a coworker, and transferring money, with users having the most trust in the voice assistant playing music after a failure, less trust in texting a coworker, and still less in transferring money. We therefore selected playing music, texting a coworker, and transferring money to represent low, medium, and high levels of trust required.

After asking about ability, benevolence, and integrity, we thus asked participants how much they trusted their voice assistants to execute the following tasks: play music, text a coworker, and transfer money. These questions were displayed on a linear scale of 1 ("I do not trust it at all") to 5 ("I completely trust it"), with steps of 1. The core task ended with an open-ended, optional question for participants to share anything else they would like to add. The survey concluded with demographic questions regarding gender, race, ethnicity, whether they were native English speakers, what type of voice assistants they used, and their general trust tendency as control variables. General trust tendency was measured based on responses to the following: "_Generally speaking, would you say that most people can be trusted, or that you need to be very careful in dealing with people?_" The options ranged from 1 (need to be very careful in dealing with people) to 5 (most people can be trusted). The questionnaire used for the survey has been submitted as supplementary materials.

#### 5.2.2. Materials from our Dataset

To present each of the twelve failure sources in our survey, we drew from the dataset we had created. We selected five failures from each of the twelve categories. We required that these failures had been coded by two of the team members who were in agreement (see dataset examples in Table 2). We used random selection to determine which of the five possible failures was presented to each user for each failure source. These are denoted in the dataset's "Survey" column.

#### 5.2.3. Participants

We recruited participants from Amazon Mechanical Turk. We first ran a small pilot (\(n=27\)) in which we determined that participants completed the survey in roughly 20 minutes on average, and we set the compensation rate at $9 USD. After removing participants who did not pass the attention check or straight-lined, meaning they responded to every question with the same answer, we had a total of 268 participants. These participants were required to have the following qualifications: AMT Masters, with over 1000 HITs already approved, over 18 years old, live in the United States, an approval rate greater than 97%, and they must not have participated in any of our prior studies. The plurality of our participants were in the age range of 35-44 (\(n=106\)), followed by 25-34 (\(n=68\)), 45-54 (\(n=52\)), and 55-64 (\(n=33\)), with 2-4 participants in each of the age brackets of 18-24, 65-74, and 75+. 134 of our participants identified as men, 132 identified as women, and 2 identified as non-binary genders.
The majority of our participants were White (\(n=210\)), 21 participants were Black, and 15 were Asian. The rest of our participants identified as mixed race or preferred not to answer.

## 6. Results: Trust in Voice Assistants After Failures

In interviews, we found that participants reported failures across all four failure types and ten of the twelve failure sources. The only two failure sources that were not mentioned in interviews were missed triggers and delayed triggers in the attention failure type. To understand which types of failures most significantly impacted user trust, we analyzed how various failures impacted users' confidence in their voice assistant's ability, benevolence, and integrity. We used six mixed-linear regression models with log-normalized confidence in either ability, benevolence, or integrity as the numeric dependent variable. Note that there are two different levels at which we conduct the analysis. The first is the four broad "failure types" (attention, perception, understanding, and response). We then drill down to the 12 detailed "failure sources" nested within each failure type. Therefore, for each dimension of trust, we encoded failure type or failure source, as well as general trust tendency, as independent variables, so there were two regression models per dimension of trust. Failure type and failure source were encoded as categorical variables, and general trust tendency was encoded as an ordinal value. In all models, participant ID (PID) was encoded as a random, categorical variable.

An ANOVA on the regression models revealed that failure type (attention, perception, understanding, response) does significantly impact perceptions of ability (\(F(3,2656.78)=17.17,p<0.001\)), benevolence (\(F(3,2711.23)=8.87,p<0.001\)), and integrity (\(F(3,2772.09)=20.56,p<0.001\)) when controlling for general trust tendency (see Fig. 2 and Table 3). We found that the failure type "Response" (which includes action execution: no action and action execution: incorrect action) more significantly deteriorated user trust in voice assistants across ability (\(m=43.6,\beta=-0.155\), \(p<0.001\)), benevolence (\(m=52.5,\beta=-0.072\), \(p=0.013\)), and integrity (\(m=57.3,\beta=-0.124\), \(p<0.001\)), compared with failures due to "Attention" (which includes missed triggers, spurious triggers, and delayed triggers). Attention failures had a mean trust in ability of 49.9, benevolence of 56.0, and integrity of 67.7 on the scale of 0-100%. We also found that failures due to perception significantly reduced users' confidence in voice assistants' ability (\(m=44.8,\beta=-0.122,p<0.001\)) and benevolence (\(m=53.6,\beta=-0.047,p=0.05\)), but had no measurable effect on integrity (\(m=61.4,\beta=-0.014,p=0.484\)) compared with attention failures. Failures due to understanding maintained higher user confidence in benevolence (\(m=57.9,\beta=0.054,p=0.031\)) and integrity (\(m=65.0,\beta=0.063,p=0.003\)), but had no measurable effect on ability (\(m=50.2,\beta=0.015,p=0.592\)) compared with attention failures. Overall, response failures had the lowest average scores across ability, benevolence, and integrity. The starkest contrast is with failures due to understanding, which generally maintained the highest levels of trust in ability, benevolence, and integrity, as shown in Fig. 2.

Figure 2. Average scores across the three dimensions of trust (ability, benevolence, and integrity) by failure type. Participants expressed higher confidence across the three trust dimensions after encountering attention and understanding failures, compared to perception and response failures. Error bars display the confidence interval.

| | Ability \(F\) | df | residuals | \(p\) | Benevolence \(F\) | df | residuals | \(p\) | Integrity \(F\) | df | residuals | \(p\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (Intercept) | 9844.95 | 1 | 422.35 | <0.001 | 8436.32 | 1 | 355.86 | <0.001 | 10153.41 | 1 | 325.54 | <0.001 |
| General Trust | 3.07 | 4 | 286.67 | 0.017 | 1.78 | 4 | 299.1 | 0.133 | 4.69 | 4 | 311.17 | 0.001 |
| Failure Type | 17.17 | 3 | 2656.78 | <0.001 | 8.87 | 3 | 2711.23 | <0.001 | 20.56 | 3 | 2772.09 | <0.001 |

Table 3. Voice assistant failure types significantly impacted users' trust in voice assistants across ability, benevolence, and integrity when controlling for their baseline trust tendencies, based on an ANOVA of three linear mixed models. Failure type was encoded as a categorical variable, and general trust was encoded as an ordinal value. Participant ID was a random, categorical variable.
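As a rough sketch of what one of the six models above could look like in Python: a linear mixed model with a random intercept per participant, fit with statsmodels. The file and column names are assumptions, `log1p` is only one way to log-normalize a 0-100 rating, and statsmodels reports per-coefficient Wald tests rather than the omnibus ANOVA \(F\)-tests in Table 3 (which are more natural in R's lme4/lmerTest):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long format: one row per participant x failure scenario,
# with a 0-100 confidence rating for each trust dimension.
df = pd.read_csv("survey_responses.csv")
df["log_ability"] = np.log1p(df["ability"])  # log-normalized dependent variable

# Fixed effects: failure type and general trust tendency;
# random intercept grouped by participant ID (PID).
model = smf.mixedlm(
    "log_ability ~ C(failure_type) + general_trust",
    data=df,
    groups=df["pid"],
)
result = model.fit()
print(result.summary())  # per-coefficient Wald z-tests
```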
Therefore, in the analysis to follow, which evaluates changes in trust across failure source, _response: incorrect action_ has been chosen as the reference variable, and all betas reported are in reference to this category. Below, we explore in more detail how users across both the interviews and the survey responded to failures across the various failure sources.

### Attention Failures

Attention failures are any failures in which a voice assistant does not accurately respond to an attempt at activation. These were the least commonly reported failures across interviews. In the survey, failures due to missed triggers were particularly harmful to users' confidence in voice assistants' ability (\(m=41.2\), \(\beta=-0.10\), \(p=0.05\)) and benevolence (\(m=47.1\), \(\beta=-0.228\), \(p<0.001\)). However, the impact on integrity was positive compared to the reference value (action execution: incorrect action) (\(m=61.8\), \(\beta=0.194\), \(p<0.001\)). None of the interview participants reported failures due to missed triggers. As all of the failures had a favorable impact on integrity compared to the reference value, we refrain from reporting it throughout the rest of the results. See Fig. 3 and Table 4 for more details.

Only P7 and P12 reported experiencing attention failures in interviews, and they were both spurious triggers. As shown in Table 1, these are failures in which the voice assistant activates in the absence of an activation phrase. P7 reported that, "_I feel like in conversation if I have it plugged in and there's like multiple people in the room, and they're talking or whatever, I think sometimes it may hear an [activation phrase] where it's not. And if that's happened, where it's activated like once or twice completely out of nowhere, and that hasn't upset me or anything, but it's, it was just like, I didn't say [an activation phrase]. Why are you activating? What's happening? Why are you doing this?_" P7 additionally said they were working and "_It must have heard an [activation phrase] somewhere in there.
And then it started speaking while I was trying to do my [job], and I had to like stop and be like, hey, stop._" They said, "_It would really piss [me] off._" Similarly, P12 reported that these types of failures were "_irritating but funny at the same time._" They said they were funny "_because sometimes like, when you're usually calling [the voice assistant] she'll take a longer time to respond, but when you're not talking to it, it automatically pops up...Like, I'm not talking to you, but you could answer me when I'm talking to you._"

As demonstrated in Fig. 3 and Table 4, failures due to spurious triggers had a more favorable relative impact on users' impressions of trust in the voice assistant's ability (\(m=57.9\), \(\beta=0.36\), \(p<0.001\)) and benevolence (\(m=64.2\), \(\beta=0.236\), \(p<0.001\)). Overall, it appears that these are among the least detrimental failures to users' trust.

Similarly, failures due to delayed triggers were favorable to users' perceptions of ability (\(m=49.3\), \(\beta=0.16\), \(p=0.001\)) relative to the reference variable (response: incorrect action). Delayed trigger failures are defined as failures in which the voice assistant experiences latency when activating, to the point of potentially, but not necessarily, providing a correct response too late to be useful. They had no measurable effect on benevolence (\(m=55.6\), \(\beta=0.043\), \(p=0.306\)). None of the participants reported a failure due to a delayed trigger in interviews.

Figure 3. Average scores across the three dimensions of trust (ability, benevolence, and integrity) by failure source. Participants expressed higher confidence across the three trust dimensions after encountering failures due to ambiguity and spurious triggers than after other failure sources, especially missed triggers and overcapture failures.

### Perception Failures

Users reported failures across all four perception failure sources listed in Table 1: truncation, overcapture, noisy channel, and transcription. Perception failures indicate that the voice assistant did not accurately capture the user's input. Transcription was by far the most common failure source, contrasted with only one failure recorded each for truncation, overcapture, and noisy channel.

Truncation failures indicate that the voice assistant stopped listening to input too early, and only acted on some of the user's intended input. P12 reported that "_I use [a voice assistant] to send messages and stuff, and sometimes it would write the text for some of the words, but not all of the words. So it takes me longer than expected to send a message, because it will take a little bit of the words and not fully listen._" They said, "_it's aggravating, very annoying._" Truncation failures had a favorable relative impact on perceptions of ability (\(m=48.7\), \(\beta=0.16,p<0.001\)) and benevolence (\(m=58.1,\beta=0.126,p=0.003\)). As shown in Fig. 3, these maintained higher relative trust compared to other failures in perception.

Overcapture failures indicate that the voice assistant has listened beyond the point that a user has given their input. As P8 said, sometimes, "_it doesn't know when to search for what I said and just keeps listening without taking action, even though it shows it is listening._" They tried to make sense of this failure, saying they find that "_[i]n different devices, the reaction time for it [is different]._" They said that, "_This is wasting my time.
Which is only logically two to three minutes_," but they said, "_if you keep messing with it, it makes it worse._" Failures due to overcapture were particularly harmful to users' confidence in voice assistants' ability (\(m=38.5\), \(\beta=-0.13,p=0.005\)) and benevolence (\(m=47.0\), \(\beta=-0.193,p<0.001\)), with the overall lowest means compared to all other failure types.

There was one instance in which a user thought that the failure they experienced was because of noise in the background, indicative of noisy channel failures. P9 said, "_Sometimes...I'll try to use a feature where it tries to identify like a song...and it just won't be able to pick it up, and it'll just give me a message, like Sorry, I could not understand that._" They said, "_I get that it was loud...I would think that it would, it should be able to understand. So I feel like that is a little annoying._" However, they said the failure did not impact how they thought about the voice assistant's accuracy or ability, saying that "_it's pretty accurate for the most part, for other things._" Noisy channel failures were considered to more favorably impact user perceptions of ability (\(m=49.2\), \(\beta=0.14,p=0.003\)), with no measurable impact on benevolence (\(m=57.9\), \(\beta=0.076,p=0.07\)). As shown in Fig. 3, they achieved similar levels of trust as failures due to truncation.

Nine of our participants mentioned failures relating to transcription of their input, indicating that they did not believe the voice assistant accurately captured what they had said. These failures varied from not understanding the name of a musical group (P7), incorrectly transcribing a text message (P2), incorrectly transcribing a sequence of numbers (P4), not understanding angry, slurred, or jumbled speech (P3, P5, P9), and not understanding accents (P8) or other languages (P6, P9). P7 said it caused a "_tiny little bit of frustration_" when it did not understand the musician they were requesting. However, they "_don't really demerit [the voice assistant] for that in particular because it's so good at everything else that it does._" However, when it came to using voice assistants in other languages such as Spanish or French, "_there has not been a successful time where it's been able to play that different song in a different language_" (P9). This led the participant to think "_that it - it just has no ability to understand me in a different language_" (P9). Failures due to transcription did not have a measurable impact on perceptions of ability in the survey (\(m=42.3\), \(\beta=-0.06,p=0.225\)) relative to the reference variable; however, they impacted trust more so than the other failure sources within perception, as shown in Fig. 3. Transcription failures did negatively impact perceptions of benevolence (\(m=51.0\), \(\beta=-0.101,p=0.016\)).

### Understanding Failures

We found that participants submitted failures across all categories of understanding failures, as described below. Failures due to no understanding resulted in a complete inability to map the input to an action or response. P6 said, "_I was trying to plan a vacation...It was my friend's bachelorette party...And I was like, [Voice Assistant], where's Lake Havasu? How far is it?...And she's like, 'Sorry.
I didn't understand what you're saying._" This led P6 to question, "_Why do I even use you?_" However, they said that, "_for timers, it works really well._" No understanding failures did not significantly impact trust relative to the reference variable, in terms of ability (\(m=45.3,\beta=0.06,p=0.24\)) or benevolence (\(m=52.5\), \(\beta=-0.043,p=0.306\)).

| | Ability \(\beta\) | se | \(Z\) | \(p\) | Benevolence \(\beta\) | se | \(Z\) | \(p\) | Integrity \(\beta\) | se | \(Z\) | \(p\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (Intercept) | 3.56 | 0.046 | 77.413 | <0.001 | 3.796 | 0.048 | 79.083 | <0.001 | 3.754 | 0.045 | 83.422 | <0.001 |
| General Trust | 0.23 | 0.096 | 2.396 | 0.017 | 0.111 | 0.111 | 1 | 0.317 | 0.339 | 0.106 | 3.198 | 0.001 |
| Missed Trigger | -0.10 | 0.048 | -1.958 | 0.05 | -0.228 | 0.043 | -5.302 | <0.001 | 0.194 | 0.037 | 5.243 | <0.001 |
| Spurious Trigger | 0.36 | 0.046 | 7.661 | <0.001 | 0.236 | 0.041 | 5.756 | <0.001 | 0.212 | 0.037 | 5.73 | <0.001 |
| Delayed Trigger | 0.16 | 0.046 | 3.391 | 0.001 | 0.043 | 0.042 | 1.024 | 0.306 | 0.249 | 0.036 | 6.917 | <0.001 |
| Truncation | 0.16 | 0.046 | 3.522 | <0.001 | 0.126 | 0.042 | 3 | 0.003 | 0.263 | 0.037 | 7.108 | <0.001 |
| Overcapture | -0.13 | 0.047 | -2.809 | 0.005 | -0.193 | 0.042 | -4.595 | <0.001 | 0.133 | 0.037 | 3.595 | <0.001 |
| Noisy Channel | 0.14 | 0.046 | 2.978 | 0.003 | 0.076 | 0.042 | 1.81 | 0.07 | 0.326 | 0.036 | 9.056 | <0.001 |
| Transcription | -0.06 | 0.047 | -1.213 | 0.225 | -0.101 | 0.042 | -2.405 | 0.016 | 0.093 | 0.037 | 2.514 | 0.012 |
| No Understanding | 0.06 | 0.047 | 1.17 | 0.242 | -0.043 | 0.042 | -1.024 | 0.306 | 0.28 | 0.037 | 7.568 | <0.001 |
| Misunderstanding | -0.017 | 0.046 | -0.37 | 0.711 | -0.023 | 0.042 | -0.548 | 0.584 | 0.202 | 0.037 | 5.459 | <0.001 |
| Ambiguity | 0.46 | 0.046 | 9.913 | <0.001 | 0.302 | 0.042 | 7.19 | <0.001 | 0.361 | 0.036 | 10.028 | <0.001 |
| No Action | -0.003 | 0.047 | -0.064 | 0.949 | -0.09 | 0.042 | -2.143 | 0.032 | 0.186 | 0.037 | 5.027 | <0.001 |

Table 4. The results of three mixed-linear regression models, demonstrating how voice assistant failures impact users' trust in voice assistants across ability, benevolence, and integrity. Reference failure source: Incorrect Action.

Misunderstanding failures occurred when the voice assistant mapped the user's input to an action that was partially, but not fully, accurate to their intent. For example, P4 explained that when they ask their voice assistant "_to 'Take me home.' It usually directs me to my home, but on occasion, it shows me search results for the phrase 'Take me home.'_" Similarly, P1 explained how when using a voice assistant for online shopping, sometimes it would "_pull up the wrong item or, like, the wrong location._" They said they felt "_disappointed and frustrated._" Misunderstanding failures did not measurably impact perceptions of ability (\(m=42.4\), \(\beta=-0.017\), \(p=0.711\)) or benevolence (\(m=52.9\), \(\beta=-0.023\), \(p=0.584\)) relative to the reference variable.

Failures due to ambiguity were situations in which one could see several reasonable interpretations of one's intent from the captured input, but the system failed to navigate the ambiguity.
For example, P10 said, "_I was trying to get to Pizza Hut and...it kept on telling me one in the nearby city instead of the one that's I believe like 10 minutes away from me. So I asked a couple of times, and then it didn't work, and that's when I just pulled out my phone and then just looked it up myself and left._" They said that they were "_a bit baffled, since normally, like when I ask [a voice assistant] for something, I get the response I would expect._" As demonstrated in Fig. 3 and Table 4, failures due to ambiguity were more favorable to users' impressions of the voice assistant's ability (\(m=62.6\), \(\beta=0.46\), \(p<0.001\)) and benevolence (\(m=67.9\), \(\beta=0.302\), \(p<0.001\)). Overall, these failures maintained the highest level of user trust.

### Response Failures

There were two possible types of response failures: incorrect action, in which the system gives information that is incorrect, and no action, in which a voice assistant fails to respond at all.

Incorrect action failures were times when the command seemed to be accurately understood, but the information provided in response was incorrect. For example, P1 said that sometimes they would use "_the voice assistant to give me the best route to get to the location._" While it would usually accurately respond to this command, sometimes "_it will give me a really like roundabout way, like really time-consuming way._" As shown in Fig. 3, failures due to incorrect action resulted in a relatively average perception of ability (\(m=44.0\)) and benevolence (\(m=54.3\)), and the lowest perception of integrity (\(m=53.6\)).

Multiple users experienced failures due to no action, in which the voice assistant completely fails to respond to the input. P2 said, "_I did have a couple times that was also frustrating...I would say 'Reply' [to a text message]. And I would talk and nothing would get sent. And like, my hands are literally covered in stuff because I'm rolling these cookies out, and I had to stop what I'm doing, go back to my phone, and actually like manually text._" Another participant experienced failures due to no action, saying that "_This morning where I woke up. I said, [Voice Assistant], what's the weather outside? And it loaded for the first few seconds...and then after a couple of seconds, it said, 'There was an error. Please try again in a few minutes.' I wait one or two seconds, then I'll ask it again, and it gives me the information_" (P10). This participant said that because the information has been "_accurate_," they "_would still trust it to a very high degree._" Failures due to no action had no measurable relative impact on ability (\(m=43.2\), \(\beta=-0.003\), \(p=0.949\)) and had a slight but significant negative impact on benevolence (\(m=50.7\), \(\beta=-0.09\), \(p=0.032\)) compared to incorrect action.

## 7. Responses to Failures and Future Use of Voice Assistants

Users described a variety of strategies for mitigating failures when they did occur. In some cases, users described completely stopping their use of a voice assistant for a particular task. For example, after encountering a truncation failure while using the voice assistant to send a text message, P12 said that they either "_have to redo it, or I just, like, don't do it at all._" Eventually, P12 said that they stopped encountering that failure because they "_barely use it_" for that same task anymore.
So while some users felt like they "_don't sweat it too much_" (P5) when a voice assistant failed at a task, others felt like they would use it "_not as much_" (P2) for those same tasks. We found that the pattern of continuing to use a voice assistant in general while excluding the tasks that resulted in a failure, at least for a short period of time, was consistent across many different types of failures, including transcription, misunderstanding, and ambiguity. For example, P2 said that they needed to be careful using a voice assistant, because sometimes they would say a name and "_it would come up [with] a different name._" They said that following an incident like that, "_I would still use [the voice assistant]. I think what would happen though is like you kind of build up that trust...So the next couple times I would go into my contacts and hit the button myself, you know, and then like if I was walking to my car and get my keys in one hand, and it's been a while. So, you know, let me try this again. Like I think that's something where you kind of have to like, build the trust back up and give it another try. At least that's what I do._" P12 echoed this, saying, "_Let's say you're opening Spotify or something like that. I think it will probably go on command, rather than sending a message...different tasks, you know, it has a different trust level._" P5 had a similar sentiment, saying, "_I think the problem with the most voice assistant is, if I tried to give it a complex search query, it doesn't really understand me, or it gets frustrating and I just I'm going to go ahead and type in whatever it is I'm looking for._" Even when failures were mitigated in the moment, users remained wary of using their voice assistants for the same tasks.

Interestingly, sometimes users would continue to use their voice assistant for the same general task following a failure, but they would make slight changes to their use. For example, P1 encountered a misunderstanding failure while trying to shop for a sweater online, and they started to "_rely on it a little less, and do more searching on my own._" They said that "_for future reference, I would just remember to not use it to do certain tasks and do certain tasks on my own, [especially] when I look for an item that's difficult to find._" However, in the meantime, "_I would just ask for other tasks._" For P1, this included "_looking for other items other than this sweater. I would tell her to search for like grocery items and do some comparison shopping online._" Shopping for different items was distinct enough to maintain this user's trust. P7 experienced a similar situation, in which they encountered a transcription error, which they mitigated by spelling the name of the "_hyperpop duo 100 gecs_" as "_G-E-C-S._" They said this correction helped so that "_[the assistant did] understand what I was saying._" Even though they had experienced a failure for that particular artist, they "_continue to do that [use it to play songs] to this day. It's a very good music player,_" but they are "_a little wary when it comes to certain musicians that I feel that...[the voice assistant] would have trouble understanding._"

Users often made sense of the failures based on perceived task complexity. P12 thought that the task they had the highest trust in was "_to open like apps_," followed by "_calling somewhere._" They explained that, "_I want to put that as number one, but sometimes, like the way the contact name is, is not registered.
Like, you know the way for you to say it, it's not how like the voice [assistant] says it._" P2 similarly evaluated the voice assistant, saying, "_the best thing is picking up website information._" However, they similarly said "_to get more personalized messages, contacts, and that sort of thing, you have to be really careful what you say and how you say it._" Because of these findings, we hypothesized that users' trust in voice assistants after failures would affect their willingness to use it for different tasks to differing degrees. As shown in Table 5, we used three mixed-ordinal regressions to model trust in these three tasks, with scores for confidence in the voice assistant's ability, benevolence, and integrity as the independent variables. Trust in the voice assistant to play a song, text a coworker, and transfer money was encoded as an ordinal value. Confidence in the voice assistant's ability, benevolence, and integrity were encoded as numerical values. General trust tendency was encoded as a numerical value and PID was encoded as a random categorical value (a hypothetical code sketch of this setup is given below). We found that user perceptions of voice assistant ability, benevolence, and integrity positively correlated with their willingness to use the voice assistant for future tasks. Overall, people were moderately trusting of their voice assistant to play a song (\(m=3.29\), \(sd=1.30\)), less trusting of their voice assistant to text a coworker (\(m=2.34\), \(sd=1.18\)), and least trusting of their voice assistant to transfer money (\(m=1.56\), \(sd=0.91\)). In particular, perceptions of ability had a stronger effect on people's willingness to use the voice assistant to play a song (\(\beta=0.048,p<.001\)) compared with benevolence, which also significantly impacted willingness to use the voice assistant to play songs, but to a slightly lesser degree (\(\beta=0.043,p<.001\)). Integrity was even less influential, though still significantly positively correlated with how much people trusted their voice assistant to play a song (\(\beta=0.019,p<.001\)). This pattern was repeated for texting a coworker and transferring money as well, with ability being most strongly positively correlated with people's willingness to trust the voice assistant to execute these tasks, followed by benevolence, and then integrity.

## 8. Discussion

With interviews, a survey, and a crowdsourced voice assistant failures dataset, we conducted a mixed-method study of voice assistant failures and how they impact user trust in and future intended use of voice assistants. As the underlying technology for voice assistants continues to improve in accuracy and ability, and its applications become increasingly high stakes to human health and well-being (Han et al., 2017; Wu et al., 2017; Wu et al., 2018; Wu et al., 2018), we discuss our findings with the goal of improving user trust and long-term engagement in voice assistants. Our users consistently relied on their voice assistants to find information and execute tasks across varying levels of complexity. Similar to prior work (Wu et al., 2018), those who wanted to use a voice assistant consistently for tasks which might result in failures have developed complex mental models of which tasks they can trust their voice assistants with. Unlike prior work (Wu et al., 2018), people did not necessarily abandon the use of their voice assistant after it failed at complex tasks, even after repeated failures.
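As a concrete, hypothetical sketch of the ordinal models summarized in Table 5 (our own illustration; the original analysis code is not part of this paper), consider the following. Note that statsmodels' `OrderedModel` fits a plain cumulative-link (proportional-odds) model and has no random participant intercept, so the mixed part of the paper's model ("PID as a random effect") is omitted here; a true mixed-ordinal model would typically be fit with R's `ordinal::clmm` instead. All column and file names are assumptions.

```python
# A minimal sketch (not the authors' code) of the cumulative-link ordinal
# regression behind Table 5, with hypothetical column names.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey_responses.csv")  # hypothetical file

# Ordinal outcome: trust in the assistant to play a song (1-5 Likert scale).
endog = df["trust_play_song"].astype("category").cat.as_ordered()

# Numeric predictors: perceived ability, benevolence, integrity (0-100
# sliders) and general trust tendency.
exog = df[["ability", "benevolence", "integrity", "general_trust"]]

model = OrderedModel(endog, exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # slopes comparable to the beta columns of Table 5
```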
Many users considered the accuracy of their voice assistants so consistently high that they could forgive failures and continue engaging with those tasks after a short period of time. While trust in the complex tasks was being repaired, many participants continued using their voice assistants for tasks they considered more simple, such as information retrieval and playing music. We find that failures that lead users to feel like they have wasted time, such as those due to missed triggers and overcapture, tend to lead to more deteriorated perceptions of ability and benevolence. This is contrasted with scenarios in which users have more understanding of _why_ the voice assistant failed, such as those due to ambiguity and transcription, which users generally felt like they could work around or anticipate. However, if a transcription failure was believed to be due to using the device in another language, this caused abandonment of the voice assistant in that language. Similarly, users did not feel like they lost out on the advantages of using voice assistants when spurious trigger failures occurred, so they were less damaging to perceptions of ability. The single most damaging failure source to voice assistant integrity was action execution: incorrect action, as participants were more skeptical of the claim that the voice assistant would not cause harm following these failures. Prior work has pointed to ways that trust can be repaired when failures do occur. Cuadra et al. (Cuadra et al., 2018) showed that when a voice assistant proactively attempts to acknowledge a failure and repair trust, this increased people's perception of its intelligence. Additionally, Mahmood et al. (Mahmood et al., 2018) found that failure mitigation strategies such as apologies were effective in restoring perceptions of likability and intelligence of a voice assistant after a failure. Xiao et al. (Xiao et al., 2019) demonstrated that situating the voice assistant as a learner, and helping users understand when to give feedback to the voice assistant, improved users' perceptions of the voice assistant. Fischer et al. (Fischer et al., 2019) encourage voice assistant responses to support progressivity of the conversation, especially when the response does not help the user.

\begin{table} \begin{tabular}{l|c c c c c c c c c c c c} & \multicolumn{4}{c}{**Playing a Song**} & \multicolumn{4}{c}{**Texting a Coworker**} & \multicolumn{4}{c}{**Transferring Money**} \\ \hline & \(\beta\) & \(se\) & \(Z\) & \(p\) & \(\beta\) & \(se\) & \(Z\) & \(p\) & \(\beta\) & \(se\) & \(Z\) & \(p\) \\ \hline Ability & 0.048 & 0.003 & 16.000 & \(<\)0.001 & 0.064 & 0.003 & 21.333 & \(<\)0.001 & 0.076 & 0.005 & 15.2 & \(<\)0.001 \\ Benevolence & 0.043 & 0.003 & 14.333 & \(<\)0.001 & 0.028 & 0.003 & 9.333 & \(<\)0.001 & 0.017 & 0.005 & 3.4 & 0.001 \\ Integrity & 0.019 & 0.002 & 9.500 & \(<\)0.001 & 0.024 & 0.003 & 8 & \(<\)0.001 & 0.03 & 0.004 & 7.5 & \(<\)0.001 \\ General Trust & -0.146 & 0.108 & -1.352 & 0.176 & -0.094 & 0.134 & -0.701 & 0.483 & -0.192 & 0.218 & -0.881 & 0.378 \\ \end{tabular} \end{table} Table 5. The results of three mixed-ordinal regressions modeling user trust in the voice assistant to execute the task based on their perceptions of the voice assistant's ability, benevolence, and integrity. We did not include cut point calculations and state 1 calculations in the table for ease of interpretability.
Our work shows that users naturally repair trust with their voice assistants by relying on them for different tasks following a failure, or the same task but on a different topic, such as online shopping for different items or playing music by other artists than those that caused a failure. Quantitatively, we established that certain types of failures are more critical than others. This insight can be used to help prioritize the failure recovery strategies across HCI and NLP that are most effective for regaining trust. For example, self-repair strategies for voice assistants such as those Cuadra et al. (Cuadra et al., 2019) employed may be most useful in situations where the voice assistant has failed because of a missed trigger or overcapturing users' input. In addition, we can also try to identify the specific components in the voice assistant technology stack that cause critical failures, and leverage techniques in NLP robustness to improve how these models perform during user interactions. For example, noisy channel and transcription failures can be modeled as small perturbations to the input, which is well researched (Bahdan et al., 2018; Krizhevsky et al., 2019). Reliable transcription may also be important to address by speech recognition modules, especially for low resource languages (Krizhevsky et al., 2019). Our open-sourced dataset has also provided concrete and comprehensive example failures (199 real-world sourced examples with context, query, and response) for future researchers to reuse to develop failure mitigation strategies, along with a refined taxonomy for classifying voice assistant failures, as supported by prior work. While prior work (Krizhevsky et al., 2019) was useful in helping NLP practitioners anticipate and plan for failures across many types of NLP technologies, our dataset specifically addresses failures that occur with voice assistants. We anticipate this will allow future researchers to use human-centered example failures when conducting research related to voice assistant failures, trust, and mitigation strategies.

## Limitations

There are a few methodological limitations of our study, which we detail here. First, our dataset collection and interviews relied on retrospectives and recalling failures, rather than observing them _in situ_. This subjects our data to recall bias, and our results should be interpreted in this light. For instance, none of the interview participants recalled failures due to missed triggers or delayed triggers in interviews, although missed triggers were considered relatively damaging to perceptions of ability and benevolence in the survey. Additionally, our survey relied on collecting users' feedback regarding hypothetical scenarios. Future work may build on our findings by using our dataset to systematically introduce failures and capture the resulting impact on user trust via ESM or diary study. Our sample of participants consisted of frequent voice assistant users, which indicates that they likely forgave errors more easily than other populations (Xiao et al., 2019). Additionally, we did not address the use of conversational agents through interfaces other than voice, such as embodied conversational agents or text-based conversational agents. As embodied and text interfaces have more potential affordances with which users can judge and interact with the system (Bahdan et al., 2018; Bahdan et al., 2018), the impact of failures may not perfectly generalize to these use cases.
## 9. Conclusion

In conclusion, through a mixed-method study, we found that voice assistant users experience a multitude of failures, ranging from a voice assistant incorrectly triggering to responding in a way that does not address users' needs. These different types of failures differentially impact users' trust, which in turn affects their intention to use their voice assistants for tasks in the future. In particular, we find that failures due to spurious triggers and ambiguity are less detrimental to user trust than failures due to incorrect action execution, missed triggers, or overcapture. We additionally find that people rebuild their trust in voice assistants through simple tasks, such as playing a song, before resuming use of their voice assistant's full functionality after a failure has occurred. We also contribute a dataset of 199 failures, to help future researchers and practitioners build on our work. By further working to understand, prevent, and repair voice assistant failures, we hope to build voice assistant users' trust in these devices and allow them to benefit from the increasing and varied functionality they provide.
2309.02728
Josephson quantum mechanics at odd parity
A Josephson junction may be in a stable odd parity state when a single quasiparticle is trapped in an Andreev bound state. Embedding such a junction in an electromagnetic environment gives rise to a special quantum mechanics of the superconducting phase that we investigate theoretically. Our analysis covers several representative cases, from the lifting of the supercurrent quench due to quasiparticle poisoning for a low ohmic impedance of the environment, to a Schmid transition in a current-biased junction that for odd parity occurs at a four times larger critical impedance. For intermediate impedances, the supercurrent in the odd state is higher than in the even one.
Manuel Houzet, Julia S. Meyer, Yuli V. Nazarov
2023-09-06T05:27:29Z
http://arxiv.org/abs/2309.02728v1
# Josephson quantum mechanics at odd parity

###### Abstract

A Josephson junction may be in a stable odd parity state when a single quasiparticle is trapped in an Andreev bound state. Embedding such a junction in an electromagnetic environment gives rise to a special quantum mechanics of the superconducting phase that we investigate theoretically. Our analysis covers several representative cases, from the lifting of the supercurrent quench due to quasiparticle poisoning for a low ohmic impedance of the environment, to a Schmid transition in a current-biased junction that for odd parity occurs at a four times larger critical impedance. For intermediate impedances, the supercurrent in the odd state is higher than in the even one.

The energy of a tunnel junction between two superconducting leads depends periodically on the difference of superconducting phases of the two, in short, on the phase. This is the celebrated Josephson effect [1]: the phase dependence of this energy gives rise to a persistent superconducting current between the leads. Later, it was understood that the phase becomes a quantum-fluctuating variable if a Josephson junction is embedded in an electromagnetic circuit [2]. Earlier studies concentrated on a dissipative electromagnetic environment and were essential for establishing the modern theory of dissipative quantum mechanics [3; 4]. A highlight of this research was the prediction of the Schmid transition [5]: the vanishing of the Josephson energy at a critical value of the circuit impedance \(R\), \(2e^{2}R/\pi\hbar\equiv\alpha=1\). While this prediction is theoretically indisputable, the controversy concerning its experimental verification [6; 7] may have been resolved recently [8]. The further development of Josephson quantum mechanics evolved from dissipative circuits to dissipationless Coulomb islands. The resulting Josephson-based superconducting qubits [9; 10] are at the frontline of modern quantum technology applications. There is something to add to this well-established field. In fact, the Josephson energy is related to Andreev bound states (ABS) in the junction [11] and does depend on their occupation. Of the two equal-weight superpositions with respect to the right/left leads that a quasiparticle may be in, only one gives rise to a bound state. Owing to parity conservation in superconductors [12], a state with a single quasiparticle trapped in the lowest ABS (the odd parity ground state) is stable despite having a bigger energy than the state without quasiparticles (the even parity state). Physically, the parity can only be relaxed if a stray quasiparticle from a lead comes to the junction and annihilates the trapped one. Since the concentration of the quasiparticles in the leads is vanishingly small at low temperatures, the lifetime of the odd parity ground state is macroscopically long: lifetimes of several minutes have been measured [13]. We note that a single quasiparticle trapped in a spin-degenerate ABS eventually quenches the contribution of this level to the Josephson energy: this is called quasiparticle poisoning and has been observed in [14]. When spin-degeneracy is lifted (in finite-length junctions with spin-orbit coupling), the stability of these odd states provided the opportunity for a new kind of qubit: Andreev spin qubits, proposed in [15; 16] and realized in [17].
In recent years, there has been an outburst of studies of ABS in superconducting nanostructures, including spectroscopically resolved ABS and odd parity ground states in a junction [18]. This makes it relevant to extend the Josephson quantum mechanics to the case of a circuit embedding a Josephson junction in the odd parity ground state.

Figure 1: a. The odd parity Josephson junction. A single quasiparticle is trapped in the lowest Andreev level separated by \(2E_{J}\sin^{2}\frac{\varphi}{2}\ll\Delta\) from the edge of the continuous quasiparticle spectrum at the superconducting energy gap \(\Delta\). In the bound state, the quasiparticle is in a certain superposition, \(s=1\); the anti-bound state corresponding to \(s=-1\) (dashed curve) belongs to the continuous spectrum. b.-c. The Josephson quantum mechanics at odd parity: the odd parity Josephson junction is embedded in a linear electromagnetic environment with frequency-dependent impedance \(Z(\omega)\) that causes quantum fluctuations of the phase. b. and c. correspond to phase and current bias, respectively.

Such quantum mechanics at odd parity should be quite distinct from the conventional one. For instance, for a short single-channel junction, quasiparticle poisoning is expected to completely quench the supercurrent [11]. Thus a naive and, as we will see, wrong expectation is that the junction is not present in the circuit at all. In our pivotal study, we consider a tunnel junction where the ABS are close to the superconducting gap edge, disregard weak spin-orbit interaction, and mainly concentrate on the instructive single-level, single-junction case, see Fig. 1. In this Letter, we provide a general description of Josephson quantum mechanics at odd parity revealing its intriguing mathematical structure. We also present the detailed analysis for three relevant cases. For low ohmic impedance, we demonstrate the incompleteness of supercurrent quenching and reveal a supercurrent jump at zero phase. For arbitrary ohmic impedance and _phase_ bias, we establish a slower suppression of the Josephson energy in the odd state than in the even one: the supercurrent in the odd state thus becomes _higher_ than in the even one, both remaining finite at any \(\alpha\) as already shown in the even case [19]. At sufficiently large impedance, _both_ right/left superpositions form a bound state. While their phase-dependence is suppressed upon increasing the impedance, their average binding energy tends to a constant. In addition to this, for arbitrary ohmic impedance and _current_ bias, we encounter a Schmid transition at a higher value of the impedance than in the even state, namely, at \(\alpha=4\). The bound states persist for both superpositions and are _degenerate_ for \(\alpha>4\). These predictions can be tested in forthcoming experiments. Let us sketch here the general derivation: all details are provided in [20]. At even parity, the Hamiltonian describing a Josephson junction embedded in a general linear environment, see Fig. 1b, reads [21] \[H_{\rm e}=H_{\rm env}-E_{J}^{*}\cos\hat{\varphi}, \tag{1}\] where \(H_{\rm env}\) is a Hamiltonian of non-interacting bosons, the operator of the phase drop at the junction, \(\hat{\varphi}\), consists of the phase bias \(\varphi\) and a linear superposition of environmental bosons, and \(E_{J}^{*}\) is the even-state Josephson energy. The coefficients in the superposition are chosen so as to reproduce the frequency-dependent impedance of the environment, \(Z(\omega)\).
An alternative description [22] employs a path integral over a variable \(\varphi(\tau)\) defined in imaginary time. The action that defines the path weight reads \[\mathcal{S}=\sum_{\omega}\frac{|\omega|}{8e^{2}Z(i|\omega|)}|\varphi(\omega)|^{2}-E_{J}^{*}\int d\tau\cos\varphi(\tau), \tag{2}\] \(\varphi(\omega)\) being the Fourier transform of \(\varphi(\tau)\). To describe the odd parity situation, we first augment the Hilbert space with the states of a single quasiparticle to reduce it at a later stage of the derivation. Without the environment, this gives the binding energy \(\Omega\), measured from the edge of the continuum, in the following form: \[\sqrt{\Omega}=s\sqrt{2E_{J}}\sin\frac{\varphi}{2}. \tag{3}\] Here, \(E_{J}\) is the Josephson energy associated with the lowest ABS: \(E_{J}=E_{J}^{*}\) in the single-channel case, \(E_{J}^{*}>E_{J}\) in general, and \(s=\pm 1\) characterizes the superposition between right/left leads. Equation (3) with \(s=\mathrm{sign}(\sin\frac{\varphi}{2})\) reproduces the ABS dispersion in a short junction in the tunnel limit [23]. With the environment, the above relation is modified to a singular-value equation for a wave function \(|\Phi\rangle\) in the environmental degrees of freedom, that involves _square-roots_ of Hamiltonian-like operators, \[\left(\sqrt{\Omega+H}-s\sqrt{2E_{J}}\sin\frac{\hat{\varphi}}{2}\right)|\Phi\rangle=0, \tag{4}\] where \(H=H_{\rm e}-E_{g}^{(e)}\) with \(E_{g}^{(e)}\) the ground state energy in the even parity sector. The path integral approach is also non-trivial, bearing a similarity with the Green function treatment of a frozen disorder [24]. The key quantity is a propagator \(G(\tau,\tau^{\prime})\) defined in a rather standard way \[G(\tau,\tau^{\prime})=G_{0}(\tau-\tau^{\prime})+\int d\tau_{1}G_{0}(\tau-\tau_{1})A(\tau_{1})G(\tau_{1},\tau^{\prime}). \tag{5}\] Here \(A(\tau)\equiv s\sqrt{2E_{J}}\sin\frac{\varphi(\tau)}{2}\) plays the role of the disorder, and \(G_{0}(\tau)\equiv\Theta(\tau)/\sqrt{\pi\tau}\) is the bare propagator arising from the reduction of quasiparticle continuum states. The disorder averaging should be done with the weight \(e^{-\mathcal{S}}\), that is, with respect to the even parity ground state. The averaged propagator is uniform; its Fourier component reads \[\bar{G}(\omega)=\left(\sqrt{i\omega}-\langle A\rangle-\Sigma(\omega)\right)^{-1}, \tag{6}\] the self-energy \(\Sigma(\omega)\) being a sum of diagrams involving the correlators of \(A(\tau)\) starting from the second order. Finally, the binding energy is found from \[\sqrt{\Omega}=\langle A\rangle+\Sigma(-i\Omega). \tag{7}\] Equations (4) and (7) demonstrate an involved structure of the resulting theory that is distinct from straightforward Hamiltonian or path-integral approaches. Nevertheless, we manage to get to experimentally verifiable predictions by using perturbation theory and renormalizations. Let us start with the case of small ohmic impedance, \(\alpha\ll 1\). For a concrete model, we add a capacitance and an inductance in parallel to the resistor \(R\), \(Z(\omega)=1/(-i\omega C+1/R+i/\omega L)\). This cuts the ohmic response both at high and low frequency, \(\omega_{H}=1/RC\) and \(\omega_{L}=R/L\), respectively. The inductance providing the low cut-off is required in order to phase bias the junction, \(E_{J}e^{2}L\ll 1\). (The opposite regime may be addressed as in [19] for the even case.)
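Before working through the small-impedance case, the following toy numerical sketch (ours, not from the paper) illustrates how a self-consistency relation of the form of Eq. (7), \(\sqrt{\Omega}=\langle A\rangle+\Sigma(-i\Omega)\), can be solved by fixed-point iteration. The self-energy used here is a made-up placeholder with the qualitative behaviour discussed later in the text (a phase-independent constant of order \(E_{J}/\sqrt{\omega_{\rm cut}}\)); it is not the diagrammatic \(\Sigma\) of the paper, and the parameter values are assumptions.

```python
# Schematic solver for sqrt(Omega) = <A> + Sigma(-i*Omega), cf. Eq. (7).
import numpy as np

E_J, omega_cut = 1.0, 50.0  # hypothetical units and cutoff

def mean_A(phi, s=+1):
    # <A> = s*sqrt(2*E_J)*sin(phi/2), cf. Eq. (3)
    return s * np.sqrt(2 * E_J) * np.sin(phi / 2)

def sigma(Omega):
    # placeholder phase-independent self-energy ~ E_J/sqrt(omega_cut)
    return E_J / np.sqrt(omega_cut)

def binding_energy(phi, s=+1, n_iter=200):
    sqrt_Omega = 0.0
    for _ in range(n_iter):
        sqrt_Omega = mean_A(phi, s) + sigma(sqrt_Omega**2)
    # a bound state below the continuum requires sqrt(Omega) > 0
    return sqrt_Omega**2 if sqrt_Omega > 0 else None

for phi in (0.0, np.pi / 2, np.pi):
    print(phi, binding_energy(phi, s=+1), binding_energy(phi, s=-1))
```

Note that once the phase-independent term is positive, both superpositions \(s=\pm 1\) can bind near \(\varphi=0\) in this toy model, anticipating the coexistence of two bound states discussed below.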
We concentrate on the single-channel case of quasiparticle poisoning: the odd ground state energy \(E_{g}^{(o)}\) does not depend on phase without fluctuations. We aim to compute the phase-dependent correction \(\delta E_{g}^{(o)}(\varphi)\) proportional to the fluctuations, which defines the supercurrent in the odd state. A simple ad hoc estimation would be \(\delta E_{g}^{(o)}\simeq\alpha E_{J}\cos\varphi\). While this may be a correct scale, the answer is more involved and interesting, see Fig. 2a. We note an extra dimensionless parameter \(\omega_{L}/E_{J}\) that can be large or small provided \(\alpha\ll 1\). We see that the current in the phase interval \(\varphi\in[0,\pi]\) is _negative_: the minimum odd-parity Josephson junction energy corresponds to \(\varphi=\pi\) rather than \(\varphi=0\). Let us note that this \(\pi\)-junction behaviour has a completely different origin than the one induced by magnetic correlations in the ground state of a superconductor-quantum dot-superconductor junction [25] or the one due to the continuum contribution that is left in the presence of poisoning when weak interactions and a finite length of the junction are taken into account [26]. At \(\omega_{L}\gg E_{J}\), \[\frac{I(\varphi)}{2e}=-\frac{\alpha E_{J}}{2}\ln\left(\frac{\omega_{H}}{\omega_{L}}\right)\sin\varphi. \tag{8}\] The most interesting feature present for arbitrary ratios \(\omega_{L}/E_{J}\) is the current jump at \(\varphi=0\), its half-value being \[\frac{I_{\mathrm{hj}}}{2e}=-\pi\alpha E_{J}\sqrt{\frac{E_{J}}{\omega_{L}}}. \tag{9}\] At \(\omega_{L}\ll E_{J}\), the supercurrent is concentrated at small \(\varphi\simeq\sqrt{\omega_{L}/E_{J}}\) and reads \[I(\varphi)=-|I_{\mathrm{hj}}|f(\varphi/\sqrt{2\omega_{L}/E_{J}}) \tag{10}\] with \(f(0)=1\) and \(f(x)\rightarrow\sqrt{2}/\pi x\) at \(x\rightarrow\infty\). The full expression for the monotonic function \(f(x)\) is given in [20]. The current jump is associated with the fact that the perturbation theory formally ceases to hold at small \(\varphi\). However, the answer beyond perturbations is really simple and shown in Fig. 2b: namely, the binding energy is shifted such that the bound state reaches the continuum edge not at \(\varphi=0\), but at \(\varphi=-s\varphi_{c}\) with \(\varphi_{c}\equiv(|I_{\mathrm{hj}}|/2e)/E_{J}\), i.e., the binding energy is given by \[\sqrt{\Omega}=\sqrt{E_{J}/2}\left(s\varphi+\varphi_{c}\right). \tag{11}\] The shifts being opposite for \(s=\pm 1\), this implies the presence of bound states for _both_ superpositions in an interval \(|\varphi|<\varphi_{c}\): this fact will become crucial for further analysis. Let us turn to the case of an arbitrary impedance, \(\alpha\simeq 1\), under conditions of _phase bias_. In this case, the low cut-off frequency is such that \(\omega_{L}\gg E_{J}\) and does not change upon renormalization of \(E_{J},E_{J}^{*}\). The renormalization is thus finite at any \(\alpha\): this implies that, as discussed in the even parity sector [19], no Schmid transition occurs under phase bias. While \(E_{J}=E_{J}^{*}\) in the single-channel case, they renormalize differently. The renormalization can be computed using the relation \(\langle e^{i\beta\varphi}\rangle=e^{i\beta\langle\varphi\rangle}e^{-\beta^{2}\langle\!\langle\varphi^{2}\rangle\!\rangle/2}\), where \(\langle\!\langle\varphi^{2}\rangle\!\rangle=\langle\varphi^{2}\rangle-\langle\varphi\rangle^{2}\), valid for Gaussian fluctuations of the phase.
At even parity, \[\tilde{E}_{J}^{*}=E_{J}^{*}e^{-\langle\!\langle\varphi^{2}\rangle\!\rangle/2}\simeq E_{J}^{*}\left(\omega_{L}/\omega_{H}\right)^{\alpha}. \tag{12}\] Here and further on the 'tilde' refers to renormalized quantities. To understand the renormalization at odd parity, we keep terms up to the second order in the self-consistency equation (7), \[\sqrt{\Omega}=\langle A\rangle+\Sigma^{(2)}(-i\Omega). \tag{13}\] The average \(A\) is phase-dependent and strongly suppressed, \[\langle A\rangle=s\sqrt{2\tilde{E}_{J}}\sin\frac{\varphi}{2}\quad\mathrm{with}\quad\frac{\tilde{E}_{J}}{E_{J}}=e^{-\langle\!\langle\varphi^{2}\rangle\!\rangle/4}\simeq\left(\frac{\omega_{L}}{\omega_{H}}\right)^{\frac{\alpha}{2}}. \tag{14}\] This suppression is two times weaker than for even parity. The superconducting current in the odd state at \(\alpha<2\) reads \[\frac{I(\varphi)}{2e}=(\tilde{E}_{J}^{*}-\tilde{E}_{J})\sin\varphi=E_{J}\left(e^{-\langle\!\langle\varphi^{2}\rangle\!\rangle/2}-e^{-\langle\!\langle\varphi^{2}\rangle\!\rangle/4}\right)\sin\varphi \tag{15}\] and is bigger than that at even parity at sufficiently large phase fluctuations, see Fig. 3a.

Figure 2: a. The odd-parity supercurrent at small impedance. The curve labels are \(\omega_{L}/E_{J}\), we set \(\ln(\omega_{H}/\omega_{L})=5\). b. Bound states near zero phase for \(s=\pm 1\). Here \(\varphi_{c}=\pi\alpha\sqrt{E_{J}/\omega_{L}}\ll 1\). Dashed curves: no interaction.

Figure 3: a. Critical currents at even and odd parity versus \(\langle\!\langle\varphi^{2}\rangle\!\rangle\), Eq. (15). The odd parity current dominates at \(\langle\!\langle\varphi^{2}\rangle\!\rangle>4\ln 2\approx 2.8\). b. The bound regimes in the odd parity Josephson junction. A: only one superposition gives rise to a bound state (\(\alpha=0\)). B: two bound states in a finite phase interval (cf. Fig. 2b). C: separatrix between B and D. D: two \(4\pi\)-periodic bound states are present at all phases. E: the splitting of the two bound states is much smaller than their average phase-independent energy. F: The two states \(s=\pm 1\) with phase-independent energy are degenerate.

However, as far as the bound state spectrum is concerned, the second-order term \(\Sigma^{(2)}(-i\Omega)\) can become important since it has a phase-independent part. This leads to a variety of _bound_ regimes A-F listed in Fig. 3b. For estimates, we concentrate on the phase-independent terms in \(\Sigma^{(2)}\) and, since \(\Omega\ll\omega_{L}\), disregard the \(\Omega\) dependence. This yields \[\Sigma^{(2)}=E_{J}\int_{0}^{\infty}\frac{d\tau}{\sqrt{\pi\tau}}\langle\!\langle e^{i\varphi(0)/2}e^{-i\varphi(\tau)/2}\rangle\!\rangle. \tag{16}\] The integrand at \(\omega_{H}^{-1}\ll\tau\ll\omega_{L}^{-1}\) is proportional to \(1/\tau^{1/2}(\omega_{H}\tau)^{\alpha/2}\). As a consequence, the integral converges at the lower cut-off if \(\alpha<1\) and at the upper cut-off if \(\alpha>1\). The estimations for \(\Sigma^{(2)}\) thus read: \[\Sigma^{(2)}\simeq\left\{\begin{array}{ll}\tilde{E}_{J}/\sqrt{\omega_{L}},&\alpha<1,\\ E_{J}/\sqrt{\omega_{H}},&\alpha>1.\end{array}\right. \tag{17}\] Comparing \(\langle A\rangle\) at \(\varphi=\pi\) and \(\Sigma^{(2)}\), we observe that the latter dominates for \(\alpha>2[1+\ln(\omega_{L}/E_{J})/\ln(\omega_{H}/\omega_{L})]\equiv\alpha_{c}>2\). This point (C) separates two different regimes. Now we can summarize the results. At \(\alpha<\alpha_{c}\), \(\Sigma^{(2)}\) can be neglected in zeroth approximation. The superconducting current is given by Eq. (15).
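As a quick numerical illustration (our own sketch, not from the paper) of the crossing in Fig. 3a implied by Eq. (15): the odd-parity critical current \(E_{J}|e^{-x/2}-e^{-x/4}|\) exceeds the even-parity one \(E_{J}e^{-x/2}\) once the phase variance \(x=\langle\!\langle\varphi^{2}\rangle\!\rangle\) exceeds \(4\ln 2\approx 2.77\).

```python
# Check the even/odd critical-current crossing of Fig. 3a, per Eq. (15).
import numpy as np

x = np.linspace(0.0, 6.0, 601)           # phase variance <<phi^2>>
I_even = np.exp(-x / 2)                  # ~ renormalized E_J^* (even parity)
I_odd = np.abs(np.exp(-x / 2) - np.exp(-x / 4))

crossing = x[np.argmax(I_odd > I_even)]  # first x where odd dominates
print(crossing, 4 * np.log(2))           # both ~ 2.77
```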
Starting from the case without fluctuations (regime A), we find that the addition of small phase-independent terms in Eq. (13) when fluctuations are weak leads to the coexistence of two bound states corresponding to the two superpositions \(s=\pm 1\) in a small interval of phases around \(\varphi=0\) (regime B). At \(\alpha>1\), this interval grows with increasing \(\alpha\) until an important transition (regime C) takes place at \(\alpha_{c}\): two bound states are present at any phase. For \(\alpha>\alpha_{c}\), both bound states are separated from the continuum by a gap (regime D). Thus the odd parity state becomes stable upon an adiabatic sweep of the phase. The bound energies are given by \[\Omega=\left(s\sqrt{2\tilde{E}_{J}}\sin\frac{\varphi}{2}+\Sigma^{(2)}\right)^{2}. \tag{18}\] The resulting superconducting current at a given \(s\) thus becomes \(4\pi\)-periodic, a phenomenon similar to that signifying the presence of Majorana modes [27]. The \(2\pi\)-periodicity is restored upon relaxation to the lowest energy state within the odd sector. At \(\alpha\) slightly (by \(\simeq 1/\ln(\omega_{H}/\omega_{L})\ll 1\)) exceeding \(\alpha_{c}\), the binding energy \(\Omega\simeq E_{J}^{2}/\omega_{H}\gg\tilde{E}_{J}\) hardly depends on the phase and \(\alpha\) (regime E). The remaining phase dependence results in a strongly suppressed \(4\pi\)-periodic supercurrent \[\frac{I(\varphi)}{2e}\,\simeq\,s\tilde{E}_{J}^{\rm eff}\cos\frac{\varphi}{2};\ \tilde{E}_{J}^{\rm eff}\simeq\sqrt{\tilde{E}_{J}\Omega}\simeq E_{J}\sqrt{\frac{\tilde{E}_{J}}{\omega_{H}}}. \tag{19}\] Despite being suppressed, this supercurrent parametrically exceeds the one at even parity. Let us now turn to the case of an arbitrary impedance at _current bias_, see Fig. 1c. In contrast with the phase bias situation, there is no built-in low energy cut-off \(\omega_{L}\): the renormalization has to be cut off self-consistently by the renormalized Josephson energy. Let us first reproduce the Schmid transition at even parity. The renormalized \(\tilde{E}_{J}^{*}\) is given by the same Eq. (12), yet \(\omega_{L}\) there has to be estimated as \(\tilde{E}_{J}^{*}\). With this, \[\frac{\tilde{E}_{J}^{*}}{E_{J}^{*}}=\left(\frac{E_{J}^{*}}{\omega_{H}}\right)^{\frac{\alpha}{1-\alpha}}, \tag{20}\] such that \(\tilde{E}_{J}^{*}\) vanishes at the Schmid transition, \(\alpha=1\). Let us next turn to the odd parity sector. To start with, let us concentrate on the interval \(\alpha<1\). In this case, the lower cut-off can be unambiguously identified as \(\tilde{E}_{J}\). Applying Eq. (14), we thus obtain \[\frac{\tilde{E}_{J}}{E_{J}}=\left(\frac{E_{J}}{\omega_{H}}\right)^{\frac{\alpha}{2-\alpha}}. \tag{21}\] The estimation of \(\Sigma^{(2)}\) with the help of Eq. (17) gives \(\Sigma^{(2)}\simeq\sqrt{\tilde{E}_{J}}\). In contrast with the phase-biased case, the first- and second-order contributions are of the same order of magnitude, as well as all higher orders. So in the current bias case, the accuracy of the method does not allow us to predict the phase dependence of the energy, nor if bound states persist for both values of \(s\) (regimes B-C-D). However, we may still notice and use the difference in the renormalizations of phase-dependent and phase-independent parts of \(\sqrt{\Omega}\) depending on the value of \(\alpha\). This becomes important at \(\alpha>1\) where, in accordance with Eq.
(17), the self-energy \(\Sigma^{(2)}\) does not depend on the low-energy cut-off anymore and saturates at the value \(\simeq E_{J}/\sqrt{\omega_{H}}\). As to the phase-dependent part, it will further decrease with increasing \(\alpha\).

Figure 4: Renormalized Josephson energies \(\tilde{E}_{J}^{*}\) (green) at even and \(\tilde{E}_{J}\) (violet) at odd parity. Vertical dotted lines separate the bound regimes at odd parity indicated by capital letters. Left: phase bias, cf. Eqs. (12), (14), and (19); \(\tilde{E}_{J}\) never vanishes. The separating regime \(C\) occurs at \(\alpha=\alpha_{c}\). We plot \(\tilde{E}_{J}^{\rm eff}\) instead of \(\tilde{E}_{J}\) at \(\alpha>\alpha_{c}\). Right: current bias, cf. Eqs. (20), (21), and (22); the curves illustrate the suppression of \(\tilde{E}_{J}\) as \(\alpha\) increases; the Schmid transition, where the renormalized Josephson energy vanishes, is at \(\alpha=1\) for even parity and at \(\alpha=4\) for odd parity. The renormalization law at odd parity changes at \(\alpha=1\). Note the different vertical scales in the left and right plot.

This brings us to regime E: the almost degenerate bound state associated with the supercurrent described by Eq. (19). In this case, the renormalization of \(E_{J}\) is cut off by \(\tilde{E}_{J}^{\rm eff}\) of Eq. (19), rather than \(\tilde{E}_{J}\). This yields \[\frac{\tilde{E}_{J}}{E_{J}}=\left(\frac{E_{J}}{\omega_{H}}\right)^{\frac{3\alpha}{4-\alpha}};\ \frac{\tilde{E}_{J}^{\rm eff}}{E_{J}}=\left(\frac{E_{J}}{\omega_{H}}\right)^{\frac{\alpha+2}{4-\alpha}}. \tag{22}\] Therefore \(\tilde{E}_{J}\), \(\tilde{E}_{J}^{\rm eff}\) vanish at \(\alpha=4\). This is the new Schmid transition point for half of the Cooper pair charge, indeed corresponding to \(4\pi\)-periodicity in phase of the supercurrent. At \(\alpha>4\), the phase-independent bound state is completely degenerate with respect to \(s\) (regime F). Recalling the quasiparticle spin, we thus predict the realization of 4-fold degeneracy for the trapped quasiparticle. In conclusion, we have formulated the Josephson quantum mechanics for a junction in the odd parity state. The nontrivial structure of the theory is encapsulated in Eqs. (4) and (7). We concentrated on the single-channel case and predicted the lifting of the supercurrent quench due to quasiparticle poisoning at small \(\alpha\). The residual supercurrent is given by Eqs. (8)-(10). Furthermore, we have addressed the case of arbitrary impedance both at phase and current bias. The supercurrent at odd parity is less suppressed by quantum fluctuations and may dominate over the one at even parity. The presence of various bound regimes complicates the renormalization. For current bias, we predict a Schmid transition at \(\alpha=4\) and four-fold degenerate bound states at higher impedances. YVN acknowledges support from the Université Grenoble Alpes for an extended stay in Grenoble during which most of the presented work was performed. MH and JSM acknowledge funding from the Plan France 2030 through the project NISQ2LSQ ANR-22-PETQ-0006.
2310.17520
On the nontrivial extremal eigenvalues of graphs
We present a finer quantitative version of an observation due to Breuillard, Green, Guralnick and Tao which tells that for finite non-bipartite Cayley graphs, once the nontrivial eigenvalues of their normalized adjacency matrices are uniformly bounded away from $1$, then they are also uniformly bounded away from $-1$. Unlike previous works which depend heavily on combinatorial arguments, we rely more on analysis of eigenfunctions. We establish a new explicit lower bound for the gap between $-1$ and the smallest normalized adjacency eigenvalue, which improves previous lower bounds in terms of edge-expansion, and is comparable to the best known lower bound in terms of vertex-expansion.
Wenbo Li, Shiping Liu
2023-10-26T16:13:08Z
http://arxiv.org/abs/2310.17520v2
# On the nontrivial extremal eigenvalues of graphs

###### Abstract

We present a finer quantitative version of an observation due to Breuillard, Green, Guralnick and Tao which tells that for finite non-bipartite Cayley graphs, once the nontrivial eigenvalues of their normalized adjacency matrices are uniformly bounded away from \(1\), then they are also uniformly bounded away from \(-1\). Unlike previous works which depend heavily on combinatorial arguments, we rely more on analysis of eigenfunctions. We establish a new explicit lower bound for the gap between \(-1\) and the smallest normalized adjacency eigenvalue, which improves previous lower bounds in terms of edge-expansion, and is comparable to the best known lower bound in terms of vertex-expansion.

## 1 Introduction

One of the main topics of spectral graph theory is to explore the relationship between structural properties of a graph and eigenvalues of associated matrices. Let \(G=(V,E)\) be a finite graph with \(n\) vertices. Denote by \[\mu_{n}\leq\mu_{n-1}\leq...\leq\mu_{2}\leq\mu_{1}\] the eigenvalues of its normalized adjacency matrix. We call an eigenvalue trivial if it equals \(1\) or \(-1\). Recall that \(\mu_{1}\) is always equal to \(1\) and \(\mu_{n}=-1\) if and only if the graph has a bipartite connected component. The well-known Cheeger inequality [2, 1, 11], \(h^{2}/2\leq 1-\mu_{2}\leq 2h\), relates the spectral gap \(1-\mu_{2}\) and the edge-expansion (also called Cheeger constant) \(h\). In this article we show that the following inequality involving \(\mu_{2}\), \(\mu_{n-1}\) and \(h\) holds. **Theorem 1**.: _Let \(G=(V,E)\) be a finite connected graph. Then we have_ \[1+\mu_{n-1}\geq\frac{(1-\mu_{2})^{2}}{2h^{2}}\left(\sqrt{1+\frac{h^{2}}{1-\mu_{2}}}-1\right)^{2}. \tag{1}\] As an application, we show the following estimates for the spectral gap \(1+\mu_{n}\) of a non-bipartite vertex-transitive graph. **Corollary 1**.: _Let \(G\) be a finite, non-bipartite, vertex-transitive graph. There hold_ \[1+\mu_{n}\geq\min\left\{\frac{2}{d},\frac{(\sqrt{3}-1)^{2}}{8}h^{2}\right\}, \tag{2}\] _and_ \[1+\mu_{n}\geq\min\left\{\frac{2}{d},2\left(\sqrt{1+\frac{1-\mu_{2}}{4}}-1\right)^{2}\right\}. \tag{3}\] This provides a finer version of an observation due to Breuillard-Green-Guralnick-Tao [8, Proposition E.1], which tells that for finite non-bipartite Cayley graphs, combinatorial expansion implies spectral expansion. More precisely, they show that for such a connected graph, there exists \(\delta>0\) depending only on its degree \(d\) and vertex-expansion \(h_{out}\), such that the following holds \[1+\mu_{n}\geq\delta(d,h_{out}). \tag{4}\] Here the vertex-expansion \(h_{out}\) is closely related to the edge-expansion \(h\). Indeed, for \(d\)-regular graphs there holds \(h_{out}/d\leq h\leq h_{out}\). By the Cheeger inequality, their observation simply tells that, for finite non-bipartite Cayley graphs, once the gap \(1-\mu_{2}\) is uniformly bounded away from \(0\), so is the gap \(1+\mu_{n}\). It is natural to seek an explicit formula for the lower bound of \(1+\mu_{n}\) in terms of \(d\), \(h_{out}\) or other closely related constants like the edge-expansion \(h\) for Cayley, or more generally, vertex-transitive graphs. Various works have been done on this topic, see, for example, [4, 5, 6, 14, 15, 12]. A recent result of Saha [15] states that for non-bipartite vertex-transitive graphs there holds \[1+\mu_{n}\geq C\frac{h^{2}}{d^{2}}. \tag{5}\] We use \(C\) to denote an absolute constant which may change from line to line.
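As a quick sanity check of Theorem 1 and the bound (2), one can verify them numerically on a small vertex-transitive graph such as the 5-cycle \(C_{5}\), using the definitions of \(h\) and \(D^{-1}A\) recalled in Section 2 below. The following sketch is ours and not part of the paper.

```python
# Numerical check of Theorem 1 and Corollary 1 on the 5-cycle C_5,
# which is vertex-transitive and non-bipartite.
import itertools
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
deg = A.sum(axis=1)

# spectrum of D^{-1}A (= spectrum of the symmetric D^{-1/2} A D^{-1/2})
Ds = np.diag(deg ** -0.5)
mu = np.sort(np.linalg.eigvalsh(Ds @ A @ Ds))        # mu_n <= ... <= mu_1
mu_2, mu_n1, mu_n = mu[-2], mu[1], mu[0]

# brute-force edge-expansion h over all nonempty proper subsets
def h_of(S):
    S = set(S)
    boundary = sum(A[i, j] for i in S for j in range(n) if j not in S)
    vol = deg[list(S)].sum()
    return boundary / min(vol, deg.sum() - vol)

h = min(h_of(S) for r in range(1, n)
        for S in itertools.combinations(range(n), r))

lhs = 1 + mu_n1
rhs = (1 - mu_2) ** 2 / (2 * h ** 2) * (np.sqrt(1 + h ** 2 / (1 - mu_2)) - 1) ** 2
print(lhs >= rhs)                                    # inequality (1): True
print(1 + mu_n >= min(2 / deg[0], (np.sqrt(3) - 1) ** 2 / 8 * h ** 2))  # (2): True
```

For \(C_{5}\) one gets \(h=1/2\) and \(\mu_{n}=\mu_{n-1}=\cos(4\pi/5)\approx-0.809\), so both bounds hold with room to spare.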
Indeed, by the dual Cheeger inequality [16, 3], we have \(1+\mu_{n}\geq\beta^{2}/2\), where \(\beta\) is the bipartiteness constant of the graph \(G\). Saha [15] proves for non-bipartite vertex-transitive graphs that \(\beta\geq Ch/d\), solving an open question of Moorman-Ralli-Tetali [14]. Extending the work of Bobkov-Houdre-Tetali [7] on vertex-expansion to the setting of signed graphs, Hu-Liu [12] establish that for non-bipartite vertex-transitive graphs \[1+\mu_{n}\geq C\frac{h_{out}^{2}}{d}. \tag{6}\] Recalling \(h_{out}\geq h\), the estimate (6) improves (5). Notice that our estimates (2) and (3) cannot be derived from (6) and vice versa. We achieve our estimates via a very different strategy from previous works [4, 5, 6, 14, 15, 12]. We rely more on spectral graph theoretic methods than combinatorial arguments. We are motivated by the Remark in [8, Appendix E] to consider the multiplicity of the eigenvalue \(\mu_{n}\) and the product of two eigenfunctions, and find [10] inspiring in obtaining Lemma 3. Our strategy can be described as follows. By Theorem 1, all eigenvalues of a finite connected graph with multiplicity greater than \(1\) are bounded away from \(-1\). On the other hand, for non-bipartite vertex-transitive graphs there is a natural gap of at least \(2/d\) between any simple eigenvalue and \(-1\), due to the symmetry of the graph. Combining the above two cases leads to the bounds in (2) and (3).

## 2 Preliminaries

Let \(G=(V,E)\) be a finite graph. Recall that the degree matrix \(D\) and adjacency matrix \(A\) of \(G\) are defined as follows: \[D_{ij}=\begin{cases}d_{i}&i=j,\\ 0&i\neq j,\end{cases}\] where \(d_{i}\) is the degree of \(i\in V\) and \[A_{ij}=\begin{cases}1&\{i,j\}\in E,\\ 0&\{i,j\}\notin E.\end{cases}\] The _normalized adjacency matrix_ of a graph is defined as \(D^{-1}A\). Let \(\mathbb{R}^{V}\) be the set of real-valued functions on \(V\). For any \(f,g\in\mathbb{R}^{V}\), we define their inner product as \[\langle f,g\rangle:=\sum_{u\in V}f(u)g(u)d_{u}.\] The corresponding \(\ell^{2}\)-norm of a function \(f\) is defined as \(\|f\|_{2}:=\langle f,f\rangle^{\frac{1}{2}}\). The following two identities are straightforward. **Lemma 1**.: _Let \(G=(V,E)\) be a finite graph and \(f\) be a function on \(V\). We have_ \[\sum_{\{u,v\}\in E}(f(u)-f(v))^{2}=\ \langle f,(I-D^{-1}A)f\rangle \tag{7}\] _and_ \[\sum_{\{u,v\}\in E}(f(u)+f(v))^{2}=\ \langle f,(I+D^{-1}A)f\rangle, \tag{8}\] _where \(I\) is the \(n\times n\) identity matrix._ We list the \(n:=|V|\) eigenvalues of \(D^{-1}A\) counting multiplicity as \[\mu_{n}\leq\mu_{n-1}\leq...\leq\mu_{2}\leq\mu_{1}.\] In fact, \(\mu_{2}=1\) if and only if \(G\) is disconnected, and \(\mu_{n}=-1\) if and only if \(G\) has a bipartite connected component. For any subset \(S\subset V\), we define its volume \(\operatorname{vol}(S)\) as \[\operatorname{vol}(S)=\sum_{i\in S}d_{i},\] and its boundary \(\partial S\) as \[\partial S=\left\{\{i,j\}\in E:i\in S,j\notin S\right\}.\] The edge-expansion of \(S\) is then defined to be \[h(S)=\frac{|\partial S|}{\min\left(\operatorname{vol}(S),\operatorname{vol}(V-S)\right)}. \tag{9}\] The edge-expansion (also called Cheeger constant) of \(G\), denoted by \(h\), is defined as \[h:=\min_{S\subset V}h(S). \tag{10}\] Note that \(0\leq h\leq 1\), and \(h=0\) if and only if \(G\) is disconnected. Let us recall two classic results involving \(h\). **Proposition 1** ([9, Corollary 2.10]).: _Let \(G=(V,E)\) be a finite graph.
For any nonzero function \(f\) on \(V\) satisfying \(\sum_{u\in V}f(u)d_{u}=0\), we have_ \[\frac{\sum_{\{u,v\}\in E}|f(u)-f(v)|}{\sum_{u\in V}|f(u)|d_{u}}\geq\frac{1}{2}h. \tag{11}\] **Theorem 2** (Cheeger inequality [2, 1, 11]).: _Let \(G=(V,E)\) be a finite graph. We have_ \[2h\geq 1-\mu_{2}\geq\frac{1}{2}h^{2}. \tag{12}\]

## 3 Proof of Theorem 1

First, we prepare the following lemmas. **Lemma 2**.: _Let \(G=(V,E)\) be a finite graph and \(f,g\) be two eigenfunctions of the eigenvalues \(\mu\) and \(\nu\) of the normalized adjacency matrix \(D^{-1}A\), respectively. We have the following estimate_ \[\sum_{\{u,v\}\in E}|f(u)g(u)-f(v)g(v)|\leq\frac{\sqrt{2}}{2}\left(\sqrt{1+\mu}+\sqrt{1+\nu}\right)\|f\|_{2}\|g\|_{2}.\] Proof.: By the Cauchy-Schwarz inequality and Lemma 1, we calculate \[\sum_{\{u,v\}\in E}2|f(u)g(u)-f(v)g(v)|\] \[= \sum_{\{u,v\}\in E}|(f(u)-f(v))(g(u)+g(v))+(f(u)+f(v))(g(u)-g(v))|\] \[\leq \sum_{\{u,v\}\in E}|(f(u)-f(v))(g(u)+g(v))|+\sum_{\{u,v\}\in E}|(f(u)+f(v))(g(u)-g(v))|\] \[\leq \sqrt{\sum_{\{u,v\}\in E}(f(u)-f(v))^{2}}\sqrt{\sum_{\{u,v\}\in E}(g(u)+g(v))^{2}}\] \[\qquad\qquad\qquad\qquad+\sqrt{\sum_{\{u,v\}\in E}(f(u)+f(v))^{2}}\sqrt{\sum_{\{u,v\}\in E}(g(u)-g(v))^{2}}\] \[= \left(\sqrt{(1-\mu)(1+\nu)}+\sqrt{(1+\mu)(1-\nu)}\right)\|f\|_{2}\|g\|_{2}\] \[\leq \sqrt{2}(\sqrt{1+\mu}+\sqrt{1+\nu})\|f\|_{2}\|g\|_{2}.\] This completes the proof. Lemma 2 tells that the \(\ell^{1}\)-energy of the product \(fg\) of two eigenfunctions is close to \(0\), whenever the corresponding two eigenvalues are close to \(-1\). The next lemma shows that if an eigenvalue is close to \(-1\), then the absolute value of its eigenfunction is close to being a constant. **Lemma 3**.: _Let \(G=(V,E)\) be a finite connected graph. Assume that \(f\) is an eigenfunction of the eigenvalue \(\mu\) of the normalized adjacency matrix \(D^{-1}A\) with \(\|f\|_{2}=1\). Let \(\mathbf{c}\) be the constant function such that \(\|\mathbf{c}\|_{2}=1\) and \(f_{1}:=|f|-\langle|f|,\mathbf{c}\rangle\mathbf{c}\). Then we have_ \[\|f_{1}\|_{2}^{2}\leq\frac{1+\mu}{1-\mu_{2}}\] _and_ \[\langle|f|,\mathbf{c}\rangle^{2}\geq 1-\frac{1+\mu}{1-\mu_{2}}.\] Proof.: Observing that \(\langle f_{1},\mathbf{c}\rangle=0\), we have \[(1-\mu_{2})\langle f_{1},f_{1}\rangle\leq\langle f_{1},(I-D^{-1}A)f_{1}\rangle.\] Therefore, we estimate by Lemma 1 \[\|f_{1}\|_{2}^{2} \leq\frac{1}{1-\mu_{2}}\langle f_{1},(I-D^{-1}A)f_{1}\rangle=\frac{1}{1-\mu_{2}}\langle|f|,(I-D^{-1}A)|f|\rangle\] \[=\frac{1}{1-\mu_{2}}\sum_{\{u,v\}\in E}(|f(u)|-|f(v)|)^{2}\] \[\leq\frac{1}{1-\mu_{2}}\sum_{\{u,v\}\in E}(f(u)+f(v))^{2}\] \[=\frac{1}{1-\mu_{2}}\langle f,(I+D^{-1}A)f\rangle\] \[=\frac{1+\mu}{1-\mu_{2}}.\] The other inequality holds since \[1=\|f\|_{2}^{2}=\langle|f|,\mathbf{c}\rangle^{2}+\|f_{1}\|_{2}^{2}.\] This concludes the proof. The following lemma is a useful consequence of Lemma 3. It bounds from below the \(\ell^{1}\)-norm of the product of two eigenfunctions. **Lemma 4**.: _Let \(G=(V,E)\) be a finite connected graph and \(f,g\) be eigenfunctions of the eigenvalues \(\mu\) and \(\nu\) of \(D^{-1}A\), respectively, such that \(\|f\|_{2}=\|g\|_{2}=1\). Then_ \[\langle|f|,|g|\rangle\geq\sqrt{1-\frac{1+\mu}{1-\mu_{2}}}\sqrt{1-\frac{1+\nu}{1-\mu_{2}}}-\sqrt{\frac{1+\mu}{1-\mu_{2}}}\sqrt{\frac{1+\nu}{1-\mu_{2}}}. \tag{13}\] Proof.: Let \(\mathbf{c}\) be a positive constant function with \(\|\mathbf{c}\|_{2}=1\).
Decompose \(|f|\) and \(|g|\) such that \[|f|=\langle|f|,\mathbf{c}\rangle\mathbf{c}+f_{1},\ \ \text{and}\ \ |g|=\langle|g|,\mathbf{c}\rangle\mathbf{c}+g_{1}.\] Then we compute \[\langle|f|,|g|\rangle =\langle|f|,\mathbf{c}\rangle\langle|g|,\mathbf{c}\rangle+\langle f_{1},g_{1}\rangle\] \[\geq\langle|f|,\mathbf{c}\rangle\langle|g|,\mathbf{c}\rangle-\|f_{1}\|_{2}\|g_{1}\|_{2}\] \[\geq\sqrt{1-\frac{1+\mu}{1-\mu_{2}}}\sqrt{1-\frac{1+\nu}{1-\mu_{2}}}-\sqrt{\frac{1+\mu}{1-\mu_{2}}}\sqrt{\frac{1+\nu}{1-\mu_{2}}}\] where the last inequality comes from Lemma 3. We are now prepared to prove Theorem 1. Proof of Theorem 1.: Let \(f,g\) be eigenfunctions of the eigenvalues \(\mu_{n}\) and \(\mu_{n-1}\) such that \(\|f\|_{2}=\|g\|_{2}=1\) and \(\langle f,g\rangle=0\), i.e., \(\sum_{u\in V}f(u)g(u)d_{u}=0\). Applying Proposition 1 to the function \(fg\) and inserting the estimates from Lemma 2 and Lemma 4, we derive the following inequality \[\frac{\sqrt{2}}{2}h\leq\frac{\sqrt{1+\mu_{n}}+\sqrt{1+\mu_{n-1}}}{\sqrt{1-\frac{1+\mu_{n}}{1-\mu_{2}}}\sqrt{1-\frac{1+\mu_{n-1}}{1-\mu_{2}}}-\sqrt{\frac{1+\mu_{n}}{1-\mu_{2}}}\sqrt{\frac{1+\mu_{n-1}}{1-\mu_{2}}}},\] when \(1+\mu_{n-1}<(1-\mu_{2})/2\). Notice that the following function \[f(x,y)=\frac{\sqrt{x}+\sqrt{y}}{\sqrt{1-\frac{1}{1-\mu_{2}}x}\sqrt{1-\frac{1}{1-\mu_{2}}y}-\sqrt{\frac{1}{1-\mu_{2}}x}\sqrt{\frac{1}{1-\mu_{2}}y}} \tag{14}\] is continuous on \([0,\frac{1}{2}(1-\mu_{2}))\times[0,\frac{1}{2}(1-\mu_{2}))\), monotonically increasing with respect to either \(x\) or \(y\) and \(f(0,0)=0\). Let \(c=c(h,1-\mu_{2})\) be the number such that \[f(c,c)=\frac{\sqrt{2}}{2}h.\] Then we have \[1+\mu_{n-1}\geq c,\] since, otherwise, \[f(1+\mu_{n},1+\mu_{n-1})<f(c,c)=\frac{\sqrt{2}}{2}h\] is a contradiction. Indeed, writing \(u=\sqrt{c}\), the equation \(f(c,c)=\frac{\sqrt{2}}{2}h\) reads \(\frac{2u}{1-\frac{2u^{2}}{1-\mu_{2}}}=\frac{\sqrt{2}}{2}h\), i.e., the quadratic equation \[\frac{\sqrt{2}h}{1-\mu_{2}}u^{2}+2u-\frac{\sqrt{2}}{2}h=0,\] and solving for the positive root shows that \[\sqrt{c}=\frac{1-\mu_{2}}{\sqrt{2}h}\left(\sqrt{1+\frac{h^{2}}{1-\mu_{2}}}-1\right).\] This proves the inequality (1) when \(1+\mu_{n-1}<(1-\mu_{2})/2\). For the case that \(1+\mu_{n-1}\geq(1-\mu_{2})/2\), the inequality (1) still holds since \[1+\mu_{n-1}\geq\frac{1-\mu_{2}}{2}\geq\frac{(1-\mu_{2})^{2}}{2h^{2}}\left(\sqrt{1+\frac{h^{2}}{1-\mu_{2}}}-1\right)^{2},\] where the last inequality comes from the fact that \(1-\mu_{2}\geq 0\). It is direct to check that \(c=c(h,1-\mu_{2})\) is monotonically increasing with respect to either \(h\) or \(1-\mu_{2}\). Replacing \(1-\mu_{2}\) with powers of \(h\) or doing the opposite via the Cheeger inequality (12) yields the following result. **Corollary 2**.: _Let \(G=(V,E)\) be a finite graph. Let \(n\), \(\mu_{2}\), \(\mu_{n-1}\) and \(h\) be as in Theorem 1. Then, we have_ \[1+\mu_{n-1}\geq 2\left(\sqrt{1+\frac{1-\mu_{2}}{4}}-1\right)^{2}, \tag{15}\] \[1+\mu_{n-1}\geq\frac{(\sqrt{3}-1)^{2}}{8}h^{2}. \tag{16}\] **Remark 1**.: _If we do not care about the constant, the inequality (16) can also be derived from the higher order dual Cheeger inequalities [13, Theorem 1.2]. Indeed, we have \(1+\mu_{n-1}\geq C(1-\overline{h}(2))^{2}\), where \(\overline{h}(2)\) stands for the two-way dual Cheeger constant. Then the inequality (16) follows directly from the observation that \(1-\overline{h}(2)\geq h\). The proof of higher order dual Cheeger inequalities involves applying deep results from random partition theory. Our proof here is much simpler and more elementary.
We also obtain a better constant here._

## 4 On non-bipartite vertex-transitive graphs

For graphs such that \(\mu_{n}=\mu_{n-1}\), Theorem 1 provides a bound for the smallest eigenvalue of the normalized adjacency matrix. For vertex-transitive graphs we have the following gap phenomenon. **Theorem 3**.: _Let \(G\) be a finite connected vertex-transitive graph and \(\mu\) a simple eigenvalue of its normalized adjacency matrix. Then_ \[\mu=\frac{2k}{d}-1 \tag{17}\] _where \(0\leq k\leq d\) is an integer and \(d\) is the degree of \(G\)._ Proof.: Fix \(a\in V\). Suppose that \(\mu\) is a simple eigenvalue of \(D^{-1}A\) with \(f\) being its eigenfunction. Assume further that \(f\) is scaled such that \(f(a)=-1\) or \(1\). For every \(g\in\mathrm{Aut}(G)\), the function \(f_{g}\) defined as \(f_{g}(\cdot)=f(g(\cdot))\) is still an eigenfunction of the same eigenvalue \(\mu\). Since \(\mu\) is simple, there exists \(\lambda_{g}\) such that \(f_{g}=\lambda_{g}f\). Since \(\|f\|_{2}=\|f_{g}\|_{2}\), we have \(\lambda_{g}\in\{-1,1\}\). Since \(G\) is vertex-transitive, \(\mathrm{Aut}(G)\) acts transitively on \(V\), and we have for any \(u\in V\) that \(f(u)\in\{-1,1\}\). Calculating \((D^{-1}Af)(a)\) yields the result. Combining Corollary 2 and the above Theorem 3, we prove Corollary 1. For graphs with no simple eigenvalues, we can get rid of the term \(2/d\) in (2) and (3). **Corollary 3**.: _Let \((G,S)\) be a finite connected Cayley graph with a finite group \(G\) and a generating set \(S\). Assume that \(G\) is a simple group or the size \(|G|\) is odd. Let_ \[-1<\mu_{n}\leq\mu_{n-1}\leq...\leq\mu_{2}\leq\mu_{1}=1\] _be the eigenvalues of its normalized adjacency matrix where \(n=|G|\). Denote by \(h\) its edge-expansion. Then we have_ \[1+\mu_{n}\geq\frac{(\sqrt{3}-1)^{2}}{8}h^{2} \tag{18}\] _and_ \[1+\mu_{n}\geq 2\left(\sqrt{1+\frac{1-\mu_{2}}{4}}-1\right)^{2}. \tag{19}\] Proof.: The map \(\lambda_{g}:G\rightarrow\{\pm 1\}\) in the proof of Theorem 3 is in fact a homomorphism of groups since for any \(g_{1},g_{2}\in G\), it holds that \[\lambda_{g_{1}g_{2}}f(e)=f_{g_{1}g_{2}}(e)=f(g_{1}g_{2})=f_{g_{1}}(g_{2})=\lambda_{g_{1}}f(g_{2})=\lambda_{g_{1}}\lambda_{g_{2}}f(e).\] Here we denote by \(e\) the identity element of the group \(G\). If \(G\) is a simple group or the size \(|G|\) is odd, then such a homomorphism must be trivial, since otherwise there would be a subgroup of \(G\) of index \(2\). As a result, all non-trivial eigenvalues of \((G,S)\) will be of multiplicity greater than \(1\) and the above estimates hold.

## Acknowledgement

SL is very grateful to Paul Horn and Matthias Keller for very inspiring discussions on related topics. This work is supported by the National Key R&D Program of China 2020YFA0713100, the National Natural Science Foundation of China (No. 12031017), and Innovation Program for Quantum Science and Technology 2021ZD0302902.
2310.19519
A General Neural Causal Model for Interactive Recommendation
Survivor bias in observational data leads the optimization of recommender systems towards local optima. Currently most solutions re-mine existing human-system collaboration patterns to maximize longer-term satisfaction by reinforcement learning. However, from the causal perspective, mitigating survivor effects requires answering a counterfactual problem, which is generally unidentifiable and inestimable. In this work, we propose a neural causal model to achieve counterfactual inference. Specifically, we first build a learnable structural causal model based on its available graphical representations which qualitatively characterizes the preference transitions. Mitigation of the survivor bias is achieved through counterfactual consistency. To identify the consistency, we use the Gumbel-max function as structural constraints. To estimate the consistency, we apply reinforcement optimizations, and use Gumbel-Softmax as a trade-off to get a differentiable function. Both theoretical and empirical studies demonstrate the effectiveness of our solution.
Jialin Liu, Xinyan Su, Peng Zhou, Xiangyu Zhao, Jun Li
2023-10-30T13:21:04Z
http://arxiv.org/abs/2310.19519v1
# A General Neural Causal Model for Interactive Recommendation

###### Abstract

Survivor bias in observational data leads the optimization of recommender systems towards local optima. Currently most solutions re-mine existing human-system collaboration patterns to maximize longer-term satisfaction by reinforcement learning. However, from the causal perspective, mitigating survivor effects requires answering a counterfactual problem, which is generally unidentifiable and inestimable. In this work, we propose a neural causal model to achieve counterfactual inference. Specifically, we first build a learnable structural causal model based on its available graphical representations which qualitatively characterizes the preference transitions. Mitigation of the survivor bias is achieved through counterfactual consistency. To identify the consistency, we use the Gumbel-max function as structural constraints. To estimate the consistency, we apply reinforcement optimizations, and use Gumbel-Softmax as a trade-off to get a differentiable function. Both theoretical and empirical studies demonstrate the effectiveness of our solution.

Reinforcement Learning, Collaborative Recommendation, Counterfactual Inference

## I Introduction

Recommendation systems accelerate many commercial applications by filtering out user-intended contents [1]. However, due to the singularity of user preference, observed presentation only covers a fraction of the database, with even less interaction recorded. This sparsity exacerbates the survivor bias [2] when constructing effective recommendation policies. To further enhance the utility of recommendations, mitigating the survivor effect becomes essential. In offline construction, historical behavior data becomes the survivor of observation, and the behavioral pattern can be sub-optimal since the previously deployed systems used to collect the data are generally unknown [3]. In online construction, where limited experimental recommendation is allowed, the survivor effect still prevails because an intervention cannot be measured twice on the same user, whose state has been evolving since the first interaction. Under both offline and online scenarios, evaluating the survivor effect necessitates identifying a counterfactual question, _i.e., what if the system had previously chosen another recommendation under the same state_. In real-world applications, counterfactual inference is challenging, as it requires knowledge of the underlying physical mechanism, which we generally do not have [4]. Different types of user feedback reflect various aspects of users' interests, _e.g.,_ a click signal reveals short-term preference during interaction, and a purchase demonstrates a long-term preference usually coming after continuous clicks. To put both aspects into consideration, recent works [5, 6] frame the recommendation as a Markov Decision Process (MDP) with recommender systems as agents and users as interactive environments to maximize long-term cumulative satisfaction without sacrificing short-term utility [7]. This Reinforcement Learning (RL) formulation can recombine high-value short-term transitions across different interaction trajectories to form higher long-term satisfaction [8], and thus alleviate survivor effects especially in offline environments. However, existing disciplines avoid directly answering the counterfactual question which is fundamental in offline RL research [9]. In this work, we mitigate the survivor bias on the counterfactual hierarchy [10].
Specifically, we first transform the measurement of different recommendations by the agent under the current preference state into the consistency of the same recommendation across different agents under the same preference. The latter can be formalized as the Probability of Necessity (PN) [11] and benefits from the fact that we can use observational data to estimate the parametric agent when the PN is identifiable. The parameter space represents different agents. Causally, the survivor effect is reduced via counterfactual consistency. To identify the consistency with the ground-truth Structural Causal Model (SCM) unknown, we propose a general Neural Causal Model (NCM) based on the available graphical representation of the MDP; the proposed model uses learnable neural networks as approximations of the causal structural functions. Consistency is obtained via structural constraints, _i.e.,_ Gumbel-max neural rewards. To estimate the proposed NCM, we implement a recursive neural architecture and three types of optimization procedures. In a nutshell, our contributions are:

* We propose a neural causal model to mitigate the survivor bias via consistency; the model is identifiable and estimable. Although we implement a vanilla neural architecture in this study, advanced regularization techniques [12] can be bundled to further boost performance.
* We theoretically prove the effectiveness of the proposed model. Empirical studies on both offline real-world datasets and an online commercial simulator demonstrate the generality of this model.

## II Preliminaries

**Notations**. \(X\) denotes a random variable and \(x\) represents its value. \(X^{(t)}\) denotes the variable at timestep \(t\), and \(Y_{k[x_{k}]}\) represents the potential outcome under the \(k\)-th recommender system, _i.e._, the factual system (\(k=1\)) or a counterfactual system (\(k\neq 1\)). Graphically, an SCM \(\mathcal{M}\) is a Directed Acyclic Graph (DAG) over the tuple \(<\mathbf{U},\mathbf{V},\mathcal{F},P(\mathbf{U})>\). For \(f_{V_{i}}\in\mathcal{F}\), a directed edge \((V_{j}\to V_{i})\) maps \(V_{j}\in\mathbf{Pa}_{V_{i}}\) to \(V_{i}\). The corresponding neural SCM is defined as \(\mathcal{M}(\mathbf{\theta})\triangleq<\hat{\mathbf{U}},\mathbf{V},\hat{\mathcal{F}},P(\hat{\mathbf{U}})>\), where the structural functions \(\hat{\mathcal{F}}\) are parameterized with \(\mathbf{\theta}=\{\mathbf{\theta}_{V_{i}}:V_{i}\in\mathbf{V}\}\) and each \(\hat{f}_{V_{i}}\) is a FeedForward Neural Network (FFN). \(\hat{\mathbf{U}}\) over bi-directed edges in \(\mathcal{M}(\theta)\) is equivalently transformed into a uniform prior according to neural causal theories [13]. In the language of \(\mathcal{M}(\theta)\), a counterfactual is formalized as a joint distribution over multiple interventions in \(\mathcal{M}(\theta)\) [14].
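To make this notation concrete, the sketch below instantiates the tuple \(<\hat{\mathbf{U}},\mathbf{V},\hat{\mathcal{F}},P(\hat{\mathbf{U}})>\) in PyTorch. This is a minimal illustration under our own naming conventions, not code released with this paper.

```python
# Minimal sketch of a neural SCM <U_hat, V, F_hat, P(U_hat)>, assuming PyTorch.
import torch
import torch.nn as nn

class NeuralSCM(nn.Module):
    def __init__(self, parent_dims: dict, node_dims: dict, noise_dim: int):
        # parent_dims[v]: total width of v's endogenous parents' values.
        super().__init__()
        self.noise_dim = noise_dim
        self.f = nn.ModuleDict({
            v: nn.Sequential(                       # each f_hat_{V_i} is an FFN
                nn.Linear(parent_dims[v] + noise_dim, 64),
                nn.ReLU(),
                nn.Linear(64, node_dims[v]))
            for v in node_dims})

    def evaluate(self, v: str, parents: torch.Tensor, u: torch.Tensor):
        """Compute V_i = f_hat_{V_i}(Pa_{V_i}, U_hat)."""
        return self.f[v](torch.cat([parents, u], dim=-1))

# The exogenous prior P(U_hat) is uniform on [0, 1]^noise_dim:
# u = torch.rand(batch_size, noise_dim)
```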
**Definition 1** (Counterfactual Distribution).: _Given \(\mathcal{M}(\mathbf{\theta})\) and interventions \(\mathbf{X}=\{\mathbf{X}_{k}:\mathbf{X}_{k}\subseteq\mathbf{V},k=1,\ldots,K\}\), \(P^{\mathcal{M}(\mathbf{\theta})}(\mathbf{Y}_{1[\mathbf{x}_{1}]},\ldots,\mathbf{Y}_{K[\mathbf{x}_{K}]})\) is computed as_

\[\int_{\mathcal{D}_{\hat{\mathbf{u}}}}\mathbb{1}\left[\mathbf{Y}_{1[\mathbf{x}_{1}]}(\widehat{\mathbf{u}})=\mathbf{y}_{1},\ldots,\mathbf{Y}_{K[\mathbf{x}_{K}]}(\widehat{\mathbf{u}})=\mathbf{y}_{K}\right]dP(\widehat{\mathbf{u}}),\]

_where \(\mathbf{Y}_{k[\mathbf{x}_{k}]}(\widehat{\mathbf{u}})\) is evaluated from \(\{f_{V_{j}}:V_{j}\in\mathbf{V}\setminus\mathbf{X}_{k}\}\bigcup\{f_{X}\gets x:X\in\mathbf{X}_{k}\}\) and denotes the outcome under the \(k\)-th imaginary intervention._

The _Probability of Necessity_ [15] is formalized upon the counterfactual distribution and measures the necessity of the current intervention to the observed result. In our task, the intervention is the recommendation policy and the result is the user feedback:

\[\text{PN}(\mathbf{Y}_{k[\mathbf{x}_{k}]}=\mathbf{y})\triangleq P^{\mathcal{M}(\mathbf{\theta})}\big{(}\mathbf{Y}_{k[\mathbf{x}_{k}]}=\mathbf{y}\mid\mathbf{Y}_{1[\mathbf{x}_{1}]}=\mathbf{y}_{1}\big{)}. \tag{1}\]

We consider the recommendation task under the MDP framework, where users form the environment and recommender systems are the agents. A recommendation is then an interaction between the agent and the environment: as the environment receives the action \(\mathbf{a}\in\mathbb{R}^{|\mathcal{A}|}\) (recommendation), it adjusts its state \(\mathbf{s}\in\mathbb{R}^{d_{s}}\) (user preference) and feeds a reward \(r\left(\mathbf{s},\mathbf{a}\right)\in\mathbb{R}\) (user behavior) back to the system \(\pi_{\theta}(\mathbf{a}\mid\mathbf{s})\), which maximizes the discounted long-term satisfaction:

\[\max_{\pi_{\theta}}\mathbb{E}_{\tau\sim\pi_{\theta}}\left[\sum_{t=0}^{|\tau|}\gamma^{t}r\left(\mathbf{s}^{(t)},\mathbf{a}^{(t)}\right)\right], \tag{2}\]

where \(\tau=\big{(}\mathbf{s}^{(0)},\mathbf{a}^{(0)},\ldots,\mathbf{s}^{(|\tau|-1)},\mathbf{a}^{(|\tau|-1)}\big{)}\) represents an episodic interaction. From the causal perspective, the recommendation from the system to the user is an intervention by the agent on the environment, so different agents in the same environment represent different kinds of interventions. Even more challenging, the different possible actions \(\mathbf{a}^{(t)}\) of the same agent under the same state \(\mathbf{s}^{(t)}\) become counterfactual interventions (\(k_{1}\neq k_{2}\)), because there can only be one observation, and all other actions become counterfactual the moment the agent makes its recommendation choice.

## III Framework

To mitigate the survivor effect in both offline and online recommendation, we develop a general NCM in this section. We first introduce the model together with a theoretical analysis of its identifiability. Then we present a neural architecture that balances theoretical integrity and practical implementation. Finally, we present three common RL objectives, all of which can be used to optimize the neural architecture.
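Definition 1 also suggests a direct Monte Carlo estimator for the PN in (1): draw the exogenous noise once, evaluate every intervention on the same draw, and average the indicator. The sketch below assumes the `NeuralSCM` sketched earlier exposes a hypothetical `reward_under(a, s, u)` helper that evaluates \(\mathbf{Y}_{k[\mathbf{x}_{k}]}(\widehat{\mathbf{u}})\) by running the SCM with the action clamped to the intervention; the helper name is illustrative.

```python
# Monte Carlo sketch of Eq. (1); `model.reward_under` is a hypothetical helper.
import torch

def estimate_pn(model, s, a_factual, a_counterfactual, r_query, r_obs, n=10000):
    u = torch.rand(n, model.noise_dim)               # one shared draw u ~ P(U_hat)
    r1 = model.reward_under(a_factual, s, u)         # Y_{1[x_1]}(u)
    rk = model.reward_under(a_counterfactual, s, u)  # Y_{k[x_k]}(u)
    evidence = (r1 == r_obs)                         # condition on the factual outcome
    if evidence.sum() == 0:
        return float("nan")                          # factual outcome never realized
    hits = ((rk == r_query) & evidence).float().sum()
    return (hits / evidence.float().sum()).item()    # PN(Y_{k[x_k]} = r_query)
```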
### _Neural Causal Model_

As illustrated in Figure 1, a \(t\)-step MDP can be untangled into the graphical representation of the NCM \(\mathcal{M}(\theta_{S},\theta_{R},\theta_{A})\):

\[\left\{\begin{array}{l}S^{(t)}=\hat{f}_{S}(S^{(t-1)},R^{(t-1)};\theta_{S})\\ R^{(t)}=\hat{f}_{R}(A^{(t)},S^{(t)},\widehat{U}^{(t)};\theta_{R})\\ A^{(t)}=\hat{f}_{A}(S^{(t)};\theta_{A})\end{array}\right., \tag{3}\]

where each structural function \(\hat{f}\) is a parameterized feedforward neural network.

Fig. 1: MDP as a neural-causal DAG. Blue circles denote the known exogenous \(\widehat{\mathbf{U}}\sim\mathrm{Unif}[0,1]\); gray (observable) and black (computable, thus observable) denote the endogenous \(\mathbf{V}=\{\mathbf{S},\mathbf{A},\mathbf{R}\}\). The dashed line \(\widehat{\mathbf{U}}\dashrightarrow\mathbf{S}\) denotes a recursive relation due to the Markov property, which eases model implementation.

Considering the current state \(\mathbf{s}^{(t)}\) of this causal model, taking different actions \(\mathbf{a}^{(t)}_{k}\) (\(k\neq 1\)) with the same agent \(\pi_{1}\) is conceptually equivalent to taking the same action \(\mathbf{a}^{(t)}_{1}\) with different agents \(\pi_{k}\), each of which would achieve the user feedback \(r_{k}(\mathbf{s}^{(t)},\mathbf{a}^{(t)}_{1})\), since the policy is greedily searched [5]. The benefit of this transformation is that \(r_{1}(\mathbf{s}^{(t)},\mathbf{a}^{(t)}_{1})\) is an instance of the Survivor Effect [2]: we cannot observe the other 'survivors' \(r_{1}(\mathbf{s}^{(t)},\mathbf{a}^{(t)}_{k})\), especially in offline environments. On-policy interaction (simulation) can alleviate this effect, but such interaction is risky in recommendation [16]. However, we can use a learnable agent \(\pi_{1}\) in (3) to approximate \(r_{k}(\mathbf{s}^{(t)},\mathbf{a}^{(t)}_{1})\) while enforcing consistency between \(r_{k}\) and \(r_{1}\). Formally, consistency can be described as the necessity of the current recommendation using (1):

\[\frac{\text{PN}(R_{k[\mathbf{x}^{(t)}_{k}]}=r_{1})}{P(R_{1[\mathbf{x}^{(t)}_{1}]}=r_{1})}\geqslant\frac{\text{PN}(R_{k[\mathbf{x}^{(t)}_{k}]}=r_{k})}{P(R_{1[\mathbf{x}^{(t)}_{1}]}=r_{k})} \tag{4}\]
\[\implies\text{PN}(R_{k[\mathbf{x}^{(t)}_{k}]}=r_{k})=0,\]

where \(\mathbf{x}^{(t)}_{k}=\{\mathbf{s},\mathbf{a}^{(t)}_{k}\}\) denotes the \(k\)-th option. This means that, to recommend the current \(\mathbf{a}^{(t)}_{1}\) under environment state \(\mathbf{s}^{(t)}\), its reward \(r_{1}\) must exceed the other potential rewards. The key challenge is that (1) is not directly estimable with statistical tools, _i.e.,_ deep learning [4, 10]; we must first establish its estimability, yet identification is generally not achievable [11]. To be identifiable while keeping consistency, we reduce the model space of the NCM \(\mathcal{M}(\boldsymbol{\theta})\) with a particular reward form:

\[R^{(t)}=\arg\max_{r}\left\{\log P_{\theta_{R}}\left(R^{(t)}=r\mid\boldsymbol{s}^{(t)},\boldsymbol{a}^{(t)}\right)+g_{r}^{(t)}\right\}, \tag{5}\]

where \(g_{r}^{(t)}=-\log(-\log(u_{r}^{(t)}))\) and \(u_{r}\sim\mathrm{Unif}(0,1)\). \(P_{\theta_{R}}\propto\exp\left(f_{\theta_{R}}(\boldsymbol{s},\boldsymbol{a})\right)\) is a neural-estimated posterior. The restricted \(\mathcal{M}^{\prime}(\boldsymbol{\theta})\) now satisfies counterfactual consistency (4).
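Before the proof, note that (5) is Gumbel-max sampling over the reward posterior: reusing one noise draw while swapping the \((\boldsymbol{s},\boldsymbol{a})\) pair is what ties the factual and counterfactual rewards together in (4). A minimal PyTorch sketch, with `logits` standing for \(\log P_{\theta_{R}}(R=r\mid\boldsymbol{s},\boldsymbol{a})\):

```python
import torch

def gumbel_max_reward(logits: torch.Tensor) -> torch.Tensor:
    """Exact sample from softmax(logits) via the Gumbel-max trick, Eq. (5)."""
    u = torch.rand_like(logits)            # u_r ~ Unif(0, 1)
    g = -torch.log(-torch.log(u))          # g_r = -log(-log(u_r))
    return torch.argmax(logits + g, dim=-1)

# Fixing `g` and recomputing `logits` for a counterfactual (s, a) pair yields
# the shared-exogenous counterfactual rewards compared in Eq. (4).
```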
Proof.: According to identification theory [13], \(\mathcal{M}^{\prime}(\boldsymbol{\theta})\) is neural identifiable if (4) is first symbolically identifiable and \(\mathcal{M}^{\prime}(\boldsymbol{\theta})\) matches the observational data \(\mathbb{Z}_{data}\). To prove that (5) satisfies the first requirement, suppose that \(\forall r_{1}\neq r_{k}\in\mathcal{D}_{R}\); then \(R_{1[\boldsymbol{x}_{1}]}=r_{1}\in\mathbb{Z}_{data}\) gives

\[\log P(R_{1[\boldsymbol{x}]}=r_{1})+g_{r_{1}}\geqslant\log P(R_{1[\boldsymbol{x}]}=r_{k})+g_{r_{k}}. \tag{6}\]

Now fix \(r_{k}\), let \(P(R_{k[\boldsymbol{x}]}=r_{k}\mid R_{1[\boldsymbol{x}]}=r_{1})\neq 0\), and obtain

\[\log P(R_{k[\boldsymbol{x}]}=r_{k}\mid R_{1[\boldsymbol{x}]}=r_{1})+g_{r_{k}}\geqslant\log P(R_{k[\boldsymbol{x}]}=r_{1}\mid R_{1[\boldsymbol{x}]}=r_{1})+g_{r_{1}}. \tag{7}\]

From (6) and (7), we obtain the inequality

\[\frac{\text{PN}(R_{k[\boldsymbol{a}]}=r_{1})}{P(R_{1[a]}=r_{1})}\leqslant\frac{\text{PN}(R_{k[\boldsymbol{a}]}=r_{k})}{P(R_{1[a]}=r_{k})},\]

which is the contrapositive of (4).

We have thus formalized an \(\mathcal{M}^{\prime}(\boldsymbol{\theta})\) with consistency built in; this formulation directly handles counterfactual queries, which were bypassed before [9]. The theoretical analysis shows that \(\mathcal{M}^{\prime}(\boldsymbol{\theta})\) is identifiable. For the latter requirement in the proof, we develop an adversarial optimization whose convergence has been proved [17].

### _Model Implementation_

We now address the implementation of \(\mathcal{M}^{\prime}(\boldsymbol{\theta})\). Specifically, we design the major backbone in (3) based on neural networks so as to benefit from the collected observational data.

#### Iii-B1 **Reward** \(\hat{f}_{R}\): The identifiability in (5) theoretically guarantees the estimability of the desired consistency (4), yet this formulation is not differentiable. We replace the \(\arg\max\) with Gumbel-Softmax [12] as an approximation:

\[\widetilde{R}^{(t)}\approx\frac{\exp\left(\left(\log\left(f_{\theta_{R}}\left(\boldsymbol{s}^{(t)},\boldsymbol{a}^{(t)}\right)\right)+g_{i}\right)/\gamma_{r}\right)}{\sum_{j=1}^{|\mathcal{R}|}\exp\left(\left(\log\left(f_{\theta_{R}}\left(\boldsymbol{s}^{(t)},\boldsymbol{a}^{(t)}\right)\right)+g_{j}\right)/\gamma_{r}\right)}, \tag{8}\]

where \(f_{\theta_{R}}\) is an FFN due to neural-causal requirements, \(\gamma_{r}\) is a temperature scalar, and \(|\mathcal{R}|\) is the number of feedback types, _i.e.,_ click, purchase, and none. The trade-off in (8) affects identifiability; however, it benefits model implementation and optimization, since the consistency itself serves as a kind of regularization that is adjustable in practice. We normalize (8) to balance exploration and exploitation for policy learning:

\[r\left(\boldsymbol{s}^{(t)},\boldsymbol{a}^{(t)};\theta_{R}\right)=2\times\widetilde{R}^{(t)}-0.5. \tag{9}\]

Based on (9), we now tackle the required matching condition with adversarial estimation. Specifically, Eq. (9) is implemented as a discriminator \(D(r_{k[\boldsymbol{a}]}^{(t)}\mid\boldsymbol{s}^{(t)},\boldsymbol{a}^{(t)};\theta_{D})\).
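The relaxation (8)-(9) can be sketched as follows, assuming PyTorch; `scores` stands for the positive outputs \(f_{\theta_{R}}(\boldsymbol{s},\boldsymbol{a})\) over the \(|\mathcal{R}|\) feedback types. PyTorch also ships `torch.nn.functional.gumbel_softmax`, which implements the same relaxation.

```python
import torch
import torch.nn.functional as F

def relaxed_reward(scores: torch.Tensor, gamma_r: float = 0.2) -> torch.Tensor:
    g = -torch.log(-torch.log(torch.rand_like(scores)))        # Gumbel noise
    r_soft = F.softmax((torch.log(scores) + g) / gamma_r, -1)  # Eq. (8)
    return 2.0 * r_soft - 0.5                                  # Eq. (9)
```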
As consistency (4) guarantees \(r_{k[\boldsymbol{a}]}^{(t)}=r_{1[\boldsymbol{a}]}^{(t)}\) at the dynamic equilibrium, the overall optimization is defined as:

\[\min_{\pi_{1}}\max_{D}\ \mathbb{E}_{r^{(t)}\sim p\left(R_{1[\boldsymbol{a}_{1}]}\right)}\left[\log D\left(r_{1[\boldsymbol{a}_{1}]}^{(t)}\mid\boldsymbol{s}^{(t)},\boldsymbol{a}^{(t)};\theta_{D}\right)\right]+\mathbb{E}_{\hat{u}^{(t)}\sim P\left(\widehat{U}\right)}\left[\log\left(1-D\left(r_{k[\boldsymbol{a}_{1}]}^{(t)}\mid\boldsymbol{s}^{(t)},\boldsymbol{a}^{(t)};\theta_{D}\right)\right)\right], \tag{10}\]

where we drop the state \(\boldsymbol{s}^{(t)}_{1}\) from \(\boldsymbol{x}^{(t)}_{1}\) for simplicity.

#### Iii-B2 **Agent** \(\hat{f}_{A}\): The recommender agent aims to generate relevant candidates for the user to browse. Since the available options are generally huge on modern recommendation platforms (\(|\mathcal{A}|\gg 1\)), we implement the factual recommending agent upon \(\hat{f}_{S}\) to ease model complexity:

\[\boldsymbol{h}_{1[\boldsymbol{a}_{i}]}^{(t)}=\boldsymbol{w}_{(a)}^{T}\sigma\left(\boldsymbol{W}_{(a)}\left[\left(\boldsymbol{s}^{(t)}\right)^{T},\left(\boldsymbol{a}_{i}^{(t)}\right)^{T}\right]^{T}+\boldsymbol{b}_{(a)}\right), \tag{11}\]

where \(\theta_{A}=\{\boldsymbol{w}_{(a)},\boldsymbol{W}_{(a)},\boldsymbol{b}_{(a)}\}\). As Figure 1 shows, the recursion comes from the fact that the functions in (3) are composed with each other. Consequently, we implement the agent as follows:

\[\pi\left(i\in\boldsymbol{a}^{(t)}\mid\boldsymbol{s}^{(t)};\theta_{A}\right)=\frac{\exp\left(\left(\log\boldsymbol{h}_{1[\boldsymbol{a}_{i}]}^{(t)}+g_{i}\right)/\gamma_{a}\right)}{\sum_{j=1}^{|\mathcal{A}|}\exp\left(\left(\log\boldsymbol{h}_{1[\boldsymbol{a}_{j}]}^{(t)}+g_{j}\right)/\gamma_{a}\right)}, \tag{12}\]

where \(\{g_{j}\}_{j=1}^{|\mathcal{A}|}\) are i.i.d. samples from the Gumbel distribution with temperature scalar \(\gamma_{a}\); here we use Gumbel-Softmax again.

#### Iii-B3 **State** \(\hat{f}_{S}\): This function encodes the preference transitions in (3) according to the previous browsing history. The attention mechanism has proved effective at capturing such autoregressive dynamics [18], so we apply it here to estimate \(\hat{f}_{S}\). Position encodings are involved since self-attention does not contain temporal information. Specifically, \(\hat{f}_{S}\) first encodes the interactions so far, \(\boldsymbol{i}:=[r_{1[\boldsymbol{a}]}^{(0)},r_{1[\boldsymbol{a}]}^{(1)},\ldots,r_{1[\boldsymbol{a}]}^{(t-1)}]\), as the matrix \(\boldsymbol{E}\). Then, a position-aware matrix \(\boldsymbol{P}\) is learned:

\[\widehat{\boldsymbol{E}}=\boldsymbol{E}+\boldsymbol{P},\]

where \(\boldsymbol{E}\in\mathbb{R}^{n\times d}\) and \(\boldsymbol{P}\in\mathbb{R}^{n\times d}\) have dimension \(d\). Based on it, the scaled dot-product attention [19] computes a weighted sum scaled by the dimensional factor:

\[\mathrm{Att}(\boldsymbol{Q},\boldsymbol{K},\boldsymbol{V})=\mathrm{Softmax}\left(\boldsymbol{Q}\boldsymbol{K}^{T}/\sqrt{d}\right)\boldsymbol{V},\]

where \(\boldsymbol{Q},\boldsymbol{K},\boldsymbol{V}\) denote the query, key, and value matrices.
For our task, each of the three matrices is linearly projected from the same \(\widehat{\boldsymbol{E}}\), and a multi-head layer is then concatenated as

\[\boldsymbol{H}=\mathrm{SA}(\widehat{\boldsymbol{E}})=\left[\text{head}_{1};\cdots;\text{head}_{h}\right]\boldsymbol{W},\]
\[\mathrm{head}_{\ell}=\mathrm{Att}\left(\widehat{\boldsymbol{E}}\boldsymbol{W}_{\ell}^{Q},\widehat{\boldsymbol{E}}\boldsymbol{W}_{\ell}^{K},\widehat{\boldsymbol{E}}\boldsymbol{W}_{\ell}^{V}\right),\]

where \(\boldsymbol{W}_{\ell}^{Q},\boldsymbol{W}_{\ell}^{K},\boldsymbol{W}_{\ell}^{V}\in\mathbb{R}^{d\times\frac{d}{h}}\) and \(\boldsymbol{W}\in\mathbb{R}^{d\times d}\) are the weights of the \(\ell\)-th head. To avoid leaking future information, which would be anti-causal, we mask out the links between \(\mathbf{Q}_{i}\) and \(\mathbf{K}_{j}\) where \(j>i\), and encourage asymmetry via a point-wise FFN:

\[\mathbf{S}_{i}=\mathrm{FFN}(\mathbf{H}_{i})=\mathrm{ReLU}(\mathbf{H}_{i}\mathbf{W}^{(f_{1})}+\mathbf{b}^{(f_{1})})\mathbf{W}^{(f_{2})}+\mathbf{b}^{(f_{2})},\]

where \(\mathbf{W}^{(f_{1})},\mathbf{W}^{(f_{2})}\in\mathbb{R}^{d\times d}\) and \(\mathbf{b}^{(f_{1})},\mathbf{b}^{(f_{2})}\in\mathbb{R}^{d}\). The \(b\)-th self-attention block is designed as

\[\mathbf{H}^{(b)}=\mathrm{SA}(\mathbf{S}^{(b-1)}), \tag{13}\]
\[\mathbf{S}_{j}^{(b)}=\mathrm{FFN}(\mathbf{H}_{j}^{(b)}),\]

where \(j\in\{1,2,\dots,n\}\) indexes the first \(j\) items. Note that we reorganize the reward feedback (_i.e.,_ pass, click, and purchase) into binary groups (_i.e.,_ pass or not) to ease implementation; an extension to multi-type valuation can be achieved via regression upon the self-attention blocks.

### _Model Optimization_

\(\mathcal{M}^{\prime}(\mathbf{\theta})\in\mathcal{M}(\mathbf{\theta})\) is still an expressive model that is compatible with common reinforcement optimizations, _e.g.,_ policy-based learning [7], value-based learning [5], and actor-critic learning [6]. We consider these representatives in our studies as a demonstration of the generality of \(\mathcal{M}^{\prime}(\mathbf{\theta})\). Specifically, for all optimizations, we use (12) as the policy network and an FFN as the critic network. Algorithm 1 shows the overall optimization.

```
1:  Initialize parameters \(\theta_{D},\theta_{A},\theta_{S},\theta_{V}\).
2:  for iteration \(i=0,1,\dots\) do
3:    for step \(j=0,1,\dots\) do
4:      Sample exogenous priors \(u_{r}\sim P(\widehat{U})\).
5:      Sample observational trajectories \((\mathbf{s},\mathbf{a})\sim\mathcal{O}\).
6:      Update discriminator parameters \(\theta_{D},\theta_{S}\)  \(\triangleright\) (10)
7:    end for
8:    for step \(k=0,1,\dots\) do
9:      Sample interventional trajectories \((\mathbf{s},\mathbf{a})\sim\pi_{\theta_{A}}\).
10:     Sample exogenous priors \(u_{r}\sim P(\widehat{U})\).
11:     Update parameters \(\theta_{A},\theta_{V}\)  \(\triangleright\) (14) to (16)
12:   end for
13: end for
```
**Algorithm 1** Model Optimization.

#### Iii-C1 **Policy-based Learning**: We adopt REINFORCE [7] as an illustration. This approach directly optimizes (2) with reparameterization tricks, using the following gradient:

\[\mathbb{E}_{\tau\sim data}\left[\sum_{t=0}^{|\tau|}V^{(t)}\nabla_{\theta_{A}}\log\pi\left(i\in\mathbf{a}^{(t)}\mid\mathbf{s}^{(t)};\theta_{A}\right)\right], \tag{14}\]

where \(V^{(t)}=\sum_{t^{\prime}=t}^{|\tau|}\gamma^{t^{\prime}-t}D(\mathbf{s}^{(t^{\prime})},\mathbf{a}^{(t^{\prime})})\) is the critic valuation. (14) is originally designed for online interaction, since (2) requires on-policy evaluation. In offline environments, off-policy correction [3] can alleviate the distribution shift.
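A minimal sketch of the update (14), assuming PyTorch; the learned discriminator plays the role of the reward as in (10), and `policy.log_prob` is a hypothetical wrapper method, not an actual API.

```python
import torch

def reinforce_loss(policy, disc, states, actions, gamma=0.7):
    with torch.no_grad():
        rewards = disc(states, actions)           # D(s^(t), a^(t)) per step
        returns = torch.zeros_like(rewards)       # V^(t): discounted return-to-go
        running = torch.tensor(0.0)
        for t in reversed(range(rewards.shape[0])):
            running = rewards[t] + gamma * running
            returns[t] = running
    log_probs = policy.log_prob(states, actions)  # log pi(a^(t) | s^(t))
    return -(returns * log_probs).mean()          # minimizing this ascends (14)
```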
#### Iii-C2 **Value-based Learning**: We utilize Temporal Difference (TD) learning [5] as a representative, which updates the critic network \(V(\mathbf{s}^{(t)},\mathbf{a}^{(t)})\) with the following gradient:

\[\mathbb{E}_{\tau\sim data}\left[\nabla_{\theta_{V}}\left(D(\mathbf{s}^{(t)},\mathbf{a}^{(t)})+\gamma\max_{a^{\prime}}V\left(\mathbf{s}^{(t+1)},\mathbf{a}^{\prime};\theta_{V}\right)-V\left(\mathbf{s}^{(t)},\mathbf{a}^{(t)};\theta_{V}\right)\right)^{2}\right], \tag{15}\]

where \(\theta_{V}\) contains the policy parameters \(\theta_{A}\) and the parameters of the critic FFN \(V(\mathbf{s},\mathbf{a})\) itself.

#### Iii-C3 **Actor-Critic Learning**: We consider Generalized Advantage Estimation (GAE) [6] to update a proximal policy objective:

\[\mathbb{E}_{\tau\sim data}\left[\min\left(\frac{\pi_{\theta}}{\pi_{\theta_{\text{old}}}}A^{(t)},\mathrm{clip}\left(\frac{\pi_{\theta}}{\pi_{\theta_{\text{old}}}},1-\epsilon,1+\epsilon\right)A^{(t)}\right)\right], \tag{16}\]

where \(\epsilon\) is the clipping scalar for conservative updates. The cumulative advantage function \(A\) is estimated as:

\[A^{(t)}=\sum_{l=0}^{\infty}\left(\gamma\lambda_{g}\right)^{l}\left[-V\left(\boldsymbol{s}^{(t+l)}\right)+\sum_{l^{\prime}=0}^{\infty}\gamma^{l^{\prime}}D^{(t+l+l^{\prime})}\right], \tag{17}\]

where \(l\) denotes the number of steps away from the current time \(t\).

In summary, we formalize a counterfactual-consistent NCM in this section to mitigate the survivor effect, and we propose recursion and a softmax trade-off for its implementation. We also theoretically prove that the consistency is identifiable. The empirical studies in the next section further demonstrate the effectiveness of these designs.

## IV Experiments

To verify the mitigation of the survivor effect, we conduct experiments with two considerations: (i) **Generalization**. Is the counterfactual consistency effective in both offline and online learning? (ii) **Adaptivity**. Can different optimization procedures all benefit from the mitigation effect?

### _Experimental Setup_

**Data.** Offline experiments are conducted on two released recommendation datasets, _i.e., Kaggle_1 and _RecSys15_2. For online experiments, we use the simulator _VirtualTB_3.

Footnote 1: [https://www.kaggle.com/retailrocket/commerce-dataset](https://www.kaggle.com/retailrocket/commerce-dataset)

Footnote 2: [https://recsys.acm.org/recsys15/challenge](https://recsys.acm.org/recsys15/challenge)

Footnote 3: [https://github.com/eyounx/VirtualTaobao.git](https://github.com/eyounx/VirtualTaobao.git)

* **Offline datasets.** We treat views as clicks and adding items to the cart as purchases, resulting in binary user feedback, _i.e.,_ clicks and purchases. Items and interaction trajectories appearing fewer than 3 times are removed because of sparsity. Table I details the preprocessing results.
* **Online simulations.** _VirtualTB_ simulates real-world user behaviors on one of the largest online e-commerce platforms. In this simulator, each user has 11 binary attributes encoded as an 88-dim vector, the recommendation action is a 27-dimensional vector, and the immediate reward used as the feedback signal is an integer from 0 to 10.

**Metrics.** For offline evaluation, we measure the top-k \((k=\{5,10\})\) Hit Ratio (H@k) [20] and Normalized Discounted Cumulative Gain (N@k) [21], widely adopted as measurements for recall and ranking [12, 20].
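For reference, when each test case has a single ground-truth item, the per-interaction forms of these two metrics reduce to the sketch below (a standard formulation we assume, not code from the paper); corpus-level H@k and N@k average these over all test interactions.

```python
import math

def hit_ratio_at_k(rank: int, k: int) -> float:
    """rank is the 0-based position of the ground-truth item in the ranked list."""
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank: int, k: int) -> float:
    # With one relevant item, IDCG = 1 and DCG = 1 / log2(rank + 2).
    return 1.0 / math.log2(rank + 2) if rank < k else 0.0
```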
For online simulations, we use the click-through rate from the simulator:

\[CTR=\frac{r_{epi}}{10\times N_{epi}},\]

where \(r_{epi}\) is the episodic reward over an \(N_{epi}\)-length interaction.

**Baselines.** We consider three types of learning strategies: IRe [7], CQN [5], and Inv [6]. These methods all combine adversarial learning with reinforcement learning to optimize reward functions. IRe adopts REINFORCE (policy-based) to learn the recommendation agent. CQN applies Temporal Difference learning (value-based) to implicitly update the agent. Inv uses Generalized Advantage Estimation (actor-critic based) to update state functions. The major difference is the reward function formulation: IRe and CQN utilize an FFN, while Inv sets the log-scale discrimination difference as the reward, which can incorporate an additive reward for enhancement.

**Implementation.** All baselines use the same neural architecture. Since the offline datasets offer expert demonstrations and we use a unified policy collector in the online simulations, we implement model-free IRe and CQN for simplicity. For CQN, we use a supervised regularization, _i.e.,_ cross-entropy, in addition to the adversarial learning (10). For Inv, we add a predefined feedback reward on top of the differential reward, _i.e.,_ 0.2 for click, 1.0 for purchase, and 0.0 otherwise; this setting has proved effective [20]. For the online simulator, we adopt Deep Deterministic Policy Gradients (DDPG), as the original IncRec suggests, to train the expert policy collector. A one-head self-attention block with embedding size 50 is adopted. The learning rate is \(1e-4\) for the actor and \(1e-3\) for the critic, both optimized with Adam [22]. The discount factor \(\gamma\) is 0.7. We use the 10 most recent interactions as the input length (\(w=10\)), with mini-batch size \(B=256\). Item embeddings are initialized from a Gaussian distribution. For the agent (12), we adopt a 2-layer FFN with 512 hidden units and ReLU as the nonlinear activation, \(\gamma_{r}=0.2\) for Gumbel-Softmax, and \(\lambda_{g}=0.97\) and \(\epsilon=0.2\) for GAE [6]; the critic \(V\) uses the same FFN. We treat 100 episodes as 1 iteration in VirtualTB for illustration.

### _Experimental Results_

#### Iv-B1 **Offline Performance**: Tables II and III detail the offline performance. First, we observe that Inv works worst, because GAE, used by Inv, is by nature an on-policy evaluation that requires distribution correction to use trajectories collected by other policies, while TD, used by CQN, offers an off-policy evaluation more suitable for offline environments. Second, we observe that each type of reinforcement optimization method benefits from the neural causal model, specifically from the Gumbel reward implementation, since an offline environment cannot cover every possible state-action-reward tuple and online interaction is unavailable. Technically, the Gumbel design serves as regularization to reduce model complexity. Theoretically, the regularization comes from the consistency between the current policy (factual) and potential policies (counterfactual); this consistency, as a causal quantity, is identifiable and can thus be estimated without conducting randomized controlled experiments, which would otherwise be necessary in unidentifiable situations. In the recommendation task, randomized controlled experiments amount to unrestricted online interaction, which is unrealistic due to safety concerns [16].
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{click} & \multicolumn{4}{c}{RecSys} & \multicolumn{4}{c}{Kaggle} \\ \cline{2-9} & H@5 & N@5 & H@10 & N@10 & H@5 & N@5 & H@10 & N@10 \\ \hline Inv & .3159 & .2179 & .4142 & .2503 & .2700 & .2119 & .3238 & .2236 \\ Inv+ & **.3269** & **.2312** & **.4300** & **.2612** & **.2760** & **.2136** & **.3291** & **.2281** \\ \hline IRe & .3339 & .2183 & .4374 & .2672 & .2775 & .2163 & .3394 & .2199 \\ IRe+ & **.3361** & **.2385** & **.4438** & **.2686** & **.2882** & **.2202** & **.3564** & **.2408** \\ \hline CQN & .3343 & .2364 & .4451 & .2850 & .2912 & .2249 & .3512 & .2369 \\ CQN+ & **.3571** & **.2473** & **.4619** & **.3154** & **.3029** & **.2458** & **.3677** & **.2614** \\ \hline \hline \end{tabular} \end{table} TABLE II: Offline performance. Bold denotes the best. “\(\ast\)” denotes statistically significant improvements (_i.e.,_ two-sided t-test with \(p<0.05\)) over the best baseline.

\begin{table} \begin{tabular}{l c c} \hline \hline & _Kaggle_ & _RecSys15_ \\ \hline \#interactions & 195,523 & 200,000 \\ \#items & 70,852 & 26,702 \\ \#clicks & 1,176,680 & 1,110,965 \\ \#purchases & 57,269 & 43,946 \\ \hline \hline \end{tabular} \end{table} TABLE I: Data Statistics.

Fig. 2: Online simulations with VirtualTB.

#### Iv-B2 **Online Performance**: The simulator offers an environment for timely interaction. Figure 2 details the simulation results, where the blue lines ('Expert') are the DDPG expert collectors. Figure 2(a) compares the baselines: we observe that Inv works best, differently from the offline experiments, since it can immediately evaluate the agent with the simulator without further restriction, and its reward is learned by imitating the expert policy from DDPG. All three baselines are improved by the Gumbel regularization, which shows that regularization via consistency remains effective even when free interaction is available, because counterfactual consistency is inherited at the lower interventional (online) and observational (offline) levels [10]. Moreover, this also demonstrates that the Gumbel-Softmax trade-off stays effective even though it sacrifices some theoretical integrity. We also observe larger improvements for CQN and Inv than for IRe; this benefit comes from the implicit consistency on the cumulative valuation, _i.e.,_ the Q-value in CQN and the advantage function in Inv.

As a brief summary, the empirical results verify that \(\mathcal{M}^{\prime}(\theta)\) is **generalizable** and **adaptive**, since all three baselines are improved in both the offline and online experiments. Furthermore, the recursion we use to reduce model complexity and the softmax trade-off both prove effective.

## V Related Works

Collaborative recommendation [23] fails to capture high-order interaction relations. Autoregressive approaches [18] model this relation as evolving sequences. For multiple feedback types, reinforcement recommenders maximize a cumulative valuation as the representative of user satisfaction; existing works cover policy-based methods with distribution correction [3, 7], value-based methods [5, 20] with supervised or adversarial regularization, and actor-critic methods [12] with action space approximation. Typically, the reward functions are predefined. To automate the reward fine-tuning procedure, recent works learn the reward inversely from user feedback [6]. Both kinds of reinforcement agents need correction, since offline collections can be inconsistent with online interaction [16].
In this work, we develop a counterfactual-consistent causal model based on recent developments in causal inference. Causality formalizes the language to analyze model inductive bias [4, 10] and to derive desired properties, _i.e.,_ consistency in this work, as mathematical quantities. Recent research develops causal machine learning methods, _i.e.,_ the neural-causal connection [24], which is the foundation of our method.

## VI Conclusion

In this work, we propose a novel NCM which theoretically achieves consistency on the counterfactual hierarchy. Such consistency is necessary to tackle the survivor effect in recommendation. We implement our model with the major RL methods. Empirical experiments in both offline and online environments demonstrate its adaptivity and generalization. We assume an observable MDP in this study; partially observed MDPs will be explored in future work.

## Acknowledgement

This research is supported by APRC - CityU New Research Initiatives (No.9610565, No.9360163), Hong Kong ITC Fund Project (No.ITS/034/22MS), and SIRG - CityU Strategic Research Grant (No.7020046, No.7020074, No.7005894).
2310.01356
Less is More: Toward Zero-Shot Local Scene Graph Generation via Foundation Models
Humans inherently recognize objects via selective visual perception, transform specific regions from the visual field into structured symbolic knowledge, and reason about their relationships among regions based on the allocation of limited attention resources in line with humans' goals. While it is intuitive for humans, contemporary perception systems falter in extracting structural information due to the intricate cognitive abilities and commonsense knowledge required. To fill this gap, we present a new task called Local Scene Graph Generation. Distinct from the conventional scene graph generation task, which encompasses generating all objects and relationships in an image, our proposed task aims to abstract pertinent structural information with partial objects and their relationships for boosting downstream tasks that demand advanced comprehension and reasoning capabilities. Correspondingly, we introduce zEro-shot Local scEne GrAph geNeraTion (ELEGANT), a framework harnessing foundation models renowned for their powerful perception and commonsense reasoning, where collaboration and information communication among foundation models yield superior outcomes and realize zero-shot local scene graph generation without requiring labeled supervision. Furthermore, we propose a novel open-ended evaluation metric, Entity-level CLIPScorE (ECLIPSE), surpassing previous closed-set evaluation metrics by transcending their limited label space, offering a broader assessment. Experiment results show that our approach markedly outperforms baselines in the open-ended evaluation setting, and it also achieves a significant performance boost of up to 24.58% over prior methods in the closed-set setting, demonstrating the effectiveness and powerful reasoning ability of our proposed framework.
Shu Zhao, Huijuan Xu
2023-10-02T17:19:04Z
http://arxiv.org/abs/2310.01356v1
# Less is More: Toward Zero-Shot Local Scene Graph Generation via Foundation Models

###### Abstract

Humans inherently recognize objects via selective visual perception, transform specific regions from the visual field into structured symbolic knowledge, and reason about their relationships among regions based on the allocation of limited attention resources in line with humans' goals (Folk et al., 1992). While it is intuitive for humans, contemporary perception systems falter in extracting structural information due to the intricate cognitive abilities and commonsense knowledge required. To fill this gap, we present a new task called Local Scene Graph Generation. Distinct from the conventional scene graph generation task, which encompasses generating all objects and relationships in an image, our proposed task aims to abstract pertinent structural information with partial objects and their relationships for boosting downstream tasks that demand advanced comprehension and reasoning capabilities. Correspondingly, we introduce zEro-shot Local scEne GrAph geNeraTion (ELEGANT), a framework harnessing foundation models renowned for their powerful perception and commonsense reasoning, where collaboration and information communication among foundation models yield superior outcomes and realize zero-shot local scene graph generation without requiring labeled supervision. Furthermore, we propose a novel open-ended evaluation metric, Entity-level CLIPScorE (ECLIPSE), surpassing previous closed-set evaluation metrics by transcending their limited label space, offering a broader assessment. Experiment results show that our approach markedly outperforms baselines in the open-ended evaluation setting, and it also achieves a significant performance boost of up to 24.58% over prior methods in the closed-set setting, demonstrating the effectiveness and powerful reasoning ability of our proposed framework.

## 1 Introduction

Human visual perception is adeptly synchronized with ongoing activity, highlighting salient objects and swiftly deducing their relationships in novel contexts. Such cognitive proficiency underpins our intuitive structural information extraction. For instance, when hungry, by identifying a pizza on a plate or inside a microwave, we can easily obtain the structural information (pizza, on, plate) or (pizza, in, microwave) to facilitate subsequent decisions, e.g., getting the pizza from the plate or opening the microwave. This perceptual competence also holds promise for enhancing AI systems' comprehension and reasoning capabilities, as evidenced by advancements in visual question answering (Han et al., 2021; Jiang et al., 2020) and image captioning (Yang et al., 2022; Zhang et al., 2021).

In this paper, we delve into scene graphs - a structured representation of visual scenes wherein entities are graph nodes connected by labeled edges denoting their relationships. Such representations bridge the chasm between raw pixels and semantic comprehension, offering valuable information for diverse computer vision tasks. Despite recent strides in scene graph generation (Zhang et al., 2023; Jung et al., 2023; Kundu & Aakur, 2023; Zheng et al., 2023), translating these advancements into effective scene graph generation tools still presents challenges in novel environments due to the noisy supervision, constrained label space, and long-tailed relationship distribution, hindering the direct applicability of these approaches in downstream tasks (Li et al., 2022; Yao et al., 2021).
Consequently, ground truth scene graphs are often leveraged in downstream tasks (Puig et al., 2021), narrowing their versatility. In light of these challenges, we propose emphasizing select entities and relationships aligned with specific tasks - echoing human cognition. This inspires the introduction of the "local scene graph," a departure from the conventional "global scene graph," as illustrated in Figure 1. Local scene graphs encapsulate task-pertinent entities and relationships. For instance, when performing an instruction such as "obtain the white cup," global scene graph generation approaches recognize all entities and relationships, inevitably introducing cumulative errors detrimental to downstream tasks from individually inaccurate relationship estimations. In contrast, a local scene graph succinctly illuminates the cup's position on a shelf, ignoring entities and relationships unrelated to the current task and streamlining the subsequent task, i.e., unmounting the white cup from the shelf, which aligns with the human cognition process (Folk et al., 1992). Therefore, we introduce a new task named local scene graph generation, which generates a local scene graph according to given entities.

Correspondingly, considering the scarcity of annotated scene graph data, we propose a zEro-shot Local scEne GrAph geNeraTion (ELEGANT) framework harnessing foundation models enriched with perceptual and commonsense reasoning. It emphasizes model synergy, producing superior outcomes and realizing zero-shot local scene graph generation without labeled supervision. Specifically, given a subject selected by humans, an observer model outputs objects associated with the subject. A thinker model then identifies possible relationships, delegating them to a verifier model to validate their correctness. However, simply combining models leads to subpar results due to individual model limitations. Therefore, we advocate a Co-Calibration (CoCa) strategy, calibrating model-specific knowledge through cross-model knowledge exchange to enhance model collaboration. Furthermore, we introduce a novel open-ended evaluation metric, Entity-level CLIPScorE (ECLIPSE), surpassing previous closed-set evaluation metrics by transcending their limited label space and offering a broader assessment.

Figure 1: Local vs. Global Scene Graphs. While global scene graph methods detect all entities (represented by green dashed boxes) and their relationships, we argue that for instructions like "obtain the white cup," a local scene graph, exemplified by (cup, mounted on, shelf) (highlighted by red solid boxes), is more consistent with the human recognition process and adequate for guiding subsequent actions, such as unmounting the white cup from the shelf.

We summarize the main contributions as follows:

* We propose a new task, local scene graph generation, to abstract pertinent structural information with partial objects and relationships, which aligns with human cognition and improves downstream tasks demanding intricate comprehension and reasoning.
* We devise a new framework, zero-shot local scene graph generation, to abstract local scene graphs without labeled supervision, exploiting foundation models renowned for their powerful perception and commonsense reasoning. Moreover, we propose a co-calibration strategy, fostering knowledge exchange to amplify model cooperation.
* We introduce a new open-ended evaluation metric, entity-level CLIPScore, to evaluate our proposed method in an open-ended setting.

## 2 Related Work

**Supervised Scene Graph Generation** has attracted substantial attention within the domain of computer vision due to its pivotal role in bridging the chasm between fundamental low-level visual features and the elevated semantic comprehension of visual scenes, offering a valuable substrate for diverse computer vision tasks (Han et al., 2021; Jiang et al., 2020; Yang et al., 2022b; Zhang et al., 2021). The panorama of existing scene graph generation models encompasses both single-stage and two-stage methodologies. Drawing inspiration from the single-stage object detection model DETR (Carion et al., 2020), single-stage scene graph generation models (Liu et al., 2021; Shit et al., 2022; Li et al., 2022b; Cong et al., 2023; Teng and Wang, 2022) directly prognosticate pair proposals and effectuate scene graph synthesis through learnable queries. In contrast, two-stage methods (Kundu and Aakur, 2023; Wang et al., 2019; Shi et al., 2021; Chen et al., 2019) employ pre-trained object detectors to identify image regions, which serve as nodes in the scene graph. Subsequently, relationships between entities are delineated through relationship classification for edge labeling. To bolster the fidelity of entity representations with contextual insights, models incorporate diverse modules to facilitate the fusion of information across graph nodes and image contexts, including transformers (Vaswani et al., 2017; Li et al., 2022b; Kundu and Aakur, 2023) and graph neural networks (Li et al., 2021; Khademi and Schulte, 2020). However, existing scene graph generation models generate global scene graphs and cannot directly generate local scene graphs given a subject.

**Zero-Shot Scene Graph Generation** marks an innovative direction, abstracting scene graphs without labeled supervision. (Yao et al., 2021) extracts possible (subject, relationship, object) triplets from a knowledge base, Conceptual Captions (Sharma et al., 2018), and exploits a CLIP model (Radford et al., 2021) to verify the correctness of relationship candidates. (Li et al., 2023b), on the other hand, leverages large language models to furnish detailed descriptions of visual cues for relationships, subsequently deploying prompts to a CLIP model for relationship prediction. However, knowledge-base-driven triplets capture only partial relationships, and the scalability of description-generation-based methods falters with the advent of new entities or relationships. Moreover, the constrained label space limits a broader assessment. In this paper, we propose a zero-shot local scene graph generation framework harnessing foundation models to extract and evaluate open-vocabulary local scene graphs.

**Foundation Model Collaboration** has recently received significant attention. The success of large language models (LLMs) (Brown et al., 2020; OpenAI, 2023) has led to employing LLMs as controllers to integrate various foundation models (Bommasani et al., 2021) for multi-modal reasoning. (Zhang et al., 2023a) build a cooperative embodied agent to collaborate with other agents and humans to decompose tasks with LLMs. (Wang et al., 2023) mimic the cognitive synergy in human intelligence using a single LLM acting in different roles that collaborate via prompting. In this paper, we leverage foundation models, renowned for their powerful perception and commonsense reasoning, to obtain local scene graphs.
## 3 Local Scene Graph Generation

### Problem Definition

Given an image and a subject \(s\) in this image, the local scene graph generation task requires models to abstract a local scene graph \(\mathbb{G}=\{\mathbb{E},\mathbb{R}\}\), where the nodes \(\mathbb{E}\) contain the subject \(s\) and the objects \(\mathbb{O}=\{o_{1},o_{2},\cdots\}\) associated with \(s\), and the edges \(\mathbb{R}\) include the relationships between \(s\) and each \(o_{i}\). Compared with the global scene graph generation task, which generates all the entities and relationships in an image, our proposed local scene graph generation task fixes the subject as \(s\) and only extracts objects and relationships associated with \(s\).

### Zero-Shot Local Scene Graph Generation

We present a zEro-shot Local scEne GrAph geNeraTion (ELEGANT) framework harnessing foundation models renowned for their powerful perception and commonsense reasoning, where collaboration and information communication among foundation models yield superior outcomes and realize zero-shot local scene graph generation without requiring labeled supervision. It comprises three procedures: (1) Perception: detect open-vocabulary objects related to the specified subject. (2) Reasoning: deduce open-vocabulary relationships between the subject and the detected objects. (3) Verification: ascertain the validity of the triplet candidates. Figure 2 shows the pipeline. The subject is determined by various control signals provided by humans, including points, boxes, sentences, etc. It is worth noting that ELEGANT's modular nature allows swapping any internal model for a more potent counterpart to enhance performance.

**Perception.** We employ an observer model designed to discern open-vocabulary objects within the image. Although any suitable detection or segmentation model fits into the framework, we lean on the Segment Anything Model (SAM) (Kirillov et al., 2023), an open-vocabulary segmentation model with impressive zero-shot transferability, benefiting from promptable pre-training on a large dataset. However, SAM is a class-agnostic model and lacks the capability to produce the symbolic semantic labels needed by the following reasoning stage. Recent endeavors have sought to infuse SAM with semantic information, and we utilize GroundedSAM (Liu et al., 2023) as the observer model in this paper.

**Reasoning.** After obtaining the subject and objects, potential relationships are inferred by leveraging commonsense knowledge. GPT-like LLMs (Ouyang et al., 2022; OpenAI, 2023; Anil et al., 2023) have recently emerged as powerful reasoners, showcasing profound reasoning ability and commonsense knowledge when prompted suitably (You et al., 2023; Zhu et al., 2023). Therefore, we use an LLM as a thinker model and prompt it as a commonsense reasoner, subsequently yielding relationship triplet candidates. We detail the specific prompts in Appendix A.1.

**Verification.** Although LLMs have powerful commonsense knowledge, they cannot directly receive visual information, leading to inevitable bias. We introduce a verifier model, bridging the visual and linguistic realms, to check whether these triplet candidates are correct.

Figure 2: Pipeline of the ELEGANT Framework. For a given image and a user-specified entity as the subject, the observer identifies associated entities as objects. The thinker, equipped with robust reasoning and rich commonsense knowledge, proposes relationship candidates. Subsequent validation by the verifier ensures their relevance. The CoCa strategy then refines these results, enhancing prediction quality.
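The three procedures compose as in the sketch below; the wrapper objects (`observer`, `thinker`, `verifier`) and their method names are hypothetical stand-ins for GroundedSAM, the LLM, and BLIP2, not an actual API.

```python
def elegant(image, subject, observer, thinker, verifier):
    """Zero-shot local scene graph generation for one user-specified subject."""
    objects = observer.detect(image)                  # Perception (e.g., GroundedSAM)
    triplets = []
    for obj in objects:
        for rel in thinker.propose_relations(subject, obj):  # Reasoning (LLM)
            # Verification: a yes/no query matching the verifier's pre-training.
            question = f"Question: is the {subject} {rel} the {obj}?"
            if verifier.answer_yes_no(image, question):
                triplets.append((subject, rel, obj))
            # Rejected candidates are re-examined by the CoCa strategy, described next.
    return triplets
```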
Directly verifying triplets, however, is fraught with challenges. For instance, posing the query "Does the image contain ({subject}, {relationship}, {object})?" to the verifier yields unsatisfactory results due to its inherent model limitations. To circumvent this, we devise a structured query, "Question: is the {subject} {relationship} the {object}?", and ask the verifier to answer a yes/no question, which is consistent with the pre-training tasks of the verifier and yields enhanced results. Still, the semantic divergence between models (Enser & Sandom, 2003) might result in the verifier misconstruing the thinker's commonsense knowledge; as illustrated in Figure 2, the verifier gives a wrong answer about the triplet "(person, riding, elephant)." To ground the knowledge in the verifier with the powerful reasoner, we introduce a Co-Calibration (CoCa) strategy, where the reasoner, mimicking a teacher, enables the verifier, as a student, to self-diagnose errors. Specifically, we ask the verifier to provide its rationale when negating a relationship. Then, we check whether the rationale given by the verifier can be grounded in the commonsense knowledge provided by the reasoner: "Can we infer {A} from {B}?", where {A} and {B} are structured triplets obtained by the reasoner and verifier, respectively. Because LLMs cannot access visual information directly, this may introduce bias. If the answer is still no, the triplet is probably wrong, and we discard it. If the answer is yes, the knowledge is calibrated, and we keep the triplet. The detailed prompts are listed in Appendix A.1.

### Open-Ended Evaluation Metric

Our proposed framework can predict open-vocabulary objects and relationships. However, vanilla scene graph evaluation metrics, confined to a fixed label space, tend to overlook or exclude entities and relationships outside the prescribed vocabulary, hampering a comprehensive scene graph evaluation. Recently, CLIPScore (Hessel et al., 2021) emerged as a promising open-ended evaluation technique, harnessing the CLIP model to offer a robust automatic evaluation for image captioning tasks. CLIPScore encodes both image and text features and calculates a similarity score between the two modalities to represent their relevance:

\[\mathrm{CLIPScore}(\mathbf{I},\mathbf{C})=\max(100*\cos(\mathbf{E_{I}},\mathbf{E_{C}}),0), \tag{1}\]

where \(\mathbf{I}\) is the image; \(\mathbf{C}\) is the caption; \(\mathbf{E_{I}}\) is the image feature extracted from the vision encoder of the CLIP model; \(\mathbf{E_{C}}\) is the text feature extracted from the text encoder of the CLIP model; and \(\cos(\cdot)\) is the cosine similarity. To tailor this evaluation metric for local scene graph generation, for each triplet \(\mathbf{C}_{i}=(s,r_{i},o_{i})\), where \(r_{i}\in\mathbb{R}\) and \(o_{i}\in\mathbb{O}\) within a local scene graph \(\mathbb{G}\), we obscure the background to derive a masked image \(\mathbf{I}_{i}\) based on the bounding boxes of \(s\) and \(o_{i}\). Meanwhile, we rewrite the triplet as "The {\(s\)} is {\(r_{i}\)} the {\(o_{i}\)}." Subsequently, the CLIPScore for the triplet is computed as \(\mathrm{CLIPScore}(\mathbf{I}_{i},\mathbf{C}_{i})\).
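A sketch of this per-triplet score using the Hugging Face CLIP interface; the background-masking step is elided, and the checkpoint name is only an example.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def triplet_clipscore(masked_image, s, r, o):
    caption = f"The {s} is {r} the {o}."
    inputs = processor(text=[caption], images=masked_image, return_tensors="pt")
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    cos = torch.cosine_similarity(img, txt).item()
    return max(100.0 * cos, 0.0)   # Eq. (1)
```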
By calculating and averaging the CLIPScores of all triplets in \(\mathbb{G}\), we derive the cumulative CLIPScore for \(\mathbb{G}\):

\[\mathrm{CLIPScore}(\mathbb{G})=\frac{1}{|\mathbb{G}|}\sum_{i=1}^{|\mathbb{G}|}\mathrm{CLIPScore}(\mathbf{I}_{i},\mathbf{C}_{i}), \tag{2}\]

where \(|\mathbb{G}|\) is the number of triplets in \(\mathbb{G}\). While CLIPScore offers evaluation in the open-ended setting, its original design caters to image-level evaluation and ignores the importance of prediction length in our task. For instance, model A, which outputs only its single most confident prediction, might achieve a superior score over model B, which offers three predictions. Additionally, cases where a model produces synonyms of a relationship such as "inside," e.g., "in" and "inner," might attain high CLIPScores but fail to offer novel semantic insight (Li et al., 2022). In light of this, we craft a penalty function that targets predictions that are either overly brief or elongated, with a heightened penalty for the former, advocating the generation of richer triplets. While numerous function forms are plausible, we initiate our exploration with the log barrier function (Murray & Wright, 1994):

\[y=x-\mu\log(x-1), \tag{3}\]

where \(\mu>0\) is a scalar. Because the log barrier function is convex, the minimum value is \(y^{*}=\mu-\mu\log(\mu)+1\), attained at \(x=\mu+1\). We then shift equation 3 so that this minimal value lies at \(0\):

\[y^{{}^{\prime}}=y-y^{*}. \tag{4}\]

Given that the range of \(y^{{}^{\prime}}\) is \([0,+\infty)\), we map it to \([0,1]\) by applying \(\exp(-x)\), together with an \(\alpha\) parameter that controls the penalty strength and enhances the flexibility of the evaluation metric:

\[y^{{}^{\prime\prime}}=\exp(-\alpha y^{{}^{\prime}}). \tag{5}\]

Nevertheless, one challenge remains: the variability in penalty scores across diverse prediction lengths makes it difficult to evaluate models fairly (Papineni et al., 2002). Therefore, we compute the average prediction length \(m^{*}\) across the dataset and set \(\mu=m^{*}-1\). The penalty function is:

\[\mathrm{P}(x)=\exp\left(-\alpha\left(x+(m^{*}-1)\log\frac{m^{*}-1}{x-1}-m^{*}\right)\right), \tag{6}\]

where \(x\) is the length of the prediction. A plot of the function is shown in Appendix A.2. Finally, we present our novel open-ended evaluation metric, Entity-level CLIPScorE (ECLIPSE):

\[\mathrm{ECLIPSE}(\mathbb{G})=P(|\mathbb{G}|)\frac{1}{|\mathbb{G}|}\sum_{i=1}^{|\mathbb{G}|}\mathrm{CLIPScore}(\mathbf{I}_{i},\mathbf{C}_{i}), \tag{7}\]

where \(-\frac{\mathrm{dP}}{\mathrm{d}|\mathbb{G}|}\Big{|}_{|\mathbb{G}|\to 1}>-\frac{\mathrm{dP}}{\mathrm{d}|\mathbb{G}|}\Big{|}_{|\mathbb{G}|\to+\infty}\); shorter predictions therefore receive a larger penalty than longer ones.

## 4 Experiments

### Experiment Setup

**Datasets.** To evaluate our method, we utilize 1) Visual Genome (Krishna et al., 2017), which contains \(26,443\) images for testing, each manually annotated with entities and relationships, and 2) GQA (Hudson and Manning, 2019), which contains \(8,208\) images for testing, whose split is provided by (Li et al., 2023).
To ensure consistent benchmarking against prior zero-shot scene graph generation works (Yao et al., 2021; Li et al., 2023), we also conduct experiments within a closed-set paradigm on the Visual Genome dataset, where (Yao et al., 2021) removes hypernyms and redundant synonyms from the most frequent \(50\) relation categories, resulting in \(20\) well-defined relation categories, and (Li et al., 2023) adopts the \(24\) semantic relationship classes.

**Evaluation Metrics.** We iteratively select each entity in an image as the subject and generate a local scene graph, assembling them into a global scene graph for evaluation. We conduct experiments in both open-ended and closed-set settings. For the open-ended setting, our proposed ECLIPSE is reported. For the closed-set setting, we report Recall@K (R@K), which indicates the proportion of ground truths that appear among the top-K confident predictions, and Mean Recall@K (mR@K), which averages R@K over categories.

**Implementation Details.** Our observer model is GroundedSAM (Liu et al., 2023), an open-vocabulary segmentation model with semantic information. The thinker model is based on GPT-3.5-Turbo (OpenAI, 2023), a large language model with impressive reasoning skills and commonsense knowledge. For the verifier model, we deploy BLIP2 (Li et al., 2023a), a pre-trained vision-language model.

### Open-Ended Local Scene Graph Generation Evaluation

We introduce diverse baselines to evaluate the efficiency of our proposed framework. Table 1 illustrates the results on the Visual Genome dataset. The results on the GQA dataset are shown in Appendix A.3.

**Observer.** We employ an open-vocabulary detector (GroundedSAM (Liu et al., 2023)) and a closed-set detector (FasterRCNN (Ren et al., 2015), widely utilized in previous approaches) to demonstrate the effect of perception performance. GroundedSAM markedly outperforms FasterRCNN, attributed to its open-vocabulary perception capabilities, whereas FasterRCNN can only recognize objects defined in a fixed label space. A comparative analysis reveals that our method identifies approximately \(4\)x more object categories than previous closed-set methods, as illustrated in Figure 3 (a).

**Thinker.** The results show that the OPT models (Zhang et al., 2022) do not perform well due to limited reasoning ability. In comparison, LLaMA2 (Touvron et al., 2023) and Vicuna (Zheng et al., 2023b) show comparable performance thanks to Instruction Tuning (Longpre et al., 2023). GPT-3.5-Turbo (OpenAI, 2023) exhibits superior performance, attributable to its enhanced reasoning capabilities. Compared with closed-set methods, our method produces about \(25\)x more relationship categories, as shown in Figure 3 (b).

**Verifier.** As the verifier of relationship candidates, we explore various variants of BLIP2, a visual question answering model trained on large datasets. BLIP2 OPT is based on the unsupervised-trained OPT model family (Zhang et al., 2022) for decoder-based LLMs, and BLIP2 FlanT5 is based on the instruction-trained FlanT5 model family for encoder-decoder-based LLMs. The results show that BLIP2 OPT achieves higher performance than BLIP2 FlanT5. From Figure 3 (c), compared with VisualDS, our approach produces around \(7\)x more triplet categories.
### Comparison with Closed-Set Zero-Shot Scene Graph Generation Methods

To assess the commonsense reasoning capability of our proposed method, we benchmark it against two zero-shot scene graph generation approaches: VisualDS (Yao et al., 2021) and RECODE (Li et al., 2023b). VisualDS crafts scene graphs by mining relationship candidates from knowledge bases, subsequently validated via the CLIP model (Radford et al., 2021). Furthermore, they employ the predicted scene graphs as pseudo labels for supervised scene graph generation model training. Our comparison focuses on the first phase of VisualDS, and it is worth noting that the scene graphs we generate can similarly be harnessed for supervised training.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{3}{c}{Model} & ECLIPSE \\ \hline Observer & Thinker & Verifier & \\ \hline Faster RCNN & GPT-3.5-Turbo & BLIP2 OPT 6.7B & 19.31 \\ \hline GroundedSAM & OPT 2.7B & BLIP2 OPT 6.7B & 0.09 \\ GroundedSAM & OPT 6.7B & BLIP2 OPT 6.7B & 0.16 \\ GroundedSAM & LLaMA2 7B & BLIP2 OPT 6.7B & 19.01 \\ GroundedSAM & Vicuna 7B & BLIP2 OPT 6.7B & 20.41 \\ \hline GroundedSAM & GPT-3.5-Turbo & BLIP2 FlanT5 XL & 20.97 \\ GroundedSAM & GPT-3.5-Turbo & BLIP2 FlanT5 XXL & 21.20 \\ GroundedSAM & GPT-3.5-Turbo & BLIP2 OPT 2.7B & 21.50 \\ \hline **GroundedSAM** & **GPT-3.5-Turbo** & **BLIP2 OPT 6.7B** & **21.54** \\ \hline \hline \end{tabular} \end{table} Table 1: Open-ended evaluation results on the test set of the Visual Genome dataset. The parameter \(\alpha\) is set to \(0.01\).

Figure 3: Assessing Prediction Diversity. (a) Number of entity categories. (b) Number of relationship categories. (c) Number of predicted triplets.

For a fair comparison, we adopt the ground truth object detections, control the generation of relationship candidates by prompts, and filter out relationships that do not exist in the relationship label space. As our approach is a local scene graph generation framework, we iteratively run our method on all objects to obtain a global scene graph. The results are shown in Table 2.

From Table 2, our approach significantly improves performance, by up to \(24.58\%\) in Recall. In contrast to VisualDS, which sources relationship candidates from a static knowledge base, our method leverages the vast commonsense reasoning of large language models (LLMs) pretrained on expansive datasets. Furthermore, the in-context learning ability (Wei et al., 2022) provides a powerful and effective way to receive context about the world. Prompts encompassing entities like "cup, oven, stove, refrigerator" hint at a kitchen scene, and LLMs might consequently produce scene-specific relationships. Meanwhile, RECODE employs LLMs to generate detailed descriptions for relationship triplets. However, it still needs to pre-define a relationship label space and generate a description for each triplet, which is time-consuming and cannot easily scale up to new entities and relationships. Our approach utilizes LLMs to generate relationship candidates, which is efficient and can effortlessly deal with new environments.

### Effectiveness of CoCa Strategy

To demonstrate the effectiveness of our proposed CoCa strategy, we compare our approach with a variant devoid of the CoCa strategy. The results are shown in Table 3.
\begin{table} \begin{tabular}{l r r r r r r r} \hline \hline Method & \#Rel & R@10 & R@20 & R@50 & mR@10 & mR@20 & mR@50 \\ \hline VisualDS Yao et al. (2021) & 20 & 27.72 & 33.22 & 38.21 & 16.32 & 20.49 & 24.94 \\ Ours & 20 & **30.27** & **36.80** & **41.04** & **21.21** & **26.11** & **29.78** \\ \hline RECODE Li et al. (2023b) & 24 & - & 10.60 & 18.30 & - & 10.70 & 18.70 \\ Ours & 24 & **28.14** & **35.18** & **38.87** & **39.51** & **16.54** & **21.39** \\ \hline \hline \end{tabular} \end{table} Table 2: Closed-set evaluation results on the test set of the Visual Genome dataset. Note the differences in the experimental setup between VisualDS and RECODE concerning the number of relationship categories (\#Rel).

\begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline Method & R@10 & R@20 & mR@10 & mR@20 & E@\(*\) & E@\(*\) & E@\(*\) & \#Triplets \\ \hline Ours & **30.27** & **36.80** & **21.21** & **26.11** & **17.63** & **20.39** & **20.78** & **12120** \\ - Co-Calibration & 23.64 & 31.96 & 18.78 & 25.89 & 14.86 & 16.51 & 16.73 & 7108 \\ \hline \hline \end{tabular} \end{table} Table 3: The effectiveness of the CoCa strategy on the test set of the Visual Genome dataset. E@\(*\) denotes ECLIPSE when the parameter \(\alpha\) is set as \(*\) (three settings of \(\alpha\) are reported).

In the absence of the CoCa strategy, there is a marked decline in performance, coupled with a significant reduction in predicted triplet counts, indicating that \(5012\) triplets are initially recognized as negative samples by the verifier and rectified by the CoCa strategy.

### Qualitative Results

Figure 4 shows the qualitative results. The red dashed box is the subject, and the green solid boxes denote objects. The results demonstrate the effectiveness of our proposed method. The first example is evaluated in the open-vocabulary setting, and the other images are evaluated in the closed-set setting. The open-vocabulary detector can generate various multi-grained entities, e.g., hat, pants, shoes, whereas the closed-set detector can only give coarse-grained entities, e.g., child. Consequently, given more diverse entities, our method can produce a significantly larger number of triplets.

### Local Scene Graphs for Downstream Tasks

To assess the utility of local scene graphs, we incorporate them into the Visual Question Answering task. We evaluate our approach on the GQA testdev set (Hudson & Manning, 2019), selecting a random subset of \(1,000\) samples. To derive local scene graphs pertinent to a given query, spaCy\({}^{1}\) is employed to extract nouns from the question, and the nouns then serve as subjects for creating local scene graphs via ELEGANT. Our experiments leverage the BLIP2 FlanT5 XL model (Li et al., 2023a). For a given local scene graph \(\mathbb{G}\), we utilize the template "Context: {G}. Question: {Q}, Short Answer:" as the prompt, where Q is the question and G consists of triplets denoted as "A {s} is {\(r_{i}\)} a {\(o_{i}\)}." For comparison, global scene graphs are also produced. Notably, scene graphs are integrated directly into the BLIP2 model via prompting, so we do not train or fine-tune the BLIP2 model. We report the accuracy, and the results are shown in Table 4.

Footnote 1: [https://spacy.io/](https://spacy.io/)

From Table 4, both local and global scene graphs enhance the VQA task's performance, underscoring the value of scene graphs for downstream tasks that demand intricate reasoning and commonsense knowledge.
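For concreteness, the prompting step described above can be sketched as follows (our own illustration; the helper names are not from the paper, and the spaCy call assumes the standard `en_core_web_sm` model):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def question_subjects(question):
    """Nouns extracted from the question serve as subjects for local scene graphs."""
    return [tok.text for tok in nlp(question) if tok.pos_ == "NOUN"]

def build_vqa_prompt(question, triplets):
    """Verbalize a local scene graph as "A {s} is {r} a {o}." sentences and
    wrap it in the template "Context: {G}. Question: {Q}, Short Answer:"."""
    g = " ".join(f"A {s} is {r} a {o}." for s, r, o in triplets)
    return f"Context: {g} Question: {question}, Short Answer:"

# e.g. build_vqa_prompt("What is the child wearing?",
#                       [("child", "wearing", "hat"), ("child", "holding", "shoe")])
```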
The results also suggest that local scene graphs, crafted in a task-specific manner, can offer richer commonsense knowledge insights. Moreover, the results demonstrate that models with superior reasoning capabilities yield improved results.

## 5 Conclusion

We introduce a novel task, Local Scene Graph Generation, to effectively abstract relevant structural information from partial objects. This task narrows the gap between human cognitive processes and AI perception systems. To address it, we present a scalable framework, termed ELEGANT, which leverages collaboration among foundation models to realize zero-shot local scene graph generation. Additionally, we propose a new open-ended evaluation metric, ECLIPSE, surpassing the limitations of previous closed-set metrics with their limited label spaces. Experimental results show that our approach outperforms baselines in the open-ended evaluation setting and significantly outperforms prior methods in the closed-set setting. Moreover, the predicted local scene graphs can significantly improve intricate comprehension and reasoning in downstream tasks.

\begin{table} \begin{tabular}{c c c c} \hline \hline Method & Thinker & Scene Graph & ACC \\ \hline Baseline & - & - & 50.4 \\ \hline Ours & Vicuna & global & 51.9 \\ Ours & Vicuna & local & **55.4** \\ Ours & GPT-3.5-Turbo & global & 54.2 \\ Ours & GPT-3.5-Turbo & local & **58.3** \\ \hline \hline \end{tabular} \end{table} Table 4: Effectiveness of local scene graphs in VQA tasks.

Figure 4: Visualization of Local Scene Graph Generation. Subjects are depicted with red text and dashed boxes, while objects are highlighted with green text and solid boxes. Relationships are denoted in black text.
2308.13518
Holographic Euclidean thermal correlator
In this paper, we compute holographic Euclidean thermal correlators of the stress tensor and $U(1)$ current from the AdS planar black hole. To this end, we set up perturbative boundary value problems for Einstein's gravity and Maxwell theory in the spirit of Gubser-Klebanov-Polyakov-Witten, with appropriate gauge fixing and regularity boundary conditions at the horizon of the black hole. The linearized Einstein equation and Maxwell equation in the black hole background are related to the Heun equation of degenerate local monodromy. Leveraging the connection relation of local solutions of the Heun equation, we partly solve the boundary value problem and obtain exact two-point thermal correlators for $U(1)$ current and stress tensor in the scalar and shear channels.
Song He, Yi Li
2023-08-25T17:54:17Z
http://arxiv.org/abs/2308.13518v3
# Holographic Euclidean thermal correlator

###### Abstract

In this paper, we compute holographic Euclidean thermal correlators of the stress tensor and \(U(1)\) current from the AdS planar black hole. To this end, we set up perturbative boundary value problems for Einstein's gravity and Maxwell theory in the spirit of Gubser-Klebanov-Polyakov-Witten, with appropriate gauge fixing and regularity boundary conditions at the horizon of the black hole. The linearized Einstein equation and Maxwell equation in the black hole background are related to the Heun equation of degenerate local monodromy. Leveraging the connection relation of local solutions of the Heun equation, we partly solve the boundary value problem and obtain exact two-point thermal correlators for \(U(1)\) current and stress tensor in the scalar and shear channels.

ArXiv ePrint: 2308.13518

## 1 Introduction

As an embodiment of the holographic principle [1; 2], the Anti-de Sitter gravity/conformal field theory (AdS/CFT) correspondence [3; 4; 5] establishes a connection between a quantum gravity theory in AdS space and a conformal field theory on the boundary. This equivalence is encapsulated in the Gubser-Klebanov-Polyakov-Witten (GKPW) relation, where the partition function of the conformal field theory with operator sources equals the gravity partition function with prescribed boundary conditions \[\langle e^{\int\phi_{0}O}\rangle_{CFT}=Z_{\rm G}[\phi_{0}] \tag{1}\] In the most useful limit for exploiting this correspondence, the classical gravity on-shell action becomes the generating functional of connected correlators of the strongly-coupled CFT \[I_{CFT}[\phi_{0}]=I_{\rm G,on-shell}[\phi_{0}] \tag{2}\] Correlators are computed by functional differentiation of the generating functional, which amounts to solving the perturbative boundary value problem for the bulk fields' equation of motion. This involves varying the boundary value of the bulk fields and solving for the corresponding variation of the on-shell configuration in the bulk. The near-boundary behavior is well-established, allowing the extraction of holographic correlators [6; 7; 8; 9]. However, solving the global boundary value problem is generally intricate, exemplified in cases like pure gravity [10]. Although the prescription is clear, explicit computation of holographic Euclidean correlators in the GKPW approach has been limited to pure AdS space and its quotient spaces, such as thermal AdS where the method of images can be applied (e.g., see [11] for thermal bootstrap emphasis). In our prior work [12], we computed holographic torus correlators of the stress tensor. This study focuses on Euclidean thermal two-point correlators of the stress tensor and \(U(1)\) current in four-dimensional CFTs. Beyond the Hawking-Page transition [13], the thermal state holographically corresponds to a five-dimensional Euclidean AdS planar black hole [14]. Correlators are derived by solving perturbative boundary value problems in Einstein's gravity and Maxwell theory for the \(U(1)\) gauge field in the black hole background. Two important steps are involved. The first is to appropriately fix the gauge and impose regularity boundary conditions at the horizon, ensuring a unique solution. The second step identifies the equations of motion as the Heun equation [15], and solves the boundary value problems with the connection relation of local solutions.
The general connection relation was established in [16], and it has been applied to exact thermal correlators in Minkowski signature [17; 18], and employed in various black hole perturbation problems [19; 20; 21; 22; 23; 24]. In our case, the Heun equations feature degenerate local monodromy, with characteristic exponents differing by an integer. We compute the connection relation by taking a limit of the generic case. Ultimately, we obtain exact two-point correlators for the \(U(1)\) current and stress tensor in the scalar and shear channels (as defined in [25]). Thermal two-point correlators, also known as thermal spectral functions, have many important applications and have been studied in [25] using gauge invariants in each channel. In the final discussion section, we comment on our approach to holographic computation and relevant applications of thermal two-point correlators. ## 2 Holographic setup We start by reviewing the basics of holographic computation independent of the bulk background geometry. For Einstein's gravity, it's customary to work in the Fefferman-Graham gauge [6; 26] near the conformal boundary \[ds^{2}=\frac{dr^{2}}{r^{2}}+\frac{1}{r^{2}}\mathbf{g}_{ij}(r,x)dx^{i}dx^{j} \tag{1}\] and in dimension four, we have the series expansion \[\mathbf{g}_{ij}=\mathbf{g}_{ij}^{(0)}+r^{2}\mathbf{g}_{ij}^{(2)}+r^{4} \mathbf{g}_{ij}^{(4)}+r^{4}\log r\mathbf{h}_{ij}^{(4)}+\ldots \tag{2}\] The background metric of the holographic field theory \(\gamma_{ij}\) corresponds to \(\mathbf{g}_{ij}^{(0)}\), and the one-point correlator of the stress tensor, with appropriate renormalization, is given by [7; 27; 28] \[\langle T_{ij}\rangle=\frac{4}{16\pi G}\big{[}\mathbf{g}_{ij}^{(4)}-\frac{1}{ 8}\mathbf{g}_{ij}^{(0)}(\mathbf{P}^{(0)2}-\mathbf{P}_{kl}^{(0)}\mathbf{P}^{(0 )kl})-\frac{1}{2}\mathbf{P}_{ik}^{(0)}\mathbf{P}_{j}^{(0)k}+\frac{1}{4} \mathbf{P}^{(0)}\mathbf{P}_{ij}^{(0)}\big{]} \tag{3}\] where \(\mathbf{P}_{ij}^{(0)}\) is the Schouten tensor of \(\mathbf{g}_{ij}^{(0)}\). In our case, the holographic field theory lives on a flat background, so the terms of Schouten tensor do not contribute. We have \[\langle T_{ij}\rangle=\frac{1}{4\pi G}\mathbf{g}_{ij}^{(4)} \tag{4}\] The Einstein equation near the conformal boundary determines the series (2) in terms of \(\mathbf{g}_{ij}^{(0)}\) and \(\mathbf{g}_{ij}^{(4)}\) or equivalently the one point correlator \(\langle T_{ij}\rangle\), and imposes holographic Ward identities of conservation and the Weyl anomaly on \(\langle T_{ij}\rangle\). Near boundary solutions of the Einstein equation are in one-to-one correspondence to the pair \((\gamma_{ij},\langle T_{ij}\rangle)\). The global geometry of the bulk spacetime fully determines the one-point correlator as a functional of the boundary metric, from which we can compute multi-point correlators by functional differentiation. 
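Concretely, once \(\langle T_{ij}\rangle\) is known as a functional of the boundary metric, the thermal two-point functions computed below follow from one further variation around the flat background: schematically, up to convention-dependent normalization that we do not track here, \[\langle T_{ij}(x)\,T_{kl}(y)\rangle\;\propto\;\frac{\delta\langle T_{ij}(x)\rangle}{\delta\gamma_{kl}(y)}\big{|}_{\gamma_{ij}=\delta_{ij}}\] so it suffices to solve the linearized bulk equations for a source \(\delta\gamma_{ij}\) and read off the linear response in \(\mathbf{g}_{ij}^{(4)}\).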
Similarly, for the \(U(1)\) gauge field, we can put it in the radial gauge near the conformal boundary using the Fefferman-Graham coordinates of the bulk metric \[A={\bf A}_{i}(r,x)dx^{i} \tag{5}\] with the series expansion \[{\bf A}_{i}={\bf A}_{i}^{(0)}+r^{2}{\bf A}_{i}^{(2)}+r^{2}\log r{\bf B}_{i}^{(2)}+\ldots \tag{6}\] The one-point correlator with appropriate renormalization is given by \[\langle J_{i}\rangle=-2{\bf A}_{i}^{(2)} \tag{7}\] The Maxwell equation near the conformal boundary determines the series (6) in terms of \({\bf A}_{i}^{(0)}\) and \({\bf A}_{i}^{(2)}\) or equivalently the one-point correlator \(\langle J_{i}\rangle\), and imposes the holographic Ward identity of conservation on \(\langle J_{i}\rangle\). The global geometry of the bulk spacetime fully determines \(\langle J_{i}\rangle\). Now we specialize to the holographic correlators from the five-dimensional AdS planar black hole. The black hole is a solid cylinder \(\mathbb{B}^{2}\times\mathbb{R}^{3}\) with the metric \[ds^{2}=\frac{1}{\rho^{2}}[(1-\frac{\rho^{4}}{\rho_{0}^{4}})^{-1}d\rho^{2}+(1-\frac{\rho^{4}}{\rho_{0}^{4}})dt^{2}+d\vec{x}^{2}] \tag{8}\] The period of Euclidean time \(t\), namely the inverse temperature, is \(\beta=\pi\rho_{0}\). The conformal boundary is at \(\rho=0\), and the horizon is at \(\rho=\rho_{0}\), being the \(\mathbb{R}^{3}\) axis of the cylinder. The standard Fefferman-Graham radial coordinate \(r\) is related to \(\rho\) by \[\rho=\frac{r}{\sqrt{1+\frac{r^{4}}{4\rho_{0}^{4}}}} \tag{9}\] For simplicity, we set \(\rho_{0}=1\) in the metric, effectively working in the unit of \(\rho_{0}\), and we will recover the \(\rho_{0}\) dependence when final results are obtained. As a convention, we label bulk spacetime coordinate indices by Greek letters \(\mu,\nu,\rho,\ldots\), boundary spacetime coordinate indices by Latin letters \(i,j,k,\ldots\), and boundary space indices by \(a,b,c,\ldots\).

## 3 \(U(1)\) current

Now, we work on the boundary value problem of the \(U(1)\) gauge field, beginning with gauge-fixing. We can put it in the radial gauge \(A_{\rho}=0\) in the region \(0\leq\rho<1\) (excluding the horizon) by a \(U(1)\) gauge transformation. For a global solution, its restriction to the region \(0\leq\rho<1\) must have a regular limit going to the horizon \(\rho=1\). Therefore, we formulate the boundary value problem in the radial gauge, with the boundary condition that the solution has a regular limit as \(\rho\to 1\) after a gauge transformation. To work out the explicit form of the boundary condition, we introduce the "cylindrical radial coordinate" \(\mathfrak{s}\) \[\cosh 2\mathfrak{s}=\frac{1}{\rho^{2}} \tag{10}\]
That is, we have \[A=\mathbf{A}_{i}dx^{i} \tag{13}\] and there exists a \(U(1)\) gauge transformation \(\Lambda\), such that \[\lim_{\mathfrak{s}\to 0}A+d\Lambda=A_{X}^{*}(\vec{x})dX+A_{Y}^{*}(\vec{x})dY+A_{a }^{*}(\vec{x})dx^{a} \tag{14}\] The components on the right-hand side can only depend on \(\vec{x}\) because the \(t\)-circle shrinks to a point as \(\mathfrak{s}\to 0\). We find \[\lim_{\mathfrak{s}\to 0}\partial_{\mathfrak{s}}\Lambda=A_{X}^{*}( \vec{x})\cos 2t+A_{Y}^{*}(\vec{x})\sin 2t \tag{15}\] \[\lim_{\mathfrak{s}\to 0}\frac{\mathbf{A}_{t}+\partial_{t} \Lambda}{\mathfrak{s}}=-2A_{X}^{*}(\vec{x})\sin 2t+2A_{Y}^{*}(\vec{x})\cos 2t\] (16) \[\lim_{\mathfrak{s}\to 0}\mathbf{A}_{a}+\partial_{a} \Lambda=A_{a}^{*}(\vec{x}) \tag{17}\] From (15) we see \(\Lambda\) can be approximated as a linear function of \(\mathfrak{s}\) as \(\mathfrak{s}\to 0\) (or \(\rho\to 1\)), then from (17) we know \[\lim_{\rho\to 1}\mathbf{A}_{a}\;\text{exists} \tag{18}\] In addition, by integrating (16) over \(t\) we find \[\int_{0}^{\pi}dt\mathbf{A}_{t}|_{\rho=1}=0 \tag{19}\] This gauge fixing and regularity boundary conditions at the horizon, together with the boundary value \[\mathbf{A}_{i}|_{\rho=0}=\mathcal{A}_{i} \tag{20}\] as a turned-on source on the CFT side, determine a unique solution to the Maxwell equation as we will see. We utilize the translational symmetry in \(t,\vec{x}\) direction and work with Fourier modes \(\tilde{\mathbf{A}}_{i}\) with Matsubara frequency \(\omega=2m,m\in\mathbb{Z}\) and spatial momentum \(\vec{p}\). For simplicity, we also rotate the spatial momentum to the \(x^{1}\) direction. The Maxwell equation \[d*F=0 \tag{20}\] then decouples to the transverse channel for \(\tilde{\mathbf{A}}_{2},\tilde{\mathbf{A}}_{3}\) and the longitudinal channel for \(\tilde{\mathbf{A}}_{t},\tilde{\mathbf{A}}_{1}\). For the transverse component \(\tilde{\mathbf{A}}_{2}\) (and the same for \(\tilde{\mathbf{A}}_{3}\)) we have \[(\partial_{z}^{2}-\frac{2z}{1-z^{2}}\partial_{z}-\frac{4m^{2}+p^{2}(1-z^{2})}{ 4z(1-z^{2})^{2}})\tilde{\mathbf{A}}_{2}=0 \tag{21}\] where we used the convenient coordinate \(z=\rho^{2}\). This is an ordinary differential equation with four regular singularities \(z=0,1,-1,\infty\). By the substitution \(\tilde{\mathbf{A}}_{2}(z)=(1-z^{2})^{-\frac{1}{2}}w(z)\), we get a Heun equation in the normal form for \(w(z)\) \[(\partial_{z}^{2}+\frac{\frac{1}{4}-(\frac{1}{2})^{2}}{z^{2}}+ \frac{\frac{1}{4}-(\frac{m}{2})^{2}}{(z-1)^{2}}+\frac{\frac{1}{4}-(\frac{m}{2} )^{2}}{(z+1)^{2}}+\frac{p^{2}+4m^{2}-2}{8z(z-1)}-\frac{p^{2}+4m^{2}+2}{8z(z+1) })w(z)=0 \tag{22}\] with the Heun equation parameters \[t=-1,a_{0}=\frac{1}{2},a_{1}=\frac{|m|}{2},a_{t}=\frac{m}{2}i,a_{\infty}=\frac {1}{2},u=-\frac{p^{2}+4m^{2}+2}{8} \tag{23}\] We refer the readers to Appendix A for a brief review of Fuchsian differential equations, the Heun equation, its connection problem, and notational conventions. By the boundary condition (19), \(\tilde{\mathbf{A}}_{2}\) is regular at \(z=1\), so it must be proportional to the solution of exponent \(\frac{|m|}{2}\) at \(z=1\). The constant of proportionality is determined by the boundary condition \(\tilde{\mathbf{A}}_{2}|_{z=0}=\tilde{\mathcal{A}}_{2}\) and the connection relation (17). 
We find \[\tilde{\mathbf{A}}_{2}(\omega=2m,p,z)=\tilde{\mathcal{A}}_{2}( \omega,p)(1-z^{2})^{-\frac{1}{2}}\big{[}w_{-}^{(0)}+\frac{p^{2}+4m^{2}}{4}(-2 \psi(1)-1\\ +\frac{1}{2}\sum_{\theta,\sigma=\pm}\psi(\theta\frac{m}{2}+\sigma a )-\frac{1}{2}\partial_{a_{0}}^{2}F\big{|}_{a_{0}=\frac{1}{2}}-\frac{2}{p^{2}+ 4m^{2}}(1+2\partial_{t}\partial_{a_{0}}F\big{|}_{a_{0}=\frac{1}{2},t=-1}))w_{+ }^{(0)}\big{]} \tag{24}\] For the longitudinal components \(\tilde{\mathbf{A}}_{t},\tilde{\mathbf{A}}_{1}\), we have \[\partial_{z}^{2}\tilde{\mathbf{A}}_{t}-\frac{p^{2}}{4z(1-z^{2})} \tilde{\mathbf{A}}_{t}+\frac{2mp}{4z(1-z^{2})}\tilde{\mathbf{A}}_{1}=0 \tag{25}\] \[\partial_{z}^{2}\tilde{\mathbf{A}}_{1}-\frac{2z}{1-z^{2}} \partial_{z}\tilde{\mathbf{A}}_{1}-\frac{4m^{2}}{4z(1-z^{2})^{2}}\tilde{ \mathbf{A}}_{1}+\frac{2mp}{4z(1-z^{2})^{2}}\tilde{\mathbf{A}}_{t}=0\] (26) \[\frac{2m}{1-z^{2}}\partial_{z}\tilde{\mathbf{A}}_{t}+p\partial_ {z}\tilde{\mathbf{A}}_{1}=0 \tag{27}\] Plugging (3.19) into \(\partial_{z}\big{(}z(1-z^{2})(3.17)\big{)}\), we obtain \[(\partial_{z}^{2}+\frac{1-3z^{2}}{z(1-z^{2})}\partial_{z}-\frac{p^{2}(1-z^{2})+4 m^{2}}{4z(1-z^{2})^{2}})\partial_{z}\tilde{\mathbf{A}}_{t}=0 \tag{3.20}\] When \(m\neq 0\), the solution to this third-order differential equation is determined by the three boundary conditions \[\tilde{\mathbf{A}}_{1}|_{z=1}\;\text{regular}\] \[\tilde{\mathbf{A}}_{1}|_{z=0}=\tilde{\mathcal{A}}_{1},\;\tilde{ \mathbf{A}}_{t}|_{z=0}=\tilde{\mathcal{A}}_{t} \tag{3.21}\] By the substitution \(\partial_{z}\tilde{\mathbf{A}}_{t}=z^{-\frac{1}{2}}(1-z^{2})^{-\frac{1}{2}}w(z)\), (3.20) can be transformed to the normal Heun equation \[\big{(}\partial_{z}^{2}+\frac{\frac{1}{4}-0^{2}}{z^{2}}+\frac{ \frac{1}{4}-(\frac{m}{2})^{2}}{(z-1)^{2}}+\frac{\frac{1}{4}-(\frac{m}{2}i)^{2 }}{(z+1)^{2}}+\frac{p^{2}+4m^{2}-6}{8z(z-1)}-\frac{p^{2}+4m^{2}+6}{8z(z+1)} \big{)}w(z)=0 \tag{3.22}\] with \[t=-1,a_{0}=0,a_{1}=\frac{|m|}{2},a_{t}=\frac{m}{2}i,a_{\infty}=1,u=-\frac{p^{ 2}+4m^{2}+6}{8} \tag{3.23}\] By (3.19) the solution must be proportional to \(w_{+}^{(1)}\) for \(\tilde{\mathbf{A}}_{1}\) to be regular at \(z=1\). The constant of proportionality can be further determined by using the connection relation (A.12) and evaluating (3.17) at \(z=0\). We find \[z^{\frac{1}{2}}\sqrt{1-z^{2}}\partial_{z}\tilde{\mathbf{A}}_{t}= \frac{2mp\tilde{\mathcal{A}}_{1}-p^{2}\tilde{\mathcal{A}}_{t}}{4} \big{[}-w_{-}^{(0)}(z)\] \[+(2\psi(1)-\frac{1}{2}\sum_{\theta,\sigma=\pm}\psi(\frac{1}{2}+ \theta\frac{m}{2}+\sigma a)+\frac{1}{2}\partial_{a_{0}}^{2}F)w_{+}^{(0)} \big{]} \tag{3.24}\] Then we integrate to obtain \(\tilde{\mathbf{A}}_{t}\) with the constant of integration given by the boundary value \(\tilde{\mathcal{A}}_{t}\) \[\tilde{\mathbf{A}}_{t}= \tilde{\mathcal{A}}_{t}+\frac{2mp\tilde{\mathcal{A}}_{1}-p^{2} \tilde{\mathcal{A}}_{t}}{4}\big{[}-(z\log z+\ldots)\] \[+(2\psi(1)+1-\frac{1}{2}\sum_{\theta,\sigma=\pm}\psi(\frac{1}{2} +\theta\frac{m}{2}+\sigma a)+\frac{1}{2}\partial_{a_{0}}^{2}F)(z+\ldots))\big{]} \tag{3.25}\] We get \(\tilde{\mathbf{A}}_{1}\) by plugging \(\tilde{\mathbf{A}}_{t}\) back to (3.19) \[\tilde{\mathbf{A}}_{1}= \tilde{\mathcal{A}}_{1}(1+\ldots)+\frac{2m(p\tilde{\mathcal{A}}_{ t}-2m\tilde{\mathcal{A}}_{1})}{4}\] \[\times\big{(}2\psi(1)+1-\frac{1}{2}\sum_{\theta,\sigma=\pm}\psi( \frac{1}{2}+\theta\frac{m}{2}+\sigma a)+\frac{1}{2}\partial_{a_{0}}^{2}F\big{)} (z+\ldots) \tag{3.26}\] When \(m=0\), we get \(\tilde{\mathbf{A}}_{1}=\tilde{\mathcal{A}}_{1}\) from (3.19). 
We still solve for \(z^{\frac{1}{2}}\sqrt{1-z^{2}}\partial_{z}\tilde{\mathbf{A}}_{t}\) from (3.20), which is a linear combination of \(w_{+}^{(1)}=\sqrt{1-z}(1+\ldots)\) and \(w_{-}^{(1)}=\sqrt{1-z}(\log(1-z)+\ldots)\). Then we plug it into (3.17) and evaluate at \(z=1\). We have \(\tilde{\mathbf{A}}_{t}(m=0)|_{z=1}=0\) from the boundary condition (3.10), and we find \(z^{\frac{1}{2}}\sqrt{1-z^{2}}\partial_{z}\tilde{\mathbf{A}}_{t}\) must be proportional to \(w_{+}^{(1)}\), the same as the previous case when \(m\neq 0\). So, we can carry over the results for \(m\neq 0\) and set \(m=0\) in the expression. To obtain the holographic correlators, we recover the dependence on \(\rho_{0}\) or the inverse temperature \(\beta=\pi\rho_{0}\), and read off \(\mathbf{A}_{i}^{(2)}\) from the bulk gauge field \(\mathbf{A}_{i}\) (in our case the coefficient of \(z^{1}\)) \[\tilde{\mathbf{A}}_{2}^{(2)}(\omega=\frac{2m}{\rho_{0}},p)=-\frac {p^{2}+\omega^{2}}{4}\tilde{\mathcal{A}}_{2}(\omega,p)\mathcal{C}_{1}(\omega= \frac{2m}{\rho_{0}},p)\] \[\tilde{\mathbf{A}}_{t}^{(2)}(\omega=\frac{2m}{\rho_{0}},p)=\frac {\omega p\tilde{\mathcal{A}}_{1}(\omega,p)-p^{2}\tilde{\mathcal{A}}_{t}( \omega,p)}{4}\mathcal{C}_{2}(\omega=\frac{2m}{\rho_{0}},p)\] \[\tilde{\mathbf{A}}_{1}^{(2)}(\omega=\frac{2m}{\rho_{0}},p)=\frac {\omega p\tilde{\mathcal{A}}_{t}(\omega,p)-\omega^{2}\tilde{\mathcal{A}}_{1}( \omega,p)}{4}\mathcal{C}_{2}(\omega=\frac{2m}{\rho_{0}},p) \tag{3.27}\] where \[\mathcal{C}_{1}(\omega=\frac{2m}{\rho_{0}},p)=(2\psi(1)+1-\frac{1 }{2}\sum_{\theta,\sigma=\pm}\psi(\theta\frac{m}{2}+\sigma a)\] \[+\frac{1}{2}\partial_{a_{0}}^{2}F+\frac{2}{\rho_{0}^{2}p^{2}+4m^{ 2}}(1+\partial_{t}\partial_{a_{0}}F))\big{|}_{t=-1,a_{0}=\frac{1}{2},a_{1}= \frac{|m|}{2},a_{t}=\frac{m}{2},i,a_{\infty}=\frac{1}{2},u=-\frac{\rho_{0}^{2 }p^{2}+4m^{2}+2}{8}}\] \[\mathcal{C}_{2}(\omega=\frac{2m}{\rho_{0}},p)=(2\psi(1)+1-\frac{1 }{2}\sum_{\theta,\sigma=\pm}\psi(\frac{1}{2}+\theta\frac{m}{2}+\sigma a)\] \[+\frac{1}{2}\partial_{a_{0}}^{2}F)\big{|}_{t=-1,a_{0}=0,a_{1}= \frac{|m|}{2},a_{t}=\frac{m}{2},i,a_{\infty}=1,u=-\frac{\rho_{0}^{2}p^{2}+4m^{ 2}+6}{8}} \tag{3.28}\] We compute two-point correlators by the formula for renormalized one-point correlators (2.7). Rotating the spatial momentum to a general direction, we find \[\langle\tilde{J}_{t}(\omega,p)\tilde{J}_{t}(-\omega,-p)\rangle= \frac{p^{2}}{2}\mathcal{C}_{2}(\omega,p)\] \[\langle\tilde{J}_{t}(\omega,p)\tilde{J}_{b}(-\omega,-p)\rangle= -\frac{\omega}{2}\mathcal{C}_{2}(\omega,p)p_{b}\] \[\langle\tilde{J}_{a}(\omega,p)\tilde{J}_{b}(-\omega,-p)\rangle= \frac{p^{2}+\omega^{2}}{2}\mathcal{C}_{1}(\omega,p)(\delta_{ab}-\frac{p_{a }p_{b}}{p^{2}})+\frac{\omega^{2}}{2}\mathcal{C}_{2}(\omega,p)\frac{p_{a}p_{b}}{ p^{2}} \tag{3.29}\] ## 4 Stress tensor Gauge fixing and regularity boundary conditions at the horizon for Einstein's gravity follow the same line as the Maxwell theory. We can make the solid cylinder coordinates \(\rho,t,\vec{x}\) the Fefferman-Graham coordinates of the perturbed bulk metric in the region \(0\leq\rho<1\) by a diffeomorphism. Then, the boundary value problem is formulated in this gauge with the boundary condition that the metric has a regular limit as \(\rho\to 1\) after a diffeomorphism. 
For a first-order perturbation of the bulk metric, we have \[\delta ds^{2}=\delta\mathbf{g}_{ij}dx^{i}dx^{j} \tag{4.1}\] To first order, the diffeomorphism is characterized by a vector \(V\); then the regularity boundary condition at the horizon is that the variation of the bulk metric \[\mathcal{L}_{V}(ds^{2})+\delta ds^{2} \tag{10}\] has a regular limit as \(\rho\to 1\) (or \(\mathfrak{s}\to 0\)), that is, its components in the "Cartesian coordinates" (11) are regular. We find \[\lim_{\mathfrak{s}\to 0}2\partial_{\mathfrak{s}}V^{\mathfrak{s}}=\cos^{2}2t\,\delta g^{*}_{XX}+2\cos 2t\sin 2t\,\delta g^{*}_{XY}+\sin^{2}2t\,\delta g^{*}_{YY} \tag{11}\] \[\lim_{\mathfrak{s}\to 0}\frac{\partial_{t}V^{\mathfrak{s}}+\frac{\sinh^{2}2\mathfrak{s}}{\cosh 2\mathfrak{s}}\partial_{\mathfrak{s}}V^{t}}{2\mathfrak{s}}=-\cos 2t\sin 2t\,\delta g^{*}_{XX}+(\cos^{2}2t-\sin^{2}2t)\,\delta g^{*}_{XY}+\cos 2t\sin 2t\,\delta g^{*}_{YY} \tag{12}\] \[\lim_{\mathfrak{s}\to 0}\partial_{a}V^{\mathfrak{s}}+\cosh 2\mathfrak{s}\,\partial_{\mathfrak{s}}V^{a}=\cos 2t\,\delta g^{*}_{Xa}+\sin 2t\,\delta g^{*}_{Ya} \tag{13}\] \[\lim_{\mathfrak{s}\to 0}\frac{\delta\mathbf{g}_{tt}+\frac{\sinh 2\mathfrak{s}}{\cosh^{2}2\mathfrak{s}}((3+4\cosh 4\mathfrak{s})V^{\mathfrak{s}}+\sinh 4\mathfrak{s}\,\partial_{t}V^{t})}{4\mathfrak{s}^{2}}=\sin^{2}2t\,\delta g^{*}_{XX}-2\cos 2t\sin 2t\,\delta g^{*}_{XY}+\cos^{2}2t\,\delta g^{*}_{YY} \tag{14}\] \[\lim_{\mathfrak{s}\to 0}\delta\mathbf{g}_{ta}+\frac{\sinh^{2}2\mathfrak{s}}{\cosh 2\mathfrak{s}}\partial_{a}V^{t}+\cosh 2\mathfrak{s}\,\partial_{t}V^{a}=-2\mathfrak{s}\sin 2t\,\delta g^{*}_{Xa}+2\mathfrak{s}\cos 2t\,\delta g^{*}_{Ya} \tag{15}\] \[\lim_{\mathfrak{s}\to 0}\delta\mathbf{g}_{ab}+\cosh 2\mathfrak{s}(\partial_{a}V^{b}+\partial_{b}V^{a})+2\sinh 2\mathfrak{s}\,V^{\mathfrak{s}}\delta_{ab}=\delta g^{*}_{ab} \tag{16}\] (11) and (13) show that \(V^{\mathfrak{s}}\) and \(V^{a}\) can be approximated by linear functions of \(\mathfrak{s}\) as \(\mathfrak{s}\to 0\) (or \(\rho\to 1\)). Plugging into (16), we see \[\lim_{\rho\to 1}\delta\mathbf{g}_{ab}\text{ exists} \tag{17}\] By (12) we know \[V^{t}=O(\frac{1}{\mathfrak{s}}) \tag{18}\] as \(\mathfrak{s}\to 0\). Then by integrating (15) over \(t\) we find \[\int_{0}^{\pi}dt\,\delta\mathbf{g}_{ta}|_{\rho=1}=0 \tag{19}\] Similar to the case of the \(U(1)\) gauge field, we work in Fourier modes and rotate the spatial momentum to the \(x^{1}\) direction. For simplicity, we use the variable \(\mathbf{h}_{ij}=\rho^{2}\delta\mathbf{g}_{ij}\), which on the conformal boundary equals the variation of the CFT background metric \[\mathbf{h}_{ij}|_{\rho=0}=\delta\gamma_{ij} \tag{20}\] The linearized Einstein equation \[\frac{1}{2}(\nabla^{\lambda}\nabla_{\mu}\delta g_{\lambda\nu}+\nabla^{\lambda}\nabla_{\nu}\delta g_{\lambda\mu}-\nabla^{\lambda}\nabla_{\lambda}\delta g_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}\delta g^{\lambda}_{\lambda})+4\delta g_{\mu\nu}=0 \tag{21}\] decouples to the scalar channel of \(\tilde{\mathbf{h}}_{23}\) and \(\tilde{\mathbf{h}}_{22}-\tilde{\mathbf{h}}_{33}\), the shear channel of \(\tilde{\mathbf{h}}_{t2},\tilde{\mathbf{h}}_{12}\) and \(\tilde{\mathbf{h}}_{t3},\tilde{\mathbf{h}}_{13}\), and the sound channel of \(\tilde{\mathbf{h}}_{tt},\tilde{\mathbf{h}}_{11},\tilde{\mathbf{h}}_{22}+\tilde{\mathbf{h}}_{33},\tilde{\mathbf{h}}_{t1}\).
In the scalar channel, we have \[\partial_{z}^{2}\tilde{\mathbf{h}}_{23}-\frac{1+z^{2}}{z(1-z^{2})}\partial_{z} \tilde{\mathbf{h}}_{23}-\frac{p^{2}(1-z^{2})+\omega^{2}}{4z(1-z^{2})^{2}} \tilde{\mathbf{h}}_{23}=0 \tag{4.14}\] and in the shear channel, we have \[\partial_{z}^{2}\tilde{\mathbf{h}}_{t2}-\frac{1}{z}\partial_{z} \tilde{\mathbf{h}}_{t2}-\frac{p^{2}}{4z(1-z^{2})}\tilde{\mathbf{h}}_{t2}+ \frac{2mp}{4z(1-z^{2})}\tilde{\mathbf{h}}_{12}=0 \tag{4.15}\] \[\partial_{z}^{2}\tilde{\mathbf{h}}_{12}-\frac{1+z^{2}}{z(1-z^{2} )}\partial_{z}\tilde{\mathbf{h}}_{12}-\frac{4m^{2}}{4z(1-z^{2})^{2}}\tilde{ \mathbf{h}}_{12}+\frac{2mp}{4z(1-z^{2})^{2}}\tilde{\mathbf{h}}_{t2}=0\] (4.16) \[\frac{2m}{1-z^{2}}\partial_{z}\tilde{\mathbf{h}}_{t2}+p\partial _{z}\tilde{\mathbf{h}}_{12}=0 \tag{4.17}\] The computation in these two channels is very similar to that of the transverse channel and longitudinal channel of the \(U(1)\) gauge field in the previous section, so we present the results of correlators without showing the detailed computation \[\langle\tilde{T}_{23}(\omega=\frac{2m}{\rho_{0}},p)\tilde{T}_{23} (-\omega,-p)\rangle=\frac{1}{4\pi G}\frac{(p^{2}+\omega^{2})^{2}}{32}\mathcal{ C}_{3}(\omega=\frac{2m}{\rho_{0}},p)\] \[\langle\tilde{T}_{t2}(\omega=\frac{2m}{\rho_{0}},p)\tilde{T}_{t2} (-\omega,-p)\rangle=\frac{1}{4\pi G}\frac{p^{2}+\omega^{2}}{32}p^{2}\mathcal{ C}_{4}(\omega=\frac{2m}{\rho_{0}},p)\] \[\langle\tilde{T}_{t2}(\omega=\frac{2m}{\rho_{0}},p)\tilde{T}_{12} (-\omega,-p)\rangle=-\frac{1}{4\pi G}\frac{p^{2}+\omega^{2}}{32}\omega p \mathcal{C}_{4}(\omega=\frac{2m}{\rho_{0}},p)\] \[\langle\tilde{T}_{12}(\omega=\frac{2m}{\rho_{0}},p)\tilde{T}_{12} (-\omega,-p)\rangle=\frac{1}{4\pi G}\frac{p^{2}+\omega^{2}}{32}\omega^{2} \mathcal{C}_{4}(\omega=\frac{2m}{\rho_{0}},p) \tag{4.18}\] with \[\mathcal{C}_{3}(\omega=\frac{2m}{\rho_{0}},p)=\big{[}2\psi(1)+ \frac{5}{2}-\frac{1}{2}\sum_{\theta,\sigma=\pm}\psi(-\frac{1}{2}+\theta\frac {m}{2}+\sigma a)\] \[\quad+\frac{1}{2}\partial_{a_{0}}^{2}F-\frac{16}{(\rho_{0}^{2}p^{ 2}+4m^{2})^{2}}\big{(}4a^{2}-2a^{2}m^{2}+\frac{1}{4}m^{4}+4(\partial_{t}F)^{ 2}+(-8a^{2}+2m^{2})\partial_{t}F\] \[\quad-4\partial_{t}F\partial_{t}\partial_{a_{0}}F+(-2+4a^{2}-m^{2 })\partial_{t}\partial_{a_{0}}F\big{)}\big{]}\big{|}_{t=-1,a_{0}=1,a_{1}=\frac{ |m|}{2},a_{t}=\frac{m}{2}i,a_{\infty}=0,u=-\frac{\rho_{0}^{2}p^{2}+4m^{2}-2}{8}}\] \[\quad\mathcal{C}_{4}(\omega=\frac{2m}{\rho_{0}},p)=\big{(}2\psi(1) +1-\frac{1}{2}\sum_{\theta,\sigma=\pm}\psi(\theta\frac{m}{2}+\sigma a)\] \[\quad+\frac{1}{2}\partial_{a_{0}}^{2}F+\frac{2}{\rho_{0}^{2}p^{2} +4m^{2}}(1+2\partial_{t}\partial_{a_{0}}F)\big{)}\big{|}_{t=-1,a_{0}=\frac{1} {2},a_{1}=\frac{|m|}{2},a_{t}=\frac{m}{2}i,a_{\infty}=\frac{3}{2},u=-\frac{ \rho_{0}^{2}p^{2}+4m^{2}+10}{8}} \tag{4.19}\] In the sound channel, we have \[\partial_{z}^{2}\tilde{\mathbf{h}}_{tt}-\frac{3-5z^{2}}{2z(1-z^{2}) }\partial_{z}\tilde{\mathbf{h}}_{tt}-\frac{1+z^{2}}{2z}\partial_{z}(\tilde{ \mathbf{h}}_{11}+\tilde{\mathbf{h}}_{22}+\tilde{\mathbf{h}}_{33})+\frac{-4z+12z ^{3}-p^{2}(1-z^{2})}{4z(1-z^{2})^{2}}\tilde{\mathbf{h}}_{tt}\] \[-\frac{4m^{2}}{4z(1-z^{2})}(\tilde{\mathbf{h}}_{11}+\tilde{ \mathbf{h}}_{22}+\tilde{\mathbf{h}}_{33})+\frac{2mp}{2z(1-z^{2})}\tilde{ \mathbf{h}}_{t1}=0 \tag{4.20}\] \[\partial_{z}^{2}\tilde{\mathbf{h}}_{11}-\frac{3+z^{2}}{2z(1-z^{2} )}\partial_{z}\tilde{\mathbf{h}}_{11}-\frac{1}{2z(1-z^{2})}\partial_{z}\tilde{ \mathbf{h}}_{tt}-\frac{1}{2z}\partial_{z}(\tilde{\mathbf{h}}_{22}+\tilde{ \mathbf{h}}_{33})\] 
\[-\frac{4m^{2}}{4z(1-z^{2})^{2}}\tilde{\mathbf{h}}_{11}-\frac{p^{ 2}+4z}{4z(1-z^{2})^{2}}\tilde{\mathbf{h}}_{tt}-\frac{p^{2}}{4z(1-z^{2})}( \tilde{\mathbf{h}}_{22}+\tilde{\mathbf{h}}_{33})+\frac{2mp}{2z(1-z^{2})^{2}} \tilde{\mathbf{h}}_{t1}=0\] (4.21) \[\partial_{z}^{2}(\tilde{\mathbf{h}}_{22}+\tilde{\mathbf{h}}_{33} )-\frac{2}{z(1-z^{2})}\partial_{z}(\tilde{\mathbf{h}}_{22}+\tilde{\mathbf{h}} _{33})-\frac{1}{z(1-z^{2})}\partial_{z}\tilde{\mathbf{h}}_{tt}-\frac{1}{z} \partial_{z}\tilde{\mathbf{h}}_{11}\] \[-\frac{4m^{2}+p^{2}(1-z^{2})}{4z(1-z^{2})^{2}}(\tilde{\mathbf{h} }_{22}+\tilde{\mathbf{h}}_{33})-\frac{2}{(1-z^{2})^{2}}\tilde{\mathbf{h}}_{tt }=0\] (4.22) \[\partial_{z}^{2}\tilde{\mathbf{h}}_{t1}-\frac{1}{z}\partial_{z} \tilde{\mathbf{h}}_{t1}-\frac{2mp}{4z(1-z^{2})}(\tilde{\mathbf{h}}_{22}+ \tilde{\mathbf{h}}_{33})=0\] (4.23) \[\partial_{z}^{2}(\tilde{\mathbf{h}}_{11}+\tilde{\mathbf{h}}_{22} +\tilde{\mathbf{h}}_{33})+\frac{1}{1-z^{2}}\partial_{z}^{2}\tilde{\mathbf{h} }_{tt}-\frac{z}{1-z^{2}}\partial_{z}(\tilde{\mathbf{h}}_{11}+\tilde{\mathbf{h }}_{22}+\tilde{\mathbf{h}}_{33})\] \[+\frac{z}{(1-z^{2})^{2}}\partial_{z}\tilde{\mathbf{h}}_{tt}+ \frac{2}{(1-z^{2})^{3}}\tilde{\mathbf{h}}_{tt}=0\] (4.24) \[2m\partial_{z}(\tilde{\mathbf{h}}_{11}+\tilde{\mathbf{h}}_{22} +\tilde{\mathbf{h}}_{33})+\frac{2mz}{1-z^{2}}\partial_{z}(\tilde{\mathbf{h}}_{ 11}+\tilde{\mathbf{h}}_{22}+\tilde{\mathbf{h}}_{33})-p\partial_{z}\tilde{ \mathbf{h}}_{t1}-\frac{2pz}{1-z^{2}}\tilde{\mathbf{h}}_{t1}=0\] (4.25) \[p\partial_{z}(\tilde{\mathbf{h}}_{22}+\tilde{\mathbf{h}}_{33})+ \frac{p}{1-z^{2}}\partial_{z}\tilde{\mathbf{h}}_{tt}-\frac{2m}{1-z^{2}} \partial_{z}\tilde{\mathbf{h}}_{t1}+\frac{pz}{(1-z^{2})^{2}}\tilde{\mathbf{h}} _{tt}=0 \tag{4.26}\] We don't know how to analytically solve the boundary value problem in the sound channel. 
For future reference, we can reduce the sound channel to a five-dimensional first-order equation of variables \(\tilde{\mathbf{h}}_{tt},\tilde{\mathbf{h}}_{11},\frac{\tilde{\mathbf{h}}_{22} +\tilde{\mathbf{h}}_{33}}{2},\tilde{\mathbf{h}}_{t1},\partial_{z}\tilde{ \mathbf{h}}_{t1}\) (a similar equation can be found in [25]), and by the substitution \[\begin{pmatrix}\tilde{\mathbf{h}}_{tt}\\ \tilde{\mathbf{h}}_{11}\\ \tilde{\mathbf{h}}_{22}+\tilde{\mathbf{h}}_{33}\\ \tilde{\mathbf{h}}_{t1}\\ \partial_{z}\tilde{\mathbf{h}}_{t1}\end{pmatrix}=\begin{pmatrix}0&-\frac{1}{3} (1-z^{2})^{2}&\frac{2}{3}z(1-z^{2})&0&0\\ -z^{2}&1-z^{2}&\frac{2}{3}z&0&0\\ \frac{1}{2}z^{2}&0&-\frac{1}{3}z&0&0\\ 0&0&0&1-z^{2}&0\\ 0&0&0&z\end{pmatrix}H \tag{4.27}\] we can transform the equation into a Fuchsian system of normal form \[\partial_{z}H=(\frac{M_{0}}{z}+\frac{M_{1}}{z-1}+\frac{M_{-1}}{z+1})H \tag{4.28}\] with \[M_{0}=\begin{pmatrix}-2&-\frac{2}{3}&0&\frac{p}{3m}&\frac{12m^{2}+p^{2}}{6mp}\\ 0&0&0&0&0\\ 0&-m^{2}+\frac{p^{2}}{12}&-1&mp&0\\ 0&0&0&0&0\\ 0&0&-\frac{mp}{3}&0&2\end{pmatrix}\] \[M_{1}=\begin{pmatrix}0&0&-\frac{1}{3}&0&-\frac{m}{p}\\ 0&-\frac{1}{2}&0&-\frac{p}{2m}&-\frac{p}{4m}\\ \frac{p^{2}}{8}&\frac{-1+m^{2}}{2}&2&\frac{p(1-m^{2})}{2m}&\frac{p}{4m}\\ 0&0&0&-1&-\frac{1}{2}\\ -\frac{mp}{4}&0&\frac{mp}{6}&0&0\end{pmatrix}\] \[M_{-1}=\begin{pmatrix}0&0&\frac{1}{3}&0&-\frac{m}{p}\\ 0&-\frac{1}{2}&0&-\frac{p}{2m}&-\frac{p}{4m}\\ \frac{p^{2}}{8}&\frac{1+m^{2}}{2}&2&-\frac{p(1+m^{2})}{2m}&-\frac{p}{4m}\\ 0&0&0&-1&-\frac{1}{2}\\ \frac{mp}{4}&0&\frac{mp}{6}&0&0\end{pmatrix} \tag{4.29}\] ## 5 Summary and discussion In our study, we calculated holographic Euclidean thermal correlators of the \(U(1)\) current and stress tensor for four-dimensional CFTs using the AdS\({}_{5}\) planar black hole, following the approach of GKPW. By utilizing the connection relation of local solutions of the Heun equation, we obtained exact correlators for the \(U(1)\) current and stress tensor in the scalar and shear channels. Extensive research has focused on thermal two-point correlators (thermal spectral functions). Notably, [25] demonstrated the presence of gauge invariants in each channel that diagonalize coupled differential equations. These invariants and their derivatives render the on-shell action quadratic. Thermal two-point correlators have been computed using this formalism numerically or analytically by approximations [29; 30]. For example in the longitudinal channel the gauge invariant is \(E_{L}=p\tilde{\mathbf{A}}_{t}-\omega\tilde{\mathbf{A}}_{1}\) and we have \[\partial_{z}^{2}E_{L}-\frac{2\omega^{2}z}{(1-z^{2})(\omega^{2}+p^{2}(1-z^{2}) )}\partial_{z}E_{L}-\frac{\omega^{2}+p^{2}(1-z^{2})}{4z(1-z^{2})^{2}}E_{L}=0 \tag{5.1}\] This is a Fuchsian differential equation with six singularities. The two singularities \(z=\pm\sqrt{1+\frac{\omega^{2}}{p^{2}}}\) are apparent singularities since they don't appear in the equation of the fields. One can verify that these apparent singularities cannot be transformed away by a substitution \(E_{L}(z)=P(z)f(z)\) where \(P(z)\) is a meromorphic function that does not introduce new singularities. In essence, these apparent singularities remain inherent to the equation. We don't know how to relate this equation to the Heun equation and obtain the exact holographic correlators. From the technical standpoint, we want to work with equations of fields, and in the Euclidean signature, the boundary conditions of fields with gauge/diffeomorphism symmetry are clearly specified. 
This is the technical reason for our approach to holographic computation, in addition to giving an illustrative example of Euclidean boundary value problems. Thermal two-point correlators find diverse applications. They characterize the linear response to perturbations in thermal equilibrium. They can be used to compute transport coefficients such as shear viscosity, thermal conductivity, and electric conductivity [31; 32], and higher-order transport coefficients (see [33; 34] for formulas of second-order coefficients in terms of two-point correlators and holographic computation). In addition, we can probe the chaotic dynamics by studying the pole-skipping of the correlators [35; 36; 37]. These correlators also encode the information on the operator product expansion (OPE) of holographic CFTs. For instance, [38; 39; 40] computed holographic correlators in the OPE limit via near-boundary analysis, extracting OPE coefficients for multi-stress tensors. For integer operator dimension with operator mixing, exact two-point correlators are necessary for complete OPE coefficient extraction.

## Acknowledgments

We would like to thank Alba Grassi, Cristoforo Iossa, Yun-Ze Li, Hongfei Shu, Ashish Shukla and Yunda Zhang for their helpful discussions. S.H. appreciates the financial support from the Fundamental Research Funds for the Central Universities and Max Planck Partner Group and the Natural Science Foundation of China (NSFC) Grants No. 12075101 and No. 12235016.

## Appendix A Fuchsian ODE, the Heun equation and connection problem

In this appendix, we briefly review Fuchsian differential equations, the Heun equation, and the connection relation used in the computations in the main text. An ordinary differential equation (ODE) is called Fuchsian if the coefficients are rational functions and all singularities are regular. Eigenvectors of local monodromy constitute a natural basis of local solutions around singularities. When eigenvalues of the local monodromy are all distinct, eigenvectors span the space of local solutions, and they take the form of a series \[w_{k}^{(z_{0})}=(z-z_{0})^{\rho_{k}}\sum_{i=0}^{\infty}c_{i}(z-z_{0})^{i} \tag{10}\] where \(z_{0}\) is the singularity, \(k\) labels the local solution and the prefactor \((z-z_{0})^{\rho_{k}}\) captures the local monodromy. The characteristic exponents \(\rho_{k}\) are computed as the roots of the indicial equation. We usually adopt the normalization that \(c_{0}=1\). When we have repeated eigenvalues of the local monodromy, that is, some characteristic exponents differ by integers, we may need generalized eigenvectors to span the space of local solutions, and they are expressed as series with logarithms. For a second order ODE, we label the two characteristic exponents as \(\rho_{+},\rho_{-}\), with \(\mathrm{Re}\rho_{+}\geq\mathrm{Re}\rho_{-}\). There is always a series solution without logarithm \(w_{+}^{(z_{0})}\) with the exponent \({\rho_{+}}\)1. If the two exponents differ by an integer, the other solution \(w_{-}^{(z_{0})}\) needed to form a basis may contain a logarithm. There is also no canonical choice of \(w_{-}^{(z_{0})}\) since we can add any constant multiple of \(w_{+}^{(z_{0})}\) to \(w_{-}^{(z_{0})}\). For computational convenience, we choose the convention that the coefficient of the power \((z-z_{0})^{\rho_{+}}\) is zero in \(w_{-}^{(z_{0})}\). The Heun equation is the second-order Fuchsian ODE with four regular singularities.
By a Möbius transformation and substitutions, we can bring it to the normal form \[\big{(}\partial_{z}^{2}+\frac{\frac{1}{4}-a_{0}^{2}}{z^{2}}+\frac{\frac{1}{4}-a_{1}^{2}}{(z-1)^{2}}+\frac{\frac{1}{4}-a_{t}^{2}}{(z-t)^{2}}-\frac{\frac{1}{2}-a_{1}^{2}-a_{t}^{2}-a_{0}^{2}+a_{\infty}^{2}+u}{z(z-1)}+\frac{u}{z(z-t)}\big{)}w(z)=0 \tag{10}\] The four singularities with exponents at these points are \[z=0,\rho=\frac{1}{2}\pm a_{0}\] \[z=1,\rho=\frac{1}{2}\pm a_{1}\] \[z=t,\rho=\frac{1}{2}\pm a_{t}\] \[z=\infty,\rho=-\frac{1}{2}\pm a_{\infty} \tag{11}\] We adopt the convention that \(\text{Re}a_{0}\geq 0\) etc., so the exponent with the plus sign is the one with the greater real part \(\rho^{+}\). The connection relation of the local solutions in the generic case (that is, characteristic exponents do not differ by an integer) was studied in [16] by relating the Heun equation to the Belavin-Polyakov-Zamolodchikov (BPZ) equation [41] satisfied by conformal blocks with degenerate insertion2 in the Liouville field theory in the semiclassical limit. By the Alday-Gaiotto-Tachikawa (AGT) correspondence, the Liouville correlators can be exactly computed by localization in supersymmetric gauge theories [44; 45; 46; 47; 48]. Without loss of generality, let \(z=0\) and \(z=1\) be two adjacent singularities; the connection relation between local solutions around these two points is Footnote 2: One can also refer to the relevant studies offered by [42; 43]. \[w_{\theta}^{(1)}(z)=\sum_{\theta^{\prime}=\pm}\mathcal{M}_{\theta\theta^{\prime}}(a_{1},a_{0};a)e^{(\frac{\theta}{2}\partial_{a_{1}}-\frac{\theta^{\prime}}{2}\partial_{a_{0}})F(\frac{a_{t}}{a_{\infty}}\frac{a_{1}}{a_{0}};\frac{1}{t})}w_{\theta^{\prime}}^{(0)}(z) \tag{12}\] where \[\mathcal{M}_{\theta\theta^{\prime}}(a_{1},a_{0};a)=\frac{\Gamma(-2\theta^{\prime}a_{0})\Gamma(1+2\theta a_{1})}{\Gamma(\frac{1}{2}+\theta a_{1}-\theta^{\prime}a_{0}+a)\Gamma(\frac{1}{2}+\theta a_{1}-\theta^{\prime}a_{0}-a)} \tag{13}\] and \(F\) is the Nekrasov-Shatashvili function, defined as a power series in \(\frac{1}{t}\) with combinatorially defined rational functions of the other parameters as the coefficients. We refer the reader to Appendix C in [17] (or Appendix C in [16]) for the detailed definition 3. The exchange momentum \(a\) is to be implicitly determined from the relation \[u=-\frac{1}{4}-a^{2}+a_{t}^{2}+a_{0}^{2}+t\partial_{t}F \tag{10}\] In our computation, the masslessness of the bulk fields leads to a degenerate local monodromy of the Heun equation at \(z=0\) (the conformal boundary), that is, two exponents differ by an integer (\(a_{0}\) becomes a half-integer). This degenerate scenario can be derived as a limit of the generic case, as a specific solution to the Heun equation continuously depends on the parameters. The emergence of the logarithm and the discontinuity of the local monodromy basis reflect a qualitative change of the local monodromy, rather than of a specific solution. The solution \(w_{+}^{(1)}\) remains well-defined and continuously depends on parameters including \(a_{0}\), even when \(a_{1}\) approaches half-integers 4. We proceed to take the limit \(a_{0}\rightarrow\frac{N}{2},N\in\mathbb{N}\) while keeping other parameters, such as \(t,a_{1},a_{t},a_{\infty},a\), fixed 5. Footnote 4: Meanwhile \(w_{-}^{(1)}\) is not continuous when \(a_{1}\) approaches half-integers. Footnote 5: Another curve in the parameter space can also be chosen to approach the limit, such as fixing \(u\), an explicit parameter in the Heun equation, instead of \(a\). However, as the connection coefficients explicitly depend on \(a\), fixing \(a\) yields a relatively simple expression for the limit.
When both \(a_{0}\) and \(a_{1}\) are half-integers, the complete connection relation is computed by solving two linear equations obtained from the limits of \(w_{+}^{(0)}\) and \(w_{+}^{(1)}\). For \(a_{0}=0\) we have \[w_{+}^{(1)}=\lim_{a_{0}\to 0}\frac{1}{2a_{0}}\big{[}\frac{\Gamma(1+2a_{1})\Gamma(1+2a_{0})}{\Gamma(\frac{1}{2}+a_{1}+a_{0}+a)\Gamma(\frac{1}{2}+a_{1}+a_{0}-a)}e^{(\frac{1}{2}\partial_{a_{1}}+\frac{1}{2}\partial_{a_{0}})F}z^{\frac{1}{2}-a_{0}}(1+\ldots)\\ -\frac{\Gamma(1+2a_{1})\Gamma(1-2a_{0})}{\Gamma(\frac{1}{2}+a_{1}-a_{0}+a)\Gamma(\frac{1}{2}+a_{1}-a_{0}-a)}e^{(\frac{1}{2}\partial_{a_{1}}-\frac{1}{2}\partial_{a_{0}})F}z^{\frac{1}{2}+a_{0}}(1+\ldots)\big{]} \tag{11}\] The quantity in the square bracket must vanish when \(a_{0}=0\) for the limit to exist. It indeed vanishes because \(\partial_{a_{0}}F|_{a_{0}=0}=0\), \(F\) being an even function of \(a_{0}\). Then the limit becomes the derivative with respect to \(a_{0}\), and we get \[w_{+}^{(1)}= \frac{\Gamma(1+2a_{1})}{\Gamma(\frac{1}{2}+a_{1}+a)\Gamma(\frac{1}{2}+a_{1}-a)}e^{\frac{1}{2}\partial_{a_{1}}F}z^{\frac{1}{2}}\\ (2\psi(1)-\psi(\frac{1}{2}+a_{1}+a)-\psi(\frac{1}{2}+a_{1}-a)+\frac{1}{2}\partial_{a_{0}}^{2}F-\log z+\ldots)\\ = \frac{\Gamma(1+2a_{1})}{\Gamma(\frac{1}{2}+a_{1}+a)\Gamma(\frac{1}{2}+a_{1}-a)}e^{\frac{1}{2}\partial_{a_{1}}F}\\ \big{[}-w_{-}^{(0)}+\big{(}2\psi(1)-\psi(\frac{1}{2}+a_{1}+a)-\psi(\frac{1}{2}+a_{1}-a)+\frac{1}{2}\partial_{a_{0}}^{2}F\big{)}w_{+}^{(0)}\big{]} \tag{12}\] where \(\psi\) denotes the digamma function.
For \(a_{0}=\frac{1}{2}\) we find \[w_{+}^{(1)}= \lim_{a_{0}\rightarrow\frac{1}{2}}\big{[}\frac{\Gamma(1+2a_{1}) \Gamma(2a_{0})}{\Gamma(\frac{1}{2}+a_{1}+a_{0}+a)\Gamma(\frac{1}{2}+a_{1}+a_{0} -a)}e^{(\frac{1}{2}\partial_{a_{1}}+\frac{1}{2}\partial_{a_{0}})F}\] \[\times z^{\frac{1}{2}-a_{0}}(1+\frac{-\frac{t}{2}+t(a_{0}^{2}+a_{ 1}^{2}+a_{t}^{2}-a_{\infty}^{2})+(1-t)u}{(1-2a_{0})t}z+\ldots)\] \[+\frac{\Gamma(1+2a_{1})\Gamma(-2a_{0})}{\Gamma(\frac{1}{2}+a_{1} -a_{0}+a)\Gamma(\frac{1}{2}+a_{1}-a_{0}-a)}e^{(\frac{1}{2}\partial_{a_{1}}- \frac{1}{2}\partial_{a_{0}})F}z^{\frac{1}{2}+a_{0}}(1+\ldots)\big{]}\] \[= \frac{\Gamma(1+2a_{1})}{\Gamma(1+a_{1}+a)\Gamma(1+a_{1}-a)}e^{( \frac{1}{2}\partial_{a_{1}}+\frac{1}{2}\partial_{a_{0}})F}+\Gamma(1+2a_{1})e ^{\frac{1}{2}\partial_{a_{1}}F}z\] \[\times\lim_{a_{0}\rightarrow\frac{1}{2}}\frac{1}{1-2a_{0}}\big{[} \frac{\Gamma(2a_{0})}{\Gamma(\frac{1}{2}+a_{1}+a_{0}+a)\Gamma(\frac{1}{2}+a_{ 1}+a_{0}-a)}e^{\frac{1}{2}\partial_{a_{0}}F}\] \[\times\frac{-\frac{t}{2}+t(a_{0}^{2}+a_{1}^{2}+a_{t}^{2}-a_{ \infty}^{2})+(1-t)u}{t}z^{\frac{1}{2}-a_{0}}\] \[-\frac{\Gamma(2-2a_{0})}{2a_{0}\Gamma(\frac{1}{2}+a_{1}-a_{0}+a) \Gamma(\frac{1}{2}+a_{1}-a_{0}-a)}e^{-\frac{1}{2}\partial_{a_{0}}F}z^{-\frac {1}{2}+a_{0}}+\ldots\big{]}\] (A.9) The quantity in the square bracket must vanish when \(a_{0}=\frac{1}{2}\) for the limit to exist, that is, we must have \[e^{\partial_{a_{0}}F}\frac{-\frac{t}{2}+t(a_{0}^{2}+a_{1}^{2}+a_ {t}^{2}-a_{\infty}^{2})+(1-t)u}{t}\Big{|}_{a_{0}=\frac{1}{2}}\] \[=e^{\partial_{a_{0}}F}\frac{-\frac{1+t}{4}+ta_{0}^{2}+ta_{1}^{2}+( 1-t)a^{2}-a_{\infty}^{2}-(1-t)t\partial_{t}F}{t}\Big{|}_{a_{0}=\frac{1}{2}}\] \[=a_{1}^{2}-a^{2}\] (A.10) By the expansion of \(F\) \[F=\frac{(\frac{1}{4}-a^{2}-a_{t}^{2}+a_{\infty}^{2})(\frac{1}{4}-a^{2}-a_{1}^ {2}+a_{0}^{2})}{\frac{1}{2}-2a^{2}}\frac{1}{t}+O(\frac{1}{t^{2}})\] (A.11) one can verify (A.10) holds to the order of expansion. 
Again, the limit becomes the derivative with respect to \(a_{0}\) and we find \[w_{+}^{(1)}= \frac{\Gamma(1+2a_{1})}{\Gamma(1+a_{1}+a)\Gamma(1+a_{1}-a)}e^{( \frac{1}{2}\partial_{a_{1}}+\frac{1}{2}\partial_{a_{0}})F}\] \[+\frac{\Gamma(1+2a_{1})}{\Gamma(a_{1}+a)\Gamma(a_{1}-a)}e^{( \frac{1}{2}\partial_{a_{1}}-\frac{1}{2}\partial_{a_{0}})F}z\big{[}-2\psi(1)-1+ \frac{1}{2}\psi(1+a_{1}+a)+\frac{1}{2}\psi(1+a_{1}-a)\] \[+\frac{1}{2}\psi(a_{1}+a)+\frac{1}{2}\psi(a_{1}-a)-\frac{1}{2} \partial_{a_{0}}^{2}F+\log z-\frac{t+t(1-t)\partial_{t}\partial_{a_{0}}F}{2(- \frac{t}{2}+t(a_{0}^{2}+a_{1}^{2}+a_{t}^{2}-a_{\infty}^{2})+(1-t)u)}\big{]}+\ldots\] \[= \frac{\Gamma(1+2a_{1})}{\Gamma(1+a_{1}+a)\Gamma(1+a_{1}-a)}e^{( \frac{1}{2}\partial_{a_{1}}+\frac{1}{2}\partial_{a_{0}})F}w_{-}^{(0)}\] \[+\frac{\Gamma(1+2a_{1})}{\Gamma(a_{1}+a)\Gamma(a_{1}-a)}e^{( \frac{1}{2}\partial_{a_{1}}-\frac{1}{2}\partial_{a_{0}})F}\big{[}-2\psi(1)-1+ \frac{1}{2}\psi(1+a_{1}+a)+\frac{1}{2}\psi(1+a_{1}-a)\] \[+\frac{1}{2}\psi(a_{1}+a)+\frac{1}{2}\psi(a_{1}-a)-\frac{1}{2} \partial_{a_{0}}^{2}F-\frac{t+t(1-t)\partial_{t}\partial_{a_{0}}F}{2(-\frac{t }{2}+t(a_{0}^{2}+a_{1}^{2}+a_{t}^{2}-a_{\infty}^{2})+(1-t)u)}\big{]}w_{+}^{(0)}\] \[= \frac{\Gamma(1+2a_{1})}{\Gamma(a_{1}+a)\Gamma(a_{1}-a)}e^{( \frac{1}{2}\partial_{a_{1}}-\frac{1}{2}\partial_{a_{0}})F}\big{[}\frac{t}{- \frac{t}{2}+t(a_{0}^{2}+a_{1}^{2}+a_{t}^{2}-a_{\infty}^{2})+(1-t)u}w_{-}^{(0)}\] \[+\big{(}-2\psi(1)-1+\frac{1}{2}\psi(1+a_{1}+a)+\frac{1}{2}\psi(1+ a_{1}-a)+\frac{1}{2}\psi(a_{1}+a)+\frac{1}{2}\psi(a_{1}-a)\] \[-\frac{1}{2}\partial_{a_{0}}^{2}F-\frac{t+t(1-t)\partial_{t} \partial_{a_{0}}F}{2(-\frac{t}{2}+t(a_{0}^{2}+a_{1}^{2}+a_{t}^{2}-a_{\infty}^ {2})+(1-t)u)}\big{)}w_{+}^{(0)}\big{]} \tag{101}\] In general, the coefficient \(c_{N}\) in the series solution \(z^{\frac{1}{2}-a_{0}}\sum_{k=0}^{\infty}c_{k}z^{k}\) and the connection coefficient for \(w_{+}^{(0)}\) on the right hand side of (100) simultaneously take \(a_{0}=\frac{N}{2}\) as a pole, so the limit \(a_{0}\to\frac{N}{2}\) always becomes a differentiation with respect to \(a_{0}\). For example, for \(a_{0}=1\) we have \[w_{+}^{(1)}=\frac{\Gamma(1+2a_{1})}{2\Gamma(-\frac{1}{2}+a_{1}+ a)\Gamma(-\frac{1}{2}+a_{1}-a)}e^{(\frac{1}{2}\partial_{a_{1}}-\frac{1}{2} \partial_{a_{0}})F}\big{[}-\frac{1}{(2-2a_{0})c_{2}|_{a_{0}=1}}w_{-}^{(0)}\] \[+(2\psi(1)+\frac{5}{2}-\frac{1}{2}\psi(-\frac{1}{2}+a_{1}+a)- \frac{1}{2}\psi(-\frac{1}{2}+a_{1}-a)-\frac{1}{2}\psi(\frac{3}{2}+a_{1}+a)- \frac{1}{2}\psi(\frac{3}{2}+a_{1}-a)\] \[+\frac{1}{2}\partial_{a_{0}}^{2}F+\frac{\partial_{a_{0}}((2-2a_{ 0})c_{2})}{2(2-2a_{0})c_{2}}|_{a_{0}=1})w_{+}^{(0)}\big{]} \tag{102}\] We use Mathematica to compute the connection relation in the degenerate case for higher values of \(N\).
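Symbolic manipulations of this kind are readily scripted. As a small self-contained illustration in the same spirit (our own sketch, in Python/sympy rather than Mathematica), the following verifies the statement of Section 2 that the coordinate change (9) brings the planar black hole metric (8) into Fefferman-Graham form, with \(\rho_{0}=1\):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
rho = r / sp.sqrt(1 + r**4 / 4)   # eq. (9) with rho_0 = 1
f = 1 - rho**4                     # blackening factor in eq. (8)

# radial term: (1/rho^2) f^{-1} (d rho/d r)^2 dr^2 should become dr^2/r^2
print(sp.simplify(sp.diff(rho, r)**2 / (rho**2 * f) - 1/r**2))  # -> 0

# FG data g_ij(r), defined by ds^2 = dr^2/r^2 + g_ij(r, x)/r^2 dx^i dx^j
g_tt = sp.simplify(r**2 * f / rho**2)   # = (1 - r^4/4)^2 / (1 + r^4/4)
g_xx = sp.simplify(r**2 / rho**2)       # = 1 + r^4/4

# the r^4 coefficients give g^{(4)}
print(sp.series(g_tt, r, 0, 5))         # 1 - 3*r**4/4 + O(r**5)
print(sp.expand(g_xx))                  # 1 + r**4/4
```

In units \(\rho_{0}=1\) this gives \(\mathbf{g}^{(4)}_{tt}=-\frac{3}{4}\) and \(\mathbf{g}^{(4)}_{ab}=\frac{1}{4}\delta_{ab}\), which is traceless (\(-\frac{3}{4}+3\cdot\frac{1}{4}=0\)), consistent with the thermal stress tensor obtained via (2.4) on a flat background.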
2303.09632
Conflict Optimization for Binary CSP Applied to Minimum Partition into Plane Subgraphs and Graph Coloring
CG:SHOP is an annual geometric optimization challenge and the 2022 edition proposed the problem of coloring a certain geometric graph defined by line segments. Surprisingly, the top three teams used the same technique, called conflict optimization. This technique has been introduced in the 2021 edition of the challenge, to solve a coordinated motion planning problem. In this paper, we present the technique in the more general framework of binary constraint satisfaction problems (binary CSP). Then, the top three teams describe their different implementations of the same underlying strategy. We evaluate the performance of those implementations to vertex color not only geometric graphs, but also other types of graphs.
Loïc Crombez, Guilherme D. da Fonseca, Florian Fontan, Yan Gerard, Aldo Gonzalez-Lorenzo, Pascal Lafourcade, Luc Libralesso, Benjamin Momège, Jack Spalding-Jamieson, Brandon Zhang, Da Wei Zheng
2023-03-16T20:21:48Z
http://arxiv.org/abs/2303.09632v2
# Conflict Optimization for Binary CSP Applied to Minimum Partition into Plane Subgraphs and Graph Coloring

###### Abstract

CG:SHOP is an annual geometric optimization challenge and the 2022 edition proposed the problem of coloring a certain geometric graph defined by line segments. Surprisingly, the top three teams used the same technique, called conflict optimization. This technique has been introduced in the 2021 edition of the challenge, to solve a coordinated motion planning problem. In this paper, we present the technique in the more general framework of binary constraint satisfaction problems (binary CSP). Then, the top three teams describe their different implementations of the same underlying strategy. We evaluate the performance of those implementations to vertex color not only geometric graphs, but also other types of graphs.

## 1 Introduction

The CG:SHOP challenge (Computational Geometry: Solving Hard Optimization Problems) is an annual geometric optimization competition, whose first edition took place in 2019. The 2022 edition proposed a problem called _minimum partition into plane subgraphs_. The input is a graph \(G\) embedded in the plane with edges drawn as straight line segments, and the goal is to partition the set of edges into a small number of plane graphs (Fig. 1) [6]. This goal can be formulated as a vertex coloring problem on a graph \(G^{\prime}\) defined as follows. The vertices of \(G^{\prime}\) are the segments defining the edges of \(G\), and the edges of \(G^{\prime}\) correspond to pairs of _crossing_ segments (segments that intersect only at a common endpoint are not considered crossing).

The three top-ranking teams (Lasa, Gitastrophe, and Shadoks) on the CG:SHOP 2022 challenge all used a common approach called _conflict optimization_ [7, 26, 3] while the fourth team used a SAT-Boosted Tabu Search [25]. Conflict optimization is a technique used by Shadoks to obtain the first place in the CG:SHOP 2021 challenge for low-makespan coordinated motion planning [4], and the main ideas of the technique lent themselves well to the 2022 challenge. Next, we describe the conflict optimizer as a metaheuristic to solve constraint satisfaction problems (CSP) [29]. We start by describing a CSP. A CSP is a triple of

* _variables_ \(X=(x_{1},\ldots,x_{n})\),
The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP202 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP202 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. The _C_-SHOP2022 instance is a _C_-SHOP2022 instance. 
* _domains_ \(\mathcal{D}=(D_{1},\ldots,D_{n})\), and
* _constraints_ \(\mathcal{R}\).

Each variable \(x_{i}\) must be assigned a _value_ in the corresponding domain \(D_{i}\) such that all constraints are satisfied. In general, the constraints may forbid arbitrary subsets of values. We restrict our attention to a particular type of constraint (_binary CSP_), which only involves pairs of assignments. A _partial evaluation_ is an assignment of a subset of the variables, called _evaluated_, with the remaining variables called _non-evaluated_. All constraints involving a non-evaluated variable are satisfied by default. We only consider evaluations and partial evaluations that satisfy all constraints.

The conflict optimizer iteratively modifies a partial evaluation with the goal of emptying the set \(S\) of non-evaluated variables, at which point it stops. At each step, a variable \(x_{i}\) is removed from \(S\). If there exists a value \(x\in D_{i}\) that satisfies all constraints, then we assign the value \(x\) to the variable \(x_{i}\). Otherwise, we proceed as follows. For each possible value \(x\in D_{i}\), we consider the set \(K(i,x)\) of variables (other than \(x_{i}\)) that are part of constraints violated by the assignment \(x_{i}=x\).
We assign to \(x_{i}\) the value \(x\) that minimizes

\[\sum_{x_{j}\in K(i,x)}w(j),\]

where \(w(j)\) is a weight function to be described later. The variables \(x_{j}\in K(i,x)\) become non-evaluated and are added to \(S\). The weight function should be such that \(w(j)\) increases each time \(x_{j}\) is added to \(S\), in order to avoid loops that keep moving the same variables in and out of \(S\). Let \(q(j)\) be the number of times \(x_{j}\) became non-evaluated. A possible weight function is \(w(j)=q(j)\). More generally, we can have \(w(j)=q(j)^{p}\) for some exponent \(p\) (typically between 1 and 2).

Of course, several details of the conflict optimizer are left open: for example, which element to choose from \(S\), whether some random noise should be added to \(w\), and the decision to restart the procedure from scratch after a certain time. A CSP, as is, does not model optimization problems. However, we can impose a maximum value \(k\) of the objective function in order to obtain a CSP. The conflict optimizer was introduced in a low-makespan coordinated motion planning setting. In that setting, the variables are the robots, the domains are their paths (of length at most \(k\)) and the constraints forbid collisions between two paths. In the graph coloring setting, the domains are the \(k\) colors of the vertices and the constraints forbid adjacent vertices from having the same color. The conflict optimizer can be adapted to non-binary CSP, but in that case multiple variables may be unassigned for a single violated constraint. The strategy has some resemblance to the similarly named _min-conflicts algorithm_ [21], but notable differences are that a partial evaluation is kept instead of an invalid one, and that the weight function changes over time.

While the conflict optimization strategy is simple, there are different ways to apply it to the graph coloring problem. The goal of the paper is to present how the top three teams applied it or complemented it with additional strategies. We compare the relative benefits of each variant on the instances given in the CG:SHOP 2022 challenge. We also compare them to baselines on some instances from graph coloring benchmarks.

The paper is organized as follows. Section 2 presents the details of the conflict optimization strategy applied to graph coloring. In the three sections that follow, the three teams Lasa, Gitastrophe, and Shadoks present the different parameters and modified strategies that they used to make the algorithm more efficient for the CG:SHOP 2022 challenge. The last section is devoted to the experimental results.

### Literature Review

The study of graph coloring goes back to the 4-color problem (1852) and it has been intensively studied since the 1970s (see [14, 17] for surveys). Many heuristics have been proposed [10, 13, 19, 23], as well as exact algorithms [5, 12, 18]. We briefly present two classes of algorithms: greedy algorithms and exact algorithms.

Greedy algorithms. These algorithms are used to find good quality initial solutions in a short amount of time. The classic greedy heuristic considers the vertices in arbitrary order and colors each vertex with the smallest non-conflicting color. The two most famous modern greedy heuristics are _DSATUR_ [2] and _Recursive Largest First_ (_RLF_) [16]. At each step (until all vertices are colored), DSATUR selects the vertex \(v\) that has the largest number of different colors in its neighbourhood. Ties are broken by selecting a vertex with maximum degree. The vertex \(v\) is then colored with the smallest non-conflicting color.
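To make this baseline concrete, here is a minimal Python sketch of DSATUR; the graph representation (`adj`, a dict mapping each vertex to a set of neighbors) and the function name are our own assumptions, not taken from any team's code.

```python
def dsatur(adj):
    """Greedy DSATUR coloring: returns a dict vertex -> color (0, 1, ...)."""
    color = {}
    # saturation[v] = set of colors already used by colored neighbors of v
    saturation = {v: set() for v in adj}
    while len(color) < len(adj):
        # pick the uncolored vertex with maximum saturation, ties by degree
        v = max((u for u in adj if u not in color),
                key=lambda u: (len(saturation[u]), len(adj[u])))
        c = 0  # smallest color not used by a neighbor of v
        while c in saturation[v]:
            c += 1
        color[v] = c
        for u in adj[v]:
            saturation[u].add(c)
    return color
```

This naive selection scan makes the sketch quadratic; production implementations keep the candidates in a priority structure instead.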
_RLF_ searches for a large independent set \(I\), assigns the vertices of \(I\) the same color, removes \(I\) from \(G^{\prime}\), and repeats until all vertices are colored.

Exact algorithms. Some exact methods use a branch-and-bound strategy, for example extending the DSATUR heuristic by allowing it to backtrack [24, 8]. Another type of exact method (branch-and-cut-and-price) decomposes the vertex coloring problem into an iterative resolution of two sub-problems [20, 12, 9]. The "master problem" maintains a small set of valid colors using a set-covering formulation. The "pricing problem" finds a promising new valid coloring by solving a maximum weight independent set problem. Exact algorithms are usually able to find the optimal coloring for graphs with a few hundred vertices. However, even the smallest CG:SHOP 2022 competition instances involve at least a few thousand vertices.

## 2 Conflict Optimization for Graph Coloring

Henceforth, we will only refer to the intersection conflict graph \(G^{\prime}\) induced by the instance. Vertices will refer to the vertices \(V(G^{\prime})\), and edges will refer to the edges \(E(G^{\prime})\). Our goal is to partition the vertices into a minimum set of \(k\) color classes \(\mathcal{C}=\{C_{1},\ldots,C_{k}\}\), where no two vertices in the same color class \(C_{i}\) are incident to a common edge.

### Conflict Optimization

We consider the classical problem of coloring the vertices of a graph \(G^{\prime}=(V(G^{\prime}),E(G^{\prime}))\). We assume that an initial solution \(\mathcal{C}=\{C_{1},\ldots,C_{k}\}\) has been previously computed (the choice of the initial solution does not seem to impact the quality of the final solution produced by the conflict optimizer). The goal of the conflict optimizer is to reduce the number of colors of \(\mathcal{C}\) by one. When (and if) the conflict optimizer terminates, it will give such a solution. However, after a certain amount of time or when a certain situation arises, we may decide to abort the execution of the conflict optimizer without any solution, and perhaps try again. Throughout the execution, we maintain a partial coloring, which is a valid coloring for a subset of the vertices. The complementary subset of uncolored vertices is called the _conflict set_ and denoted \(S\). The conflict optimizer proceeds as follows:

1. Pick a color class \(C_{i}\) to be eliminated. Uncolor all vertices in \(C_{i}\) and set \(S\gets C_{i}\). A valid vertex coloring is maintained for the set \(V(G^{\prime})\setminus S\). If \(S\) is empty, we have a valid vertex coloring of \(G^{\prime}\) which uses one fewer color.
2. Pick and remove an element \(v\) from \(S\). For each color class, compute the _conflict score_ with \(v\). The conflict score of a color class \(C_{j}\) is
\[score(C_{j})=f(C_{j})\sum_{\begin{subarray}{c}u\in C_{j}\\ (u,v)\in E(G^{\prime})\end{subarray}}w(u)\] (1)
where the weight \(w(u)\) depends on the number of times that \(u\) has been removed from the conflict set \(S\) in previous iterations, and \(f(C_{j})\) is a random variable adding randomness to the process.
3. Pick the color class \(C_{j}\) with the lowest conflict score. Uncolor all vertices in \(C_{j}\) which are adjacent to \(v\) and add those vertices to \(S\). This step is slightly modified when the BDFS option detailed later is activated: in that case, the algorithm does not put into the conflict set \(S\) all the vertices in conflict with \(v\); some of them can be recolored easily so that they do not enter \(S\). Insert \(v\) into \(C_{j}\).
4. Repeat steps 2 and 3 until the set \(S\) is empty (a sketch of this loop in code follows).
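The loop above can be condensed into a short Python sketch. This is our own illustration, not any team's code; it assumes the simplest option choices: a FIFO conflict set, \(f(C_{j})=1\), and \(w(u)=1+q(u)^{p}\). Here `adj` maps each vertex to its set of neighbors and `classes` is a list of disjoint vertex sets.

```python
from collections import deque

def eliminate_one_color(classes, adj, p=1.2, max_steps=10**7):
    """Conflict optimizer of Section 2: try to drop one color class.
    Returns the new list of classes, or None if aborted."""
    q = {v: 0 for cls in classes for v in cls}   # q(u): times u was uncolored
    victim = min(classes, key=len)               # step 1: smallest class
    classes = [cls for cls in classes if cls is not victim]
    S = deque(victim)                            # conflict set (FIFO)
    for _ in range(max_steps):
        if not S:
            return classes                       # valid coloring, one color less
        v = S.popleft()                          # step 2: dequeue a vertex
        # step 2: conflict score of each class, with f(C_j) = 1
        best = min(classes,
                   key=lambda cls: sum(1 + q[u] ** p
                                       for u in adj[v] if u in cls))
        for u in adj[v] & best:                  # step 3: uncolor conflicts
            best.remove(u)
            q[u] += 1
            S.append(u)
        best.add(v)
    return None                                  # aborted without a solution
```

A class with no neighbors of \(v\) has score zero and is picked immediately, which matches the "assign a non-conflicting value when one exists" rule of the general CSP formulation.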
The three teams provided different variants of the algorithm by playing with different options of the optimizer.

1. The first option is the choice of the color class \(C_{i}\) to be eliminated at the first step of the loop. It is random for Gitastrophe, and the smallest color class for the Shadoks and Lasa variants.
2. The second option is the way to choose the element \(v\) from \(S\) in step 2: random for Gitastrophe, a FIFO queue for Shadoks, and the element that provides the least total conflict score after its removal for Lasa.
3. The third option is the choice of the weight function \(w(\cdot)\) defined on the vertices. Different functions can be used, all depending on the parameter \(q(u)\), defined as the number of times that a vertex \(u\) has been removed from \(S\). Lasa uses \(w(u)=1+q(u)\). Gitastrophe uses \(w(u)=1+q(u)^{2}\). Shadoks use \(w(u)=1+q(u)^{p}\) with \(p\in[1,2]\). Shadoks also add a threshold \(q_{\max}\) with \(w(u)=\infty\) if \(q(u)>q_{\max}\). Gitastrophe also has such a threshold, but instead uses it as a heuristic to abort the execution and start again.
4. The fourth option is the choice of \(f(C_{j})\). Lasa and Gitastrophe simply set \(f(C_{j})=1\), while Shadoks use a Gaussian random variable with mean 1 for \(f(C_{j})\). The right amount of randomness, controlled by the variance \(\sigma\), has a significant impact on the search time.
5. The fifth option is that Shadoks add a Bounded Depth-First Search (BDFS) option which detects vertices that can be recolored easily. These vertices are recolored immediately, instead of entering \(S\), and consequently do not suffer an increase in the value of \(q(\cdot)\).

Some extra options are useful in order to drive the computation.

* Restart: the computation is restarted from step 2 if the size of the conflict set \(S\) becomes too large, because the coloring of \(V(G^{\prime})\setminus S\) has deteriorated too much to come back to a valid coloring.
* Multistart: Shadoks also use a multistart option to restart from step 1 with a random eliminated color \(C_{i}\) and a color shuffle.

The different parameters, options and complementary strategies used by each team are described in the next three sections.

## 3 Lasa Team

### Finding Initial Solutions

The Lasa team used two approaches to find initial solutions:

1. **DSATUR** is the classical graph coloring algorithm presented in Section 1.
2. **Orientation greedy** is almost the only algorithm where the geometry of the segments is used. If segments are almost parallel, it is likely that they do not intersect (thus forming an independent set). This greedy algorithm first sorts the segments by orientation, ranging from \(-\frac{\pi}{2}\) to \(\frac{\pi}{2}\). For each segment in this order, the algorithm tries to color it using the first available color. If no color is available, a new color is created for the considered segment. This algorithm is efficient, produces interesting initial solutions, and takes into account the specificities of the competition (a sketch is given below).
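A possible reading of this orientation greedy in Python is sketched below; the segment representation as endpoint pairs and the `crosses` predicate are assumptions on our part, not Lasa's actual code.

```python
import math

def orientation_greedy(segments, crosses):
    """segments: list of ((x1, y1), (x2, y2)); crosses(s, t) -> bool.
    Returns a list of color classes (lists of segment indices)."""
    def angle(s):
        (x1, y1), (x2, y2) = s
        a = math.atan2(y2 - y1, x2 - x1)
        # normalize the direction to the half-open range (-pi/2, pi/2]
        return a if -math.pi / 2 < a <= math.pi / 2 else a - math.copysign(math.pi, a)

    order = sorted(range(len(segments)), key=lambda i: angle(segments[i]))
    classes = []
    for i in order:
        for cls in classes:              # first color with no crossing
            if not any(crosses(segments[i], segments[j]) for j in cls):
                cls.append(i)
                break
        else:
            classes.append([i])          # open a new color
    return classes
```

Sorting by angle makes nearly parallel segments consecutive, so the first color classes tend to start out as large independent sets.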
### Conflict Optimization

TABUCOL-inspired neighbourhood. One classical approach for vertex coloring involves allowing solutions with conflicting vertices (two adjacent vertices with the same color). It was introduced in 1987 [13] and called TABUCOL. It starts with an initial solution, removes a color (usually the one with the least number of vertices), and assigns the uncolored vertices a new color among the remaining ones. This is likely to lead to some conflicts (_i.e._ two adjacent vertices sharing a same color). The local search scheme selects a conflicting vertex and tries to swap its color, choosing the new coloring that minimizes the number of conflicts. If it reaches a state with no conflict, it provides a solution with one color less than the initial solution. The process is repeated until the stopping criterion is met. While the original TABUCOL algorithm includes a "tabu-list" mechanism to avoid cycling, it is not always sufficient and requires some hyper-parameter tuning in order to obtain good performance on a large variety of instances. To overcome this issue, we use this neighbourhood, but replace the "tabu-list" by the conflict optimizer scheme presented above.

PARTIALCOL-inspired neighbourhood. PARTIALCOL, another local search algorithm for the vertex coloring problem, was introduced in 2008 [1]. This algorithm proposes a local search scheme that allows partial colorings (thus allowing uncolored vertices). The goal is to minimize the number of uncolored vertices. Similarly to TABUCOL, PARTIALCOL starts with an initial solution, removes one color (unassigning its vertices), and performs local search iterations until no vertex is left uncolored. When coloring a vertex, the adjacent conflicting vertices are uncolored. Then, the algorithm repeats the process until all vertices are colored, or the stopping criterion is met. This neighbourhood was also introduced alongside a tabu-search procedure; the tabu-search scheme is again replaced by a conflict-optimization scheme. Note that this neighbourhood was predominantly used by the other teams.

## 4 Gitastrophe

### Solution Initialization

The Gitastrophe team uses the traditional greedy algorithm of Welsh and Powell [30] to obtain initial solutions: order the vertices in decreasing order of degree, and assign each vertex the minimum-label color not used by its neighbors. During the challenge, Gitastrophe attempted to use different orderings for the greedy algorithm, such as sorting by the slope of the line segment associated with each vertex (as in the orientation greedy initialization presented in Section 3), and also tried numerous other strategies. Ultimately, after running the solution optimizer for approximately the same amount of time, all initializations resulted in an equal number of colors.

### Modifications to the Conflict Optimizer

Taking inspiration from memetic algorithms, which alternate between an intensification and a diversification stage, the algorithm continually switches between a phase using the above conflict score and one minimizing only the number of conflicts. Thus, during the conflict-minimization phase, the random variable \(f(C_{j})\) and the weights \(w(u)\) are both fixed to 1, leading to the conflict score

\[score(C_{j})=\sum_{u\in C_{j},(u,v)\in E(G^{\prime})}1.\]

Each phase lasted for \(10^{5}\) iterations. Adding the conflict-minimization phase gave minor improvements on some of the challenge instances.

## 5 Shadoks

In this section, we describe the choices used by the Shadoks team for the options described in Section 2.1.

Option (a). The Shadoks generally chose to eliminate the color class with the smallest number of elements.
However, if the multistart option is toggled on, then a random color is used each time.

Option (b). The conflict set \(S\) is stored in a queue. The Shadoks tried other strategies, but found that the queue gives the best results.

Option (c). The weight function used is \(w(u)=1+q(u)^{p}\), mostly with \(p=1.2\). The effect of the parameter \(p\) is shown in Fig. 2. Notice that in all figures, the number of colors shown is the average of ten executions of the code using different random seeds. If \(q(u)\) is larger than a threshold \(q_{\max}\), the Shadoks set \(w(u)=\infty\) so that the vertex \(u\) never reenters \(S\). If at some point an uncolored vertex \(v\) is adjacent to some vertex \(u\) of infinite weight in every color class, then the conflict optimizer is restarted. When restarting, the initial coloring is shuffled by moving some vertices from their initial color class to a new one.

Figure 2: Number of colors over time for the instance vispecn13806 using different values of \(p\). The algorithm uses \(\sigma=0.15\), easy vertices, \(q_{\max}=59022\), but does not use the BDFS nor any clique.

Looking at Fig. 3, the value of \(q_{\max}\) does not seem to have much influence as long as it is not too small. Throughout the challenge the Shadoks almost exclusively used \(q_{\max}=2000\cdot(75000/m)^{2}\), where \(m\) is the number of vertices. This value roughly ensures a restart every few hours.

Figure 3: Number of colors over time with different values of \(q_{\max}\) obtained on the instance vispecn13806. Parameters are \(\sigma=0.15\), \(p=1.2\), no clique knowledge, and no BDFS.

If the clique option is toggled on, each vertex \(u\) in the largest known clique has \(w(u)=\infty\). The impact of the clique option on the computation is shown in Fig. 4. The idea is that since each vertex of the clique must have a different color, it is useless to change their colors; the algorithm works by recoloring the other vertices. During the challenge, the Shadoks used several methods to produce large cliques, including simulated annealing and mixed integer programming.

Figure 4: Number of colors over time with and without clique knowledge and BDFS obtained on the instance vispecn13806. Parameters are \(\sigma=0.15\), \(p=1.2\), and \(q_{\max}=1500000\).

Option (d). The Shadoks use the function \(f\) as a Gaussian random variable of mean 1 and variance \(\sigma\). A good default value is \(\sigma=0.15\). The effect of the variance is shown in Fig. 5. Notice that setting \(\sigma=0\) gives much worse results.

Option (e). The goal of BDFS is to further optimize very good solutions that the conflict optimizer is not able to improve otherwise. Fig. 4 shows the influence of BDFS. While the advantages of BDFS cannot be noticed in this figure, its use near the end of the challenge improved about 30 solutions. The _bounded depth-first search_ (BDFS) algorithm tries to improve the dequeuing process. The goal is to prevent a vertex in conflict with some adjacent colored vertices from entering the conflict set. At the first level, the algorithm searches for a recoloring of some adjacent vertices which allows us to directly recolor the conflict vertex. If no solution is found, the algorithm could recolor some vertices at larger distances from the conflict vertex. To do so, a local search is performed by trying to recolor vertices at a bounded distance from the conflict vertex in the current partial solution. The BDFS algorithm has two parameters: _adjacency bound_ \(a_{\max}\) and _depth_ \(d\). In order to recolor a vertex \(v\), BDFS gets the set \(\mathcal{C}\) of color classes with at most \(a_{\max}\) neighbors of \(v\). If a class \(C\in\mathcal{C}\) has no neighbor of \(v\), then \(v\) is assigned to \(C\). Otherwise, for each class \(C\in\mathcal{C}\), BDFS tries to recolor the vertices in \(C\) which are adjacent to \(v\) by recursively calling itself with depth \(d-1\). At depth \(d=0\) the algorithm stops trying to color vertices. During the challenge the Shadoks used BDFS with parameters \(a_{\max}=3\) and \(d=3\). The depth was increased to 5 (resp. 7) when the number of vertices in the queue was 2 (resp. 1).
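The recursion can be sketched as follows in Python. This is our own reading of the description above (in particular, the snapshot-based backtracking is an assumption), not the Shadoks' implementation; `color` is a dict from vertices to colors representing the current valid partial coloring.

```python
def bdfs(v, color, adj, k, a_max, depth):
    """Try to (re)color vertex v using at most `depth` levels of recursion.
    Returns True on success; on failure, `color` is left unchanged."""
    if v in color:
        return True
    if depth == 0:
        return False
    for c in range(k):
        conflicts = [u for u in adj[v] if color.get(u) == c]
        if len(conflicts) > a_max:
            continue                            # class not in the set C
        if not conflicts:
            color[v] = c                        # free class: color v directly
            return True
        saved = dict(color)                     # snapshot for backtracking
        for u in conflicts:
            del color[u]
        color[v] = c                            # v takes class c tentatively
        if all(bdfs(u, color, adj, k, a_max, depth - 1) for u in conflicts):
            return True
        color.clear()
        color.update(saved)                     # undo this failed attempt
    return False
```

The invariant is that `color` is always a valid partial coloring: a vertex is assigned a class only after every conflicting neighbor has been removed from it.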
Degeneracy order. Given a target number of colors \(k\), we call _easy vertices_ a set of vertices \(Y\) such that, if the remainder of the vertices of \(G^{\prime}\) are colored using \(k\) colors, then we are guaranteed to be able to color all vertices of \(G^{\prime}\) with \(k\) colors. Such a set is obtained using a degeneracy order. To obtain \(Y\), we iteratively remove from the graph a vertex \(v\) that has at most \(k-1\) neighbors, appending \(v\) to the end of \(Y\). We repeat until no other vertex can be added to \(Y\). Notice that, once we color the remainder of the graph with at least \(k\) colors, we can greedily color \(Y\) in order from last to first without increasing the number of colors used. Removing the easy vertices reduces the total number of vertices, making the conflict optimizer more effective. The Shadoks always toggle this option on (the challenge instances contain from 0 to 23% easy vertices).
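A short Python sketch of this peeling procedure, under the same `adj` representation as before (our own illustration):

```python
from collections import deque

def easy_vertices(adj, k):
    """Iteratively peel vertices with at most k-1 remaining neighbors.
    Returns the list Y in removal order."""
    deg = {v: len(adj[v]) for v in adj}
    queue = deque(v for v in adj if deg[v] <= k - 1)
    removed = set(queue)
    Y = []
    while queue:
        v = queue.popleft()
        Y.append(v)
        for u in adj[v]:
            deg[u] -= 1
            if u not in removed and deg[u] <= k - 1:
                removed.add(u)
                queue.append(u)
    return Y
```

Greedily coloring the peeled vertices from last to first always succeeds because each one has at most \(k-1\) colored neighbors at the moment it is colored.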
## 6 Results

We provide the results of the experiments performed with the code from the three teams on two classes of instances. First, we present the results on some selected CG:SHOP 2022 instances. These instances are intersection graphs of line segments. Second, we execute the code on graphs that are not intersection graphs, namely the classic DIMACS graphs [15], comparing the results of our conflict optimizer implementations to previous solutions. The source code for the three teams is available at:

* Lasa: [https://github.com/librallu/dogs-color](https://github.com/librallu/dogs-color)
* Gitastrophe: [https://github.com/jacketsj/cgshop2022-gitastrophe](https://github.com/jacketsj/cgshop2022-gitastrophe)
* Shadoks: [https://github.com/gfonsecabr/shadoks-CGSHOP2022](https://github.com/gfonsecabr/shadoks-CGSHOP2022)

Figure 5: Number of colors over time for the instance vispecn13806 for different values of \(\sigma\). In both figures the algorithm uses \(p=1.2\), easy vertices, \(q_{\max}=59022\), but does not use the BDFS nor any clique. For \(\sigma\geq 0.25\), no solution better than 248 colors is found.

### CG:SHOP 2022 Instances

We selected 14 instances (out of 225) covering the different types of instances given in the CG:SHOP 2022 challenge. The results are presented in Table 1. For comparison, we executed the HEAD [22] code on some instances using the default parameters. The table shows the smallest number of colors for which HEAD found a solution. We ran HEAD repeatedly for 1 hour for each target number of colors on a single CPU core (the HEAD solver takes the target number of colors as a parameter, and we increased this parameter one by one). At the end of the challenge, 8 colorings computed by Lasa, 11 colorings computed by Gitastrophe, and 23 colorings computed by Shadoks out of 225 instances had been proved optimal (their number of colors is equal to the size of a clique).

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline Instance & Clique & Best & HEAD [22] & Lasa & Gitastrophe & Shadoks \\ \hline rvisp5013 & 46 & **49** & 59 & 50 & **49** & **49** \\ rsqrpecn8051 & 173 & **175** & 207 & 177 & 176 & **175** \\ vispecn13806 & 77 & **218** & 283 & 224 & 221 & **218** \\ rsqrp14364 & 134 & **136** & 174 & 137 & 137 & **136** \\ vispecn19370 & 169 & **192** & 266 & 197 & 194 & **192** \\ rvisp24116 & 97 & **104** & 166 & 110 & 105 & **104** \\ visp26405 & 78 & **81** & 112 & 83 & **81** & **81** \\ sqrp28863 & **190** & **190** & 297 & 191 & 191 & **190** \\ visp38574 & 118 & **133** & 199 & 138 & 134 & **133** \\ sqrpecn45700 & 460 & **462** & & 465 & 465 & **462** \\ reecn51526 & 308 & **310** & & 315 & 312 & **310** \\ vispecn58391 & 305 & **367** & & 380 & 369 & **367** \\ vispecn65831 & 357 & **439** & & 453 & 440 & **439** \\ sqrp72075 & 264 & **269** & & 272 & 271 & **269** \\ \hline \end{tabular}
\end{table}
Table 1: Several CG:SHOP 2022 results. We compare the size of the largest known clique to the smallest coloring found by each team on a selection of 14 CG:SHOP 2022 instances.

In order to compare the efficiency of the algorithms, we executed the different implementations on the CG:SHOP instance vispecn13806. The edge density of this graph is 19%, the largest clique that we found has 177 vertices, and the best coloring found during the challenge uses 218 colors. Notice that vispecn13806 is the same instance used in the other Shadoks experiments in Section 5. Notice also that the HEAD algorithm provides 283 colors after one hour, compared to less than 240 colors for the conflict optimizers. We ran the three implementations on three different servers and compared the results, shown in Figure 6. For each implementation, the \(x\) coordinate is the running time in hours, while the \(y\) coordinate is the smallest number of colors found at that time.

Figure 6: Number of colors over time (in hours) for the instance vispecn13806.

### Results on DIMACS Graphs

We tested the implementation of each team on the DIMACS instances [15] to gauge the performance of the conflict optimizer on other classes of graphs. We compared our results to the best known bounds and to the state-of-the-art coloring algorithms HEAD [22] and QACOL [27, 28]. The time limit for Lasa's algorithms is 1 hour. CWLS is Lasa's conflict optimizer with the neighbourhood presented in TABUCOL [13], while PWLS is the optimizer with the neighbourhood presented in PARTIALCOL [1]. The Gitastrophe algorithm ran for 10 minutes, after which the number of colors no longer decreased. The Shadoks algorithm ran for 1 hour without the BDFS option (results with BDFS are worse).

Results are presented in Table 2. We only kept the difficult DIMACS instances; for the other instances, all the results match the best known bounds. The DIMACS instances have comparatively few edges (on the order of thousands or millions); the largest intersection graphs considered in the CG:SHOP challenge had over 1.5 billion edges. We notice that the conflict optimizer works extremely poorly on random graphs, but it is fast and appears to perform well on geometric graphs (r250.5, r1000.1c, r1000.5, dsjr500.1c and dsjr500.5), matching the best-known results [11]. Interestingly, these geometric graphs are not intersection graphs as in the CG:SHOP challenge, but are generated based on a distance threshold. On the DIMACS graphs, the Lasa implementation shows better performance than the other implementations.
## 7 Acknowledgments

We would like to thank the challenge organizers and other competitors for their time and feedback, and for making this whole event possible. The Shadoks would like to thank Helene Toussaint, Raphael Amato, Boris Lonjon, and William Guyot-Lenat from LIMOS, as well as the Qarma and TALEP teams and Manuel Bertrand from LIS, who continue to make the computational resources of the LIMOS and LIS clusters available to our research. The work of Loic Crombez has been sponsored by the French government research program "Investissements d'Avenir" through the IDEX-ISITE initiative 16-IDEX-0001 (CAP 20-25). The work of Guilherme D. da Fonseca is supported by the French ANR PRC grant ADDS (ANR-19-CE48-0005). The work of Yan Gerard is supported by the French ANR PRC grants ADDS (ANR-19-CE48-0005), ACTIVmap (ANR-19-CE19-0005) and by the French government IDEX-ISITE initiative 16-IDEX-0001 (CAP 20-25). The work of Aldo Gonzalez-Lorenzo is supported by the French ANR PRC grant COHERENCE4D (ANR-20-CE10-0002). The work of Pascal Lafourcade is supported by the French ANR PRC grants MobiS5 (ANR-18-CE39-0019), DECRYPT (ANR-18-CE39-0007), SEVERITAS (ANR-20-CE39-0005) and by the French government IDEX-ISITE initiative 16-IDEX-0001 (CAP 20-25). The work of Luc Libralesso is supported by the French ANR PRC grant DECRYPT (ANR-18-CE39-0007).
2308.12606
A Greedy Approach for Offering to Telecom Subscribers
Customer retention or churn prevention is a challenging task for a telecom operator. One of the effective approaches is to offer some attractive incentive or additional services or money to the subscribers to keep them engaged and make sure they stay in the operator's network for a longer time. Often, operators allocate a certain amount of monetary budget to carry out the offer campaign. The difficult part of this campaign is the selection of a set of customers from a large subscriber-base and deciding the amount that should be offered to an individual so that the operator's objective is achieved. There may be multiple objectives (e.g., maximizing revenue, minimizing the number of churns) for the selection of subscribers and the selection of an offer for each selected subscriber. Apart from monetary benefits, offers may include additional data, SMS, hot-spot tethering, and many more. This problem is known as offer optimization. In this paper, we propose a novel combinatorial algorithm for solving offer optimization under heterogeneous offers by maximizing expected revenue under the scenario of subscriber churn, which is, in general, seen in the telecom domain. The proposed algorithm is efficient and accurate even for a very large subscriber-base.
Piyush Kanti Bhunre, Tanmay Sen, Arijit Sarkar
2023-08-24T07:11:51Z
http://arxiv.org/abs/2308.12606v1
# A Greedy Approach for Offering to Telecom Subscribers

###### Abstract

Customer retention or churn prevention is a challenging task for a telecom operator. One of the effective approaches is to offer some attractive incentive or additional services or money to the subscribers to keep them engaged and make sure they stay in the operator's network for a longer time. Often, operators allocate a certain amount of monetary budget to carry out the offer campaign. The difficult part of this campaign is the selection of a set of customers from a large subscriber-base and deciding the amount that should be offered to an individual so that the operator's objective is achieved. There may be multiple objectives (e.g., maximizing revenue, minimizing the number of churns) for the selection of subscribers and the selection of an offer for each selected subscriber. Apart from monetary benefits, offers may include additional data, SMS, hot-spot tethering, and many more. This problem is known as offer optimization. In this paper, we propose a novel combinatorial algorithm for solving offer optimization under heterogeneous offers by maximizing expected revenue under the scenario of subscriber churn, which is, in general, seen in the telecom domain. The proposed algorithm is efficient and accurate even for a very large subscriber-base.

## I Introduction

Offer optimization or campaign management is one of the routine tasks of a telecom operator for customer retention and service adoption. A telecom operator always looks out for potential subscribers who may be interested in adopting a certain service in exchange for an incentive from the operator. To achieve a business objective, the operator's task is to identify a set of subscribers and a set of appropriate offers that may be accepted by the chosen subscribers. Most of the time, this decision is made through a rule-based system which implements certain business rules and intuitive human judgment. Resende and Pardalos [2008] describe varieties of optimization problems arising in telecommunication systems. Johnson et al. [2013] formulate a constrained optimization problem for maximizing the profit to a manufacturer by giving discounts to the customers. Pham et al. [2021] proposed an offer recommender system for telecommunications. Cohen [2004] addressed the problem of a bank's marketing campaign optimization by solving a mixed integer programming (MIP) problem. Nobibon et al. [2011] proposed a branch-and-price algorithm to allocate one or more offers to the clients. Verma [2020] proposed a two-stage framework for retail offer optimization: first, a generalized non-linear model based on a temporal convolutional network is exploited to estimate item purchase probabilities; second, offer values are optimized by solving a constraint-based optimization problem with the derived purchase probabilities.

In this paper, we propose a novel greedy algorithm for solving this problem by maximizing expected revenue under the scenario of subscriber churn from the network. In this variant of the offer optimization problem, apart from a monetary incentive given to the subscribers, the operator may award some other offers to the subscribers, such as an extra amount of data, talk time, an increased limit on hot-spot data usage, unlimited data usage in certain time periods of a day or week, downloads, etc. The objective is to allocate appropriate offers to the subscribers who are most likely to accept the offers and stay active in the network for a longer time period.
The problem under consideration will be referred to as an _Offer Optimization Problem_ (OOP). The contribution of our work is summarized below.

* We have proposed a novel algorithm for finding an optimum solution to the underlying optimization problem, which is our main contribution. The proposed algorithm is able to handle a large number of subscribers. It provides an optimal solution efficiently and outperforms several existing algorithms, which is evident from the comparison given in the experiment section. We also provide a theoretical argument for why our algorithm provides accurate and efficient solutions.
* The algorithm is not limited to offer optimization in the telecom domain, but is also applicable to similar problems coming from various other domains.

The rest of the paper is organized as follows. In Section II, we present the mathematical problem formulation which leads to solving the telecom OOP. The optimization algorithm, along with the necessary data structures, complexity analysis and relevant discussions, is presented in Section III. Section IV is furnished with experimental results and comparisons with a couple of standard algorithms to validate the novelty and merit of the proposed algorithm. Finally, we conclude in Section V.

## II Problem Formulation

First we shall formulate the OOP under incentives with different denominations. The same formulation is enough for solving the more general problem with heterogeneous offers, where an offer may consist of monetary or non-monetary services or advantages to the subscribers. This generalization can be done by a simple transformation or interpretation of the non-monetary offers, which we shall discuss in a later part of this section. Here, we assume that any of the existing subscribers may churn out from the network with some probability. Further, each subscriber or a group of subscribers has some susceptibility towards an offer, which will be referred to as the offer acceptance rate. The offer acceptance rate is used to model the probability of accepting an offer by a subscriber, which is incorporated in the objective function. Suppose \(S\) is a set of \(n\) subscribers, and the \(i^{\text{th}}\) subscriber possesses the following characteristics:

* \(p_{i}\): monthly top-up done by the subscriber,
* \(\alpha_{i}\): probability that the subscriber may churn out,
* \(\gamma_{i}\): acceptance rate, a parameter that defines the sensitivity of the \(i^{\text{th}}\) subscriber towards an offer,
* \(\beta_{i}\): probability of accepting an offer, which depends on \(\gamma_{i}\).

The parameters \(\alpha_{i}\) are estimated by using ML-based models and the subscriber's profile and usage data. The acceptance rates \(\gamma_{i}\) are estimated from data related to past offer campaigns and acceptances, as well as subscriber profiles. The monthly top-up amounts \(p_{i}\) come from the rate plan and top-up history of the subscribers. Note that, for a large segment of subscribers, if the amount offered to a subscriber is increased, then it is more likely that the subscriber will accept the offer. So, the probability of offer acceptance by a subscriber is modeled by an exponential distribution whose rate is the acceptance rate: the acceptance probability is expressed as \(\beta_{i}=1-e^{-\gamma_{i}x_{i}}\), where \(x_{i}\) is the incentive offered to the subscriber. Revenue can be generated from a subscriber only if the customer does not churn out.
Note that a revenue \(p_{i}\) is generated from subscriber \(i\) with certainty only when the subscriber does not churn out. In the churn scenario, the subscriber stays in the network with probability \(1-\alpha_{i}\) and pays \(p_{i}\). So, the expected revenue from the \(i^{\text{th}}\) subscriber is \((1-\alpha_{i})p_{i}\). Now consider the case when an offer is made to the subscriber, who may accept the offer with probability \(\beta_{i}\) or reject it with probability \(1-\beta_{i}\). If he does not accept the offer, the revenue is \((1-\alpha_{i})p_{i}\) with probability \(1-\beta_{i}\). If he accepts \(x_{i}\) as an offer, then the revenue is \(p_{i}-x_{i}\) with probability \(\beta_{i}\). So, the expected revenue from the \(i^{\text{th}}\) subscriber is

\[f(x_{i};\alpha_{i},\gamma_{i},p_{i})=\beta_{i}(p_{i}-x_{i})+(1-\beta_{i})(1-\alpha_{i})p_{i} \tag{1}\]

The possible values of the offers may be non-negative integers or some predefined discrete values. Suppose there are \(k\) different types of offers, and the \(j^{\text{th}}\) type has an offer value \(\delta_{j}\) and can be awarded to \(n_{j}\) subscribers. So, the possible discrete offer denominations are \(\{\delta_{1},\delta_{2},\dots,\delta_{k}\}\). Then the total value of type \(j\) offers is \(w_{j}=\delta_{j}n_{j}\), the total number of offers is \(K=n_{1}+n_{2}+\dots+n_{k}\), and hence the total budget is \(W=w_{1}+w_{2}+\dots+w_{k}\). Further, the unknown offer value \(x_{i}\) for the \(i^{\text{th}}\) subscriber can be represented as \(x_{i}=\sum_{j=1}^{k}\delta_{j}x_{i,j}=\delta\cdot\mathbf{x}_{i}\), where the vectors are \(\delta=[\delta_{1},\delta_{2},\cdots,\delta_{k}]\) and \(\mathbf{x}_{i}=[x_{i,1},x_{i,2},\cdots,x_{i,k}]\), and each \(x_{i,j}\) is a binary variable such that \(x_{i,j}=1\) implies the \(j^{\text{th}}\) offer is selected for the \(i^{\text{th}}\) subscriber. Note that the revenue function can be written as \(f(\delta\cdot\mathbf{x}_{i};\alpha_{i},\gamma_{i},p_{i})=f(x_{i};\alpha_{i},\gamma_{i},p_{i})=\beta_{i}(p_{i}-x_{i})+(1-\beta_{i})(1-\alpha_{i})p_{i}\). Then the offer optimization problem can be stated as follows:

\[\max F(\mathbf{x})=\sum_{i=1}^{n}f(\delta\cdot\mathbf{x}_{i};\alpha_{i},\gamma_{i},p_{i})\]
S.t.
\[\sum_{j=1}^{k}x_{i,j}\leq 1,\ \forall i=1,2,\dots,n \tag{2}\]
\[\sum_{i=1}^{n}x_{i,j}\leq n_{j},\ \forall j=1,2,\dots,k\]
\[x_{i,j}\in\{0,1\},\ \forall\ i\ \&\ j\]

where the unknown variables are given by \(\mathbf{x}=[\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{n}]^{T}\), \(\mathbf{x}_{i}=[x_{i,1},x_{i,2},\cdots,x_{i,k}]\), and the acceptance probability is computed as \(\beta_{i}=1-e^{-\gamma_{i}\sum_{j=1}^{k}\delta_{j}x_{i,j}}\). However, this probability may be estimated by using different types of distributions, and the algorithm is equally applicable for optimizing the resulting function \(F(\mathbf{x})\). A subscriber may receive at most one of the possible offers, which is enforced by the first constraint of the optimization problem. The second constraint ensures that the number of offers of a specific type never exceeds the number of available offers of that type. The third constraint implies that the decision variables are binary. In our problem formulation, the restrictions on the budgets for each type of offer (i.e., the \(w_{j}\)'s) and the total budget (\(W\)) are implicit and ensured by the given constraints. Hence, we have ignored additional constraints such as \(\sum_{i=1}^{n}\delta_{j}x_{i,j}\leq w_{j},\ \forall j=1,2,\cdots,k\) and \(\sum_{j=1}^{k}\sum_{i=1}^{n}\delta_{j}x_{i,j}\leq W\) in the optimization. Note that if the number of subscribers \(n\) is large (i.e., in the order of millions), the number of binary variables \(nk\) is very large. Naturally, it is very difficult to solve such a problem efficiently and accurately. In this study, we propose an efficient greedy algorithm for solving this problem.
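For concreteness, Eq. (1) can be evaluated in a couple of lines of Python; the function name and signature are ours, not the paper's.

```python
import math

def expected_revenue(x, alpha, gamma, p):
    """Expected revenue f(x; alpha, gamma, p) from one subscriber, Eq. (1).
    x: incentive offered, alpha: churn probability,
    gamma: acceptance rate, p: monthly top-up."""
    beta = 1.0 - math.exp(-gamma * x)            # acceptance probability
    return beta * (p - x) + (1.0 - beta) * (1.0 - alpha) * p
```

For \(x=0\) we get \(\beta_{i}=0\), so the function reduces to the no-offer baseline \((1-\alpha_{i})p_{i}\); for example, `expected_revenue(0, 0.3, 0.05, 100)` returns `70.0`.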
## III Proposed Algorithm

The problem stated in Sec. II is a very large scale optimization problem, as the number of subscribers may be in the order of millions. Solving this problem by a general optimization technique is, in general, difficult and inefficient. Here we propose a greedy algorithm which is simple, elegant, and efficient for finding an optimal solution to the problem. A greedy algorithm finds an optimal solution based on some "local optimality criterion" that leads to a globally optimal solution. We would like to maximize the objective function by offering to one subscriber at a time, where the subscriber is selected by a greedy choice. Let us take a closer look at the objective function \(F(\mathbf{x})=\sum_{i}f(x_{i};\alpha_{i},\gamma_{i},p_{i})\), where each \(f(x_{i};\alpha_{i},\gamma_{i},p_{i})\) is a non-negative function defined on a discrete set of values. We expect to maximize each of these functions in order to maximize \(F(\mathbf{x})\). Here we adopt the following greedy choice:

Greedy Choice / Selection: _Choose a subscriber \(i\) and an offer value \(\delta_{j}\) from the available set of offers such that \(f(\delta_{j};\alpha_{i},\gamma_{i},p_{i})\) is maximum among all available alternatives._

So, every time we choose the pair that provides the maximum revenue, and we continue till there are no more offers or all subscribers have received an offer. Suppose initially we have the set of subscribers \(S=\{1,2,\cdots,n\}\) to whom the offers will be made, and let \(A\), initialized to \(A=\emptyset\), denote the set of subscribers who have already received an offer. We consider \(k\) buckets, each containing a set of offers with the same value. Our approach is to select a subscriber \(i\) from the set of unassigned subscribers such that \(f(x_{i};\alpha_{i},\gamma_{i},p_{i})\) is maximum for some offer, say \(x_{i}=\delta_{j}\). This offer can be written as the vector \(\mathbf{x}_{i}=[x_{i,1}=0,x_{i,2}=0,\cdots,x_{i,j}=1,x_{i,j+1}=0,\cdots,x_{i,k}=0]\), with \(x_{i}=\mathbf{x}_{i}\cdot\delta\). As soon as we find such a subscriber \(i\) and an offer \(\delta_{j}\), we assign the offer to subscriber \(i\), remove \(i\) from \(S\), remove the offer from the \(j^{\text{th}}\) bucket, and put \((i,j)\) in \(A\). Next, from the remaining set of subscribers in \(S\), we choose a subscriber \(l\) and an offer \(\delta_{m}\) from the available set of offers such that the revenue function of the subscriber is maximum, i.e., \(f(x;\alpha_{l},\gamma_{l},p_{l})\) is maximum at \(x=\delta_{m}\). Then we remove the offer from the \(m^{\text{th}}\) bucket and the subscriber \(l\) from \(S\), and put \((l,m)\) into \(A\). This process is continued until all subscribers are assigned an offer or all the offer buckets are empty. If the buckets are empty before \(S\) is empty, then the remaining subscribers will not receive any offer. For a unified interpretation, we assume each unassigned subscriber receives a zero offer.
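Before introducing the efficient data structures below, the greedy choice can be prototyped naively in a few lines of Python. This is our own sketch (quadratic in the number of selections), reusing `expected_revenue` from the snippet above.

```python
def greedy_offer_naive(alpha, gamma, p, deltas, counts):
    """Assign offers greedily; deltas[j] is the value of offer type j and
    counts[j] the number of such offers. Returns a list of (i, j) pairs."""
    n, k = len(alpha), len(deltas)
    remaining = set(range(n))
    counts = list(counts)                        # local, mutable copy
    A = []
    while remaining and sum(counts) > 0:
        # greedy choice: best (subscriber, offer type) pair still available
        i, j = max(((i, j) for i in remaining
                    for j in range(k) if counts[j] > 0),
                   key=lambda ij: expected_revenue(deltas[ij[1]],
                                                   alpha[ij[0]],
                                                   gamma[ij[0]], p[ij[0]]))
        A.append((i, j))
        remaining.remove(i)
        counts[j] -= 1
    return A
```

Each iteration scans all \(O(nk)\) remaining pairs, so this reference version runs in \(O(n^{2}k)\) time; the priority-queue machinery described next brings it down to \(O(kn\log n)\).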
### _Data Structures_

For an efficient implementation of the proposed algorithm, we need appropriate data structures. For a speedy execution of the greedy choice, we exploit priority queues of the revenue values \(f(x;\alpha,\gamma,p)\) of the subscribers. In order to make a greedy choice for a subscriber and an offer, we need to determine \(\max_{j}\{\max_{i}f(\delta_{j};\alpha_{i},\gamma_{i},p_{i})\}\). For a given offer type \(j\), we construct a _max-priority queue_ \(Q_{j}\) of subscribers with priority values equal to the revenue from the subscriber. So, the priority value for the \(i^{\text{th}}\) subscriber is \(f(\delta_{j};\alpha_{i},\gamma_{i},p_{i})\). Since there are \(k\) possible types of offers, we maintain \(k\) priority queues, namely \(Q_{1},Q_{2},\ldots,Q_{k}\). Each of these priority queues is implemented by using a binary heap, which is a complete binary tree stored in an array. The space complexity of a binary heap or priority queue is \(O(n)\), and it can be constructed in \(O(n)\) time, where \(n\) is the number of elements. The front element, or max-priority element, of a priority queue lies at the root of the underlying binary heap, and hence it can be found in \(O(1)\) time. Since deletion of an element from the queue requires \(O(\log n)\) time in an \(n\)-element queue, an additional \(O(\log n)\) time is required to maintain the queue after removal of the max-priority element. We also maintain an array \(L\) of size \(k\) that stores references to the front elements of the priority queues \(Q_{1},Q_{2},\ldots,Q_{k}\). At any point of time, we simply scan this array to find the element (i.e., the subscriber) having the maximum priority value among all front elements of the queues. So, finding the maximum of maximums can be done in \(O(k)\) time. Once this maximum is found, the corresponding element is deleted from the queues and the array \(L\) is also updated accordingly. For details on binary heaps and priority queues and their properties, please refer to Cormen et al. [2009].

We also use a lookup table for performing deletion of an element from the \(k\) priority queues efficiently. The greedy choice determines a subscriber \(i\) and an offer type \(j\) such that subscriber \(i\) lies at the front of \(Q_{j}\). Deleting the corresponding queue element from \(Q_{j}\) is easy and efficient, as we know the position of the element in \(Q_{j}\). However, for the subscriber \(i\), there are corresponding elements in the queues other than \(Q_{j}\), whose positions are unknown, and hence their deletion is not straightforward if we want to achieve logarithmic time complexity. In order to achieve efficient deletion of an element from the \(k\) priority queues, we use a lookup table \(T\), which is a 2-dimensional array of dimension \(n\times k\) storing the references of each subscriber in the priority queues. To be more precise, the \((i,j)^{\text{th}}\) element \(T_{i,j}\) of \(T\) stores the reference (position) of the \(i^{\text{th}}\) subscriber in the \(j^{\text{th}}\) queue \(Q_{j}\). The lookup table is updated according to the changes performed in the priority queues. Although this increases the computational overhead of the algorithm, the time complexities of the operations on the priority queues remain unchanged, and an update operation on a queue can be performed in \(O(\log n)\) time. The implementation of \(T\) requires \(O(kn)\) memory and \(O(kn)\) construction time.

Fig. 1: Main data structures for implementing the proposed greedy algorithm. \(Q_{1},Q_{2},\cdots,Q_{k}\) are max-priority queues of the subscribers. The queue \(Q_{j}\) contains at most \(n\) subscribers with \(f(\delta_{j};\alpha_{i},\gamma_{i},p_{i})\) as their priority values. The references to the roots of \(Q_{1},Q_{2},\ldots,Q_{k}\) (i.e., the max-priority elements) are stored in another list \(L\), which helps in finding the maximum of maximums.
### _Algorithm Greedy Offer (AGO)_

The main steps of the proposed greedy algorithm are presented in Algorithm GreedyOffer, where \(S\) denotes the set of subscribers, \(K=\sum_{j=1}^{k}n_{j}\) denotes the total number of offers, and \(A\) denotes the set of 2-tuples \((i,j)\), each representing the assignment of an offer of type \(j\) to a subscriber \(i\). First, the initialization of \(S\) and \(A\) and the construction of the max-priority queues \(Q_{1},Q_{2},\cdots,Q_{k}\) are performed in Lines 1 to 2. Note that the front element of \(Q_{j}\) has the maximum key value among all key values of the elements stored in \(Q_{j}\). In the loop at Line 3, the algorithm selects subscribers and assigns offers iteratively by following the "Greedy Choice/Selection" strategy described earlier. For example, if the greedy choice provides a subscriber \(i\) and an offer type \(j\), i.e., the \(i^{\text{th}}\) subscriber corresponds to the maximum of the key values stored in the \(j^{\text{th}}\) priority queue \(Q_{j}\), then \(f(\delta_{j};\alpha_{i},\gamma_{i},p_{i})=\max_{l,m}f(\delta_{m};\alpha_{l},\gamma_{l},p_{l})\), i.e., it is the maximum over all subscribers and all possible offers. Then the algorithm inserts \((i,j)\) into \(A\), deletes the \(i^{\text{th}}\) subscriber from each of the queues \(Q_{1},Q_{2},\cdots,Q_{k}\), reduces the value of \(n_{j}\) by one, and reduces the total number of available offers \(K\) by one. Note that if \(n_{j}\) becomes zero, no more offers of type \(j\) can be assigned to any of the remaining subscribers; hence the queue \(Q_{j}\) is no longer needed, and it is deleted. In the next iteration, the same steps are repeated for making a greedy choice, continuing until all offers are assigned to subscribers or no subscriber is left. The steps of the algorithm are described below.

_Algorithm_ GreedyOffer(\(\alpha[1..n]\), \(\gamma[1..n]\), \(p[1..n]\))

1. Set \(S=\{1,2,\ldots,n\}\), \(A=\emptyset\)  // \(O(n)\)
2. Construct \(Q_{1},Q_{2},\cdots,Q_{k}\)  // \(O(kn)\)
3. **while** \(K>0\) and \(S\neq\emptyset\) **do**  // repeats at most \(\min\{n,\sum_{j}n_{j}\}\) times
4.  \((i,j)\leftarrow\textsc{FindMaxOfMax}(Q_{1},\ldots,Q_{k})\)  // \(O(k)\)
5.  \(A\gets A\cup\{(i,j)\}\)  // \(O(1)\)
6.  \(S\gets S\setminus\{i\}\)  // \(O(1)\)
7.  Delete \(i\) from \(Q_{1},Q_{2},\cdots,Q_{k}\)  // \(O(k\log n)\)
8.  \(n_{j}\gets n_{j}-1\), \(K\gets K-1\)  // \(O(1)\)
9.  **if** \(n_{j}=0\) **then**  // \(O(1)\)
10.   Delete \(Q_{j}\)  // \(O(1)\)
11. **return** \(A\)

Observe that the proposed algorithm is designed in such a way that a subscriber can receive at most one offer from the available set, which is ensured by the steps shown in Lines 6 to 10. Also, the steps in Lines 8 to 10 ensure that if a specific type of offer is exhausted, none of the remaining subscribers receives such an offer. So, the constraints of the optimization problem stated in Eq. (2) are satisfied.

### _Time and Space Complexity_

A priority queue of \(n\) elements can be constructed in \(O(n)\) time, and hence \(k\) priority queues can be constructed in \(O(kn)\) time.
In Algorithm GreedyOffer, the initialization of the data structures in Lines 1 to 2 can be executed in \(O(kn)\) time. The loop at Line 3 is executed \(\min\{n,m\}\) times, where the total number of offers is \(m=\sum_{j=1}^{k}n_{j}\). In general, \(m<n\), and hence we can assume that this loop is executed \(m\) times. Since, in Line 7, an arbitrary subscriber can be deleted from a priority queue in \(O(\log n)\) time, the execution of the entire loop consumes \(O(km\log n)\) time, which is bounded from above by \(O(kn\log n)\). Hence, the time complexity of the algorithm is \(O(kn\log n)\).

Each priority queue contains at most \(n\) elements, one for each subscriber, and each element can be stored using a constant amount of space. So the space complexity for storing a single queue is \(O(n)\). Since there are \(k\) priority queues, one for each type of offer, we need \(O(nk)\) memory for maintaining the queues. So, the space complexity of the algorithm is \(O(nk)\).

### _Optimality of Solution_

We assume that each subscriber is independent from the others, i.e., the revenue from a subscriber depends only on the incentive offered to that subscriber, and it is independent of the incentives given to others. Under this assumption, the proposed algorithm provides an optimal solution to the optimization problem. The revenue function discussed in Section II satisfies this condition. In order to prove our claim, we shall utilize the following observation about the greedy choice made by our algorithm.

_Observation 1 (Monotonicity):_ Suppose the greedy algorithm determines the offers \(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{n}}\) for the subscribers in the order \(i_{1},i_{2},\ldots,i_{n}\), respectively. Then the corresponding revenues from the subscribers are in decreasing order, i.e.,

\[f(x_{i_{1}};\alpha_{i_{1}},\gamma_{i_{1}},p_{i_{1}})\geq f(x_{i_{2}};\alpha_{i_{2}},\gamma_{i_{2}},p_{i_{2}})\geq\ldots\geq f(x_{i_{n}};\alpha_{i_{n}},\gamma_{i_{n}},p_{i_{n}}). \tag{3}\]

The proof of Observation 1 follows directly from the fact that the greedy algorithm always chooses the subscriber who gives the maximum revenue under the offers available at that moment.

**Theorem 1** (Optimality): _The solution provided by the greedy algorithm is optimal._

Proof: Let \(\mathbf{X}=[x_{1},x_{2},\ldots,x_{n}]\) be a solution given by the algorithm, and let \(\mathbf{Y}=[y_{1},y_{2},\ldots,y_{n}]\) be an arbitrary solution. The revenues corresponding to the offer vectors \(\mathbf{X}\) and \(\mathbf{Y}\), respectively, are given by:

\[F(\mathbf{X})=\sum_{i=1}^{n}f(x_{i};\alpha_{i},\gamma_{i},p_{i})=\sum_{i=1}^{n}f_{i}(x_{i})\text{ and }F(\mathbf{Y})=\sum_{i=1}^{n}f(y_{i};\alpha_{i},\gamma_{i},p_{i})=\sum_{i=1}^{n}f_{i}(y_{i}), \tag{4}\]

where, for simplicity, we denote \(f_{i}(x):=f(x;\alpha_{i},\gamma_{i},p_{i})\). In order to prove that the greedy algorithm provides an optimal solution, it is sufficient to show that \(F(\mathbf{X})\geq F(\mathbf{Y})\). Without loss of generality, we assume that the subscribers are re-arranged in such a way that the algorithm first finds the offer \(x_{1}\) for subscriber 1, then \(x_{2}\) for subscriber 2, and so on, and finally \(x_{n}\) for subscriber \(n\). Note that some of these offers may be zero. Let \(i\) be the smallest index with \(x_{i}\neq y_{i}\), i.e., \(x_{1}=y_{1},x_{2}=y_{2},\ldots,x_{i-1}=y_{i-1},x_{i}\neq y_{i}\).
We have

\[\begin{array}{l}F(\mathbf{X})-F(\mathbf{Y})\\ =\sum_{j=1}^{n}f_{j}(x_{j})-\sum_{j=1}^{n}f_{j}(y_{j})\\ =\big{(}f_{i}(x_{i})-f_{i}(y_{i})\big{)}+\big{(}f_{i+1}(x_{i+1})-f_{i+1}(y_{i+1})\big{)}+\ldots+\big{(}f_{n}(x_{n})-f_{n}(y_{n})\big{)},\text{ since }x_{j}=y_{j}\ \forall j<i.\end{array} \tag{5}\]

According to the greedy algorithm, \(x_{i}\) is the best available offer made to the \(i^{\text{th}}\) subscriber such that the additional revenue coming from the \(i^{\text{th}}\) subscriber is maximum, i.e.,

\[f_{i}(x_{i})=\max_{z_{j}\in O_{i},\,l\in S_{i}}f_{l}(z_{j})\geq f_{i}(y_{i}) \tag{6}\]

where \(O_{i}\) is the set of all remaining offers and \(S_{i}\) is the set of remaining subscribers to be offered at the time of the \(i^{\text{th}}\) greedy choice made by the algorithm. In particular, Eq. (6) gives \(f_{i}(x_{i})\geq f_{i}(y_{i})\), and applying the same argument to each subsequent greedy choice it follows that \(f_{i+1}(x_{i+1})\geq f_{i+1}(y_{i+1})\), \(f_{i+2}(x_{i+2})\geq f_{i+2}(y_{i+2}),\ldots,f_{n}(x_{n})\geq f_{n}(y_{n})\). Hence, by Eq. (5), \(F(\mathbf{X})-F(\mathbf{Y})\geq 0\), i.e., \(F(\mathbf{X})\geq F(\mathbf{Y})\). This completes the proof.

Finally, from the discussions in Sec. III-C and Sec. III-D, the novelty of the proposed algorithm can be summarized in the following theorem.

**Theorem 2**: _Algorithm GreedyOffer determines an optimal solution to the offer optimization problem (given in Eq. (2)) in \(O(kn\log n)\) time, and it consumes \(O(kn)\) memory for solving the problem._

### _OOP with Heterogeneous Offers_

The problem stated in Sec. II is formulated with offers in terms of monetary incentives. The same formulation is also applicable to a more general scenario where the offers are made in terms of services and non-monetary advantages, such as additional data, hot-spot tethering, additional SMS, value-added services or a specific download limit, etc. These additional services or non-monetary offers can be represented in terms of equivalent monetary advantages. An example of such offers is shown in Table I, in which offers are presented as different types with equivalent values; hence, we solve the problem by using the same formulation as stated in Sec. II. In order to incorporate the subscriber-specific profile or usage history of a subscriber, a relative weight for each type of offer can also be set and incorporated in the objective function.

## IV Experimental Results

All experimental results are obtained using a notebook computer with an Intel(R) Core(TM) i5 2.60GHz processor, 16GB RAM, and Windows 10 Enterprise OS. In order to show the novelty of the proposed algorithm, we present the experimental results in two parts:

* comparison of performance with two standard algorithms, and
* efficiency and capability of the proposed algorithm for handling large problems.

### _Implementation_

The proposed algorithm is implemented in pure Python, and no special library is used for the implementation. The implementation of the max-heap and max-priority queue follows the standard algorithms in Cormen et al. [2009]. We have inserted the additional steps required for the construction of the priority queues and for key deletion and update operations in the priority queues, in order to maintain the consistency of the positions of a subscriber in the queues and in the lookup table adopted in the proposed algorithm. Although the maintenance of the lookup table brings a small computational overhead, it makes the deletion and update operations very fast (i.e., logarithmic time).
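For readers who want to experiment, a compact Python version of GreedyOffer might look as follows. Instead of the lookup table \(T\), this sketch uses `heapq` with lazy deletion (entries for already-served subscribers are skipped when popped), which keeps the same \(O(kn\log n)\) flavor; it is our own simplification, not the authors' implementation.

```python
import heapq
import math

def greedy_offer(alpha, gamma, p, deltas, counts):
    """Heap-based GreedyOffer: returns a list of (subscriber, offer type) pairs.
    alpha, gamma, p: per-subscriber parameters; deltas[j], counts[j]:
    value and number of offers of type j."""
    def f(x, i):  # expected revenue, Eq. (1)
        beta = 1.0 - math.exp(-gamma[i] * x)
        return beta * (p[i] - x) + (1.0 - beta) * (1.0 - alpha[i]) * p[i]

    n, k = len(alpha), len(deltas)
    counts = list(counts)
    # one max-heap per offer type (keys negated: heapq is a min-heap)
    heaps = [[(-f(deltas[j], i), i) for i in range(n)] for j in range(k)]
    for h in heaps:
        heapq.heapify(h)
    served = [False] * n
    A = []
    while sum(counts) > 0 and len(A) < n:
        best_j, best_i, best_val = -1, -1, -math.inf
        for j in range(k):                       # FindMaxOfMax over k fronts
            if counts[j] == 0:
                continue
            while heaps[j] and served[heaps[j][0][1]]:
                heapq.heappop(heaps[j])          # lazy deletion
            if heaps[j] and -heaps[j][0][0] > best_val:
                best_val, best_i, best_j = -heaps[j][0][0], heaps[j][0][1], j
        if best_j < 0:
            break                                # all heaps exhausted
        A.append((best_i, best_j))
        served[best_i] = True
        counts[best_j] -= 1
    return A
```

Each subscriber contributes one entry per heap, so at most \(kn\) lazy pops occur over the whole run, matching the stated \(O(kn\log n)\) bound up to the \(O(k)\) front scan per iteration.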
### _Comparison of Performance_ The efficiency (i.e., execution time) of the proposed algorithm is compared with a couple of standard optimization algorithms, namely, i) the _Genetic Algorithm_ with constraints that is available in the python library _pymoo_ (Blank and Deb [2020]), and ii) the constrained non-linear solver IPOPT available in the python library _pyomo_ (Hart et al. [2011], Bynum et al. [2021]). In general, for problems with a large number of unknowns, these techniques either take a long time to find a good solution, or fail to converge to a feasible solution. So, the comparison is shown for moderate problem sizes, where the number of subscribers varies from \(100\) to \(1000\) while the second parameter \(k\) remains fixed at \(5\). Consequently, the number of decision variables for the corresponding optimization problems varies from \(nk=500\) to \(nk=5000\). In general, the genetic algorithm demands a large population size and a large number of iterations; otherwise it cannot find a feasible solution for a large number of unknowns. So, for conducting the experiment, the population size and the number of iterations are increased as the problem size increases. The proposed algorithm completes execution in less than a second, finding an optimal solution in a time that is negligible compared to the time consumed by the genetic algorithm. Note that the pyomo solver also consumes significantly more time than the proposed algorithm for solving the same problem, and yet the optimal value of the objective function provided by it is smaller than the one provided by the proposed algorithm. The execution times and the maximum values of the objective function for problems of varying size are tabulated in Table II. These results clearly show that the proposed algorithm always provides the largest value of the objective function while taking the smallest amount of execution time. This is expected, as these algorithms, in general, provide a sub-optimal solution to a problem, while our algorithm always provides an optimal solution. ### _Efficiency with Large Problems_ In order to show the capability of handling large-scale optimization problems, we have also furnished experimental results with a very large number of subscribers, ranging from \(100\) thousand to one million, with the number of offer types varying from \(5\) to \(20\). The execution time is tabulated in Table III, and its growth is also visualized in Fig. 2, which shows that the execution time grows slowly, similarly to a linear or log-linear function. So, the empirical results do not deviate from the theoretical bound on the time complexity of our algorithm. ## V Conclusion We conclude with a few remarks on applications, efficiency, improvements, and shortcomings. * In this paper, we proposed a greedy algorithm for solving a special type of combinatorial optimization problem, namely the offer optimization problem, which comes from the telecom domain. However, the algorithm may be used for solving similar problems from other domains that require the allocation and utilization of resources to optimize predefined objectives. * The time complexity of the algorithm can be improved by using another priority queue \(Q\) for implementing the greedy choice of the proposed algorithm, which computes the maximum of the per-queue maxima when selecting the subscriber and offer that yield the maximum revenue. 
Instead of the list \(L\), if we use another queue \(Q\) to store references to the roots of the \(k\) priority queues, then the greedy choice can be made in \(O(\log k)\) time instead of \(O(k)\) time; a sketch of this idea is given after this list. * The efficiency of the algorithm is achieved at a cost in memory. The space complexity of the algorithm is \(O(kn)\), which is not a big issue if the number of offer types \(k\) is of moderate size. Typical values of \(k\) in the telecom domain may be \(5\) to \(20\). For problems from domains such as online retail, the number of different offer types may be large, and the memory requirement will then increase. The algorithm may be extended to a distributed computing environment to handle such issues as well as to increase its efficiency.
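A minimal sketch of the improvement suggested above, assuming the per-type max-priority queues from Sec. III already exist (names are illustrative): a top-level heap \(Q\) holds the current root of each of the \(k\) queues, so each greedy choice costs \(O(\log k)\) instead of an \(O(k)\) root scan. The bookkeeping needed when a root changes for any other reason (e.g. lazy deletions) is omitted here.

```python
import heapq

def build_top_queue(queues):
    """Build the top-level queue Q over the current roots of the k
    per-type queues; each entry is (negated revenue, offer type)."""
    top = [(q[0][0], j) for j, q in enumerate(queues) if q]
    heapq.heapify(top)                      # O(k), built once
    return top

def greedy_choice(top, queues):
    """Return (offer type, subscriber) with globally maximum revenue.

    Popping the winner and re-exposing the new root of that per-type
    queue both cost O(log k), replacing the O(k) scan of the list L.
    """
    _, j = heapq.heappop(top)               # best root among the k queues
    _, subscriber = heapq.heappop(queues[j])
    if queues[j]:                           # expose the new root of queue j
        heapq.heappush(top, (queues[j][0][0], j))
    return j, subscriber
```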
2301.06378
Reaching sub-millisecond accuracy in stellar occultations and artificial satellites tracking
In recent years, a need has emerged for astronomical observations timed with sub-millisecond accuracy. These include e.g. timing stellar occultations by small, sub-km or fast Near Earth Asteroids, but also tracking artificial satellites in Low Earth Orbit using optical sensors. Precise astrometry of fast-moving satellites and accurate timing of stellar occultations have parallel needs, requiring a reliable time source and good knowledge of camera delays. Hence the need for an external device that enables equipment and camera testing, to check whether the required timing accuracy is reached. We designed, constructed and thoroughly tested a New EXposure Timing Analyser (NEXTA): a GNSS-based (Global Navigation Satellite System) precise timer that reaches an accuracy of 0.1 millisecond, an order of magnitude better than previously available tools. The device is a simple strip of blinking diodes, to be imaged with the camera under test so that the imaged time can be compared with the camera's internal time stamp. Our tests spanned a range of scientific cameras widely used for stellar occultations and ground-based satellite tracking. The results revealed high reliability of both NEXTA and most of the tested cameras, but also showed that practically all cameras had an internal time bias of varying magnitude. NEXTA can serve the community, being easily reproducible with inexpensive components. We provide all the necessary schemes and usage instructions.
K. Kamiński, C. Weber, A. Marciniak, M. Żołnowski, M. Gędek
2023-01-16T11:48:00Z
http://arxiv.org/abs/2301.06378v1
# Reaching sub-millisecond accuracy in stellar occultations and artificial satellites tracking ###### Abstract In recent years, a need has emerged for astronomical observations timed with sub-millisecond accuracy. These include e.g. timing stellar occultations by small, sub-km or fast Near Earth Asteroids, but also tracking artificial satellites in Low Earth Orbit using optical sensors. Precise astrometry of fast-moving satellites and accurate timing of stellar occultations have parallel needs, requiring a reliable time source and good knowledge of camera delays. Hence the need for an external device that enables equipment and camera testing, to check whether the required timing accuracy is reached. We designed, constructed and thoroughly tested a New EXposure Timing Analyser (NEXTA): a GNSS-based (Global Navigation Satellite System) precise timer that reaches an accuracy of 0.1 millisecond, an order of magnitude better than previously available tools. The device is a simple strip of blinking diodes, to be imaged with the camera under test so that the imaged time can be compared with the camera's internal time stamp. Our tests spanned a range of scientific cameras widely used for stellar occultations and ground-based satellite tracking. The results revealed high reliability of both NEXTA and most of the tested cameras, but also showed that practically all cameras had an internal time bias of varying magnitude. NEXTA can serve the community, being easily reproducible with inexpensive components. We provide all the necessary schemes and usage instructions. ## 1 Introduction Certain observations in astronomy require sub-millisecond timing precision, either due to the short duration of the studied phenomena, or due to the rapidly changing position of the studied bodies. An example of the first are stellar occultations by small asteroids, and of the latter, artificial satellite tracking by optical, ground-based sensors. In the era of the Gaia mission catalogues [1], the predictions of stellar occultations have recently seen unprecedented improvement, making it possible to register occultations by only km-sized bodies or even by Near-Earth Asteroids (NEAs), as exemplified by the recent successful occultation campaigns on Apophis and Phaethon, which are around two hundred meters and two kilometers in diameter, respectively [2]. Phaethon is the target of JAXA's DESTINY+ mission, so knowing as much as possible about the target properties prior to the mission is a key issue. Building on these recently acquired possibilities in occultation studies, the ACROSS project (Asteroid Collaborative Research via Occultation Systematic Survey1) has been launched, aiming to regularly observe stellar occultations by small NEAs, including binary Didymos - target of the DART (NASA) and Hera (ESA) missions. Footnote 1: [https://lagrange.oca.eu/fr/home-across](https://lagrange.oca.eu/fr/home-across) Stellar occultations by asteroids are among the most accurate methods to determine asteroid sizes. The technique is quite simple, yet powerful: one only needs to precisely measure the moments of disappearance and reappearance of a star occulted by a minor body from the solar system. The yield of occultation observations is greatest when they are made from a network of observers optimally positioned within the predicted shadow path. Despite its simplicity, the "resolving power" of this method lies between the capabilities of a space telescope and in-situ studies by a dedicated space mission. 
For example, the diameters of Ceres and Vesta determined from multi-chord stellar occultations observed with small telescopes from the ground are within 1% of the direct measurements made by the Dawn spacecraft [3]. The most widespread technique to determine asteroid sizes, however, uses their absolute H magnitude and an assumed albedo [4]. As such, it can be off by as much as 50%. A more precise method, using asteroid infrared fluxes from space observatories, can reach 30% uncertainty (or discrepancy between the results of various missions), being typically of the order of 10% - 20% for the Simple Thermal Model, with unknown spin and assumed spherical shape [5], and decreasing to 5% or less when using Thermophysical Modelling with detailed spin and shape models [6]. Occultation events also improve the astrometry of both involved bodies, with even milliarcsecond accuracy [7], enabling e.g. Yarkovsky drift measurements [8]. They also facilitated the discovery of, and studies on, seasonal changes in the atmospheric profiles of the distant dwarf planet Pluto [9, 10]. Such events also enable the discovery of rings [11] and natural satellites of minor bodies, as recently exemplified by the first confirmed detection of a moon orbiting minor planet (4337) Arecibo in two independent occultation events [12], later also confirmed by the photocenter offset of this body detected in Gaia mission data [13]. Typically, stellar occultations by asteroids are capable of breaking the inherent symmetry of the two mirror pole solutions from lightcurve inversion, and of confirming the features of their 3-D shape models, or pointing to shape model areas that still need improvement [14]. For large Trans-Neptunian Objects, occultations amended with other observations can lead to density estimates [15], allowing compositional studies of bodies so distant that they are often invisible to small telescopes (it is only required to see the occulted star). Last but not least, stellar occultations by solar system objects regularly unravel the binary nature of stars, and give insight into the brightnesses and separation of the stellar components [3]. In recent years a rapid growth in the number of artificial Earth satellites has been observed, exceeding 25000 in March 2022 in the Space Track catalogue [16]. Among them, the population at Low Earth Orbit (LEO), with altitudes above the ground below 2000 km, is the most numerous, comprising about 60% of all catalogued objects. Although LEOs are mostly tracked using ground-based radars, there is a growing need for using optical telescopes for tracking and survey observations of them as well. This is partly because of the necessity of monitoring orbital perturbations [17] and partly because of the growing interference of satellite streaks with astronomical observations [18]. Both areas would benefit from an increase in the accuracy of satellite positions, which can be obtained by combining range and Doppler measurements from radars and laser ranging stations with position measurements from optical telescopes. In order to reach an accuracy of astrometric measurements of LEOs at the level of an arcsecond, it is necessary to ensure accurate image timing down to the sub-millisecond level. This is because these satellites are typically observed at angular velocities of hundreds and thousands of arcsec/s. Optical observations of higher satellites usually include calibration targets with accurately known orbits, such as navigational satellites. 
By comparing the predicted and observed positions, a time-bias (the difference between the recorded image timing and the actual image timing) is derived. It is not uncommon for an astronomical camera to have a time-bias of the order of tens or even hundreds of milliseconds, and sometimes the time-bias can change on a daily basis. Unfortunately, navigational satellites have angular velocities much smaller than LEOs, making such calibration not accurate enough. Satellites on low orbits that could be used as fast-moving calibrators are sparse and less convenient to use due to short observing time windows. Therefore there is a need for a method that would provide optical calibration signals with an accuracy at the level of 0.1 millisecond with respect to the UTC timescale, in a stable and convenient manner, for image timing error measurements. Astrometry of low-orbit artificial satellites, as well as observing stellar occultations by small and fast-moving asteroids, poses an observational challenge connected with image timing accuracy. Occultation events often have a duration of less than one second (e.g. 0.218 seconds in the case of the occultation by Phaethon from 15 October 2019, ACROSS campaign), and for scientifically usable and consistent results between observers, the timing precision needs to be of the order of milliseconds. GNSS-based (Global Navigation Satellite System) time stamps enable such precision; the problem is the unknown camera delays, shutter delays and other instrumental effects that deteriorate the timing. Thus the fast observing systems and cameras used for such observations should be tested for their timing precision and possible delays, with an order of magnitude better time resolution. So far, probably the only device enabling tests of camera timing accuracy has been SEXTA (Southern EXposure Timing Array), described by [19] and constructed based on the original idea of EXTA (EXposure Time Analyser) by [20]. The devices consist of an array of blinking diodes, precisely timed by a GNSS receiver, to be imaged by the scientific camera under study, in order to compare the template timing from GNSS (coded as the set of diodes that are on) with the camera's internal time stamp, usually saved in the FITS image header. The SEXTA array allows a temporal resolution down to 2 ms (milliseconds). There is also a software tool implemented in Cyanogen Imaging MaximDL, called Shutter Latency Measurement. It displays a special sequence of images on a PC monitor which, when recorded by a camera, allows one to measure timing errors. Unfortunately, this solution depends on PC internal delays and is precise to only 10 ms. New observational challenges, however, require much better timing resolution; hence the idea, design (by KK) and successful construction of NEXTA (New EXposure Timing Analyser). In this work we describe NEXTA's simple design, which can be reproduced by anyone interested using inexpensive and simple components. The instructions, including the wiring scheme, are given in Appendix A. The operating code is available from our Institute server, see Appendix B. This work also describes the possibilities and limitations of NEXTA deduced from extended testing on a range of astronomical cameras commonly used for stellar occultations (Sect. 2), assessing their usability for such observations. Section 3 presents the testing results for cameras used for observations of artificial satellites. The last section summarises the NEXTA suitability and the camera testing results. 
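As an illustrative order-of-magnitude check of the requirement above (the numbers are examples, not measurements from this work): a timing error \(\Delta t\) displaces the astrometric position of a satellite moving at angular velocity \(\omega\) by \(\Delta\theta=\omega\,\Delta t\), so at a typical LEO rate of a few thousand arcsec/s,

\[\Delta\theta \;=\; \omega\,\Delta t \;=\; 4000\ \mathrm{arcsec\,s^{-1}}\times 10^{-4}\ \mathrm{s} \;=\; 0.4\ \mathrm{arcsec},\]

i.e. a 0.1 ms timing accuracy keeps the timing contribution to the astrometric error safely below one arcsecond, whereas a 10 ms error would already amount to tens of arcseconds.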
## 2 Construction and tests of NEXTA on occultation cameras In this section we discuss NEXTA from the point of view of observing stellar occultations by small bodies. Favored by strongly improved predictions (Gaia EDR3, [1]) and the provision of modern CMOS cameras, the current trend in occultation astronomy is towards ever smaller and/or closer-to-Earth objects with correspondingly increasing requirements for timing. NEXTA is an innovative development and offers a wide range of applications, particularly with regard to testing the timing capabilities of occultation recording devices. In the following we describe how to build and set up the instrument and give examples of its use in occultation astronomy. ### NEXTA replication All the necessary parts to build NEXTA are available off the shelf at a reasonable price. Figure 1 shows the main components. A wiring diagram (Fig. 24) as well as some Figure 3: Test setup example. SharpCap 32bit Pro version 4.0.8655.0 recording software, NEXTA display (equipped with a mask for better differentiation of the LED sections, indicated in cyan. The current LED status intentionally does not match the SharpCap display.), QHY174M-GPS camera with a photo lens. Figure 2: NEXTA completed. Front view with 20-LED display. _Bottom right:_ View from above, GNSS module at the bottom of the image. Figure 1: NEXTA main parts. (1) Arduino Due, (2) Arduino Proto Shield, (3) GNSS module with internal antenna and external antenna socket, (4) 20-LED strip, (5) Resistors 1 k Ohm. special hints are given in Appendix A. There is extensive freedom in the specific design of the device. This makes it easily adaptable to different test setups. Sufficient computing power must be ensured for the micro-controller. Our tests show that Arduino Mega controller board (16MHz 8-bit Atmel processor) is insufficient to achieve the 0.1ms resolution, while Arduino Due (84MHz 32-bit ARM processor) is perfectly adequate for the task. The NEXTA prototype presented here has an internal GNSS antenna and also a socket for connecting an external antenna. With the latter, the device can be placed indoors for more defined optical conditions. Figure 2 shows the built prototype without housing. After completion the device has to be programmed using Arduino Integrated Figure 4: Exemplary demonstration of the functionality of NEXTA using the WAT-910HX-RC camera / VTI recording system described in Sect. 2.4.1. Top image shows a 20 ms half-frame (= 1 video field), field number 122436, of an AVI recording of NEXTA’s display, cyan: LED sections (digits), ’Decoded’: Digits decoding according to Fig. 5, a\({}_{1\mathrm{s}}\) and a\({}_{0.1\mathrm{s}}\) are the LEDs photometrically read by PyMovie version 3.3.2. The corresponding diagrams (”light curves”, LED intensity over VTI generated UTC timestamps) from PyOTE version 4.6.4 are shown below, where the one on the right represents a temporal stretching of the area marked in red on the left. The yellow 0.1 s digit curves are shifted up in intensity by 6000 units for better visibility. The red dotted vertical lines correspond to the analogue time (15:32:37.741 UTC) indicated on the video field above. The green dotted line indicates the end of the 15:32 37th UTC second, both LEDs went out at the same time. This and the identically shaped curves (with for the a-LED positions exactly 3 phases with LED ON) show the synchronous working mode of the NEXTA sections 1 s and 0.1 s. The three remaining sections work in the same way. 
The exposure time was 10 \(\mu\)s, so even the sections below 0.1 s show individual LEDs lit. The ”VTI” marked area shows the VTI generated time mark stamped into the video field. The red arrows show the correspondences of the digits. The 20 ms time difference within the 0.01 s digit is due to an instrumental delay of the camera, for details see Sect. 2.4.1 Figure 5: Decoding of NEXTA’s LED locations (marked in blue) into analogue numbers (yellow), shown for the 1 s LED section. X = LED ON, no sign = LED OFF, green marks refer to the LED’s state presented on the top. This scheme is valid for all of the five sections. Figure 6: NEXTA / WAT-910HX-RC / VTI test setup. (1) NEXTA, (2) WAT-910HX-RC with lens; VTI interior view: (3) Arduino Uno, (4) GNSS module with antenna. Figure 8: Tangra light curve of the state of the d\({}_{1\mathrm{s}}\) LED (see Fig. 7 and Table 1) obtained from the test video described in Sect. 2.4.1. The red line refers to frame 127, video field 120050, as shown in Table 1. Figure 7: Two 20 ms duration consecutive video fields of a WAT-910HX-RC 25 FPS PAL video (see Sect. 2.4.1). Red marked: Time stamps imprinted on the video stream by the VTI [h:min:s ms]. Cyan: NEXTA sections. Yellow: NEXTA’s decoded analogue UTC time stamps [s]. Blue: For light curve simulation (Fig. 8) used LED location d\({}_{1\mathrm{s}}\) of NEXTA’s 1 s digit section. Figure 11: Single QHY174M-GPS camera FITS frame (Capture-09048-20-12-51Z.fits) time-relevant FITS keywords recorded with SharpCap (recording details see text). On the top there is imaged the NEXTA display and (coloured) decoding of its digits, indicating the start of a 100 \(\mu\)s exposure, identically represented by the FITS keywords GPS_ST, GPS_SU and DATE-OBS. The FITS keywords GPS_ET, GPS_EU and DATE-OB2 relate to the end of exposure. Figure 10: Test setup for the NEXTA tests with the QHY174M-GPS camera. Figure 9: AOTA analysis of the light curve shown in Fig. 8 Development Environment (IDE)1. The NEXTA software utilises hardware interrupts to ensure the lowest possible internal latency. Each GNSS impulse (which is setup to PP10S - one pulse per 10 seconds) triggers an unambiguous sequence of LEDs blinking at a frequency of 10kHz for 10 seconds. If no further impulses arrive the device displays 3 LEDs (\(a_{1s}\), \(c_{1s}\) and \(a_{0.1s}\)) constantly on as an indication for the user. This combination is never displayed during any other situation. The LEDs timing between PP10S impulses is based on the 12MHz oscillator on the Arduino board. The error of this oscillator is measured with respect to the GNSS signals after each startup. Than if drift calibration is selected in source code (code line 17) it is used during the subsequent operations of NEXTA. By default the drift it only is compared with limit. If it is too large (the default limit is 10 microsec / sec, configurable in code line 20) the device does not run and two LEDs (\(a_{1s}\) and \(c_{1s}\)) are constantly on as an error message. During our tests Figure 12: First 4 and last 3 frames of a 431 dropped frame free section from a 30 s FITS sequence taken with a QHY174M-GPS camera capturing the NEXTA display with 1 ms exposure time. DATE-OBS is the SharpCap written FITS keyword representing the GNSS-PPS controlled start of frame in seconds after 2022-05-21T18:36 UTC. DATE-OB2 is the corresponding keyword for the end of the 1 ms exposure, which is followed by an interframe delay of 57.3 \(\mu\)s. 
The colored numbers show the correspondence of the respective decoded NEXTA digits with their counterparts in the FITS keyword DATE-OBS. Using 1 ms exposure time, NEXTA’s 0.000,1 s digit cannot be time resolved. we were always below that limit, but clock drift may be different for each Arduino, and may change with temperature. During initial setup the device displays 4 or more LEDs constantly on as an indication for the user about the setup progress (see Appendix C for details). The device can be powered by 5 V (USB socket) or 12 V. If GNSS satellite signals can be received in sufficient quality, the device is in stable operation after about 10 minutes. A typical test setup is shown in Fig. 3, using SharpCap1 as capture software and a QHY174M-GPS2 camera. Figure 14: PyOTE solution of a simulated light curve drop, derived from the 0.1 s digit of NEXTA (see the curve in blue in Fig. 13). Figure 13: QHY174M-GPS dropped frame free FITS sequence of 431 1-ms frames imaging NEXTA’s 1 s to 0.001 s digits (on the left). The digit 0.000,1 s is omitted because there is no time resolution. The curves represent the PyMovie read out d-LED states versus the GNSS controlled time form the camera, plotted with PyOTE. The curve colors correspond to those of the d-LED positions on the left. For better visibility, the curves are shifted vertically by an amount seen from the inline image top right. The state of the 1 s d-LED (= OFF) did not change during the recording (red curve). The red dotted line refers to the current d-LED states on the left side. The blue curve will be considered as occultation light curve, see Fig. 14 Figure 16: NEXTA rolling shutter detection on a RunCam Night Eagle 3 camera. On the left, the camera equipped with a C-mount adapter; on the right, a single frame (40 ms duration with exposure time 0.039 ms) showing typical rolling shutter effects in the 0.000,1 s digit of NEXTA. Figure 17: NEXTA rolling shutter detection on a ZWO ASI1600MM camera. On the left, the camera (1.8/50mm photo lens, used for the test, removed); on the right, a single frame (exposure time 50 \(\mu\)s) showing typical rolling shutter effects in the 0.000,1 s digit of NEXTA. Figure 15: PyOTE solution of the 0.01 s digit light curve (here blue, light blue in Fig. 13); violet: 0.1 s digit light curve (in Figs. 13 and 14 blue). The D time here agrees with the D time in Fig. 14. ### NEXTA mode of operation NEXTA provides 5 analogue UTC digits (sections), ranging from 1 s to 0.1 ms (Fig. 3). Each digit consists of 4 LEDs controlled by the GNSS receiver's PP10S signal, which has an accuracy several orders of magnitude higher than the 0.1 ms temporal resolution of the NEXTA. The NEXTA mode of operation is shown in an example, presented in Fig. 4 (software used: PyMovie1 and PyOTE2). Footnote 1: [https://pypi.org/project/pymovie/](https://pypi.org/project/pymovie/) Footnote 2: [https://pypi.org/project/pyote/](https://pypi.org/project/pyote/) The decoding of NEXTA's display sections into analogue numbers occurs according to the scheme shown in Fig. 5. The comparison of the times displayed visually by NEXTA with the own time stamps of the devices to be tested results in their temporal accuracy. However, the camera must allow for sufficiently short exposure times, in Figure 19: ZWO ASI1600MM camera rolling shutter delay test. For the experimental setup see Sect. 2.4.3 and Fig. 18. Both curves represent the state of NEXTA’s d\({}_{0.1\mathrm{s}}\) LED. 
The red dotted curve is from the fibre optic cable output mapped to the top of the sensor. Due the effect of the rolling shutter, this curve is shifted in time by 1 frame (44 ms) compared to the blue dotted curve from NEXTA’s d\({}_{0.1\mathrm{s}}\) LED directly mapped to the bottom of the sensor. The test case showed here is without binning. Figure 18: Test setup to determine the rolling shutter delay of a ZWO ASI1600MM camera. In the background the SharpCap user interface with the sensor region of interest (5) used and the frame live view (1), (3). (4), (3): Fibre optic cable entrance imaging NEXTA’s d\({}_{0.1\mathrm{s}}\) LED. (2), (1): Fibre optic cable output. Figure 21: Example images of the NEXTA recorded with QHY 600M Pro camera during tests in 5 different modes described in Sect. 3.1. Rolling shutter effect is best visible in three sections with largest frequency of blinking (0.01s, 0.001s, 0.0001s). Such data was used to calculate the delay between readout of consecutive rows and subsequently to calculate the UTC timing of the first row of the camera sensor. Figure 20: NEXTA ZWO ASI1600MM rolling shutter delay measurement. In the background SharpCap, capturing parts of NEXTA’s 0.001 s digit (exposure time 0.4 ms). The inserted image shows the setup with spacer rings and a 2.8/35 mm photo lens for imaging the d\({}_{0.001s}\) LED onto the camera sensor. From the light pattern (compare Fig. 5) of the d\({}_{0.001s}\) LED a row readout time of 13.3 μs was measured, corresponding to 46.8 ms for the entire sensor. Figure 23: Example images of the NEXTA recorded with Andor Zyla 5.5 camera during tests in 4 different modes described in Sect. 3.2. Exposure time was always the same - 27\(\mu\)s. Top images were taken using rolling shutter, bottom images - global shutter. In global shutter mode the sections of NEXTA blinking at 1ms and 0.1ms intervals were always recorded with 4 LED on, even though in rolling shutter mode the same sections are recorded with changing LED state without problems. The images taken in global shutter mode are also significantly better exposed. It looks like they were acquired with about 10ms exposure time, even though it was set in software and reported in FITS header as 27\(\mu\)s. Figure 22: Subsection of single images of NEXTA with exposure time of 70\(\mu\)s showing the same single LED (\(a_{0.001s}\)) recorded with QHY 600M Pro camera during tests described in Sect. 3. Due to rolling shutter working with different readout speeds (14bit mode - 15.6\(\mu\)s/line, Extended 2CMS mode - 78.1\(\mu\)s/line, Photographic mode - 39.1\(\mu\)s/line) different recordings of the exact same sequence of LED blinking are visible. By dividing the time between selected blinks of the LED with the number of pixel rows at which they were recorded we derived single row readout time of this camera (see Table 2). order to be able to use the highest resolution 0.000,1 s LED section, for example. This requires electronic shutters, ideally global ones. Sensor row readout delays of rolling shutter cameras can also be detected and quantified with NEXTA, see Sects. 2.4.3 and 3.1. Another limitation on the camera side with regard to the utilization of NEXTA's higher temporal resolutions are timing errors, for example as a result of dropped or backwards jumping frames. Such errors will increase when the frame rate (FPS, frames per second) of the recording system reaches its limit. 
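The manual decoding step can be illustrated with a short Python sketch. The LED-to-digit weights below are an assumption for illustration (a BCD-style encoding); the authoritative mapping is the one defined in Fig. 5 and should be substituted for `LED_WEIGHTS`.

```python
# Hypothetical LED weights (BCD-style); substitute the actual mapping of Fig. 5.
LED_WEIGHTS = {"a": 1, "b": 2, "c": 4, "d": 8}

def decode_digit(led_states):
    """Decode one 4-LED NEXTA section into a decimal digit 0-9.

    led_states -- e.g. {"a": True, "b": False, "c": False, "d": True},
    obtained by thresholding the photometry of the four LED positions.
    """
    value = sum(w for led, w in LED_WEIGHTS.items() if led_states[led])
    if not 0 <= value <= 9:
        raise ValueError("LED pattern does not encode a decimal digit")
    return value

def decode_time(sections):
    """Combine the five decoded sections (1 s ... 0.0001 s) into seconds."""
    place_values = (1, 0.1, 0.01, 0.001, 0.0001)
    return sum(decode_digit(s) * v for s, v in zip(sections, place_values))
```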
The use of common occultation photometry software such as PyMovie and Tangra1 facilitates the evaluation, although currently NEXTA's visual time stamps have to be decoded manually. The development of a dedicated automatic NEXTA decoding software would, for example, greatly simplify long-term tests of the temporal and thermal stability of test objects. Footnote 1: [http://www.hristopavlov.net/Tangra/Tangra.html](http://www.hristopavlov.net/Tangra/Tangra.html) ### Temporal accuracy of NEXTA The primary functional requirement of NEXTA is that it display a sequence of LED blinks which is synchronised with the UTC time scale. In order to verify that there are no delays larger than 0.1 ms in the displayed LED sequence, an experiment was performed using a very high FPS camera (\(\sim 10\) kHz) - an Andor Zyla 5.5. No dropped frames were reported by the Andor Solis camera control software. NEXTA was recorded together with a single LED which was directly connected to the PPS signal of an additional GNSS receiver. Inspecting individual frames of the recording, we found that both the NEXTA diodes and the GNSS diode displayed the whole-second mark at exactly the same time. Since the PPS signal is synchronised with UTC with an accuracy of tens of nanoseconds, we concluded that there is no measurable delay in the time displayed by NEXTA during its operation. Figure 24: Example images of the NEXTA recorded with Andor Balor 17-12 and FLI Kepler 4040, described in Sect. 3.3 and Sect. 3.4. For testing purposes, two commands were added to the Arduino program: the first at the beginning of the main loop, changing a selected digital pin state to high, and the second at the end of the main loop, changing that pin state to low. With an oscilloscope it was measured that the loop execution time is below 57 microseconds while changing the LED states and below 3 microseconds while the LED states require no change. These results show that the overall latency of the LED display should be adequate for the task, especially because the control program sets the highest-frequency LEDs first (within 18 microseconds from the loop beginning) and the lowest-frequency LEDs last. A 30-hour long-term test was also performed using the global shutter QHY174-GPS camera as a reference. Both NEXTA and the camera were equipped with their own external antennas to ensure good GPS reception (according to the camera's GPS log, there were always 9-12 satellites in sight). SharpCap was used to record 5-frame FITS sequences with 1 ms exposures every 20 minutes. While the resolution of this test was limited to 1 millisecond, it showed no problems with the NEXTA readings and perfect agreement with the camera timing recorded in the DATE-OBS keyword of the FITS images. ### NEXTA typical applications The following is a non-exhaustive description of NEXTA applications in testing stellar occultation equipment and, to some extent, software. Secondly, the tests are also intended to verify the functioning of NEXTA itself. The setup assumes a stable NEXTA GNSS state; the system under test has only to record NEXTA's LED display in an appropriate manner (for an example, see Fig. 3), followed by an analysis of the recording. #### 2.4.1 NEXTA for testing a WAT-910HX-RC Camera / VTI occultation recording system Before the era of modern CMOS cameras, analogue video cameras were the means of choice for recording stellar occultations. Mainly because of their high sensitivity, these cameras are still used [21, 7]. To time occultation events, video cameras require additional equipment, usually so-called Video Time Inserters (VTIs). 
The following NEXTA use case (Fig. 6) involves a WATEC WAT-910HX-RC 1 (PAL) video camera, equipped with a lens FUCINON 1:0.95/2.8-8mm, and a VTI (built by CW). The WATEC camera is an interline transfer CCD capable of making simultaneous exposures of all its pixels with exposure time down to 10 \(\mu\)s. The VTI bases on work of Smolarz1 and Andre2. The VTI (Fig. 6) is equipped with a GNSS receiver that provides the PPS signal and an Arduino-controlled unit that imprints UTC-accurate time stamps on the camera's video signal. The VTI does not generate its own delay of the video signal, at least not in the range of the temporal resolution of NEXTA. According to [20], however, the camera shows instrumental delays (time bias), which depend on the settings of the camera. _Recording and analysis of a 25 FPS interlaced AVI._ We used WAT-910HX-RC recordings of the NEXTA display with the camera output overlaid with the VTI time stamps (for recording hardware chain see Fig. 6). The VTI output was digitized using an USB video grabber (Hauppauge USB-Live21) and recorded as a lossless 25 FPS interlaced AVI using Syntek Video View on a W7-64bit i7 16GB RAM PC. As a rule, there were no dropped frames during the recordings. Figure 7 shows two consecutive fields (half-frames) of a 56 s video taken with an exposure time of 10 \(\mu\)s due to camera's electronic shutter. In this mode, the camera works without frame integration. The duration of a video field is nevertheless 20 ms (PAL video standard). Table 1 presents the analysis of the video field times from Fig. 7 and in addition the result from the end (frame 1376) of the 56 s video. Within the 1 ms temporal resolution of the hardware chain under test the time-bias with respect to NEXTA is always below 1 ms. _NEXTA for light curve simulation._ NEXTA can also be applied to simulate occultation light curves, which can, for example, be used to evaluate data reduction software. The following is an example \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Parameter} & & \multicolumn{3}{c}{Frame No. / Video field No.} \\ & & 126/120049\({}^{a}\) & 127/120050\({}^{a}\) & 1376/122548 & 1376/122549 \\ \hline VTI time & stamp & 15:31:50.001 & 15:31:50.021 & 15:32:39.981 & 15:32:40.001 \\ [h:min:s] & & & & \\ Camera & instrumental & - 0.020 & - 0.020 & - 0.020 & - 0.020 \\ delay\({}^{b}\) [s] & & & & & \\ Corrected & camera & 15:31:49.981 & 15:31:50.001 & 15:32:39.961 & 15:32:39.981 \\ time [h:min:s] & & & & & \\ NEXTA visual time [s] & 9.981,2 & 0.001,9 & 9.961,7 & 9.981,7 \\ Corrected & camera & - 0.2 & - 0.9 & - 0.7 & - 0.7 \\ time & deviation & from & & & \\ NEXTA & visual & time & & & \\ [ms] & & & & & \\ State of the d\({}_{1\rm s}\) LED & ON & OFF & ON & ON \\ of the 1 s digit section & & & & & \\ \hline \hline \end{tabular} \({}^{a}\)See Fig. 7, \({}^{b}\)see Sect. 2.4.1 \end{table} Table 1: Timing analysis of four video fields of the 25 FPS test video. taken from the test video described in Sect. 2.4.1. For this purpose, the state of the d\({}_{1\mathrm{s}}\) LED (see Fig. 7 and Table 1) was photometrically measured with Tangra version 3.7.4. Figure 8 shows the light curve obtained in this way. We analysed the light curve with Occult/AOTA1 version 4.2022.5.12, see Fig. 9. AOTA's analysis confirms the correct work of NEXTA, since D and R both occur at the full UTC second. 
Footnote 1: [http://www.lunar-occultations.com/iota/occult4.htm](http://www.lunar-occultations.com/iota/occult4.htm) #### 2.4.2 NEXTA tests with a QHY174M-GPS camera While the WATEC WAT-910HX-RC video camera used in the previous section offered a temporal resolution of only 20 ms, the QHY174M-GPS camera (Fig. 3) achieves precise timings down to 1 \(\mu\)s. This camera is currently the only internally GNSS-controlled type and is often used for recording stellar occultations, also for professional campaigns [22, 23]. The 2 Mega pixels 1920 x 1200 CMOS camera with a pixel size of 5.86 \(\mu\)m is equipped with a global shutter that enables exposure times from 900 s down to 5 \(\mu\)s. However, exposure times in the range below about 0.5 ms are often not really necessary for occultation recordings, and the usual hardware chains do not allow them either due to the limited frame rates, especially because of the commonly used USB data lines. Primarily to test the faster time resolution digits of NEXTA, QHY174M-GPS FITS sequences with exposure times down to 0.1 ms were achieved by reducing the image size and using 8 bits of image resolution instead of the possible 16 bits. Despite these measures and also with a relatively fast PC (W7-64bit i7 16GB RAM), under 0.5 ms exposure time, the frame timing errors were close to 100%, so that the fastest resolution of NEXTA of 0.1 ms could only partly be tested. With SharpCap, due to a special calibration routine and the shutter control directly derived from the GNSS PPS signal, the QHY174M-GPS camera is able to determine \(\mu\)s-accurate time stamps for the start and end of each frame and write them to the corresponding FITS keyword. The time stamps of a single frame are thus determined exactly, even though frame failures may occur before or after due to insufficient USB connections or limited achievable frame rates. _Tests with short exposure FITS captures._ The following example refers to a single FITS frame with an exposure time of 0.1 ms. To map the NEXTA display, the camera was equipped with a 35 mm lens 1:2.8. To capture we used SharpCap running on a W7-64bit i7 16GB RAM PC. Figure 10 shows the test setup. Figure 11 presents the FITS recorded NEXTA display as well as the related time-relevant FITS keywords and demonstrates the agreement of the 0.1 ms resolved NEXTA readout with the temporal precision of the QHY174M GPS camera. For further testing, a dropped frame free section of 431 frames was used from a 30 s FITS sequence with a frame exposure time of 1 ms. Figure 12 shows the first 4 and the last 3 images of the 431-image sequence. The NEXTA visual timestamps decoded from the images are compared to the corresponding FITS keywords provided by SharpCap. From Figs. 11 and 12 can be concluded the camera's \(\mu\)s-precise timing which is confirmed by NEXTA. At the same time, this demonstrates NEXTA's suitability for carrying out such tests. _PyMovie/PyOTE and NEXTA._ Besides Tangra/AOTA (Sect. 2.4.1), the programs PyMovie and PyOTE have established themselves in the data reduction of occultation recordings. With PyMovie, the NEXTA's digits LED states can be read out (but not automatically decoded to analogue numbers) and written together with the camera's FITS header time stamps to a CSV file. Opening this file in PyOTE provides analysis options for both the occultation recording hardware and the associated data reduction software. To demonstrate this, the QHY174M-GPS recorded FITS sequence (431 1 ms frames) described above was used. 
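Once the LED display on a frame has been decoded by eye, the comparison against the camera's FITS time stamps can also be scripted. A sketch using astropy follows; the file path and the decoded value are illustrative, and it assumes that DATE-OBS holds a full ISO-8601 time stamp (if, as in the display of Fig. 12, only seconds within the minute are stored, the subtraction must be adjusted accordingly).

```python
from astropy.io import fits
from astropy.time import Time

def time_bias_ms(fits_path, nexta_decoded_utc):
    """Return (camera DATE-OBS minus visually decoded NEXTA time) in ms.

    fits_path         -- one frame of the recorded FITS sequence (illustrative)
    nexta_decoded_utc -- ISO time decoded by eye from the imaged LED display,
                         e.g. "2022-05-21T18:36:09.981" (illustrative value)
    """
    with fits.open(fits_path) as hdul:
        date_obs = Time(hdul[0].header["DATE-OBS"], format="isot", scale="utc")
    decoded = Time(nexta_decoded_utc, format="isot", scale="utc")
    return (date_obs - decoded).sec * 1e3
```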
PyMovie was applied to photometrically record the brightness of the respective d-LEDs of the digits 1 s to 0.001 s. As can be seen from Fig. 13, PyMovie/PyOTE are well suited tools to measure the LED states of NEXTA. The blue 0.1 s plot in Fig. 13 was considered as a simulation of an occultation drop and resolved with PyOTE; the result is shown in Fig. 14. Figure 14 shows the very close match of the LED status with the GNSS time reference derived camera time stamps. Figure 15 gives an analogous solution for the light curve of the 0.01 s digit (light blue in Fig. 13). Its D time is identical to the D time of the 0.1 s light curve in Fig. 14, confirming the precise work of NEXTA. #### 2.4.3 NEXTA for the detection and measurement of rolling shutter effects In contrast to the global shutter cameras described in previous sections, also rolling shutter cameras are used for recording stellar occultations. Modern CMOS cameras usually have electronic rolling shutters. Depending on the shooting parameters, rolling shutter cameras can cause image effects such as distortion of fast moving objects. The latter are not the main problem when recording stellar occultations, but in addition, with rolling shutters timing problems can occur due to the camera's sequential row-by-row sensor readout. The time data of an occultation derived from such recordings may therefore depend on the vertical sensor position of the occulted star. With NEXTA, it is possible to determine if a camera has a rolling shutter, as this is not always immediately known. If a macro lens is used and LED images are sufficiently large it is also possible to use NEXTA to determine the readout rate of individual rows. This can be used, for example, to verify manufacturer's specification, test the frame-by-frame consistency of readout rate and determine exposure delay of a pixel row used for recording an occultation. It can also be used to convert the NEXTA optical time measured for a selected, individual pixel row to the first pixel row and compare is with image timing from FITS header, just like with global shutter (see Sect. 3.1). _RunCam Night Eagle 3._ Low-cost cameras are needed for mobile, unattended deployment of various occultation recording stations, for example in campaigns where the shadow path needs to be relatively densely populated and consequently a larger number of stations are required. The RunCam Night Eagle 31, actually a first-person view (FPV) camera, meets this requirement and provides sufficient sensitivity comparable to the WAT-910HX-RC when its frame integration is not used [24]. Depending on the RunCam Night Eagle 3 settings, this CMOS camera outputs a PAL or NTSC video signal. Additional timing equipment is required for recording occultations. Tests with NEXTA confirmed the presence of a rolling shutter on the camera (Fig. 16). The rolling shutter caused instrumental delays of the RunCam Night Eagle 3 were determined by [24]. These delays can reach up to 16.7 ms for NTSC. PyOTE is able to incorporate the rolling shutter effects of this camera. Footnote 1: [https://shop.runcam.com/runcam-night-eagle-3/](https://shop.runcam.com/runcam-night-eagle-3/) Footnote 2: [https://astronomy-imaging-camera.com/product/asi1600mm-kit](https://astronomy-imaging-camera.com/product/asi1600mm-kit) _ZWO ASI1600MM camera._ For the tests a ZWO ASI1600MM3 camera (16 Mega pixels 4656 x 3520 CMOS sensor, pixel size 3.8 \(\mu\)m) was equipped with a 1.8/50mm photo lens. Figure 17 demonstrates the rolling shutter effect. 
The image was taken with SharpCap with an exposure time of 50 \(\mu\)s. In the live view of SharpCap, vertically moving light patterns (related to Fig. 5) within individual NEXTA LED sections indicate the presence of the rolling shutter. Literature shows that the camera is used to record stellar occultations [25, 26], although, unlike the QHY174M-GPS, additional timing equipment is required. It is not known if there is any effort or data on how to handle the camera's rolling shutter in the context of stellar occultations. With NEXTA, however, an attempt was made to determine the magnitude of the readout delay over the entire sensor in the vertical direction. To realize this, the NEXTA display was mapped to the lower vertical end of the camera sensor. At the same time, the input of a fibre optic cable was placed in front of the d\({}_{0.1\mathrm{s}}\) LED and the fibre optic cable's other end was positioned to be imaged on top of the sensor (Fig. 18). Due to the relatively large sensor region of interest required (see Fig. 18), the achievable frame rate was not greater than 46 FPS for bin2 and 23 FPS without binning. Therefore, only the NEXTA sections of 1 s and 0.1 s were time resolved and consequently only these sections could be used. The results from the upper end of the sensor were found to be time delayed compared to the lower end. As shown in Fig. 19, the time difference was 1 frame (44 ms) at native sensor resolution and 22 ms correspondingly at x2 binning. As to expect, an analogous test with the QHY174M-GPS global shutter camera did not show a time delay. We also tested the rolling shutter effect of the ZWO ASI1600MM camera using the methods described in Sect. 3. As presented in Fig. 20 the readout time for a single row was measured to be 13.3 us in bin1 mode. For the entire sensor follows a readout time of 46.8 ms. This result is in good agreement with the outcome of the measurement using a fibre optic cable (Figs. 18 and 19). During the rolling shutter tests of the ZWO ASI1600MM camera, possible time delays were not in view because no external GPS device was available. ## 3 Tests of NEXTA on satellite tracking cameras In this section we present results of image timing analysis with NEXTA for four different cameras, that are a potentially interesting choice for satellite tracking and survey observations. Two of them are using rolling shutter only, two other have software selectable shutter mode: rolling or global. All of them can be equipped with an external timing device for improved image timing accuracy. ### QHY 600M Pro The QHY 600M Pro1 camera (61 Mega pixels, 9600 x 6422 CMOS sensor, pixel size 3.76 \(\mu\)m) is one of a few astronomical cameras available on the market that can be supplied with a dedicated GNSS based timing device - GPSBOX. Its purpose is to provide accurate image timing with the resolution of 0.1\(\mu\)s and accuracy not specified clearly by the manufacturer. During the tests the camera was equipped with 1.4/50mm photo lens with macro extension rings. We used the fibre data interface therefore we collected data at higher FPS than possible using only USB3 interface (see Table 2). Exposure time was set to 70\(\mu\)s, the shortest possible for this camera, which caused slight blurring of the 100\(\mu\)s section of NEXTA. SharpCap software (ver. 4.0.9063 64-bit) and so called Live View mode, in which camera is continuously displaying images even if they are not commanded or recorded, was used throughout the tests. 
The alternative Still mode was not used, since it seems to reduce frame rates significantly. Footnote 1: [https://www.qhyccd.com/scientific-cooled-camera-qhy600pro-imx455-cmos/](https://www.qhyccd.com/scientific-cooled-camera-qhy600pro-imx455-cmos/) QHY 600M Pro uses rolling shutter only, so the image timing provided by the camera (in FITS header) corresponds to its first row of pixels. All tests were performed using the central part of the sensors (see Fig. 21), so direct comparison of image timings was not possible. Therefore we used NEXTA first to calculate the time delay between consecutive readouts of pixel rows (see Fig. 22). Afterwards the obtained delay was used to convert the measured optical time for selected central row to the time of the first row. The time delay between readout of consecutive unbinned rows was 39.1\(\mu\)s in Photographic, High Gain and Extended Fullwell modes. It was doubled to 78.1\(\mu\)s in 2CMS mode and reduced to 15.6\(\mu\)s in 14 bit mode, which is only available when using optical fibre interface. The time provided by QHY 600M Pro camera, equipped with a GPSBOX, for the beginning of the first pixel row was always slightly before the actual UTC time measured with NEXTA. The shift was between 0.5ms and 3.0ms, depending on camera mode (see Table 2). The time-bias in any particular mode was consistent throughout the tests. Although the deviation from the actual image timing is small, it is still an important correction that should be taken into account during satellite tracking observations of LEO targets. Image timing of QHY 600M Pro was significantly worse and less consistent without the GPSBOX attached. The difference between the time recorded in FITS files and the actual time of exposure in this case was changing between 250ms and 750ms. Such results were achieved even with PC system clock synchronised using NTP (Network Time Protocol) over the internet with an accuracy of several milliseconds. It is a great example of unpredictable and in the case of satellite tracking unacceptable delays that are introduced even if a sensor is equipped with an electronic shutter. ### Andor Zyla 5.5 Andor Zyla 5.51 camera (5.5 Mega pixels, 2560 x 2160 CMOS sensor, pixel size 6.5 \(\mu\)m) is one of a few astronomical cameras available on the market that can operate in software selectable rolling and global shutter modes. It is equipped with an general purpose IO port. A trigger-out signal is generated at the beginning of each exposure which can be used to measure image timing independently on PC software. Unfortunately, Andor does not offer timing accessories similar to GPSBOX from QHY. Therefore an external GNSS image timing device for this camera (designed by KK) was used. During the test \begin{table} \begin{tabular}{l c c c c} \hline camera mode & bin & fps & row readout & time-bias \\ & & & [\(\mu\)s] & [ms] \\ \hline Photographic & 1x1 & 4 & 39.1 & -1.5 \\ Photographic & 4x4 & 4 & 156.3 & -1.5 \\ High gain & 1x1 & 4 & 39.1 & -1.5 \\ High gain & 4x4 & 4 & 156.3 & -1.5 \\ Extended fullwell & 1x1 & 4 & 39.1 & -1.5 \\ Extended fullwell & 4x4 & 4 & 156.3 & -1.5 \\ Extended 2CMS & 1x1 & 2 & 78.1 & -3.0 \\ Extended 2CMS & 4x4 & 2 & 312.5 & -3.0 \\ 14bit (fibre only) & 1x1 & 8 & 15.6 & -0.5 \\ 14bit (fibre only) & 4x4 & 8 & 62.1 & -0.5 \\ \hline \end{tabular} \end{table} Table 2: Timing analysis of QHY 600M Pro camera equipped with a GPSBOX. the camera was equipped with a 1.4/16mm photo lens. Exposure time was set to 27\(\mu\)s. Andor Solis1 software (ver. 
4.32.30004.0), Single Scan mode and 16-bit dynamic range was used throughout the test. In rolling shutter mode the same procedure was used as described in Sect. 3.1. Footnote 1: [https://andor.oxinst.com/products/solis-software/](https://andor.oxinst.com/products/solis-software/) In rolling shutter mode the time delay between readout of consecutive unbinned rows of Zyla was 25.6\(\mu\)s at 200MHz and 9.1\(\mu\)s at 560MHz readout speed. This is very close to the manufacturer specification of 25.41\(\mu\)s and 9.24\(\mu\)s, respectively. The image timing of this camera recorded using external GNSS clock was always slightly behind the actual UTC time measured with NEXTA. The time-bias value was about 55ms at 200MHz and 20ms at 560MHz readout speed (see Table 3). This is significantly larger than in the case of QHY 600M Pro, but constant throughout the tests, therefore it is easy to apply corrections. In case of satellite tracking or survey these timing delays are necessary to be corrected for on all orbital regimes. In global shutter mode we encountered unexpected difficulties. Even with a very short 27\(\mu\)s exposure time we have always seen the 0.1ms and 1ms sections of NEXTA with all LEDs lit on. The camera behaves as it would not fully block the incoming light during the readout of the sensor. According to specification the readout takes from 9.98ms to 27.44ms, depending on selected readout speed. See Fig. 23 for examples of NEXTA images in rolling and global shutter modes. The observed camera inability to take short exposures in global shutter mode reduced the resolution of the test to about 10ms. Therefore we were not able to test the camera's global shutter mode usability when the accuracy of image timing is required to be better than 10ms. Image timing of Andor Zyla 5.5, without an external GNSS clock, is saved by Andor Solis software with resolution of only 1 second in FITS headers. The difference between the time recorded and the actual time of the exposure in this case was usually within 1 second, as expected. Surprisingly, examples of much larger differences, up to 4 seconds, were also encountered. This renders the software image timing practically useless for any of the applications discussed in this paper. It is worth noting that using our own \begin{table} \begin{tabular}{l c c c} \hline camera mode & bin & row readout & time-bias \\ & & [\(\mu\)s] & [ms] \\ \hline rolling 200MHz & 1x1 & 25.6 & 55.4 \\ rolling 200MHz & 2x2 & 50.0 & 54.3 \\ rolling 560MHz & 1x1 & 9.1 & 19.8 \\ rolling 560MHz & 2x2 & 18.2 & 19.8 \\ global 200MHz & 1x1 & - & \textless{}10 \\ global 560MHz & 1x1 & - & \textless{}10 \\ \hline \end{tabular} \end{table} Table 3: Timing analysis of Andor Zyla 5.5 camera equipped with a GNSS timing device. software, based on Linux SDK for Andor cameras, we were able to significantly improve the software image timing accuracy, down to the level of a few milliseconds. ### Andor Balor Andor Balor 17-121 (16.9 Mega pixels, 4128 x 4104 CMOS sensor, pixel size 12 \(\mu\)m) is a large format, high FPS camera (up to 54Hz full frame) capable to operate in software selectable rolling and global shutter modes. It is equipped with a dedicated IRIG-B port for connecting a compatible GNSS receiver, however we did not use it. Instead we used the trigger-out functionality of the general purpose IO port just as with Andor Zyla (see Sect. 3.2 ). Tests were performed using Andor Solis software and 0.11ms exposure time (Fig. 24). 
The time-bias measured with NEXTA is presented in Table 4. In global shutter mode it was below the 0.1ms resolution of NEXTA - perfect result for even the most demanding satellite observations. In rolling shutter mode, however, we detected that the GNSS timing for the first row of pixels was always recorded 1.5ms prior to the actual beginning of image exposure. This time-bias was constant and therefore easily reducible. As in the case of Andor Zyla, the timing provided by Andor Solis software for Balor was only with the resolution of 1 sec. We did not observed the same problem of "light leaking" through closed electronic shutter in global shutter mode as in Andor Zyla 5.5. This shows that the problem was most likely not related to the procedure or equipment used during the test but a camera itself. ### FLI Kepler FLI Kepler 4041 (16.9 Mega pixels, 4096 x 4096 CMOS sensor, pixel size 9 \(\mu\)m) is a large format, high FPS camera (up to 20Hz full frame) with a popular, front-illuminated GSense404 sensor. The camera has an electronic rolling shutter and is equipped with a general purpose IO port with trigger-out functionality. We did not have the FLI Kepler Image Time Stamp device, which is claimed to provide image timing accuracy of 1.5ms. Instead we used our own timing device which has similar accuracy and uses the same \begin{table} \begin{tabular}{l c c c} \hline camera mode & bin & row readout & time-bias \\ & & [\(\mu\)s] & [ms] \\ \hline rolling & 1x1 & 5.49 & -1.5 \\ global & 1x1 & - & 0.0 \\ \hline \end{tabular} \end{table} Table 4: Timing analysis of Andor Balor 17-12 camera equipped with a GNSS timing device connected to IO port. signals from the camera for measurements. The exposure time used here was 41\(\mu\)s, software was a custom CLI solution based on FLI SDK1. Footnote 1: [https://www.flicamera.com/software/index.html](https://www.flicamera.com/software/index.html) Only one camera mode (High Dynamic Range + Low Dark Current), with 1x1 binning was tested with NEXTA. The single row readout time was measured as 10.4\(\mu\)s, and time-bias for the first row of pixels with respect to image timing based on GNSS receiver was consistently 0.3ms. When compared to software based image timing the time-bias was varying between about 40ms and 90ms from image to image. ## 4 Conclusions NEXTA has proven to be a very suitable tool to test the timing accuracy of various image timing systems and, to some extent, the associated data reduction software. One of the advantages of the instrument is its simplicity and ease of reproducibility. In a large number of tests with a wide range of devices, the NEXTA showed no problems and provided a valuable insight into the image timing precision and accuracy. The primary limitation of the NEXTA is that the resolution of the measurement possible with this device is limited to the minimum exposure time of the camera which is being tested. Therefore it is very well suited for high FPS devices equipped with an Figure 25: Schematic illustration of an idea to use NEXTA directly between the telescope and camera. A folding mirror and a focusing lens would be necessary, so a sufficiently large back-focus distance is required. Dashed lines represent optical axis of the telescope and the lens, green lines show selected light rays. The folding mirror is presented in open position, during measurements with NEXTA. During observations it would be in a closed position which does not obstruct light coming from the telescope. 
NEXTA allows one to significantly improve the accuracy of the time-bias determination compared to the classic method utilizing observations of navigation satellites. The former has two orders of magnitude better resolution and accuracy than the latter and allows calibration measurements to be made during the day.

It is possible to install NEXTA, after some adaptations, directly on the telescope and use it during telescope slewing (see Fig. 25). It is also possible to permanently dedicate part of a large sensor's field of view to NEXTA, but that would require some adaptation in order to minimise the risk of overexposure at longer exposure times. Both solutions would allow for much more frequent calibration measurements and therefore better monitoring of the accuracy and stability of a camera's image timing system.

The camera tests that we conducted comprised mostly units equipped with CMOS sensors and only electronic shutters. They showed that software-based image timing is accurate at best at the level of tens of ms and sometimes only at the level of full seconds. Thanks to NEXTA, we were able to prove that only the QHY174M-GPS camera, which is internally and directly PPS-controlled, meets its \(\mu\)s-accurate specification. When using external GNSS-based image timers attached to the trigger-out port, we see significant improvements in image timing accuracy. Nevertheless, only in the case of Andor Balor working in global shutter mode did we observe correct image timing provided by the external GNSS timer. All other cameras (including Balor working in rolling shutter mode) had measurable time-biases. They ranged from 50 ms to -3 ms and were stable during the short term of the conducted tests. These are non-negligible corrections that should be taken into account when measuring LEO satellites and, to a lesser degree, also asteroid occultations.

The open question that was not tested is the long-term stability of image timing systems. With NEXTA being cheap and easy to manufacture, and accurate to the level of 0.1 ms, monitoring of such stability should become much more widely available.

The authors thank B. Anderson for the development of PyMovie/PyOTE, H. Pavlov for the Tangra development, B. Herald, developer of Occult/AOTA, and R. Glover at AstroSharp Limited, developer of SharpCap. This work was supported by the National Science Centre, Poland, through grant no. 2020/39/O/ST9/00713.
2307.04839
Two classes of posets with real-rooted chain polynomials
The coefficients of the chain polynomial of a finite poset enumerate chains in the poset by their number of elements. It has been a challenging open problem to determine which posets have real-rooted chain polynomials. Two new classes of posets, namely those of all rank-selected subposets of Cohen-Macaulay simplicial posets and all noncrossing partition lattices associated to finite Coxeter groups, are shown to have this property. The first result generalizes one of Brenti and Welker. As a special case, the descent enumerator of permutations of the set $\{1, 2,\dots,n\}$ which have ascents at specified positions is shown to be real-rooted, hence log-concave and unimodal, and a good estimate for the location of the peak is deduced.
Christos A. Athanasiadis, Theo Douvropoulos, Katerina Kalampogia-Evangelinou
2023-07-10T18:16:02Z
http://arxiv.org/abs/2307.04839v2
# Two classes of posets with real-rooted chain polynomials

###### Abstract.

The coefficients of the chain polynomial of a finite poset enumerate chains in the poset by their number of elements. It has been a challenging open problem to determine which posets have real-rooted chain polynomials. Two new classes of posets with this property, namely those of all rank-selected subposets of Cohen-Macaulay simplicial posets and all noncrossing partition lattices associated to irreducible finite Coxeter groups, are presented here. The first result generalizes one of Brenti and Welker. As a special case, the descent enumerator of permutations of the set \(\{1,2,\ldots,n\}\) which have ascents at specified positions is shown to be real-rooted, hence unimodal, and a good estimate for the location of the peak is deduced.

_Mathematics Subject Classifications:_ 05A15, 05E45, 06A07, 26C10.

_Key words and phrases_. Chain polynomial, simplicial poset, noncrossing partition, rank selection, real-rooted polynomial, permutation enumeration, descent.

Research supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the '2nd Call for H.F.R.I. Research Projects to support Faculty Members & Researchers' (Project Number: HFRI-FM20-04537).

## 1. Introduction

This paper is concerned with the real-rootedness of chain polynomials of finite posets (short for partially ordered sets). The _chain polynomial_ of a finite poset \(P\) is defined as

\[f(\Delta(P),x)=\sum_{i=0}^{n}f_{i-1}(\Delta(P))\,x^{i}, \tag{1}\]

where \(f_{i-1}(\Delta(P))\) stands for the number of \(i\)-element chains in \(P\) and \(n\) is the maximum cardinality of such a chain; thus, its coefficients enumerate the chains in \(P\) by their number of elements. This is exactly the \(f\)-polynomial of the order complex \(\Delta(P)\), whose \(h\)-polynomial is defined as

\[h(\Delta(P),x)=\sum_{i=0}^{n}f_{i-1}(\Delta(P))\,x^{i}(1-x)^{n-i}. \tag{2}\]

The two polynomials determine each other and, since the transformation connecting them preserves real-rootedness, \(f(\Delta(P),x)\) is real-rooted if and only if so is \(h(\Delta(P),x)\). It has been a challenging open problem to determine which posets have real-rooted, hence log-concave and unimodal, chain polynomials. Since Cohen-Macaulayness (see Section 2 for definitions and background) is a natural assumption in this context, one may ask the following question.

**Question 1.1**.: _Does the chain polynomial of any Cohen-Macaulay lattice (or even Cohen-Macaulay poset) have only real roots?_

The answer turns out to be negative in general; for instance, chain polynomials of finite distributive lattices, which are Cohen-Macaulay, may fail to be real-rooted. It is thus natural to search for large classes of Cohen-Macaulay posets whose members do have real-rooted chain polynomials; this paper contributes two new such classes.

To state our first main result, denote by \(\mathfrak{S}_{n}\) the symmetric group of permutations of the set \([n]:=\{1,2,\ldots,n\}\). An index \(i\in[n-1]\) is a _descent_ of \(w\in\mathfrak{S}_{n}\) if \(w(i)>w(i+1)\); as usual, \(\operatorname{Des}(w)\) and \(\operatorname{des}(w)\) stand for the set and the number of descents of \(w\), respectively. Given \(T\subseteq[n-1]\), we consider the descent enumerator

\[A_{n}^{T}(x)=\sum_{w\in\mathfrak{S}_{n}:\,\operatorname{Des}(w)\subseteq T}x^{\operatorname{des}(w)} \tag{3}\]

of permutations whose descent set is contained in \(T\), and we set \(A_{n}^{T}(x):=A_{n}^{T\cap[n]}(x)\) for \(T\subseteq\mathbb{N}\). In particular, \(A_{n}^{[n-1]}(x)\) is the classical Eulerian polynomial \(A_{n}(x)\). For a graded poset \(P\) of rank \(n\) with a minimum element and \(T\subseteq[n]\), we denote by \(\hat{P}_{T}\) the \(T\)-rank-selected subposet of \(\hat{P}\) (precise definitions are given in Section 2). The first main result of this paper, which generalizes a result of Brenti and Welker, is as follows.

**Theorem 1.2**.: 

1. _The polynomial_ \(A_{n}^{T}(x)\) _is real-rooted for every_ \(T\subseteq[n-1]\)_._
2. _Let_ \(P\) _be a Cohen-Macaulay simplicial poset of rank_ \(n\)_. Then, every rank-selected subposet_ \(\hat{P}_{T}\) _of_ \(\hat{P}\) _has a real-rooted chain polynomial. Moreover,_ \(h(\Delta(\hat{P}_{T}),x)\) _is interlaced by_ \(A_{n}^{T}(x)\) _for every_ \(T\subseteq[n]\)_._

Noncrossing partition lattices associated to Coxeter groups are central objects of study in Coxeter-Catalan combinatorics; see [1, Chapter 2] for an overview. The enumeration of chains in these posets has been a very popular topic [6, 15, 18, 24, 27, 28]. Let us denote by \(\mathrm{NC}_{W}\) the noncrossing partition lattice associated to a finite Coxeter group \(W\). The second main result of this paper is as follows.

**Theorem 1.3**.: _The noncrossing partition lattice \(\mathrm{NC}_{W}\) has a real-rooted chain polynomial for every irreducible finite Coxeter group \(W\). Moreover, \(h(\Delta(\mathrm{NC}_{W}),x)\) has a nonnegative real-rooted symmetric decomposition with respect to \(r_{W}-1\), where \(r_{W}\) is the rank of \(W\). In particular, \(h(\Delta(\mathrm{NC}_{W}),x)\) is unimodal, with a peak at position \(\lfloor r_{W}/2\rfloor\)._

Question 1.1 cannot have an affirmative answer for all Cohen-Macaulay posets since, as already explained, it fails for finite distributive lattices. However, and since the proper parts of face lattices of polytopes, geometric lattices, noncrossing partition lattices of types \(A\) and \(B\) [23] and rank-selected subposets of Boolean lattices are doubly Cohen-Macaulay (see [32, Section III.3] for information about doubly Cohen-Macaulay posets), it seems reasonable to pose the following question.

**Question 1.4**.: _Does the chain polynomial of any doubly Cohen-Macaulay lattice (or even doubly Cohen-Macaulay poset) have only real roots?_

This paper is organized as follows.
Section 2 reviews definitions and tools from the theory of real-rooted polynomials (and especially the theory of interlacing) and the enumerative combinatorics of posets which are essential in understanding the main results and their proofs. The proof of Theorem 1.2 splits in two sections. Section 3 proves that \(A_{n}^{T}(x)\) is real-rooted (see Theorem 3.1), hence unimodal, gives a good estimate for the location of the peak and discusses some interesting special cases. Section 4 proves part (b) of the theorem by combining Theorem 3.1 with an exercise from [32] (see Lemma 4.1) and, as an application, generalizes part (a) in the setting of colored permutations. Theorem 1.3 is proven in Section 5. The proof is based on explicit combinatorial interpretations (as descent enumerators of certain families of words) of the \(h\)-polynomials of the order complexes \(\Delta(\mathrm{NC}_{W})\) for the irreducible finite Coxeter groups \(W\) of classical types (see Proposition 5.1) and on computer computations for the exceptional groups. These combinatorial interpretations are extracted from the known explicit formulas for the entries of the flag \(f\)-vectors of noncrossing partition lattices [6, 18, 28], the case of groups of type \(D\) being the trickiest. The second statement of Theorem 1.3 follows by an application of a result of Jochemko [21] about Veronese operators on formal power series. The question remains open for noncrossing partition lattices associated to arbitrary (meaning, possibly reducible) finite Coxeter groups. ## 2. Preliminaries This section reviews basic concepts and tools from the theory of real-rooted polynomials and the enumerative combinatorics of posets (the theory of rank selection, in particular) which will be essential in the following three sections. Standard references for these topics are [12, 19, 30, 32, 35]. ### Polynomials A polynomial \(p(x)=h_{0}+h_{1}x+\cdots+h_{n}x^{n}\in\mathbb{R}[x]\) is called * _symmetric_, with center of symmetry \(n/2\), if \(h_{i}=h_{n-i}\) for all \(0\leq i\leq n\), * _unimodal_, with a peak at position \(k\), if \(h_{0}\leq h_{1}\leq\cdots\leq h_{k}\geq h_{k+1}\geq\cdots\geq h_{n}\), * _log-concave_, if \(h_{i}^{2}\geq h_{i-1}h_{i+1}\) for \(1\leq i\leq n-1\), * _real-rooted_, if every root of \(p(x)\) is real, or \(p(x)\equiv 0\). Every real-rooted polynomial with nonnegative coefficients is log-concave and unimodal; see [12, 30] for more information about these concepts. A real-rooted polynomial \(p(x)\), with roots \(\alpha_{1}\geq\alpha_{2}\geq\cdots\), is said to _interlace_ a real-rooted polynomial \(q(x)\), with roots \(\beta_{1}\geq\beta_{2}\geq\cdots\), if \[\cdots\leq\alpha_{2}\leq\beta_{2}\leq\alpha_{1}\leq\beta_{1}.\] We then write \(p(x)\preceq q(x)\). By convention, the zero polynomial interlaces and is interlaced by every real-rooted polynomial and nonzero constant polynomials interlace all polynomials of degree at most one. A sequence \((p_{0}(x),p_{1}(x),\ldots,p_{m}(x))\) of real-rooted polynomials is called _interlacing_ if \(p_{i}(x)\preceq p_{j}(x)\) for \(0\leq i<j\leq m\). The following statement lists well known properties of interlacing sequences; see, for instance, [12, Section 7.8][19, Chapter 3]. **Lemma 2.1**.: _Let \((p_{0}(x),p_{1}(x),\ldots,p_{m}(x))\) be an interlacing sequence of real-rooted polynomials with positive leading coefficients._ * _Every nonnegative linear combination_ \(p(x)\) _of_ \(p_{0}(x),p_{1}(x),\ldots,p_{m}(x)\) _is real-rooted. 
Moreover,_ \(p_{0}(x)\preceq p(x)\preceq p_{m}(x)\)_._ * _The sequence_ \((q_{0}(x),q_{1}(x),\ldots,q_{m+1}(x))\) _of partial sums_ \[q_{k}(x)=\sum_{i=k}^{m}p_{i}(x)\] _for_ \(k\in\{0,1,\ldots,m+1\}\) _is also interlacing._ * _The sequence_ \((t_{0}(x),t_{1}(x),\ldots,t_{m+1}(x))\) _defined by_ \[t_{k}(x)=x\sum_{i=0}^{k-1}p_{i}(x)+\sum_{i=k}^{m}p_{i}(x)\] _for_ \(k\in\{0,1,\ldots,m+1\}\) _is also interlacing._ Given a polynomial \(p(x)\in\mathbb{R}[x]\) of degree at most \(n\), there exist unique symmetric polynomials \(a(x),b(x)\in\mathbb{R}[x]\) with centers of symmetry \(n/2\) and \((n-1)/2\), respectively, such that \(p(x)=a(x)+xb(x)\). This expression is known as the _symmetric decomposition_ (or _Stapledon decomposition_) of \(p(x)\) with respect to \(n\). Then, \(p(x)\) is said to have a nonnegative_ (respectively, _unimodal_ or _real-rooted_) _symmetric decomposition_ with respect to \(n\) if \(a(x)\) and \(b(x)\) have nonnegative coefficients (respectively, are unimodal or real-rooted); see [7, 13] for more information about these concepts. Every polynomial which has a nonnegative unimodal symmetric decomposition with respect to \(n\) is unimodal, with a peak at position \(\lceil n/2\rceil\). ### Poset combinatorics Our notation and terminology generally follows that of [35, Chapter 3]. Let \(P\) be a finite graded poset of rank \(n\), having a minimum element \(\hat{0}\) and rank function \(\rho:P\to\{0,1,\ldots,n\}\), and let \(\hat{P}\) be the poset obtained from \(P\) by adding a maximum element \(\hat{1}\). Given \(T\subseteq[n]\), the \(T\)-rank-selected subposet of \(\hat{P}\) is defined as \[\hat{P}_{T}=\{y\in P:\rho(y)\in T\}\cup\{\hat{0},\hat{1}\}.\] We denote by \(\alpha_{\hat{P}}(T)\) the number of maximal chains of \(\hat{P}_{T}\) and set \[\beta_{\hat{P}}(T)=\sum_{S\subseteq T}(-1)^{|T\smallsetminus S|}\alpha_{\hat{P}} (S) \tag{4}\] for \(T\subseteq[n]\). Equivalently, we have \[\alpha_{\hat{P}}(T)=\sum_{S\subseteq T}\beta_{\hat{P}}(S) \tag{5}\] for \(T\subseteq[n]\). The collections of numbers \((\alpha_{\hat{P}}(T))_{T\subseteq[n]}\) and \((\beta_{\hat{P}}(T))_{T\subseteq[n]}\) are the _flag \(f\)-vector_ and the _flag \(h\)-vector_ of \(\hat{P}\), respectively. The _order complex_ of a finite poset \(Q\) is defined as the simplicial complex \(\Delta(Q)\) which consists of all chains in \(Q\). The \(f\)-polynomial and the \(h\)-polynomial of \(\Delta(Q)\) are defined by Equations (1) and (2), respectively, when \(P\) is replaced by \(Q\). Since the \(h\)-polynomial is unaffected when maximum or minimum elements are removed from \(Q\), we have the equivalent expressions \[f(\Delta(\hat{P}_{T}\smallsetminus\{\hat{0},\hat{1}\}),x)=\sum_{S\subseteq T} \alpha_{\hat{P}}(S)x^{|S|}=\sum_{S\subseteq T}\beta_{\hat{P}}(S)x^{|S|}(1+x)^ {|T\smallsetminus S|} \tag{6}\] and \[h(\Delta(\hat{P}_{T}),x)=\sum_{S\subseteq T}\alpha_{\hat{P}}(S)x^{|S|}(1-x)^{ |T\smallsetminus S|}=\sum_{S\subseteq T}\beta_{\hat{P}}(S)x^{|S|} \tag{7}\] for every \(T\subseteq[n]\), where the second equality in each case is a consequence of Equation (5). **Example 2.2**.: Let \(\hat{P}\) be the Boolean lattice \(B_{n}\) of subsets of \([n]\), partially ordered by inclusion. 
Then, \(\beta_{\hat{P}}(S)\) is equal to the number of permutations \(w\in\mathfrak{S}_{n}\) with \(\operatorname{Des}(w)=S\) for every \(S\subseteq[n-1]\)[35, Corollary 3.13.2] and Equation (7) yields that \[h(\Delta((B_{n})_{T}),x)=\sum_{w\in\mathfrak{S}_{n}\colon\operatorname{Des}(w )\subseteq T}x^{\operatorname{des}(w)}=A_{n}^{T}(x).\] We note that, by definition of \(A_{n}^{T}(x)\) and a standard argument, we have \(A_{n}^{T}(x)=A_{n}^{n-T}(x)\) for every \(T\subseteq[n-1]\), where \(n-T:=\{n-a:a\in T\}\) ## 3. Permutations with restricted descent set This section proves that the polynomials \(A_{n}^{T}(x)\) are real-rooted, as claimed in part (a) of Theorem 1.2, and in particular unimodal, locates their peak and discusses some interesting special cases and formulas. The applications of the real-rootedness of \(A_{n}^{T}(x)\) discussed here have a probabilistic flavor; see [12, Section 7.2][26] for overviews of this topic. For a probabilistic approach to the theory of descents in permutations, we recommend [10, Section 5]. Crucial to the proof will be the polynomials \[p_{n,k}^{T}(x)=\sum_{w\in\mathfrak{S}_{n+1,k+1}:\operatorname{Des}(w)\subseteq T }x^{\operatorname{des}(w)}, \tag{8}\] where \(T\subseteq[n]\), \(k\in\{0,1,\ldots,n\}\) and \(\mathfrak{S}_{n+1,k+1}\) is the set of permutations \(w\in\mathfrak{S}_{n+1}\) such that \(w(1)=k+1\). We set \(p_{n,k}^{T}(x)=p_{n,k}(x)\) when \(T=[n]\); these polynomials appeared in [14][16, Section 2.2] and have been studied intensely since then; see, for instance [4, Section 2][8, Section 3][12, Example 7.8.8] and the references given there. We note that \(p_{n,0}^{T}(x)=A_{n}^{T-1}(x)\) and \[p_{n,n}^{T}(x)=\begin{cases}xA_{n}^{T-1}(x),&\text{if $1\in T$}\\ 0,&\text{if $1\not\in T$}\end{cases}\] for \(T\subseteq[n]\), where \(T-1:=\{a-1:a\in T\}\) and, as mentioned in Section 1, \(A_{n}^{T}(x):=A_{n}^{T\cap[n]}(x)\) for \(T\subseteq\mathbb{N}\). The following statement is the main result of this section. **Theorem 3.1**.: _For all \(n\in\mathbb{N}\) and \(T\subseteq[n]\),_ \[(p_{n,0}^{T}(x),p_{n,1}^{T}(x),\ldots,p_{n,n}^{T}(x)) \tag{9}\] _is an interlacing sequence of real-rooted polynomials._ _In particular, \(A_{n}^{T-1}(x)\) is real-rooted and it interlaces \(A_{n+1}^{T}(x)\) for all positive integers \(n\) and \(T\subseteq[n]\)._ Proof.: We proceed by induction on \(n\), the result being trivial for \(n=0\). Suppose that \(n\geq 1\) and that (9) is an interlacing sequence of real-rooted polynomials when \(n\) is replaced by \(n-1\). It is straightforward to verify from the defining equation (8) that \[p_{n,k}^{T}(x)=\sum_{i=k}^{n-1}p_{n-1,i}^{T-1}(x)\] for \(k\in\{0,1,\ldots,n\}\), if \(1\not\in T\), and \[p_{n,k}^{T}(x)=x\sum_{i=0}^{k-1}p_{n-1,i}^{T-1}(x)+\sum_{i=k}^{n-1}p_{n-1,i}^{ T-1}(x)\] for \(k\in\{0,1,\ldots,n\}\), if \(1\in T\). This recurrence generalizes that of the special case \(T=[n]\); see [12, Example 7.8.8]. An application of Lemma 2.1 shows that, in either case, (9) is an interlacing sequence of real-rooted polynomials as well. This completes the inductive step. The last statement follows from part (a) of Lemma 2.1 since \(p_{n,0}^{T}(x)=A_{n}^{T-1}(x)\) and \(\sum_{k=0}^{n}p_{n,k}^{T}(x)=A_{n+1}^{T}(x)\). We recall that a polynomial \(p(x)=\sum_{k\geq 0}h_{k}x^{k}\in\mathbb{R}[x]\) with nonnegative and unimodal coefficients is said to have a _mode_\(m\) if there exists a unique \(m\in\frac{1}{2}\mathbb{Z}\) such that either \(h_{m}=\max_{k}h_{k}\), or \(h_{m\pm 1/2}=\max_{k}h_{k}\). 
**Corollary 3.2**.: _The polynomial \(A_{n}^{T}(x)\) is unimodal and log-concave for every \(T\subseteq[n-1]\). Moreover, \(A_{n}^{T}(x)\) has a mode \(m_{n}(T)\) such that \(\lfloor\mu_{n}(T)\rfloor\leq m_{n}(T)\leq\lceil\mu_{n}(T)\rceil\) for_

\[\mu_{n}(T):=r-\sum_{i=1}^{r}\binom{c_{i}+c_{i+1}}{c_{i}}^{-1}, \tag{10}\]

_where \(T=\{a_{1},a_{2},\ldots,a_{r}\}\) with \(1\leq a_{1}<\cdots<a_{r}<n\) and \(c_{i}=a_{i}-a_{i-1}\) for \(i\in[r+1]\), with \(a_{0}:=0\) and \(a_{r+1}:=n\). In particular,_

\[h_{0}(T)\leq h_{1}(T)\leq\cdots\leq h_{\lfloor r/2\rfloor}(T), \tag{11}\]

_if \(A_{n}^{T}(x)=\sum_{i=0}^{r}h_{i}(T)x^{i}\)._

Proof.: Since \(A_{n}^{T}(x)\) is real-rooted and has nonnegative coefficients, by a result of Darroch [17, Theorem 4] (see also [12, Theorem 2.2] [26, p. 284]) we only need to verify that the right-hand side of Equation (10) is equal to the expected number of descents when a permutation \(w\in\mathfrak{S}_{n}\) with \(\operatorname{Des}(w)\subseteq T\) is selected uniformly at random. This holds because the probability that \(a_{i}\in\operatorname{Des}(w)\) for such \(w\) is easily computed to be \(1-\binom{c_{i}+c_{i+1}}{c_{i}}^{-1}\). The last statement follows since \(\mu_{n}(T)\geq r/2\).

**Remark 3.3**.: The fact (see Example 2.2) that \(A_{n}^{T}(x)=h(\Delta((B_{n})_{T}),x)\), combined with Equation (7), yields the explicit formula

\[A_{n}^{T}(x)=\sum_{S\subseteq T}\alpha_{B_{n}}(S)\,x^{|S|}(1-x)^{|T\smallsetminus S|},\]

where \(\alpha_{B_{n}}(S)\) is a multinomial coefficient. We thank Ira Gessel [20] for pointing out that this is equivalent to the determinantal formula \(x^{r}A_{n}^{T}(1/x)=n!\det(\theta_{ij}(x))_{0\leq i,j\leq r}\), where

\[\theta_{ij}(x)=\begin{cases}0,&\text{if }i>j+1\\ 1,&\text{if }i=j+1\\ \dfrac{(1-x)^{j-i}}{(a_{j+1}-a_{i})!},&\text{if }i\leq j\end{cases}\]

and \(T=\{a_{1},a_{2},\ldots,a_{r}\}\) is as in Corollary 3.2, and for suggesting a direct combinatorial proof.

**Example 3.4**.: (a) For \(T=[r]\subseteq[n-1]\), the polynomial \(A_{n}^{T}(x)\) is the descent enumerator for permutations \(w\in\mathfrak{S}_{n}\) which have ascents in the last \(n-r-1\) positions. A \(q\)-analogue of \(A_{n}^{T}(x)\) in this case was studied in [16] (although the unimodality of \(A_{n}^{T}(x)\) was not addressed there). Theorem 3.1 and Corollary 3.2 imply that \(A_{n}^{T}(x)\) is a real-rooted, hence unimodal, polynomial of degree \(r\) and that it has a mode \(m\) such that \(\lfloor r/2\rfloor\leq m\leq\lceil(r+1)/2\rceil\). Since \((B_{n})_{T}\), with its maximum element removed, is a simplicial poset in this case, the real-rootedness of \(A_{n}^{T}(x)\) already follows from the main result of [14]. Setting \(q=1\) in the formula of [16, Theorem 2.10] gives that

\[\sum_{m\geq 0}\left(\sum_{i=0}^{r}\binom{n-r+i-1}{i}m^{i}(m+1)^{r-i}\right)x^{m}=\frac{A_{n}^{T}(x)}{(1-x)^{r+1}}\]

or, equivalently (by [14, Equation (4)]), that

\[A_{n}^{T}(x)=\sum_{i=0}^{r}\binom{n-r+i-1}{i}p_{r,i}(x).\]

In particular, \(A_{n}^{T}(x)\) is interlaced by the Eulerian polynomial \(p_{r,0}(x)=A_{r}(x)\).

(b) More generally, for \(T=\{s+1,s+2,\ldots,s+r\}\subseteq[n-1]\), the polynomial \(A_{n}^{T}(x)\) is the descent enumerator for permutations \(w\in\mathfrak{S}_{n}\) which have ascents in the first \(s\) and the last \(n-r-s-1\) positions. According to Theorem 3.1 and Corollary 3.2, \(A_{n}^{T}(x)\) is a real-rooted, hence unimodal, polynomial of degree \(r\) which has a mode \(m\) such that \(\lfloor(r-1)/2\rfloor\leq m\leq\lceil(r+1)/2\rceil\).
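To make Theorem 3.1 and Corollary 3.2 concrete, here is a short brute-force check in Python (an illustrative sketch of ours, not part of the proofs; the function names are our own): it computes \(A_{n}^{T}(x)\) by enumerating \(\mathfrak{S}_{n}\), verifies numerically that all roots are real, and compares the location of the peak with the mean \(\mu_{n}(T)\) of Equation (10).

```python
from itertools import permutations
from math import comb
import numpy as np

def restricted_descent_poly(n, T):
    """Coefficients h_0,...,h_r of A_n^T(x), by brute force over S_n."""
    T = set(T)
    coeffs = [0] * (len(T) + 1)
    for w in permutations(range(1, n + 1)):
        des = {i + 1 for i in range(n - 1) if w[i] > w[i + 1]}
        if des <= T:
            coeffs[len(des)] += 1
    return coeffs

def mu(n, T):
    """The expected number of descents mu_n(T) of Equation (10)."""
    a = [0] + sorted(T) + [n]
    c = [a[i] - a[i - 1] for i in range(1, len(a))]
    return len(T) - sum(1 / comb(c[i] + c[i + 1], c[i]) for i in range(len(T)))

n, T = 7, [2, 3, 5]
h = restricted_descent_poly(n, T)
roots = np.roots(h[::-1])                      # numpy wants highest degree first
print(h, np.all(np.abs(roots.imag) < 1e-8))    # all roots real (Theorem 3.1)
print(mu(n, T), h.index(max(h)))               # peak sits near mu_n(T)
```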
**Example 3.5**.: Let \(T=\{2,4,\ldots,2n-2\}\), so that \(A_{2n}^{T}(x)\) is the descent enumerator for permutations \(w\in\mathfrak{S}_{2n}\) which have an ascent in every odd position. By Theorem 3.1 and Corollary 3.2, \(A_{2n}^{T}(x)\) is a real-rooted, hence unimodal, polynomial of degree \(n-1\) which has a mode \(m_{n}\) such that \(\lfloor 5(n-1)/6\rfloor\leq m_{n}\leq\lceil 5(n-1)/6\rceil\). Let us choose a permutation \(w\in\mathfrak{S}_{2n}\) with \(\operatorname{Des}(w)\subseteq T=\{2,4,\ldots,2n-2\}\) uniformly at random and let \(X_{n}(w)=\operatorname{des}(w)\) for such \(w\in\mathfrak{S}_{2n}\). One may compute the variance of the random variable \(X_{n}\) as \(\sigma_{n}^{2}=(19n-13)/180\) for \(n\geq 2\). As a consequence of Corollary 3.2, \(X_{n}\) has mean \(\mu_{n}=5(n-1)/6\). Given the real-rootedness of \(A_{2n}^{T}(x)\), a theorem of Bender [9] (see also [12, Theorem 2.1] [26, p. 286]) implies that \((X_{n}-\mu_{n})/\sigma_{n}\) converges to the standard normal distribution as \(n\to\infty\). We conclude this section with the following question. Part (b) provided a lot of the motivation behind this paper; it is an open problem [7, Question 7.2] to decide whether the inequalities which appear there hold for the \(h\)-vectors of all \((r-1)\)-dimensional doubly Cohen-Macaulay simplicial complexes. An affirmative answer to part (a) would imply the (weaker) top-heavy inequalities \(h_{i}(T)\leq h_{r-i}(T)\) for \(0\leq i\leq\lfloor r/2\rfloor\); we refer the reader to [39] for this implication and for the concept of a convex ear decomposition. **Question 3.6**.: _Let \(A_{n}^{T}(x)=\sum_{i=0}^{r}h_{i}(T)x^{i}\), where \(T\subseteq[n-1]\) has size \(r\)._ 1. _Does the order complex of the rank-selected subposet_ \((B_{n})_{T}\) _of the Boolean lattice_ \(B_{n}\) _(with its minimum and maximum elements removed) have a convex ear decomposition?_ 2. _Do the inequalities_ \[\frac{h_{0}(T)}{h_{r}(T)}\leq\frac{h_{1}(T)}{h_{r-1}(T)}\leq\cdots\leq\frac{h _{r}(T)}{h_{0}(T)}\] _hold?_ ## 4. Rank-selected subposets of simplicial posets This section proves part (b) of Theorem 1.2 and gives an application. We recall that a finite poset \(P\) with a minimum element \(\hat{0}\) is said to be _simplicial_[31][32, Section III.6] if the interval \([\hat{0},y]\) is isomorphic to a Boolean lattice for every \(y\in P\). The enumerative invariant of a graded simplicial poset \(P\) of rank \(n\) which will be essential to the proof is the \(h\)_-polynomial_ of \(P\). This was defined by Stanley [31] as \[h(P,x)=\sum_{i=0}^{n}f_{i-1}(P)\,x^{i}(1-x)^{n-i},\] where \(f_{i-1}(P)\) is the number of elements of \(P\) of rank \(i\). We then have \[f_{j-1}(P)=\sum_{i=0}^{j}\binom{n-i}{j-i}h_{i}(P) \tag{12}\] for \(j\in\{0,1,\ldots,n\}\) and \(h(P,x)=h(\Delta,x)\), if \(P\) is the face poset of an \((n-1)\)-dimensional simplicial complex \(\Delta\). Stanley [31] (see also [32, Section III.6]) showed that \(h(P,x)\) has nonnegative coefficients for every Cohen-Macaulay simplicial poset \(P\). Another essential ingredient for the proof of Theorem 1.2 is the following statement (an exercise from [32]), which expresses the flag \(h\)-vector of a graded simplicial poset in terms of its \(h\)-vector. We provide a proof for the convenience of the reader. **Lemma 4.1**.: ([32, Exercise III.15]) _Let \(P\) be a graded simplicial poset of rank \(n\). 
Then,_ \[\beta_{\hat{P}}(S)=\sum_{k=0}^{n}h_{k}(P)\ \#\{w\in\mathfrak{S}_{n+1}:\,w(n+1)=k+ 1,\mathrm{Des}(w)=[n+1]\smallsetminus S\}\] _for every \(S\subseteq[n]\)._ Proof.: Let \(\mathrm{Asc}(w):=[n]\smallsetminus\mathrm{Des}(w)\) be the set of ascents of a permutation \(w\in\mathfrak{S}_{n+1}\). We need to show that \[\beta_{\hat{P}}(S)=\sum_{k=0}^{n}h_{k}(P)\ \#\{w\in\mathfrak{S}_{n+1}:\,w(n+1)=k+ 1,\mathrm{Asc}(w)=S\}\] for every \(S\subseteq[n]\) or, equivalently, that \[\alpha_{\hat{P}}(T)=\sum_{k=0}^{n}h_{k}(P)\ \#\{w\in\mathfrak{S}_{n+1}:\,w(n+1)=k+ 1,\mathrm{Asc}(w)\subseteq T\}\] for every \(T\subseteq[n]\). Let us write \(T=\{a_{1},a_{2},\ldots,a_{r}\}\subseteq[n]\), with \(1\leq a_{1}<\cdots<a_{r}\leq n\). There are \(f_{a_{r}-1}(P)\) elements of rank \(a_{r}\) in \(P\) and \(\binom{a_{r}}{a_{1},a_{2}-a_{1},\ldots,a_{r}-a_{r-1}}\) chains of elements of ranks \(a_{1},a_{2},\ldots,a_{r-1}\) in any Boolean lattice of rank \(a_{r}\). Given this and Equation (12), we find that \[\alpha_{\hat{P}}(T) =f_{a_{r}-1}(P)\binom{a_{r}}{a_{1},a_{2}-a_{1},\ldots,a_{r}-a_{r-1}}\] \[=\sum_{k=0}^{n}\binom{n-k}{a_{r}-k}h_{k}(P)\binom{a_{r}}{a_{1},a_{2 }-a_{1},\ldots,a_{r}-a_{r-1}}.\] Thus, it suffices to verify that \(\binom{n-k}{a_{r}-k}\binom{a_{r}}{a_{1},a_{2}-a_{1},\ldots,a_{r}-a_{r-1}}\) is equal to the number of permutations \(w\in\mathfrak{S}_{n+1}\) such that \(w(n+1)=k+1\) and \(\operatorname{\mathrm{Asc}}(w)\subseteq T\), a task which can safely be left to the reader. Proof of Theorem 1.2.: Given Theorem 3.1, we only need to show part (b). By Lemma 4.1 we have \[\beta_{\hat{P}}(S) =\sum_{k=0}^{n}h_{k}(P)\ \#\{w\in\mathfrak{S}_{n+1}:\,w(n+1)=k+1, \operatorname{\mathrm{Asc}}(w)=S\}\] \[=\sum_{k=0}^{n}h_{k}(P)\ \#\{w\in\mathfrak{S}_{n+1}:\,w(1)=k+1, \operatorname{\mathrm{Des}}(w)=n+1-S\}.\] Therefore, by Equation (7), \[h(\Delta(\hat{P}_{T}),x) =\sum_{S\subseteq T}\beta_{\hat{P}}(S)x^{|S|}\] \[=\sum_{k=0}^{n}h_{k}(P)\sum_{S\subseteq T}\#\{w\in\mathfrak{S}_{ n+1}:\,w(1)=k+1,\operatorname{\mathrm{Des}}(w)=n+1-S\}\,x^{|n+1-S|}\] \[=\sum_{k=0}^{n}h_{k}(P)\sum_{w\in\mathfrak{S}_{n+1}:\,w(1)=k+1, \operatorname{\mathrm{Des}}(w)\subseteq n+1-T}x^{\operatorname{\mathrm{des} }(w)}\] \[=\sum_{k=0}^{n}h_{k}(P)p_{n,k}^{n+1-T}(x)\] and the proof follows from Theorem 3.1, Lemma 2.1 (a) and the fact that \(p_{n,0}^{n+1-T}(x)=A_{n}^{n-T}(x)=A_{n}^{T}(x)\). ### Colored permutations As an application, let us generalize part (a) of Theorem 1.2 to \(r\)-colored permutations. An _\(r\)-colored permutation_ of the set \([n]\) is defined as a pair \(w\times\mathbf{z}\), where \(w=(w(1),w(2),\ldots,w(n))\in\mathfrak{S}_{n}\) and \(\mathbf{z}=(z_{1},z_{2},\ldots,z_{n})\in\{0,1,\ldots,r-1\}^{n}\). The number \(z_{i}\) is thought of as the color assigned to \(w(i)\). The set of all \(r\)-colored permutations of \([n]\) is denoted by \(\mathfrak{S}_{n}[\mathbb{Z}_{r}]\). Let \(u=w\times\mathbf{z}\in\mathfrak{S}_{n}[\mathbb{Z}_{r}]\) be an \(r\)-colored permutation, as before, and set \(w(n+1)=n+1\) and \(z_{n+1}=0\). A _descent_ of \(u\) is any index \(i\in[n]\) such that either \(z_{i}>z_{i+1}\), or \(z_{i}=z_{i+1}\) and \(w(i)>w(i+1)\). Thus, \(n\) is a descent of \(u\) if and only if \(w(n)\) has nonzero color. As usual, we denote by \(\operatorname{Des}(u)\) and \(\operatorname{des}(u)\) the set and the number of descents of \(u\in\mathfrak{S}_{n}[\mathbb{Z}_{r}]\), respectively. 
The polynomial \[A^{T}_{n,r}(x)=\sum_{u\in\mathfrak{S}_{n}[\mathbb{Z}_{r}]:\operatorname{Des}(u )\subseteq T}x^{\operatorname{des}(u)}, \tag{13}\] defined for \(T\subseteq[n]\), provides a common generalization of \(A^{T}_{n}(x)\) (the special case \(r=1\)) and the \(r\)-colored Eulerian polynomial \(A_{n,r}(x)\) (the special case \(T=[n]\)), introduced and studied by Steingrimsson [36, 37]. The latter was shown to be real-rooted in [36, Theorem 3.19][37, Theorem 19]. **Theorem 4.2**.: _The polynomial \(A^{T}_{n,r}(x)\) is real-rooted and interlaced by \(A^{T}_{n}(x)\) for all positive integers \(n,r\) and every \(T\subseteq[n]\)._ Proof.: We will apply Theorem 1.2 to the poset of \(r\)-colored subsets of the set \([n]\), defined as follows. We consider the subsets \(\Omega\) of \([n]\times\{0,1,\ldots,r-1\}\) for which for every \(i\in[n]\) there is at most one \(j\in\{0,1,\ldots,r-1\}\) such that \((i,j)\in\Omega\) and let \(P\) be the set of all such subsets, partially ordered by inclusion. Thus, \(P\) is a graded simplicial poset of rank \(n\) which is isomorphic to the Boolean lattice \(B_{n}\) for \(r=1\). It was shown in the proof of [2, Theorem 1.3] that \(P\) is shellable, hence Cohen-Macaulay, and that \(\beta_{\hat{P}}(S)\) is equal to the number of \(r\)-colored permutations \(u\in\mathfrak{S}_{n}[\mathbb{Z}_{r}]\) with descent set equal to \(S\), for every \(S\subseteq[n]\). As a result, in view of Equation (7), \[h(\Delta(\hat{P}_{T}),x)=\sum_{S\subseteq T}\beta_{\hat{P}}(S)x^{|S|}=\sum_{u \in\mathfrak{S}_{n}[\mathbb{Z}_{r}]:\operatorname{Des}(u)\subseteq T}x^{ \operatorname{des}(u)}=A^{T}_{n,r}(x)\] for every \(T\subseteq[n]\) and the proof follows from Theorem 1.2. ## 5. Noncrossing partition lattices This section proves Theorem 1.3. We first recall the definition of \(\operatorname{NC}_{W}\). Let \(W\) be an irreducible finite Coxeter group with rank \(r_{W}\) and set of reflections \(T\). For \(\alpha\in W\) we denote by \(\ell_{T}(\alpha)\) the smallest \(k\) such that \(\alpha\) can be written as a product of \(k\) reflections in \(T\). We define the partial order \(\preceq\) on \(W\) by letting \(\alpha\preceq\beta\) if \(\ell_{T}(\alpha)+\ell_{T}(\alpha^{-1}\beta)=\ell_{T}(\beta)\), in other words if there exists a shortest factorization of \(\alpha\) into reflections which is a prefix of such a shortest factorization of \(\beta\). Then, \(\operatorname{NC}_{W}\) is defined as the closed interval \([e,\gamma]\) in \((W,\preceq)\), where \(e\in W\) is the identity element and \(\gamma\) is any Coxeter element of \(W\). The noncrossing partition poset \(\operatorname{NC}_{W}\) is a rank-symmetric, graded lattice with rank function \(\ell_{T}\) and rank \(r_{W}\); its combinatorial type is independent of the choice of \(\gamma\). A detailed exposition of noncrossing partition lattices can be found in [1, Chapter 2]. The proof of Theorem 1.3 is based on explicit combinatorial interpretations of the polynomial \(h(\Delta(\operatorname{NC}_{W}),x)\) for the irreducible finite Coxeter groups of classical types. Before stating them, we need to introduce some definitions and notation. A _descent_ of a word \(w\in[r]^{n}\) is any index \(i\in[n-1]\) such that \(w(i)\geq w(i+1)\). We denote by \(\mathcal{D}_{n}\) the set of words \(w\in\mathbb{Z}^{n}\) such that \((|w(1)|,w(2),\ldots,w(n))\in[n-1]^{n}\). A _descent_ of such a word \(w\in\mathcal{D}_{n}\) is defined as any index \(i\in[n-1]\) such that * \(|w(i)|>w(i+1)\), or * \(w(i)=w(i+1)>0\). 
As usual, we denote by \(\operatorname{Des}(w)\) and \(\operatorname{des}(w)\) the set and the number of descents, respectively, of a word \(w\).

**Proposition 5.1**.: _Let \(W\) be an irreducible finite Coxeter group of Coxeter type \(\mathcal{X}\). Then,_

\[h(\Delta(\operatorname{NC}_{W}),x)=\begin{cases}\dfrac{1}{n}\sum_{w\in[n]^{n-1}}x^{\operatorname{des}(w)},&\text{if }\mathcal{X}=A_{n}\\[6pt] \sum_{w\in[n]^{n}}x^{\operatorname{des}(w)},&\text{if }\mathcal{X}=B_{n}\\[6pt] \sum_{w\in\mathcal{D}_{n}}x^{\operatorname{des}(w)},&\text{if }\mathcal{X}=D_{n}.\end{cases}\]

_Moreover,_

\[h(\Delta(\operatorname{NC}_{W}),x)=\begin{cases}1+(m-1)x,&\text{if }\mathcal{X}=I_{2}(m)\\ 1+28x+21x^{2},&\text{if }\mathcal{X}=H_{3}\\ 1+275x+842x^{2}+232x^{3},&\text{if }\mathcal{X}=H_{4}\\ 1+100x+265x^{2}+66x^{3},&\text{if }\mathcal{X}=F_{4}\\ 1+826x+10778x^{2}+21308x^{3}+8141x^{4}+418x^{5},&\text{if }\mathcal{X}=E_{6}\\ 1+4152x+110958x^{2}+446776x^{3}+412764x^{4}+85800x^{5}+2431x^{6},&\text{if }\mathcal{X}=E_{7}\\ 1+25071x+1295238x^{2}+9523785x^{3}+17304775x^{4}+8733249x^{5}+1069289x^{6}+17342x^{7},&\text{if }\mathcal{X}=E_{8}.\end{cases}\]

Proof.: Let us write \(P=\operatorname{NC}_{W}\) and first consider the case \(\mathcal{X}=A_{n}\). The explicit formula of [18, Theorem 3.2] (see also [28, p. 196]) for the entries of the flag \(f\)-vector of \(P\) can be rewritten as

\[\alpha_{P}(T)=\frac{1}{n}\,\#\{w\in[n]^{n-1}:\operatorname{Des}(w)\subseteq T\}\]

for \(T\subseteq[n-2]\). From Equation (5) it readily follows that

\[\beta_{P}(S)=\frac{1}{n}\,\#\{w\in[n]^{n-1}:\operatorname{Des}(w)=S\}\]

for \(S\subseteq[n-2]\) and hence that

\[h(\Delta(P),x)=\sum_{S\subseteq[n-2]}\beta_{P}(S)x^{|S|}=\frac{1}{n}\sum_{w\in[n]^{n-1}}x^{\operatorname{des}(w)}.\]

One can reach the same conclusion by using the combinatorial interpretation of \(\beta_{P}(S)\) in terms of parking functions, given in [33, Proposition 3.2]. The proof of the formula for \(\mathcal{X}=B_{n}\) is entirely similar, once one rewrites the formula of [33, Proposition 7] for the flag \(f\)-vector of \(P\) as

\[\alpha_{P}(T)=\#\{w\in[n]^{n}:\operatorname{Des}(w)\subseteq T\}\]

for \(T\subseteq[n-1]\).

Let us now consider the case \(\mathcal{X}=D_{n}\), which is more involved: it is not true any more that \(\beta_{P}(S)\) is equal to the number of words \(w\in\mathcal{D}_{n}\) with descent set equal to \(S\). Let us write \(\bar{P}=P\smallsetminus\{\hat{0},\hat{1}\}\). The formula of [6, Theorem 1.2] for the flag \(f\)-vector of \(P\) shows that

\[f_{k-1}(\Delta(\bar{P}))=2\sum_{(a_{1},a_{2},\ldots,a_{k+1})\vDash n}\binom{n-1}{a_{1}}\binom{n-1}{a_{2}}\cdots\binom{n-1}{a_{k+1}}\;+\;\sum_{(a_{1},a_{2},\ldots,a_{k+1})\vDash n}\,\sum_{i=1}^{k+1}\binom{n-1}{a_{1}}\cdots\binom{n-2}{a_{i}-2}\cdots\binom{n-1}{a_{k+1}},\]

where \(\vDash\) indicates that the first two sums run through all compositions \((a_{1},a_{2},\ldots,a_{k+1})\) of \(n\) with \(k+1\) parts.
Using the fact that \(\binom{n-2}{a_{i}-2}=\frac{a_{i}-1}{n-1}\binom{n-1}{a_{i}-1}\), changing the order of summation in the double sum and replacing \(a_{i}\) with \(a_{i}+1\) yields that

\[f_{k-1}(\Delta(\bar{P}))=2\sum_{(a_{1},a_{2},\ldots,a_{k+1})\vDash n}\binom{n-1}{a_{1}}\binom{n-1}{a_{2}}\cdots\binom{n-1}{a_{k+1}}\;+\;\sum_{i=1}^{k+1}\,\sum_{(a_{1},a_{2},\ldots,a_{k+1})\vDash n-1}\frac{a_{i}}{n-1}\binom{n-1}{a_{1}}\cdots\binom{n-1}{a_{i}}\cdots\binom{n-1}{a_{k+1}}.\]

Changing again the order of summation in the double sum, since \(\sum_{i=1}^{k+1}a_{i}=n-1\), we find that

\[f_{k-1}(\Delta(\bar{P}))=2\sum_{(a_{1},a_{2},\ldots,a_{k+1})\vDash n}\binom{n-1}{a_{1}}\binom{n-1}{a_{2}}\cdots\binom{n-1}{a_{k+1}}\;+\;\sum_{(a_{1},a_{2},\ldots,a_{k+1})\vDash n-1}\binom{n-1}{a_{1}}\binom{n-1}{a_{2}}\cdots\binom{n-1}{a_{k+1}}.\]

We may rewrite this formula as

\[f_{k-1}(\Delta(\bar{P}))=2\sum_{T\subseteq[n-1],\,|T|=k}\#\{w\in[n-1]^{n}:\operatorname{Des}(w)\subseteq T\}\;+\;\sum_{T\subseteq[n-2],\,|T|=k}\#\{w\in[n-1]^{n-1}:\operatorname{Des}(w)\subseteq T\},\]

whence

\[h(\Delta(P),x) =h(\Delta(\bar{P}),x)=\sum_{k=0}^{n-1}f_{k-1}(\Delta(\bar{P}))\,x^{k}(1-x)^{n-1-k}\]
\[=2\sum_{k=0}^{n-1}\sum_{T\subseteq[n-1],\,|T|=k}\#\{w\in[n-1]^{n}:\operatorname{Des}(w)\subseteq T\}x^{k}(1-x)^{n-1-k}\]
\[+\sum_{k=0}^{n-1}\sum_{T\subseteq[n-2],\,|T|=k}\#\{w\in[n-1]^{n-1}:\operatorname{Des}(w)\subseteq T\}x^{k}(1-x)^{n-1-k}\]
\[=2\sum_{T\subseteq[n-1]}\#\{w\in[n-1]^{n}:\operatorname{Des}(w)\subseteq T\}x^{|T|}(1-x)^{n-1-|T|}\]
\[+\sum_{T\subseteq[n-2]}\#\{w\in[n-1]^{n-1}:\operatorname{Des}(w)\subseteq T\}x^{|T|}(1-x)^{n-1-|T|}.\]

Setting \(\operatorname{Des}(w)=S\) in each sum, summing over all \(S\subseteq T\) and changing the order of summation yields that

\[h(\Delta(P),x) =2\sum_{S\subseteq[n-1]}\#\{w\in[n-1]^{n}:\operatorname{Des}(w)=S\}\sum_{S\subseteq T\subseteq[n-1]}x^{|T|}(1-x)^{n-1-|T|}\]
\[+\sum_{S\subseteq[n-2]}\#\{w\in[n-1]^{n-1}:\operatorname{Des}(w)=S\}\sum_{S\subseteq T\subseteq[n-2]}x^{|T|}(1-x)^{n-1-|T|}\]

and hence that

\[h(\Delta(P),x) =2\sum_{S\subseteq[n-1]}\#\{w\in[n-1]^{n}:\operatorname{Des}(w)=S\}x^{|S|}\]
\[+(1-x)\sum_{S\subseteq[n-2]}\#\{w\in[n-1]^{n-1}:\operatorname{Des}(w)=S\}x^{|S|} \tag{14}\]
\[=2\sum_{w\in[n-1]^{n}}x^{\operatorname{des}(w)}+(1-x)\sum_{w\in[n-1]^{n-1}}x^{\operatorname{des}(w)}.\]

Considering the cases \(|w(1)|\neq w(2)\) and \(|w(1)|=w(2)\) for a word \(w\in\mathcal{D}_{n}\) shows that the number of words \(w\in\mathcal{D}_{n}\) with \(\operatorname{des}(w)=k\) is equal to the coefficient of \(x^{k}\) in the expression (14) and the proof follows. The exceptional types are handled by straightforward computations in Sage [40].

**Corollary 5.2**.: _The noncrossing partition lattice \(\operatorname{NC}_{W}\) has a real-rooted chain polynomial for every irreducible finite Coxeter group \(W\)._

Proof.: Let us first consider the case of groups of type \(D\). By Proposition 5.1, it suffices to show that \(h_{n}(x):=\sum_{w\in\mathcal{D}_{n}}x^{\operatorname{des}(w)}\) is real-rooted for every \(n\geq 2\). For \(k\geq 2\), we denote by \(\mathcal{D}_{n,k}\) the set of words \(w\in\mathbb{Z}^{k}\) such that \((|w(1)|,w(2),\ldots,w(k))\in[n-1]^{k}\) and note that \(\mathcal{D}_{n,n}=\mathcal{D}_{n}\). We define the notion of descent for words \(w\in\mathcal{D}_{n,k}\) just as in the special case \(k=n\) and set

\[h_{n,k,j}(x)=\sum_{w\in\mathcal{D}_{n,k}:\,w(k)=j}x^{\operatorname{des}(w)}\]

for \(j\in[n-1]\). We will prove that \((h_{n,k,n-1}(x),\ldots,h_{n,k,2}(x),h_{n,k,1}(x))\) is an interlacing sequence of real-rooted polynomials for all \(n,k\geq 2\) by induction on \(k\). This holds for \(k=2\) since then \(h_{n,k,j}(x)=(2j-1)+(2n-2j-1)x\) for every \(j\in[n-1]\). The inductive step follows by an application of part (c) of Lemma 2.1, since

\[h_{n,k+1,j}(x)=\sum_{i=1}^{j-1}h_{n,k,i}(x)\,+\,x\sum_{i=j}^{n-1}h_{n,k,i}(x)\]

for \(j\in[n-1]\). In particular, \(h_{n,n+1,1}(x)=xh_{n}(x)\) is real-rooted for every \(n\geq 2\) and hence so is \(h_{n}(x)\). A similar (and even simpler) argument shows that \(\sum_{w\in[r]^{n}}x^{\operatorname{des}(w)}\) is real-rooted for all \(n,r\geq 1\). This covers the cases of groups of types \(A\) and \(B\). The exceptional groups can be treated with a case-by-case verification.
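The recurrence for \(h_{n,k,j}(x)\) is easy to test by computer. The following Python sketch (ours, for illustration only) builds \(h_{n}(x)\) level by level from the recurrence and cross-checks the result against direct enumeration of \(\mathcal{D}_{n}\):

```python
from itertools import product

def poly_add(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(m)]

def type_D_h_poly(n):
    """Coefficients of h_n(x) = sum over D_n of x^{des(w)}, built with the
    recurrence h_{n,k+1,j} = sum_{i<j} h_{n,k,i} + x * sum_{i>=j} h_{n,k,i}."""
    h = {j: [2 * j - 1, 2 * n - 2 * j - 1] for j in range(1, n)}  # level k = 2
    for k in range(2, n + 1):                   # pass from level k to k + 1
        new = {}
        for j in range(1, n):
            low, high = [0], [0]
            for i in range(1, j):
                low = poly_add(low, h[i])
            for i in range(j, n):
                high = poly_add(high, h[i])
            new[j] = poly_add(low, [0] + high)  # prepending 0 multiplies by x
        h = new
    return h[1][1:]                             # h_{n,n+1,1}(x) = x * h_n(x)

def type_D_h_poly_brute(n):
    """Same polynomial by direct enumeration of the words in D_n."""
    coeffs = [0] * n
    for w1 in [s * a for a in range(1, n) for s in (1, -1)]:
        for rest in product(range(1, n), repeat=n - 1):
            v = (w1,) + rest
            des = sum(1 for i in range(n - 1)
                      if abs(v[i]) > v[i + 1] or 0 < v[i] == v[i + 1])
            coeffs[des] += 1
    while coeffs[-1] == 0:
        coeffs.pop()
    return coeffs

assert type_D_h_poly(5) == type_D_h_poly_brute(5)
```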
**Symmetric decompositions.** The second claim of Theorem 1.3 will be proven by an application of a result of Jochemko [21], after the expressions of Proposition 5.1 for \(h(\Delta(\mathrm{NC}_{W}),x)\) are suitably rewritten. For a polynomial or formal power series \(H(x)=\sum_{n\geq 0}h_{n}x^{n}\in\mathbb{C}[[x]]\) we use the notation \(\mathcal{S}_{r}(H(x))=\sum_{n\geq 0}h_{rn}x^{n}\).

**Lemma 5.3**.: _Let \(E_{n,r}(x)=\sum_{w\in[r]^{n}}x^{\operatorname{des}(w)}\). Then,_

\[x^{n}E_{n,r}(1/x)=\mathcal{S}_{r}\left(x(1+x+x^{2}+\cdots+x^{r-1})^{n+1}\right)\]

_for all \(n,r\geq 1\)._

Proof.: First we relate the polynomials \(E_{n,r}(x)\) to the polynomials

\[\tilde{E}_{n,r}(x):=\sum_{w\in\mathcal{W}_{n,r}}x^{\operatorname{asc}^{*}(w)},\]

where \(\mathcal{W}_{n,r}\) is the set of words \(w:\{0,1,\ldots,n\}\to[r]\) with \(w(0)=1\) and \(\operatorname{asc}^{*}(w)\) is the number of indices \(i\in[n]\) such that \(w(i-1)<w(i)\). We note that

\[x^{n-1}E_{n,r}(1/x)=\sum_{w\in[r]^{n}}x^{n-1-\operatorname{des}(w)}=\sum_{w\in[r]^{n}}x^{\operatorname{asc}^{*}(w)},\]

where \(\operatorname{asc}^{*}(w)=n-1-\operatorname{des}(w)\) is the number of strict ascents of \(w\in[r]^{n}\). Distinguishing the cases \(w(1)=1\) and \(w(1)\geq 2\) for such a word and for a word \(w\in\mathcal{W}_{n,r}\) we get

\[x^{n-1}E_{n,r}(1/x)=\tilde{E}_{n-1,r}(x)+\sum_{w\in[r]^{n}:\,w(1)\geq 2}x^{\operatorname{asc}^{*}(w)},\]
\[\tilde{E}_{n,r}(x)=\tilde{E}_{n-1,r}(x)+x\sum_{w\in[r]^{n}:\,w(1)\geq 2}x^{\operatorname{asc}^{*}(w)}.\]

These equalities imply that

\[x^{n}E_{n,r}(1/x)=\tilde{E}_{n,r}(x)+(x-1)\tilde{E}_{n-1,r}(x). \tag{15}\]

We now recall that

\[\tilde{E}_{n,r}(x)=\mathcal{S}_{r}\left((1+x+x^{2}+\cdots+x^{r-1})^{n+1}\right). \tag{16}\]

This formula follows from the identity

\[\sum_{m\geq 0}\binom{n+rm}{n}x^{m}=\frac{\tilde{E}_{n,r}(x)}{(1-x)^{n+1}}, \tag{17}\]

which can be proved by a standard 'placing balls into boxes' argument (see [29, Corollary 8] for a \(q\)-analogue) and the computation

\[\sum_{m\geq 0}\binom{n+rm}{n}x^{m} =\mathcal{S}_{r}\left(\frac{1}{(1-x)^{n+1}}\right)=\mathcal{S}_{r}\left(\frac{(1+x+x^{2}+\cdots+x^{r-1})^{n+1}}{(1-x^{r})^{n+1}}\right)=\frac{\mathcal{S}_{r}\left((1+x+x^{2}+\cdots+x^{r-1})^{n+1}\right)}{(1-x)^{n+1}}.\]

Combining Equations (15) and (16) we get

\[x^{n}E_{n,r}(1/x) =\mathcal{S}_{r}\left((1+x+x^{2}+\cdots+x^{r-1})^{n+1}\right)+(x-1)\mathcal{S}_{r}\left((1+x+x^{2}+\cdots+x^{r-1})^{n}\right)\]
\[=\mathcal{S}_{r}\left((1+x+x^{2}+\cdots+x^{r-1})^{n+1}+(x^{r}-1)(1+x+x^{2}+\cdots+x^{r-1})^{n}\right)\]
\[=\mathcal{S}_{r}\left(x(1+x+x^{2}+\cdots+x^{r-1})^{n+1}\right)\]

and the proof follows.

The following result of Jochemko [21] will be applied in the proof of Theorem 1.3.
**Theorem 5.4**.: ([21, Theorem 1.1]) _Let \(h(x)=h_{0}+h_{1}x+\cdots+h_{d}x^{d}\) be a polynomial of degree \(s\leq d\) with nonnegative coefficients such that_ * \(h_{0}+h_{1}+\cdots+h_{i}\geq h_{d}+h_{d-1}+\cdots+h_{d-i+1}\)_, and_ * \(h_{0}+h_{1}+\cdots+h_{i}\leq h_{s}+h_{s-1}+\cdots+h_{s-i}\)__ _for all \(i\). Then, \(\mathcal{S}_{r}\left(h(x)(1+x+x^{2}+\cdots+x^{r-1})^{d+1}\right)\) has a nonnegative real-rooted symmetric decomposition with respect to \(d\) whenever \(r\geq\max\{s,d+1-s\}\)._ Proof of Theorem 1.3.: Using the notation of Lemma 5.3, by Proposition 5.1 and its proof we have \[h(\Delta(\mathrm{NC}_{W}),x)=\begin{cases}(1/n)E_{n-1,n}(x),&\text{if }\mathcal{X}=A_{n} \\ E_{n,n}(x),&\text{if }\mathcal{X}=B_{n}\\ 2E_{n,n-1}(x)+(1-x)E_{n-1,n-1}(x),&\text{if }\mathcal{X}=D_{n}.\end{cases}\] In view of Lemma 5.3, these formulas may be rewritten as \[x^{r_{W}}h(\Delta(\mathrm{NC}_{W}),1/x)=\begin{cases}(1/n)\mathcal{S}_{r} \left(x(1+x+x^{2}+\cdots+x^{n-1})^{n}\right),&\text{if }\mathcal{X}=A_{n} \\ \mathcal{S}_{r}\left(x(1+x+x^{2}+\cdots+x^{n-1})^{n+1}\right),&\text{if } \mathcal{X}=B_{n}\\ \mathcal{S}_{r}\left((x+x^{2})(1+x+x^{2}+\cdots+x^{n-2})^{n+1}\right),&\text{ if }\mathcal{X}=D_{n}.\end{cases}\] These expressions and Theorem 5.4 imply in each case that \(x^{r_{W}}h(\Delta(\mathrm{NC}_{W}),1/x)\) has a nonnegative real-rooted symmetric decomposition with respect to \(r_{W}\). Since \(h(\Delta(\mathrm{NC}_{W}),x)\) has degree \(r_{W}-1\), it has a nonnegative real-rooted symmetric decomposition with respect to \(r_{W}-1\). The exceptional groups are again handled by a routine case by case verification. We close this section with the analogue of Question 3.6. **Question 5.5**.: _Let \(h(\Delta(\mathrm{NC}_{W}),x)=\sum_{i=0}^{r}h_{i}(W)x^{i}\), where \(r=r_{W}-1\)._ 1. _Does the order complex_ \(\Delta(\overline{\mathrm{NC}}_{W})\) _of the noncrossing partition lattice_ \(\mathrm{NC}_{W}\) _(with its minimum and maximum elements removed) have a convex ear decomposition?_ 2. _Do the inequalities_ \[\frac{h_{0}(W)}{h_{r}(W)}\leq\frac{h_{1}(W)}{h_{r-1}(W)}\leq\cdots\leq\frac{h _{r}(W)}{h_{0}(W)}\] _hold?_ **Acknowledgments**. Part of the motivation behind Theorem 1.2 was developed during the workshop 'Interactions between Topological Combinatorics and Combinatorial Commutative Algebra', held at BIRS (Banff, Canada) in April 2023. The first named author wishes to thank the organizers Mina Bigdeli, Sara Faridi, Satoshi Murai and Adam Van Tuyl for the invitation and the participants for useful discussions. The authors also wish to thank Christian Stump for help with the computation of the chain polynomial of the noncrossing partition lattice of type \(E_{8}\).
2302.08605
Using Explainable AI to Cross-Validate Socio-economic Disparities Among Covid-19 Patient Mortality
This paper applies eXplainable Artificial Intelligence (XAI) methods to investigate the socioeconomic disparities in COVID patient mortality. An Extreme Gradient Boosting (XGBoost) prediction model is built based on a de-identified Austin area hospital dataset to predict the mortality of COVID-19 patients. We apply two XAI methods, Shapley Additive exPlanations (SHAP) and Locally Interpretable Model Agnostic Explanations (LIME), to compare the global and local interpretation of feature importance. This paper demonstrates the advantages of using XAI which shows the feature importance and decisive capability. Furthermore, we use the XAI methods to cross-validate their interpretations for individual patients. The XAI models reveal that Medicare financial class, older age, and gender have high impact on the mortality prediction. We find that LIME local interpretation does not show significant differences in feature importance comparing to SHAP, which suggests pattern confirmation. This paper demonstrates the importance of XAI methods in cross-validation of feature attributions.
Li Shi, Redoan Rahman, Esther Melamed, Jacek Gwizdka, Justin F. Rousseau, Ying Ding
2023-02-16T22:09:05Z
http://arxiv.org/abs/2302.08605v1
# Using Explainable AI to Cross-Validate Socio-economic Disparities Among Covid-19 Patient Mortality

###### Abstract

_This paper applies eXplainable Artificial Intelligence (XAI) methods to investigate the socioeconomic disparities in COVID-19 patient mortality. An Extreme Gradient Boosting (XGBoost) prediction model is built based on a de-identified Austin area hospital dataset to predict the mortality of COVID-19 patients. We apply two XAI methods, Shapley Additive exPlanations (SHAP) and Locally Interpretable Model Agnostic Explanations (LIME), to compare the global and local interpretation of feature importance. This paper demonstrates the advantages of using XAI, which reveals both feature importance and decisive capability. Furthermore, we use the XAI methods to cross-validate their interpretations for individual patients. The XAI models reveal that Medicare financial class, older age, and gender have a high impact on the mortality prediction. We find that LIME's local interpretation does not show significant differences in feature importance compared to SHAP, which suggests pattern confirmation. This paper demonstrates the importance of XAI methods in the cross-validation of feature attributions._

## 1 Introduction

Socioeconomic factors, such as health care access, race, and ethnicity, profoundly impact COVID-19 incidence and mortality [1]. A recent meta-review of 4.3 million patients from 68 studies highlights the strong association between socioeconomic disparity and COVID-19 mortality [2]. Understanding the effect of such societal factors helps us develop better health policies and health delivery methods to mitigate the disparity. Many studies show a strong relationship between socio-economic status and mortality and highlight the importance of socio-economic inequalities in health outcomes [3]. However, previous studies on socioeconomic features could not provide a comprehensive explanation at both the patient cohort level and the individual patient level [3]. In particular, cross-validation of different explanation methods is critical, because the explanation offered by a single method can be biased and one-sided.

eXplainable Artificial Intelligence (XAI) methods provide a new way to understand feature importance. They offer a variety of perspectives for dissecting feature importance: they highlight the decisive capability of each feature, the dependence between different features, and the joint contributions that shift the prediction for an individual patient either towards or away from the mean [4]. Therefore, XAI methods offer unique advantages over conventional feature-importance measures in machine learning.

Here, we apply two XAI methods, Shapley Additive exPlanations (SHAP) and Locally Interpretable Model Agnostic Explanations (LIME), to investigate the effect of socioeconomic disparities on COVID-19 patient mortality. Unlike previous XAI approaches [5, 6], our methods provide several improvements, including decisive capability identification, individual local interpretation, and cross-validation between different XAI methods. This paper examines a hospital dataset from Austin containing information regarding COVID-19-positive patients from March 2020 to December 2021. We apply the Extreme Gradient Boosting (XGBoost) algorithm to the dataset to understand the impact of the socioeconomic features on the patient mortality prediction. We use the Shapley score to illustrate the decisive capability of each feature.
Furthermore, we use SHAP and LIME to analyze individual patient mortality predictions, which allows us to understand how individual feature contributions differ from the global contributions. Finally, we propose statistical methods to cross-validate the local interpretations from SHAP and LIME. Our study emphasizes the importance of socioeconomic features in COVID-19 mortality and identifies that Medicare financial class, older age, and gender have a high impact on the mortality prediction for COVID-19 patients in Austin.

## 2 Background

XAI enables humans to interpret the model prediction process and enhances confidence in applying the results. Therefore, XAI tools have become more prevalent in healthcare analysis, especially in COVID-19-related studies, over the past few years. [7] used Shapley global interpretation to indicate the importance of socio-economic factors for the confirmed case rate and the death rate. In that research, non-white poverty had a high positive impact on the death rate, while being uninsured had a relatively high negative impact on the death rate in the southern states. [8] proposed a web-based architecture based on Random Forest and XGBoost classifiers and then used Shapley values to demonstrate how the predicted COVID-19 risk level depends on various health factors. [9] analyzed the relationship between clinical data and COVID-19 mortality. In this research, the authors applied LIME to illustrate that HbA1c (i.e., a marker of diabetes control) was the most significant contributor to the mortality risk. In all these studies, the researchers used feature importance plots to demonstrate various factors' contributions to the model prediction. However, these interpretations did not consider that a factor can have different impacts at different values.

Other studies used the SHAP summary plot to understand the decisive factor capacities. [10] predicted admission to the ICU and mortality among COVID-19 patients who received heparin, using the Hybrid Extreme Gradient Boosting (HXGBoost) classifier. The research used a SHAP summary plot to interpret the HXGBoost model and found that a low lymphocyte count at day 7, combined with increased FIO2 on days 1 and 5, increases the risk of mortality. [11] assessed the contribution of impulse-radio ultra-wideband radar factors to the prediction of COVID-19 test results with the help of SHAP summary and feature importance plots, and then used the analysis results for feature selection. In these studies, the researchers used SHAP summary plots to understand the decisive power of the features. This XAI method helps one understand how variation in a factor's value impacts the prediction outcome. This paper will use SHAP summary plots to investigate how the presence or absence of particular socio-economic categories impacts the prediction of patient mortality.

XAI has been used not only to understand how factors impact predictions globally, but also to understand the roles that the factors play in individual instances. [12] used a random forest model to dynamically predict the subsequent-day mortality risk of COVID-19 patients based on summary characteristics of longitudinal risk factor trajectories. They used SHAP local interpretation to compare the dynamic factor contributions and prediction outcomes of COVID-19-positive survivors versus COVID-19-positive non-survivors.
[13] used the Shapley force plot to interpret how five core laboratory parameters influence true positive and false negative predictions locally. [4] interpreted a random forest classifier using the Shapley waterfall plot, showing the local feature importance for individuals who died due to COVID-19 with or without diabetes mellitus. The evidence that individual feature contributions differed from global contributions stressed the advantages of individualized risk explanations over generic risk descriptions. All these previous findings using local interpretation illustrated that features with a high local impact may not have a high impact globally. Therefore, local interpretation can provide a unique view of how the factors work in different circumstances. In this research, we will use local interpretation to learn how socio-economic factors work in individual examples and thereby shed light on the practical applications of individualized mortality risk explanations.

Previous research demonstrated that there is more than one XAI tool and that each tool has its own computational mechanism and interpretation method. Therefore, cross-validation can increase the reliability of the model interpretations. Researchers mostly use SHAP for global interpretation and LIME for local interpretation [6, 14]. In [5], the authors used SHAP and LIME for global and local interpretation to understand the impact of socio-economic factors on COVID-19 mortality. They visualized the global and local interpretations, briefly compared the ranks of the factors for cross-validation, and concluded that age was the most important feature and time to hospital after symptom onset the second most important feature for the mortality predictions. Even though that study compared the results from SHAP and LIME side by side, it did not cross-validate them using statistical methods. Therefore, our research uses SHAP and LIME to interpret the model predictions and utilizes statistical methods to provide solid cross-validation, increasing the reliability and trustworthiness of the machine learning model interpretations.

## 3 Methods

### LIME (Locally Interpretable Model Agnostic Explanations)

The LIME method provides interpretations for individual predictions by locally approximating the model around the given prediction. It is a post-hoc, model-agnostic explanation technique. LIME is independent of the original classifier and works on specific observations [15]. LIME tries to fit a local model using sample data points similar to the observation being explained. LIME produces the explanations as follows:

\[\xi(x)=\operatorname*{argmin}_{g\in G}\mathcal{L}\left(f,g,\pi_{x}\right)+\Omega(g)\]

where \(G\) is a class of potentially interpretable models, \(g\in G\) is an explanation in the form of a model, \(\Omega(g)\) is a measure of complexity, \(\pi_{x}\) is a proximity measure between an instance \(z\) and \(x\), and \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) denotes the model being explained. LIME aims to minimize the locality-aware loss \(\mathcal{L}\) without making assumptions about \(f\).
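As a minimal sketch of how such a LIME explanation can be obtained for one patient visit (illustrative only; it reuses the `model`, `X_train`, `X_test`, and `feature_names` objects assumed in the training sketch given in the experiment-setup section below, and the parameter choices are our assumptions rather than the paper's exact configuration):

```python
from lime.lime_tabular import LimeTabularExplainer

# Assumes: X_train/X_test are encoded feature matrices (numpy arrays),
# feature_names matches their columns, and `model` is a fitted classifier.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["non-mortality", "mortality"],
    discretize_continuous=True,
)

# Fit a weighted local surrogate model around one instance and report
# per-feature weights (positive weights push the prediction to mortality).
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=10)
lime_weights = dict(exp.as_list())
print(lime_weights)
```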
### SHAP (Shapley Additive exPlanations)

SHAP is an integrated framework that explains the output of any model based on Shapley values. It measures feature importance even for linear models in the presence of multicollinearity [16]. In contrast to LIME, SHAP does not have to build a local model. The Shapley values are calculated from coalitional game theory and express model predictions as combinations of binary variables representing the presence of each covariate. The model is retrained on all feature subsets \(S\subseteq F\), and the predictions are compared with the variable present and with it missing. The Shapley value of the \(i\)-th feature, used as its feature importance, is thus a weighted average of its marginal contributions over all possible subsets:

\[\Phi_{i}=\sum_{S\subseteq F\smallsetminus\{i\}}\frac{|S|!\left(|F|-|S|-1\right)!}{|F|!}\left[f_{S\cup\{i\}}\big{(}x_{S\cup\{i\}}\big{)}-f_{S}(x_{S})\right]\]

Here, \(\Phi_{i}\) represents the feature importance of the \(i\)-th feature. This value can be positive or negative, depending on the impact of the feature on the model prediction. If the feature impacts the model positively, then the Shapley value assigned to the feature is positive; if the effect is negative, then the Shapley value reflects that impact.

### Cross Validation between SHAP and LIME

In general, LIME calculates the difference between the predictions with or without the variable, while SHAP measures the variable's contribution to the difference between the actual prediction and the mean prediction. Due to differences in the computational mechanisms, the local interpretations of SHAP and LIME may assign the features different impact rankings and different contribution tendencies. To compare the overall local-interpretation behavior of SHAP and LIME, we measure the feature impact consistency and the ranking difference to quantify the difference between the SHAP and LIME interpretations.

**Impact Value Consistency.** When the SHAP and LIME values are positive, the features push the prediction towards mortality; when negative, the features push the prediction towards non-mortality. Therefore, we compare the consistency of the contribution tendency by comparing the signs of the SHAP and LIME values. We then calculate, for each instance, the ratio of features whose impact tendencies are consistent:

\[ratio\ of\ impact\ value\ consistency=\frac{\#\ \text{of features with the same sign in SHAP and LIME}}{\#\ \text{of features used in the model prediction}}\]

**Impact Ranking Difference.** We selected the features with consistent contribution tendencies and compared their ranking differences between SHAP and LIME. For each instance, the features are separated based on the sign of the feature impact value and then ranked by absolute feature impact value (the feature with the highest impact is ranked 1). We then filter out the features with consistent contribution tendencies and calculate the rank difference for each feature:

\[impact\ ranking\ difference=SHAP\ feature\ ranking-LIME\ feature\ ranking\]
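The two statistics are straightforward to compute once the per-feature SHAP values and LIME weights for an instance are aligned in the same feature order. A self-contained sketch (ours; one possible reading of the definitions above, with function names as assumptions):

```python
import numpy as np

def impact_value_consistency(shap_vals, lime_vals):
    """Fraction of features whose SHAP and LIME impacts have the same sign."""
    s, l = np.asarray(shap_vals), np.asarray(lime_vals)
    return float(np.mean(np.sign(s) == np.sign(l)))

def ranks_by_sign(vals):
    """Rank features within each sign group by descending |impact| (1 = top)."""
    vals = np.asarray(vals)
    ranks = {}
    for sign in (1, -1):
        idx = np.where(np.sign(vals) == sign)[0]
        order = idx[np.argsort(-np.abs(vals[idx]))]
        ranks.update({int(i): r + 1 for r, i in enumerate(order)})
    return ranks

def impact_ranking_difference(shap_vals, lime_vals):
    """SHAP rank minus LIME rank, for features with a consistent tendency."""
    s, l = np.asarray(shap_vals), np.asarray(lime_vals)
    sr, lr = ranks_by_sign(s), ranks_by_sign(l)
    return {i: sr[i] - lr[i] for i in sr if i in lr
            if np.sign(s[i]) == np.sign(l[i])}
```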
This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of The University of Texas at Austin (IRB ID: 2020-04-0117). For the experiment, we selected ten different features to predict the mortality of COVID-19 patients. Table 1 describes the features in detail. To better understand how the features impact the predictions, we encoded most of the categorical data with OneHotEncoder, encoding the categorical features as one-hot numeric arrays and splitting each category into an individual column. However, because the admission source feature has many categories, we retained the OrdinalEncoder technique to encode it. For the model training, we excluded records with missing values in any column. In the filtered dataset, 18,368 patients tested positive, among which 601 patients died. The data were split into 80% and 20% for training and testing, respectively. In this research, we chose Extreme Gradient Boosting (XGBoost) as our machine learning model. It is a highly scalable tree boosting system that is sparsity-aware and applicable in different scenarios. XGBClassifier was used from the xgboost Python library, with the objective set to 'reg:logistic' since we are working on a binary classification problem. We tuned the parameters to prevent overfitting and finally set 'learning_rate' to 0.1, 'max_depth' to 5, and 'n_estimators' to 10. Considering that the data were imbalanced between the two classes, 'scale_pos_weight' was set to 'sqrt(total number of negative examples / total number of positive examples)' to assign different weights to the non-mortality and mortality classes (a configuration sketch is given below). After training the model, we used the Shapley feature importance and summary plots to analyze the decisive power of the factors. We then selected one mortality-class individual and one non-mortality-class individual to understand the local feature importance similarities and differences between SHAP and LIME. Finally, we cross-validated the interpretations from SHAP and LIME by comparing their impact value consistency and impact ranking differences. All the coding was done in Jupyter Notebook. ## 4 Results The XGBoost model achieved good overall performance (weighted precision (positive predictive value) = 0.95, weighted recall (sensitivity) = 0.97) in predicting the mortality of COVID-19 patients based on their socio-economic information (Table 2). We employed the SHAP and LIME tools to understand how the features impact the prediction globally and locally.
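As a concrete sketch of the training configuration of Section 3.4 (illustrative only; the random seed and split options are assumptions not specified above, and `X`, `y` denote the encoded feature matrix and the mortality labels):

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Class weighting for the imbalanced labels, as described in Section 3.4.
n_neg, n_pos = np.sum(y_train == 0), np.sum(y_train == 1)

model = XGBClassifier(
    objective="reg:logistic",  # logistic output in [0, 1] for binary labels
    learning_rate=0.1,
    max_depth=5,
    n_estimators=10,
    scale_pos_weight=np.sqrt(n_neg / n_pos),
)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # test-set accuracy
```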
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Feature name** & **Meaning** & **Encoding method** \\ \hline Encounter type & Classes to distinguish between different health care settings & _OneHotEncoder_ (encnt\_Emergency, encnt\_Outpatient, encnt\_Inpatient) \\ \hline Admission source & The origin of the patient's admission to the hospital & _OrdinalEncoder_ (0 = clinic referral, 1 = court/law enforcement, 2 = emergency room, 3 = HMO (health maintenance organization) referral, 4 = newborn (extramural birth), 5 = newborn (normal delivery), 6 = physician referral, 7 = transfer from a hospital, 8 = transfer from a skilled nursing facility, 9 = transfer from another health care facility (includes rehabilitation and psychiatric facilities)) \\ \hline Race & Individual's race & _OneHotEncoder_ (race\_BlackAfricanAmerican, race\_White, race\_Other) \\ \hline Ethnicity & Individual's cultural identification & _OneHotEncoder_ (ethnicity\_HispanicLatino, ethnicity\_Not) \\ \hline Gender & Individual's gender & _OneHotEncoder_ (gender\_F, gender\_M) \\ \hline Financial class & Payor groups for billing and reporting purposes & _OneHotEncoder_ (financ\_Medicare, financ\_Medicaid, financ\_Self, financ\_Commercial) \\ \hline Age & Individual's age & \\ \hline Deidentified zip code & First 3 digits of the patient resident zip code & \\ \hline Admit quarter & Quarter when the individual was admitted & \\ \hline Admit year & Year when the individual was admitted & \\ \hline \end{tabular} \end{table} Table 1: Selected feature explanations and encoding methods. ### Feature decisive power We utilize the SHAP feature importance plot and summary plot to understand the impact of the socio-economic features on the mortality predictions. The SHAP feature importance plot (Figure 1(a)) illustrates the average impact (i.e., the average absolute SHAP value) of each feature. The higher the ranking, the more significant the feature's impact on the mortality prediction model. It indicates that patient encounter type (i.e., emergency, outpatient, inpatient) plays a critical role in identifying patient mortality. Age, admission source, Medicare financial class, and gender also play a decisive role in the prediction. The other financial classes, ethnicity, race, and admit quarter and year have little effect on the output. The SHAP summary plot (Figure 1(b)) reveals how the decisive capacity of each feature changes with its value. Features are sorted according to their global contribution. The color of the dots in each feature row represents the instance feature value, and the horizontal position represents the instance SHAP value. The clustering of same-colored dots indicates a possible relationship between the feature value and its impact on the model prediction. The figure shows that when the encounter type is emergency or outpatient (i.e., feature value = 1), it has a decisive power towards the non-mortality prediction. In contrast, if the encounter type is not emergency or outpatient (i.e., feature value = 0), it has a relatively low power towards the mortality prediction. Correspondingly, patients tend to have a higher probability of dying when the encounter type is inpatient. Different ages also have different power in the prediction: when age is higher, patients are more likely to suffer mortality, whereas for lower ages, the mixed pattern shows that younger ages mostly contribute to non-mortality predictions but may have either a high or a low impact on them.
The pattern in the admission source feature indicates that patients are more likely to die when they are transferred from other medical facilities. Medicare financial class ranks sixth and illustrates that patients with Medicare insurance are more likely to die; when the financial class is Medicaid, however, patients are slightly less likely to die. The figure also illustrates that the female gender contributes to the non-mortality output, while the male gender contributes to the mortality output. The SHAP values of the other features are primarily gathered around 0, showing that they do not contribute substantially to the mortality prediction. ### Local interpretation Figure 1: (A) SHAP feature importance plot and (B) SHAP summary plot. \begin{table} \begin{tabular}{l r r r r} \hline & Precision & Recall & F1 Score & Support \\ \hline 0 (non-mortality) & 0.97 & 1.00 & 0.98 & 3556 \\ 1 (mortality) & 0.50 & 0.03 & 0.05 & 118 \\ Accuracy & & & 0.97 & 3674 \\ Macro Avg & 0.73 & 0.51 & 0.52 & 3674 \\ Weighted Avg & 0.95 & 0.97 & 0.95 & 3674 \\ \hline \end{tabular} \end{table} Table 2: Classification report for XGBoost model predictions on the test set. Local interpretations allow us to understand how the features affect an individual prediction and how the feature contributions are consistent with or different from the global average in specific circumstances. Figures 2 and 3 show the local interpretation of SHAP and LIME for one non-mortality patient prediction (true negative). Figures 2(A) and 2(B) are SHAP local interpretation plots. Features in red push the prediction toward mortality, while the ones in blue push it toward non-mortality; the size of the bar indicates the impact power on the prediction. The figure reveals that, for this individual case, age at 75, financial class as Medicare, zip code starting with 786, and non-female gender contribute to the mortality prediction. On the other hand, encounter type as emergency, admission source as emergency room, and admit year as 2021 push the prediction toward a higher survival probability. Figure 3 shows the LIME local interpretation summarizing the factors contributing to the prediction, where orange-colored factors indicate a contribution to the mortality prediction and blue-colored factors a contribution to the non-mortality prediction. The table on the right of Figure 3 shows the contribution rank together with the feature's actual value for the specific instance. It shows that emergency encounter type and admission source have the most significant impact on the prediction of non-mortality, followed by not-outpatient encounter type, age, and Medicare financial class contributing to the mortality prediction. As in the SHAP local interpretation, race, admit year, and admit quarter have a low impact on the prediction. For this specific individual, we found that the Medicare financial class plays an essential role in the mortality-class prediction, which can hardly be seen in the global interpretation since age is more polarized than Medicare in the SHAP summary plot. Figures 4 and 5 show the local interpretation of SHAP and LIME for one mortality patient prediction (true positive). Figure 4 illustrates that, for this individual case, admission source as 'transfer from another health care facility', zip code starting with 789, encounter type as inpatient, financial class as Medicare, and age at 64 contribute to the mortality prediction.
The LIME interpretation in Figure 5 shows that zip code, encounter type, admission source, age, and financial class have the largest impacts on the prediction towards mortality. In both figures, zip code, encounter type, admission source, financial class, and age rank high and push the prediction to a higher probability of mortality in the local interpretation. Even though the SHAP summary plot indicates that zip code is a feature of medium-high impact, in this individual case zip code is among the top two ranked features contributing to the mortality prediction. Figure 3: LIME local interpretation (non-mortality). Figure 2: SHAP local interpretation (non-mortality). ### Cross validation In the above two individual local interpretations, the contribution tendencies toward non-mortality/mortality and the impact ranks of the features are quite consistent between SHAP and LIME. We therefore proceed to the cross-validation of SHAP and LIME to assess the reliability of the local interpretations. We provide the descriptive statistics of the impact value consistency and impact ranking difference in Table 3. As shown in the table, the mean ratio of impact value consistency is about 0.846 (SD = 0.061), which means that in each individual local interpretation, on average about 85% of the features push the prediction in the same direction, either mortality or non-mortality. Among all the instances in the testing dataset, at least 63.2% and at most 94.7% of the feature impact values in SHAP and LIME are consistent. The mean of the impact ranking differences is around -0.032 (SD = 2.207). Even though the ranking difference can be large, most (about 77.5%) of the impact ranking differences are between -2 and +2. Spearman's rank correlation was computed to assess the relationship between the SHAP ranking and the LIME ranking of the features in the local interpretation. For the features contributing to mortality in both SHAP and LIME, there was a strong, positive correlation between the two variables, which was statistically significant (r = 0.69, p = 0.00). For the features contributing to non-mortality in both SHAP and LIME, there was also a strong, positive correlation between the two variables (r = 0.82, p = 0.00). Figure 4: SHAP local interpretation (mortality). Figure 5: LIME local interpretation (mortality). \begin{table} \begin{tabular}{l l l l l l} \hline & Mean & Median & Std & Minimum & Maximum \\ \hline Ratio of Impact Value Consistency (per individual) & 0.846 & 0.842 & 0.061 & 0.632 & 0.947 \\ Impact Ranking Difference & -0.032 & 0 & 2.207 & -9 & 11 \\ \hline \end{tabular} \end{table} Table 3: Descriptive statistics for feature consistency and ranking differences between SHAP and LIME. ## 5 Discussion In this research, we trained an XGBoost model to predict mortality due to COVID-19 from associated socio-economic factors. We employed the Shapley summary plot to demonstrate the decisive power of each feature. Furthermore, we performed a case-by-case patient prediction analysis using SHAP and LIME, which gave us insight into how the feature contributions differ between individuals. Finally, we cross-validated the local interpretations of SHAP and LIME using statistical methods in order to assess their reliability. In the XAI interpretations, we could see several clear patterns, indicating that some features have a decisive impact on the mortality prediction.
When the encounter type is inpatient, the patient is at a higher risk of mortality than with the outpatient or emergency encounter type. This makes logical, and possibly obvious, sense: only those with more severe symptoms would be admitted to the hospital as an inpatient, versus only being seen in the emergency department or having a short hospital stay where their status meets only the criteria for outpatient. This research also found that when the insurance class is Medicare, patients are at a higher risk of mortality, while when the insurance class is Medicaid, patients are more likely to survive. This is likely because people with Medicare insurance are primarily older, with a higher likelihood of mortality. Previous research found that older adults have a higher mortality rate due to high Case Fatality Rate (CFR) and symptomatic infection rates [18, 19]. Our findings, showing clear patterns that older people are at higher mortality risk, are consistent with the previous research. Additionally, the decisive power of gender also confirms previous research findings: men are more at risk for worse outcomes and death because of the differences in sex hormones [20, 21]. In contrast to [7], which mentioned that non-white race plays a vital role in mortality prediction in southern states, our findings show that race has little impact on COVID-19 mortality compared to other factors. However, there are known limitations to representing the ground truth of race and ethnicity from EHR data [22]. Further work is needed to establish high-quality ground truth data reflecting social determinants of health. SHAP global interpretations make it easier to understand the decisive power of the contribution of each feature in the model. The feature importance offers a holistic perspective on the impact of different feature values on the output. Observing the clustering patterns in the SHAP summary plot could help us interpret the clinical patterns in the relationship between socio-economic features and COVID-19 patients' mortality. In individual circumstances, feature contributions to the mortality prediction can differ from the global average. As in other research using local interpretation, the important factors for individuals do not always align with the global interpretations; the top five impact features change across predictions for different cohorts [4]. In our research, for some patients, zip code and admission source could act as the top two impact features, ranking higher than encounter type and age, which differs from the global interpretation. This inconsistency is valuable when we study individual circumstances and compare predictions of different classes. Many of the social determinants of health we studied in this research are non-modifiable. However, providing information to clinicians caring for these patients, particularly local interpretations, would provide insight as to why a prediction tool indicates an increased risk of mortality for the given patient, and could drive quality improvement initiatives or increased resource allocation when certain features are met, such as a patient who is transferred from another facility, is older, or has Medicaid. Applying these methods to additional features would support clinical and policy decisions with explainability of which features contribute the most to clinical predictions.
Even though SHAP has a more precise interpretation from the viewpoint of its computational mechanism, the LIME local interpretation did not show significant differences in feature consistency and rankings compared to SHAP in the COVID-19 patient mortality prediction. The cross-validation provides evidence for the reliability of XAI local interpretations. Furthermore, the local surrogate-model approach allows LIME to save substantial computation time for local interpretation; LIME is therefore more efficient than SHAP in implementation. In some clinical circumstances requiring rapid reference to the model prediction, LIME could be an efficient and low-cost local interpretation tool for giving preliminary results. ## 6 Conclusion XAI explains the model prediction explicitly, including feature importance, decisive power, and local interpretations. A variety of XAI tools also offer the possibility of cross-validation. XAI tools have the potential to considerably enhance the transparency and trustworthiness of prediction results. Health providers could utilize these Explainable AI tools to validate AI prediction results and quickly make medical and health policy judgments and decisions. Possible future research includes utilizing other datasets to increase generalizability, conducting assessments of collinearity or correlation of features, and studying the implementation of these XAI tools on other clinical datasets to understand their application value in different clinical scenarios. This research is supported by the COVID-19 Research Accelerator Grant funded by the Gates Foundation (CORONAVIRUSHUB-D-21-00132). We thank Cole Maguire for his expertise and assistance throughout the data processing of our study and for his help in writing and reviewing the manuscript.
2305.04098
Neutral outflows in high-z QSOs
OH+ absorption is a powerful tracer of inflowing and outflowing gas in the predominantly atomic diffuse and turbulent halo surrounding galaxies. In this letter, we present observations of OH+(1_1-1_0), CO(9-8) and the underlying dust continuum in 5 strongly lensed z~2-4 QSOs, using ALMA to detect outflowing neutral gas. Blue-shifted OH+ absorption is detected in 3/5 QSOs and tentatively detected in a 4th. Absorption at systemic velocities is also detected in one. OH+ emission is observed in 3/5 QSOs at systemic velocities and CO(9-8) is detected in all 5 QSOs at high S/N, providing information on the dense molecular gas within the host galaxy. We compare our sample to high-z far-infrared (FIR) luminous star-forming and active galaxies from the literature. We find no difference in OH+ absorption line properties between active and star-forming galaxies with both samples following the same optical depth-dust temperature relation, suggesting that these observables are driven by the same mechanism in both samples. Similarly, star-forming and active galaxies both follow the same OH+ emission-FIR relation. Obscured QSOs display broader (>800 km/s) emission than the unobscured QSOs and all but one of the high-z star-forming galaxies, likely caused by the warm molecular gas reservoir obscuring the accreting nucleus. Broader CO(9-8) emission (>500 km/s) is found in obscured versus unobscured QSOs, but overall cover a similar range in line widths as the star-forming galaxies and follow the CO(9-8)-FIR luminosity relation found in low-z galaxies. We find that outflows traced by OH+ are only detected in extreme star-forming galaxies (broad CO emission) and in both types of QSOs, which, in turn, display no red-shifted absorption. This suggests that diffuse neutral outflows in galaxy halos may be associated with the most energetic evolutionary phases leading up to and following the obscured QSO phase.
Kirsty M. Butler, Paul P. van der Werf, Alain Omont, Pierre Cox
2023-05-06T17:02:10Z
http://arxiv.org/abs/2305.04098v1
# Neutral outflows in high-z QSOs ###### Abstract Context: OH\({}^{+}\) absorption is a powerful tracer of inflowing and outflowing gas in the predominantly atomic diffuse and turbulent halo surrounding galaxies. In this letter, we present observations of OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)), CO(9-8) and the underlying dust continuum in five strongly lensed \(z\sim 2-4\) QSOs, using the Atacama Large Millimeter/submillimeter Array (ALMA) to detect outflowing neutral gas. Blue-shifted OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) absorption is detected in 3/5 QSOs and tentatively detected in a fourth. Absorption at systemic velocities is also detected in one source that also displays blue-shifted absorption. OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) emission is observed in 3/5 QSOs at systemic velocities and the OH\({}^{+}\)(2\({}_{1}\)-1\({}_{0}\)) transition is also detected in one source. CO(9-8) is detected in all 5 QSOs at high S/N, providing information on the dense molecular gas within the host galaxy. We compare our sample to high-\(z\) far-infrared (FIR) luminous star-forming and active galaxies from the literature. We find no difference in OH\({}^{+}\) absorption line properties between active and star-forming galaxies, with both samples roughly following the same optical depth-dust temperature relation. This suggests that these observables are driven by the same mechanism in both samples. Similarly, star-forming and active galaxies both follow the same OH\({}^{+}\) emission-FIR relation. Obscured QSOs display broader (\(>800\) km s\({}^{-1}\)) emission than the unobscured QSOs and all but one of the high-\(z\) star-forming galaxies (likely caused by the warm molecular gas reservoir obscuring the accreting nucleus). Broader CO(9-8) emission (\(>500\) km s\({}^{-1}\)) is found in obscured versus unobscured QSOs, but overall they cover a similar range in line widths as the star-forming galaxies and follow the CO(9-8)-FIR luminosity relation found in low-\(z\) galaxies. We find that outflows traced by OH\({}^{+}\) are only detected in extreme star-forming galaxies (indicated by broad CO(9-8) emission) and in both types of QSOs, which, in turn, display no red-shifted absorption. This suggests that diffuse neutral outflows in galaxy halos may be associated with the most energetic evolutionary phases leading up to and following the obscured QSO phase. ## 1 Introduction Feedback and outflows play a key role in the evolution, regulation, and demise of galaxies throughout cosmic time. Much of the gas accreted onto dark matter halos (and, consequently, their central galaxies, where it condenses to form new stars or feed supermassive black hole growth) is ejected back out of the galaxy via the energetic mechanisms associated with these phenomena. The removal of gas regulates the fuel available for galaxy growth and transports mass and angular momentum to larger galactic radii via fountain flows (Governato et al., 2010) or, in more powerful cases, pollutes the circumgalactic and intergalactic medium with enriched gas (Travascio et al., 2020). At \(z\sim 1-3\), the star formation rate density and black hole accretion of the universe peak (Madau & Dickinson, 2014) and, as a result, feedback and outflows must do so as well. Outflows are complex multi-phase phenomena, in which the warmer ionised phase is found to dominate the kinetic energy, whilst the cooler neutral and molecular phases dominate the mass and momentum budget of the outflow (Fluetsch et al., 2021).
The cooler phases are of particular interest as they remove the direct fuel for star formation, but they have only become available for observation at high-\(z\) relatively recently with new and upgraded facilities, such as ALMA and the NOrthern Extended Millimeter Array (NOEMA). Low-\(z\) studies have commonly made use of high-velocity line wings of bright emission lines to detect outflows (Feruglio et al., 2010); however, at high-\(z\), detecting these weak signals from CO or [CII] becomes difficult (e.g. Fan et al., 2018; Ginolfi et al., 2020). Blue-shifted molecular absorption lines have thus become a popular and reliable way of tracing cool gas outflows both at cosmic noon (e.g. OH\({}^{+}\): Butler et al., 2021; Riechers et al., 2021; Shao et al., 2022; CH\({}^{+}\): Falgarone et al., 2017) and dawn (e.g. OH 119 \(\mu m\): Spilker et al., 2018, 2020; Butler et al., 2023; H\({}_{2}\)O: Jones et al., 2019). One molecule of note here is OH\({}^{+}\), which traces the extended turbulent halo of predominantly atomic and diffuse gas surrounding galaxies (Indriolo et al., 2018). Moreover, the proximity of OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) with the CO(9-8) emission line means that we can simultaneously observe the warm molecular gas, providing additional information on the physical properties within the galaxy (Berta et al., 2021; Riechers et al., 2021). Currently, observations are largely limited to star-forming galaxies (Butler et al., 2021; Riechers et al., 2021; Berta et al., 2021; Indriolo et al., 2018; Shao et al., 2022; Butler et al., in prep.), with only a few observations achieved in active galaxies (Stanley et al., 2021; Shao et al., 2019, 2022). In this letter, we present OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)), CO(9-8), and dust continuum observations in five \(z\sim 2-4\) far-infrared (FIR) bright QSOs. Throughout this work, we adopt a flat \(\Lambda\)CDM cosmology with \(\Omega_{\rm m}=0.307\) and H\({}_{0}=67.7\) km s\({}^{-1}\) Mpc\({}^{-1}\) (Planck Collaboration et al., 2016). ## 2 Sample and observations The data were obtained in the Cycle 7 ALMA project 2019.1.01802.S (P.I.: K.M. Butler), targeting five FIR-bright QSOs at \(z\sim 2-4\) (Tables 1 and 2). The five quasars were selected based on their 500 \(\mu\)m continuum flux densities from a sample of 104 gravitationally lensed QSOs presented in Stacey et al. (2018). The sources are listed in both the CASTLES survey (Kochanek et al., 1999) and the Sloan Digital Sky Survey Quasar Lens Search catalogue (Inada et al., 2012), which have since been followed up with _Herschel_/SPIRE observations (Stacey et al., 2018), providing accurate estimates of their FIR luminosities and dust temperatures (Table 2). The selected sample covers about two decades in dust temperature and about an order of magnitude in \(L_{\rm FIR}\), and includes one quasar with jet-dominated radio emission, MG J0414+0534 (Stacey et al., 2018). All sources were observed with ALMA band 7, except PSS J2322+1944, which was observed in band 5. The receivers were tuned such that two overlapping spectral windows cover the OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) and CO(9-8) lines in one sideband, with the two remaining spectral windows placed in the second sideband to detect the underlying dust continuum at high S/N (Fig. 11). No calibration issues were found and the observations were all made during good or adequate weather conditions. The raw data were reduced using CASA (McMullin et al., 2007).
The calibrated data were non-interactively imaged using a robust weighting of 0.5 and a noise threshold of 1\(\sigma\) with the tclean routine. We did not subtract the continuum and separated the sidebands into two cubes, leaving the frequency resolution the same as that of the receiver channels (15.624 MHz) (Table 11). ## 3 Spectra and spectral fitting We present both sidebands of the ALMA spectra for each source in Fig. 11. The spectra were created and fitted twice: first by summing over all spaxels with an underlying dust continuum level \(\geq 3\sigma\), estimated from a first guess of the line-free channels. We then identified and fit the spectral components and used them to identify any spaxel containing a channel value \(\geq 3\sigma\) within the FWHM of one or more of the CO(9-8) components. These spaxels were then included in the spatially integrated spectra, which were fitted a second time. We fit the spectra with a combination of Gaussian spectral components and a linear continuum slope simultaneously, leaving the central frequencies, line widths, intensities, and continuum gradient as free parameters (an illustrative sketch of such a fit is given below). We used the same line parameters to describe the OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) and OH\({}^{+}\)(2\({}_{1}\)-1\({}_{0}\)) transitions. The final best-fit parameters are presented in Tables 1 and 11. ## 4 Results Here, we present the best-fit parameters of the OH\({}^{+}\) absorption, emission and CO(9-8) emission lines in our sample of strongly lensed high-\(z\) obscured (MG J0414+0534) and unobscured (HE 1104-1805, PSS J2322+1944, RX J0911+0551 and WFI J2026-4536) QSOs. We compared our results with high-\(z\) sources from the literature with comparable FIR luminosities, including obscured (W0410\(-\)0913, \(z=3.592\), Stanley et al. 2021) and unobscured (SDSS J231038.88+185519.7, \(z=6.0031\), Shao et al. 2022) QSOs, and dusty star-forming galaxies (DSFGs; HerBS-89a, \(z=2.95\), Berta et al. 2021, and the sample of Riechers et al. 2021). The central velocity of the CO(9-8) emission (Table 1) was used as the systemic velocity when calculating the Doppler-shifted velocities of the OH\({}^{+}\) lines. ### Fitted line properties Blue-shifted OH\({}^{+}\) absorption is detected in three out of the five QSOs, as well as at systemic velocities in the one obscured and jetted QSO, MG J0414+0534 (Fig. 11, Table 1). We include RX J0911+0551 as a tentative detection as it appears in both the OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) and OH\({}^{+}\)(2\({}_{1}\)-1\({}_{0}\)) transitions; however, we stress that these values are uncertain. No red-shifted absorption was found, unlike in some of the sources reported in Riechers et al. (2021) or in the case of HerBS-89a (Berta et al. 2021). Blue-shifted velocities and linewidths are not boosted with respect to the DSFGs. The QSOs show a trend between faster outflow velocity and broader full width at half maximum (FWHM); a larger sample is needed to confirm this finding. OH\({}^{+}\) emission is found in three out of the five QSOs at systemic velocities, unlike in the case of the DSFGs presented in Riechers et al. (2021), which display a large spread in velocity offsets between the OH\({}^{+}\) and CO(9-8) emission. The obscured QSO MG J0414+0534 displays the broadest OH\({}^{+}\) emission line in the QSO sample. Strong CO(9-8) emission is observed in all five QSOs. The two obscured QSOs (MG J0414+0534 and W0410\(-\)0913) display significantly broader CO(9-8) line widths than the unobscured QSOs. The DSFGs span a wide range of CO(9-8) line widths (Riechers et al. 2021).
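All of the line parameters quoted here derive from the simultaneous Gaussian-plus-continuum fits of Section 3. As an illustrative sketch of such a fit for a single line (not the reduction pipeline actually used; the units, initial guesses, and single-component assumption are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light [km/s]

def line_model(nu, amp, nu0, sigma, c0, c1):
    """One Gaussian component on a linear continuum slope; additional
    Gaussians would be summed for multi-component fits."""
    return amp * np.exp(-0.5 * ((nu - nu0) / sigma) ** 2) + c0 + c1 * nu

# nu: observed frequency axis [GHz]; flux: spectrum [mJy], continuum included.
p0 = [5.0, 150.0, 0.1, 1.0, 0.0]  # placeholder initial guesses
popt, pcov = curve_fit(line_model, nu, flux, p0=p0)
amp, nu0, sigma, c0, c1 = popt

fwhm_kms = 2.3548 * sigma / nu0 * C_KMS       # line FWHM in velocity units
line_flux = amp * sigma * np.sqrt(2 * np.pi)  # integrated flux [mJy GHz]
```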
However, the eight sources with blue-shifted OH\({}^{+}\) absorption all display broad (FWHM \(>\) 500 km s\({}^{-1}\)) CO(9-8) line widths, similar to the obscured QSOs and broader than all the unobscured QSOs. ### Derived line properties From the fitted line properties, we derive integrated OH\({}^{+}\) absorption optical depths: \[\int\tau\,\mathrm{d}v=-\int\ln\left(\frac{\mathrm{S}_{\mathrm{trans}}}{\mathrm{S}_{\mathrm{cont}}}\right)\mathrm{d}v, \tag{1}\] where S\({}_{\mathrm{trans}}\) is the transmitted flux and S\({}_{\mathrm{cont}}\) is the unobscured continuum flux level fitted at the central velocity of the line. The OH\({}^{+}\) emission and CO(9-8) line luminosities were derived using the expressions given by Solomon et al. (1992) (Table 2). The integrated OH\({}^{+}\) absorption optical depths (\(\int\tau_{\mathrm{OH^{+}-A}}\mathrm{d}v\)) of the QSO sample lie at the low end of the DSFG sample (Fig. 2a), roughly following the \(\int\tau_{\mathrm{OH^{+}-A}}\mathrm{d}v\) versus dust temperature relation found by Riechers et al. (2021). The QSOs similarly follow the positive OH\({}^{+}\) emission line luminosity (\(L^{\prime}_{\mathrm{OH^{+}-E}}\))-L\({}_{\mathrm{FIR}}\) relation set by the DSFGs. Interestingly, the scatter in this relation is greatly reduced when only considering the DSFGs with detected blue-shifted OH\({}^{+}\) absorption. Our sample of high-\(z\) QSOs follows the L\({}^{\prime}_{\mathrm{CO(9-8)}}\)-L\({}_{\mathrm{FIR}}\) correlation found in low-\(z\) galaxies (Liu et al. 2015), with MG J0414+0534 falling the farthest from the relation towards lower L\({}^{\prime}_{\mathrm{CO(9-8)}}\)/L\({}_{\mathrm{FIR}}\) ratios (Fig. 2c). Riechers et al. (2021) found that their sample of high-\(z\) DSFGs systematically deviates from this trend towards higher \(L^{\prime}_{\mathrm{CO(9-8)}}\)/\(L_{\mathrm{FIR}}\) ratios, a deviation not seen in other high-\(z\) star-forming galaxies from the literature. ## 5 Discussion ### OH\({}^{+}\) Absorption OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) absorption has proven to be a reliable tracer of turbulent, diffuse, and predominantly atomic gas surrounding galaxies (e.g. Indriolo et al. 2018), revealing both inflowing (Berta et al. 2021; Riechers et al. 2021) and outflowing gas moving through the circumgalactic medium (CGM) at high-\(z\) (Indriolo et al. 2018; Butler et al. 2021; Riechers et al. 2021; Shao et al. 2022). OH\({}^{+}\) absorption and emission are detected at similar rates (\(\sim\) 75% and \(\sim\) 65%, respectively) in the high-\(z\) DSFG and QSO samples; however, Riechers et al. (2021) reported similar numbers of red- and blue-shifted OH\({}^{+}\) absorption in their DSFG sample, although current numbers of high-\(z\) sources showing clear evidence of infalling gas remain sparse (Berta et al. 2021; Riechers et al. 2021, and references therein). We did not find any occurrences of red-shifted absorption (Fig. 1); however, our selection of mostly Type 1 AGN systems (4/5 sources) may bias our results, as gas is more likely to infall perpendicularly to the opening of the active nucleus, where counter-acting radiation can escape most efficiently from the host galaxy (see, e.g. Shao et al. 2022). Furthermore, we cannot rule out red-shifted absorption in WFI J2026-4536, where this frequency range is not covered. Alternatively, this could be a real difference between DSFGs and QSOs, indicating that feeding from the CGM has been suppressed or halted by feedback in QSO hosts. The OH\({}^{+}\) absorption in the QSO sample does not otherwise display faster or broader lines (Fig.
1) and approximately follows the relation between optical depth and dust temperature set by the DSFGs (Fig. 2). This suggests that the ejection of gas traced by OH\({}^{+}\) absorption is not significantly affected by the presence of an active galactic nucleus, nor by whether the nucleus is obscured or not. ### OH\({}^{+}\) Emission In emission, OH\({}^{+}\) traces environments with high electron density (e.g. Gerin et al. 2016), which can arise in the dense hot gas in both photon- and X-ray-dominated regions (PDRs, XDRs), and has thus been detected in both active (van der Werf et al. 2010; Stanley et al. 2021; Shao et al. 2022) and star-forming galaxies (Stanley et al. 2021; Riechers et al. 2021; Butler et al., in prep.). The two obscured AGNs, MG J0414+0534 (see Fig. 11) and W0410\(-\)0913 (Stanley et al. 2021), display very broad (\(>\) 1000 km s\({}^{-1}\)) OH\({}^{+}\) emission, with only one source in the DSFG sample of Riechers et al. (2021) displaying a comparable linewidth. The remaining DSFGs and unobscured QSOs lie in similar ranges. A greater contribution from XDRs, a higher prevalence of both XDRs and PDRs, or the presence of a wind component in the obscuring molecular gas reservoir directly surrounding the active nucleus in obscured QSOs may be responsible for their broader emission lines. We did not see a boost in the OH\({}^{+}\) emission line luminosity with respect to the \(L_{\mathrm{FIR}}\) of any of the QSOs, instead finding good agreement with the DSFG relation. Considerable AGN contributions to the L\({}_{\mathrm{FIR}}\) may be expected in the obscured active systems (Schneider et al. 2015; Duras et al. 2017); however, MG J0414+0534 and W0410\(-\)0913 do not display an offset towards lower \(L^{\prime}_{\rm OH^{+}}\)/\(L_{\rm FIR}\) ratios. This may indicate that the central QSO is also contributing to the L\({}^{\prime}_{\rm OH^{+}}\), such that the \(L^{\prime}_{\rm OH^{+}}\)/\(L_{\rm FIR}\) ratio is maintained. ### CO(9-8) Emission CO(9-8) is predominantly excited by mechanisms associated with warm dense molecular gas in star-forming regions. AGN can contribute to the CO(9-8) emission when present but typically do not dominate until higher-J transitions (e.g. Li et al., 2020). Whilst overall the QSO and DSFG samples cover similar ranges in CO(9-8) line width, the obscured systems (MG J0414+0534 and W0410\(-\)0913; Stanley et al. 2021) show significantly broader CO(9-8) emission than the unobscured QSOs. This is in agreement with Stacey et al. (2022), who showed that red, obscured QSOs (including MG J0414+0534) display broader (\(\gtrsim 500\) km s\({}^{-1}\)) high-J CO lines than their unobscured counterparts. Comparing high-J line widths with those of bulk gas tracers (i.e. low-J CO transitions or [CI]), they show that the high-J transitions in reddened sources display excess flux at high velocities. They attribute this emission to molecular gas winds driven by radiation pressure trapped by the obscuring material around the active nuclei. The narrow CO(9-8) emission observed in unobscured QSOs thus indicates a phase after which the obscuring material has been ejected and radiation from the central AGN can efficiently escape. Blue-shifted OH\({}^{+}\) absorption is detected in QSOs displaying both broad and narrow CO(9-8) emission, but only in DSFGs displaying broad CO(9-8) emission. This may simply be due to the higher S/N of the QSO spectra; larger samples at high S/N are required for confirmation. If confirmed, this may indicate that neutral outflows in galaxy halos require extreme galaxies, namely those displaying broad CO(9-8) emission or harbouring an AGN. Following the evolutionary picture where heavily star-forming galaxies evolve into quiescent galaxies via a short-lived QSO phase (Simpson et al., 2012), blue-shifted OH\({}^{+}\) absorption may be indicative of the energetic phases building up to an obscured QSO, and into the unobscured QSO phase (Petter et al., 2023). This is consistent with the absence of red-shifted OH\({}^{+}\) absorption in the QSO sample. Larger samples targeting QSOs at all evolutionary stages are needed to test this hypothesis. Riechers et al. (2021) suggest that a higher prevalence of shock excitation causes the systematic deviation of their high-\(z\) DSFG sample from the low-\(z\) \(L^{\prime}_{\rm CO(9-8)}-L_{\rm FIR}\) relation. [Table 1 — fitted and derived line properties per source: Name, z, T\({}_{\rm d}\) [K], \(\mu\)L\({}_{\rm FIR}\) [log\({}_{10}\) L\({}_{\odot}\)]; OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) absorption: \(\int\tau\)dv [km s\({}^{-1}\)], N [10\({}^{15}\) cm\({}^{-2}\)]; OH\({}^{+}\)(1\({}_{1}\)-1\({}_{0}\)) emission and CO(9-8) emission: \(\mu\)L [10\({}^{8}\) L\({}_{\odot}\)], \(\mu\)L\({}^{\prime}\) [10\({}^{9}\) K km s\({}^{-1}\) pc\({}^{2}\)]. The tabulated values are garbled in this extraction and are omitted.] At low-\(z\), sources categorised into Class I, II, and III, in order of increasing CO excitation, showed trends of falling above, on, and both above and below the relation with greater scatter, respectively (Rosenberg et al. 2015, Fig. 2). Riechers et al. (2021) noted that their sample falls into a similar offset range as the half of the Class III sources located below the relation. Despite the differences found in the emission lines of the obscured vs unobscured QSOs, we find no differences in the outflow properties traced by OH\({}^{+}\) absorption, but we do note that only DSFGs with broad (FWHM\({}_{\rm CO(9-8)}>500\) km s\({}^{-1}\)) CO(9-8) emission have blue-shifted OH\({}^{+}\) absorption detected. This may indicate that diffuse, neutral outflows in the CGM are driven by the most extreme sources (i.e. those harbouring an AGN or displaying broad emission lines). We therefore suggest that blue-shifted OH\({}^{+}\) absorption may be indicative of the energetic phases leading up to the obscured QSO phase and into the unobscured phase, where infalling gas (which would appear as red-shifted absorption) has been halted. ###### Acknowledgements. The authors would like to thank the anonymous referee, who helped improve this letter. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2019.1.01802.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work benefited from the support of the project Z-GAL ANR-AAPG2019 of the French National Research Agency (ANR).
2308.02415
Car-Driver Drowsiness Assessment through 1D Temporal Convolutional Networks
Recently, the scientific progress of Advanced Driver Assistance System solutions (ADAS) has played a key role in enhancing the overall safety of driving. ADAS technology enables active control of vehicles to prevent potentially risky situations. An important aspect that researchers have focused on is the analysis of the driver physiological state, as there exists a well-established connection between the Autonomic Nervous System (ANS) and the level of attention. An important aspect that researchers have focused on is the analysis of the driver attention level, as recent reports confirmed a rising number of accidents caused by drowsiness or lack of attentiveness. To address this issue, various studies have suggested monitoring the driver physiological state. For our study, we designed an innovative bio-sensor comprising near-infrared LED emitters and photo-detectors, specifically a Silicon PhotoMultiplier device. This allowed us to assess the driver physiological status by analyzing the associated PhotoPlethysmography (PPG) signal. Furthermore, we developed an embedded time-domain hyper-filtering technique in conjunction with a 1D Temporal Convolutional architecture that embeds a progressive dilation setup. This integrated system enables near real-time classification of driver drowsiness, yielding remarkable accuracy levels of approximately 96%.
Francesco Rundo, Concetto Spampinato, Michael Rundo
2023-07-27T10:59:12Z
http://arxiv.org/abs/2308.02415v1
# Car-Driver Drowsiness Assessment through 1D Temporal Convolutional Networks ###### Abstract Recently, the scientific progress of Advanced Driver Assistance System solutions (ADAS) has played a key role in enhancing the overall safety of driving. ADAS technology enables active control of vehicles to prevent potentially risky situations. An important aspect that researchers have focused on is the analysis of the driver's attention level, as recent reports confirmed a rising number of accidents caused by drowsiness or lack of attentiveness. To address this issue, various studies have suggested monitoring the driver's physiological state, as there exists a well-established connection between the Autonomic Nervous System (ANS) and the level of attention. For our study, we designed an innovative bio-sensor comprising near-infrared LED emitters and photo-detectors, specifically a Silicon PhotoMultiplier device. This allowed us to assess the driver's physiological status by analyzing the associated PhotoPlethysmography (PPG) signal. Furthermore, we developed an embedded time-domain hyper-filtering technique in conjunction with a 1D Temporal Convolutional architecture that embeds a progressive dilation setup. This integrated system enables near real-time classification of driver drowsiness, yielding remarkable accuracy levels of approximately 96%. Drowsiness, Deep learning, D-CNN, Deep-LSTM, PPG (PhotoPlethysmography) ## I Introduction In the medical domain, the term "drowsiness" refers to a state characterized by a reduced level of alertness and a tendency to sleep. The advancement of technology has motivated researchers to develop effective methods for detecting critical levels of driver drowsiness, aiming to prevent serious road traffic accidents. Numerous studies have extensively explored the correlation between attention level and Heart Rate Variability (HRV) [1]. HRV serves as an indicator of the autonomic regulation of the heart and can be derived from the frequency analysis of the ElectroCardioGraphy (ECG) or PhotoPlethysmoGraphic (PPG) signals obtained from the subject. Essentially, HRV reflects the variability in the time intervals between consecutive heartbeats, predominantly influenced by the dynamic interplay between the Autonomic Nervous System (ANS) and the heart [1]. Recent scientific investigations have focused on analyzing the relationship between drowsiness and ANS activity through HRV. In this study, we propose an innovative pipeline for evaluating the attention level of car drivers using PPG signals. The organization of this paper is as follows: Section II presents an overview of related works. In Section III, we provide detailed information about the hardware framework employed to acquire the PPG signals, along with the associated processing pipeline. Section IV describes the Deep Learning framework utilized for classifying the collected PPG waveforms. Finally, we present the experimental results and discuss future avenues of research. ## II Related Works Numerous studies have focused on utilizing physiological signals to evaluate the driver's condition. In a study by Vicente et al. [1], the authors employed Electrocardiography (ECG) signals to classify the driver's drowsiness status. However, ECG signals are susceptible to artifacts that can distort the accurate acquisition of Heart Rate Variability (HRV). To mitigate this issue, the authors proposed a pipeline to filter and stabilize the ECG signal.
Many existing approaches concentrate on detecting the attention level through the analysis of Lower-Frequency (LF) and Higher-Frequency (HF) features. In a study by Szypulska et al. [2], the authors developed a drowsiness detection system that assessed fatigue and sleep onset by examining the LF/HF ratio derived from frequency analysis of the ECG's R-R tachogram. The results demonstrated the efficacy of this approach in Advanced Driver Assistance Systems (ADAS) applications. However, these methods require the acquisition of ECG signals for HRV analysis. Traditional ECG signal acquisition involves the configuration of Einthoven's Triangle [3], necessitating the driver's contact with three specific electrodes to ensure accurate signal acquisition. Consequently, this introduces challenges in maintaining the robustness of the ECG signal sampling system, which in turn affects the quality of the derived HRV signal [3]. In light of these challenges, recent studies have shifted towards the utilization of PhotoPlethysmoGraphic (PPG) signals as an alternative to ECG signals [4]. Unlike ECG signals, PPG signals only require a single contact point on the subject's skin for accurate sampling [4]. In recent years, Deep Learning (DL) approaches have gained significant attention due to their effectiveness in estimating a subject's drowsiness. For instance, studies by Hong et al. [5] and Alshaqaqi et al. [6] propose DL architectures to track changes in eye state for estimating driver drowsiness. However, these approaches face challenges in recording video sequences under adverse conditions, such as varying illumination and occlusions. Driven by the rapid advancements in Machine Learning, researchers have developed effective architectures for classifying the driver's physiological status. Cheon et al. [7] introduced a pipeline that addresses the classification problem of driver drowsiness by utilizing a variety of sensors on the steering wheel. Additionally, Choi et al. [8] employed a Multimodal Deep Learning model to recognize the driver's vigilance level by analyzing both visual and physiological data. While previous methods have shown impressive results, they often involve substantial computational costs, limiting their applicability on resource-constrained automotive-grade devices. ## III Methods and Materials In this investigation, our focus was on developing a system to capture the PPG signal of car drivers, enabling us to evaluate and monitor their corresponding attention levels. The PPG signal is a non-invasive approach widely used for analyzing the heart's pulse rate. By extracting features from the PPG signal, we can monitor various factors such as heart pulse, respiratory rate, as well as vascular and cardiac disorders [4]. The PPG waveform consists of two main components: the pulsatile 'AC' signal, which represents the cardiac-synchronous changes in blood volume, and the 'DC' component, which is influenced by the processes of respiration and thermo-regulation. During a cardiac cycle, as the heart pumps blood towards the peripheral regions of the body, it generates pressure that causes the arteries and arterioles in the subcutaneous tissue to expand. This expansion, in turn, results in an increase in blood volume. To capture these changes in volume, we utilize a specialized device equipped with a light-emitting component and a detector that is placed on the skin.
By illuminating the skin and measuring the amount of back-scattered light received by the detector [4], we can effectively detect the heart's pressure pulse, which is manifested as a peak in the PPG waveform. In Fig. 1, the process underlying the formation of the PPG waveform is reported. To perform the PPG acquisition, we used a PPG sampling device based on a Silicon Photomultiplier sensor [9, 10]. The proposed PPG probe comprises an array device, called Silicon Photomultipliers (SiPMs) [11], characterized by a total area of \(4.0\times 4.5\)\(mm^{2}\) and \(4871\) square microcells with \(60\)\(\mu m\) pitch. The devices present a geometrical fill factor of \(67.4\%\) and are packaged in a surface mount housing (SMD) with about \(5.1\times 5.1\)\(mm^{2}\) total area [4]. For the PPG signal acquisition system, we employed a Pixeleto dichroic bandpass filter with specific characteristics. The filter was designed with a pass-band centered at approximately \(540\) nm and a Full Width at Half Maximum (FWHM) of \(70\) nm. Its optical transmission in the pass-band range exceeded \(90-95\%\). To secure the filter onto the SMD package, we used a Loctite 352TM adhesive. The PPG detector comprises a light emitter and a detector based on advanced technology. We utilized OSRAM LT M673 LEDs, which integrate InGaN technology, in a compact SMD package. These LEDs offer a wide-angle view of 120\({}^{\circ}\) and cover an area of \(2.3\times 1.5\)\(mm^{2}\). With a spectral bandwidth of 33 nm, they emit low power in the standard range. To optimize the functionality of the PPG probe, we designed a printed circuit board (PCB) that incorporates a user interface based on National Instruments (NI) instrumentation. The PCB features a 4V portable battery, power management circuits, a conditioning circuit for the output signals of the SiPMs, as well as multiple USB connectors for PPG probes and corresponding SMA output connectors. In our previous works [4, 11], we provided additional information regarding the hardware setup for PPG signal acquisition. The PPG Sensor Probe consists of the SiPM sensor and the corresponding LEDs. To manage the power consumption of the SiPM device, we implemented a Power Management Circuit [9, 10, 11]. For PPG signal acquisition, we placed the PPG sensor probe on the car's steering wheel. The driver was instructed to maintain their hand in contact with the probe to trigger the signal. The acquired PPG raw signal was processed internally by the NI device, which includes multiple 24-bit ADCs. Furthermore, the NI device is equipped with a Windows-based operating system and utilizes the LabView software framework [4]. Figure 2 illustrates the pipeline employed to process the PPG signal using the NI device. We developed a LabView-based algorithm to filter the raw PPG data, implementing both low-pass and high-pass FIR (Finite Impulse Response) filters. Additionally, we computed the first and second derivatives of the PPG signal as part of the preprocessing step, enabling the evaluation of the minimum and maximum extremes for each waveform (an illustrative sketch of this step is given below). The rendered PPG signal was displayed on the monitor connected to the NI device, as depicted in Figure 2. More detailed descriptions regarding the NI device and LabView software framework can be found in our previous publications [4, 9]. For the purpose of this study, we utilized the MATLAB framework to process the PPG raw data, as outlined in the validation section.
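As an illustrative Python sketch of this preprocessing chain (the original implementation runs in LabView/MATLAB; the filter length and the single band-pass equivalent of the low-pass/high-pass pair are assumptions):

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 1000.0  # PPG sampling frequency [Hz], as in the acquisition setup

def preprocess_ppg(raw, low_cut=1.0, high_cut=10.0, numtaps=501):
    """Band-pass the raw PPG with a linear-phase FIR filter and compute
    first/second derivatives for locating waveform extrema."""
    taps = firwin(numtaps, [low_cut, high_cut], pass_zero=False, fs=FS)
    filtered = filtfilt(taps, [1.0], raw)  # zero-phase FIR filtering
    d1 = np.gradient(filtered) * FS        # first derivative
    d2 = np.gradient(d1) * FS              # second derivative
    return filtered, d1, d2
```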
### _The Hyper Filtering Layers_ To mitigate the impact of motion and noise artifacts in the acquired PPG signal, we employed a frequency filtering approach and developed a signal stabilization algorithm. Fig. 1: The proposed PPG sampling system. In our bio-inspired pipeline, which has shown promising results in our previous works [4, 9, 10, 11], we utilized a series of FIR filters to implement a low-pass and high-pass filtering scheme within the frequency range of \(1-10\) Hz. This effectively eliminated unwanted noise components, including the interference caused by the 50 Hz power line frequency. To further enhance the robustness of the pipeline, we introduced a set of hyper-filtering layers. The objective was to explore the potential of hyperspectral imaging concepts in the context of PPG signal processing. Drawing inspiration from Chang's work [12], we adapted the concept of hyperspectral imaging to analyze 1D signals, specifically the PPG signal. Hyperspectral imaging is a technique used to capture visual information from the entire electromagnetic spectrum, allowing for detailed spectral analysis of each pixel in an image. While it is traditionally applied to visual imagery for object recognition and material identification, we explored its application to 1D signals. Our objective was to investigate whether collecting information from multiple frequency ranges of the PPG signal, akin to the concept of hyperspectral imaging, could provide more comprehensive features related to the driver's attention level. Instead of applying a single set of filters (e.g., one low-pass and one high-pass), we analyzed a range of frequencies within the 1-10 Hz band to characterize the individual PPG waveforms, having found through our investigation that the informative frequency range for attention assessment falls within this band. To further enhance the hyper-filtering process, we explored subdividing the frequency range into sub-intervals. This subdivision aimed to simulate a hyper-spectral process, where each sub-interval captured specific frequency components related to attention. Consequently, we incorporated both low-pass and high-pass filters in our configuration, implementing two layers of hyper-filtering: in the hyper low-pass filtering layer, we adjusted the frequencies in the low-pass part while keeping the cutoff frequency of the high-pass filter constant, and vice versa. To ensure optimal performance and minimize the introduction of noise artifacts, we opted to use Butterworth filters in both layers of hyper-filtering [13, 14] (a sketch of such a filter bank is given at the end of this subsection). To determine the optimal number of sub-intervals within the 1-10 Hz frequency range, we employed a trial-and-error approach combined with heuristic tests. Our goal was to strike a balance between computational load and discriminative capacity, ensuring that the sub-intervals effectively captured the relevant frequency components while avoiding excessive computational complexity. Through iterative experimentation and analysis, we determined that a total of 11 sub-intervals provided the most favorable results. Having subdivided the frequency range into \(11\) sub-bands, we designed a Reinforcement Learning (RL) algorithm.
The implementation details of this approach are reported in the following items:
* We defined an action \(a_{t}\) as the sub-band frequency selected in the range reported in Table I, according to the type of filtering (low-pass or high-pass);
* we defined an agent as the entity selecting the action \(a_{t}\);
* we defined the next state \(S_{t+1}\) as the set of pre-processed signals obtained by collecting the values of the input PPG samples (in a window of 5 s, sampled at a 1 kHz sampling frequency) of the filtered PPG raw signal at the specific sub-band frequency of the action \(a_{t}\);
* we defined the environment reward \(R(.|s_{t},a_{t})\) as a measure of the drowsiness of the car driver, namely the distance of the output of the deep learning system (regression layer plus SoftMax classification) from the car-driver's actual level of attention.
We determined the optimal policy \(P_{o}\) that maximizes the expected cumulative discounted reward by applying the following formula: \[P_{o}=argmax_{P_{o}}\ E\left[\sum_{t\geq 0}\gamma^{t}R\left(.|s_{t},a_{t}\right)|P_{o}\ \right] \tag{1}\] where \(\gamma\) is a proper discount coefficient in (0,1). In order to evaluate the goodness of a state \(s_{t}\) and the goodness of a state-action couple \((s_{t},a_{t})\), we denote the value function and the Q-value function, respectively: \[V^{P_{0}}(s_{t})=E\left[\sum_{t\geq 0}\gamma^{t}R\left(.|s_{t}\right)|P_{o}\ \right] \tag{2}\] \[Q^{P_{0}}(s_{t},a_{t})=E\left[\sum_{t\geq 0}\gamma^{t}R\left(.|s_{t},a_{t}\right)|P_{o}\ \right] \tag{3}\] Fig. 2: The proposed driver drowsiness monitoring pipeline. To determine the optimal set of sub-band frequencies for each hyper-filtering layer, we employed Q-learning algorithms [15]. By utilizing this reinforcement learning technique, we iteratively trained our system to select the most suitable frequencies based on the observed states and rewards. The results of the RL algorithm, which specify the frequency sets for the two hyper-filtering layers, are summarized in Tables II and III. Formally, let \(W_{PPG}^{i}\left(t_{k}\right)\) be the single segmented waveform of each hyper-filtered PPG time-series (i.e., using specific frequency values for both the low-pass and high-pass filters). To capture the dynamic characteristics of the hyper-filtered PPG signals, we computed signal patterns for each sample \(s(t_{k})\) in the waveform. These signal patterns were derived from analyzing the variations in the hyper-filtered PPG time-series. By examining the changes in the signal samples, we obtained a large dataset of signal patterns corresponding to each hyper-filtered PPG signal; the size of this dataset matched the number of filtering frequencies, which was 11, as indicated in Tables II and III. To detect the car driver's drowsiness, we extracted a substantial number of signal patterns from the hyper-filtered PPG time-series. These signal patterns were collected over a time window of 4 seconds, allowing us to capture relevant temporal variations in the PPG signal. The extracted signal patterns served as input to our designed Deep Learning block. In Fig. 3, we provide a visualization of the generated signal patterns obtained from the temporal variations of the samples \(s(t_{k})\) for each hyper-filtered signal. These signal patterns effectively capture the unique characteristics of the PPG signal and will be utilized to characterize the driver's level of attention through the subsequent Deep Learning block.
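To make the hyper-filtering idea concrete, the following is a minimal sketch of a Butterworth sub-band bank (the RL-selected cutoffs of Tables II and III are not reproduced here, so the band edges below are placeholders, and the two low-pass/high-pass layers are condensed into one band-pass bank for brevity):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000.0  # PPG sampling frequency [Hz]

# Placeholder sub-band edges inside the informative 1-10 Hz range; the
# RL-selected values of Tables II and III would be used in practice.
SUB_BANDS = [(1.0 + 0.8 * k, 1.0 + 0.8 * (k + 1)) for k in range(11)]

def hyper_filter(ppg, order=4):
    """Return one band-limited version of the PPG signal per sub-band."""
    outputs = []
    for lo, hi in SUB_BANDS:
        sos = butter(order, [lo, hi], btype="bandpass", fs=FS, output="sos")
        outputs.append(sosfiltfilt(sos, ppg))
    return np.stack(outputs)  # shape: (11, n_samples)
```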
Formally, let \(W_{PPG}^{i}\left(t_{k}\right)\) denote a single segmented waveform of each hyper-filtered PPG time-series (i.e., obtained by using specific frequency values in both the low-pass and high-pass parts). To capture the dynamic characteristics of the hyper-filtered PPG signals, we computed signal patterns for each sample \(s(t_{k})\) in the waveform. These signal patterns were derived by analyzing the variations in the hyper-filtered PPG time-series. By examining the changes in signal samples, we obtained a large dataset of signal patterns corresponding to each hyper-filtered PPG signal; the size of this dataset matched the number of filtering frequencies, which was 11, as indicated in Tables II and III.

To detect the car driver's drowsiness, we extracted a substantial number of signal patterns from the hyper-filtered PPG time-series. These signal patterns were collected over a time window of 4 seconds, allowing us to capture relevant temporal variations in the PPG signal. The extracted signal patterns served as input to our Deep Learning block. In Fig. 3, we provide a visualization of the generated signal patterns obtained from the temporal variations of the samples \(s(t_{k})\) for each hyper-filtered signal. These signal patterns effectively capture the unique characteristics of the PPG signal and are utilized to characterize the driver's level of attention through the subsequent Deep Learning block.

### _The Deep Learning block_

As mentioned earlier, we developed a customized Deep 1D Temporal Dilated Convolutional Neural Network (1D-CNN) [16] specifically tailored for our task. This network is designed to process the signal patterns \(s(t_{k})\) derived from each hyper-filtered PPG signal. The overall architecture of the proposed model can be seen in Fig. 4. The key innovation of our architecture lies in the utilization of dilated causal convolution layers: "causal" indicates that the activation at a given time step depends only on the current and preceding time steps, never on future samples, which enhances the temporal modeling capability of the network. The 1D-CNN comprises a series of residual blocks, with a total of 12 blocks stacked together. Each block consists of a dilated convolution layer, followed by batch normalization, ReLU activation, and spatial dropout. The dilated convolution layer performs a convolution operation with a kernel of size 3, and the dilation factor progressively increases for each block, starting from a value of 2. To complete the pipeline, a two-class softmax layer is added as the final output of the 1D-CNN. This layer predicts the driver's drowsiness level based on the input signal patterns generated from the hyper-filtered PPG signals. By leveraging the hierarchical structure of the 1D-CNN and incorporating dilated causal convolutions, the proposed architecture effectively processes the hyper-filtered PPG signal patterns and accurately predicts the driver's drowsiness level.
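A minimal PyTorch sketch of such a dilated causal residual network is shown below. The channel width, dropout rate, and doubling dilation schedule are illustrative assumptions; the text above specifies only the 12 residual blocks, size-3 kernels, and a starting dilation of 2.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Dilated causal conv -> batch norm -> ReLU -> spatial dropout, with skip."""
    def __init__(self, channels, dilation, kernel_size=3, dropout=0.1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left padding keeps it causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.bn = nn.BatchNorm1d(channels)
        self.drop = nn.Dropout1d(dropout)  # spatial dropout over channels

    def forward(self, x):
        y = nn.functional.pad(x, (self.pad, 0))  # no look-ahead past t
        y = self.drop(torch.relu(self.bn(self.conv(y))))
        return x + y

class Drowsiness1DCNN(nn.Module):
    def __init__(self, in_channels=11, channels=32, n_blocks=12):
        super().__init__()
        self.inp = nn.Conv1d(in_channels, channels, 1)
        # dilation starts at 2; doubling per block is our assumption
        self.blocks = nn.Sequential(
            *[ResidualBlock(channels, dilation=2 ** (i + 1)) for i in range(n_blocks)]
        )
        self.head = nn.Linear(channels, 2)  # two-class softmax output

    def forward(self, x):                   # x: (batch, 11 sub-bands, time)
        h = self.blocks(self.inp(x)).mean(dim=-1)
        return torch.softmax(self.head(h), dim=-1)

model = Drowsiness1DCNN()
print(model(torch.randn(8, 11, 4000)).shape)  # 4 s window at 1 kHz -> (8, 2)
```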
## IV Experimental Results

To evaluate the performance of our proposed pipeline, we conducted experiments using a dataset of PPG measurements obtained from seventy subjects, aged between 21 and 70 years. To validate the effectiveness of our 1D-CNN framework, we collected PPG signals under both drowsy and wakeful conditions. These conditions were supervised and confirmed by experts in physiology, who also acquired the corresponding ECG signals to verify the subjects' level of awareness. Previous studies have shown that EEG signals can provide insights into the subject's level of attention through the presence of alpha and beta waves [17]; therefore, we additionally collected EEG signals alongside the PPG signals to establish the correlation between physiological signals and drowsiness level. The PPG signals were collected at a sampling frequency of 1 kHz, and the data collection duration was set to 5 minutes. To perform the experiments, we divided the dataset into a training set (70% of the total dataset) and a testing/validation set (30% of the collected data). This division allowed us to assess the robustness and efficiency of our proposed approach.

Fig. 3: Some instances of the hyper-filtered PPG generated patterns.

Additionally, we utilized specially designed PPG signals that represented both high and low attention levels to further evaluate the pipeline's discriminative capabilities. In Table IV, we present the performance of our proposed pipeline compared to other deep learning-based approaches [13]; the table showcases the effectiveness of our pipeline in accurately classifying different attention levels. Furthermore, in Fig. 5, we provide the dynamic learning error of the 1D-CNN, which demonstrates the network's ability to capture the correlation between the hyper-filtered PPG samples and the corresponding attention level of the monitored subjects. This graph illustrates the progressive improvement of the network's performance during the learning process. Overall, the experimental results validate the robustness and efficiency of our proposed pipeline in accurately identifying and classifying different levels of attention based on the hyper-filtered PPG samples.

## V Conclusion and Discussion

Developing a deep learning architecture for classifying a driver's attention level is a highly challenging task, and in our research we have proposed a 1D-CNN model that outperforms other deep learning approaches in this domain. One major advantage of our method is that it does not require the acquisition of ECG or EEG signals to assess the driver's drowsiness level. This eliminates the issues associated with motion and noise artifacts that can affect the analysis of HRV. Additionally, our approach avoids the need for frequency domain analysis of PPG data, which is required by other HRV-based methods. We have emphasized the benefits of acquiring PPG signals in a vehicle environment by strategically placing embedded sensors on the steering wheel. The experimental results also demonstrate that our pipeline achieves accurate classification of the driver's attention level with only one minute of PPG acquisition, significantly shorter than the 10-12 minutes typically required by HRV-based methods. Furthermore, our pipeline can be combined with other promising solutions, such as imaging methods in the visible and infrared spectrum, to further enhance accuracy. The effectiveness of the deep learning framework is evident in its ability to learn the signal patterns generated from the pre-processing of the acquired PPG waveforms. Our results clearly demonstrate the effectiveness of the proposed pipeline in assessing the driver's drowsiness level, as evidenced by the reliable performance on the collected dataset of 70 recruited subjects.

Fig. 4: The proposed driver drowsiness monitoring pipeline.

Fig. 5: The 1D-CNN Learning Loss dynamic.

Currently, we are in the process of porting the implemented deep learning algorithm to an embedded system based on the STMicroelectronics SoC STA1295 ACCORDO 5, which offers powerful computing capabilities with ARM Cortex cores and a dedicated GPU for graphics and image processing tasks. We are also porting the PPG data filtering and stabilization pipeline to a hardware/software environment based on the SPC5x CHORUS microcontroller technology provided by STMicroelectronics. In our future work, we plan to explore additional advanced solutions based on deep architectures that have shown promising results in other applications, including nonlinear models and related approaches mentioned in our previous works [18, 19]. By continuously expanding and refining our pipeline, we aim to make significant advancements in assessing and monitoring driver attention levels for enhanced road safety.

## Acknowledgment

The authors thank the physiologists of the Department of Biomedical and Biotechnological Sciences (BIOMETEC) of the University of Catania, who collaborated in this work in the context of the clinical study (Ethical Committee CT1, authorization n. 113/2018/PO). This research was funded by the National Funded Program 2014-2020 under grant agreement n. 1733 (ADAS+ Project).
2302.03750
Linking convolutional kernel size to generalization bias in face analysis CNNs
Training dataset biases are by far the most scrutinized factors when explaining algorithmic biases of neural networks. In contrast, hyperparameters related to the neural network architecture have largely been ignored even though different network parameterizations are known to induce different implicit biases over learned features. For example, convolutional kernel size is known to affect the frequency content of features learned in CNNs. In this work, we present a causal framework for linking an architectural hyperparameter to out-of-distribution algorithmic bias. Our framework is experimental, in that we train several versions of a network with an intervention to a specific hyperparameter, and measure the resulting causal effect of this choice on performance bias when a particular out-of-distribution image perturbation is applied. In our experiments, we focused on measuring the causal relationship between convolutional kernel size and face analysis classification bias across different subpopulations (race/gender), with respect to high-frequency image details. We show that modifying kernel size, even in one layer of a CNN, changes the frequency content of learned features significantly across data subgroups leading to biased generalization performance even in the presence of a balanced dataset.
Hao Liang, Josue Ortega Caro, Vikram Maheshri, Ankit B. Patel, Guha Balakrishnan
2023-02-07T20:55:09Z
http://arxiv.org/abs/2302.03750v2
# Towards causally linking architectural parametrizations to algorithmic bias in neural networks

###### Abstract

Training dataset biases are by far the most scrutinized factors when explaining algorithmic biases of neural networks. In contrast, hyperparameters related to the neural network architecture, e.g., the number of layers or choice of activation functions, have largely been ignored even though different network parameterizations are known to induce different _implicit biases_ over learned features. For example, convolutional kernel size has been shown to bias CNNs towards different frequencies. In order to study the effect of these hyperparameters, we designed a causal framework for linking an architectural hyperparameter to algorithmic bias. Our framework is experimental, in that several versions of a network are trained with an intervention to a specific hyperparameter, and the resulting causal effect of this choice on performance bias is measured. We focused on the causal relationship between sensitivity to high-frequency image details and face analysis classification performance across different subpopulations (race/gender). In this work, we show that modifying a CNN hyperparameter (convolutional kernel size), even in one layer of a CNN, will not only change a fundamental characteristic of the learned features (frequency content) but that this change can vary significantly across data subgroups (race/gender populations), leading to biased generalization performance even in
the presence of a balanced dataset.

Key Words and Phrases: high-frequency bias, model fairness, CNN

Our framework is experimental. First, we train several versions of a network that are identical except for an intervention to the hyperparameter of interest. Second, we perturb a held-out test set with out-of-distribution modifications such as adversarial attacks and energy injections in Fourier passbands to probe frequency implicit biases. Third, we fit a linear regressor to predict a model's performance on an OOD test image as a function of the hyperparameter choice, degree of perturbation to the image, and various image attributes. Fourth and finally, we use the regression coefficients to measure the hyperparameter's causal effects on model performances across data subgroups. This analysis provides a quantitative answer to whether the hyperparameter has a disparate causal effect on the data subgroups.

While our framework is general, we focused our experiments on studying the causal relationship between sensitivity to high-frequency image details induced by changes to convolutional kernel sizes and the performance of face analysis classifiers across subpopulations (race/gender protected groups). We trained several research-grade face gender classifiers on public datasets, and show that modifying kernel size within a commonly used range (\(3\times 3\) to \(11\times 11\)), even in just the first layer of these CNNs, will not only change the frequency content of learned features, but that this change can vary significantly across race/gender groups. We established this effect using both adversarial perturbations and energy injections to the high-frequency bands of the test images. This work opens the door to further careful studies on understanding the impact of neural network design decisions on algorithmic bias.

## 2. Related Work

### Fairness in computer vision

There is a growing literature on fairness in computer vision. Studies predominantly focus on measuring and mitigating possible biases of computer vision models and datasets (Finn et al., 2016; Goyal et al., 2017; Goyal et al., 2018; Goyal et al., 2019). Biases may be measured with a number of metrics (Goyal et al., 2018; Goyal et al., 2019) that quantify disparate performance differences of algorithms across population subgroups. Face recognition and analysis systems are often under the most scrutiny due to their sensitive nature (Goyal et al., 2018; Goyal et al., 2019). Perhaps the most famous of these studies was the "Gender Shades" study (Gender, 2018), which identified the systematic failings of face analysis systems on particular racial and gender demographics.

Image datasets are known to be biased due to sampling inequalities (Finn et al., 2016; Goyal et al., 2019). A dataset has sampling biases if its joint distribution of attributes is far from random. For example, the CelebA face dataset is known to have significant sampling biases, such as a higher proportion of young female faces compared to male ones (Goyal et al., 2018). Training on such a dataset can cause the resulting model to inherit these biases (Goyal et al., 2018; Goyal et al., 2019), particularly for attribute subgroups that are underrepresented in the dataset. Therefore, algorithmic fairness issues can be greatly mitigated if the algorithm is trained on a more balanced dataset.
Human face datasets have been particularly scrutinized (Goyal et al., 2018; Goyal et al., 2019) as models trained on these data can exhibit systematic failings with respect to attributes protected by the law (Goyal et al., 2019). Multiple approaches to mitigating dataset bias include collecting more diverse examples (Goyal et al., 2019), using image synthesis to compensate for distribution gaps (Goyal et al., 2019), and resampling (Goyal et al., 2019). Our work, in contrast, is focused on understanding biases of deep learning model decisions instead of data (see Fig. 1).

Researchers are also building novel computer vision model designs to combat biases (Goyal et al., 2018; Goyal et al., 2019), typically by learning representations invariant to sensitive attributes via modified objective functions, or by sampling in a balanced way during training. In contrast, our work is focused on linking how choices of fundamental building blocks of neural networks, like kernel sizes and activation functions, can affect algorithmic bias. There are also lines of work outside of computer vision on mitigating biases of machine learning models; the most common approaches add fairness measures to loss functions (Goyal et al., 2019), enforce fair representations that are independent of protected attributes (Goyal et al., 2019), and augment training data to promote balance (Troato and Tobin, 2018). None of these works try to establish a link between the functional components of a neural network and algorithmic bias.

### Adversarial attacks

Neural networks have been shown to be surprisingly and stubbornly sensitive to imperceptibly small worst-case perturbations known as adversarial examples. An adversarial attack perturbs an image until a given network changes its prediction, usually by applying gradient descent on the image. The resulting changes to the image are high frequency, and imperceptible to the human eye (see Fig. 3). This lack of robustness has sparked many theories (Levy and Ganin, 2016; Ganin et al., 2017; Ganin et al., 2018; Ganin et al., 2019; Ganin et al., 2020; Troato and Tobin, 2018), but a unified theoretical explanation of the nature of adversarial examples is still lacking. Recent work has shown that commonly found adversarial examples for state-of-the-art convolutional neural networks contain dataset-specific information (Troato and Tobin, 2018).

## 3. Methods

Our framework trains a family of networks, perturbs a test set, and fits a regression model; we then inspect the regression coefficients corresponding to _protected_ attributes to evaluate bias. We describe the three steps of our framework in the following sections.

### Architecture training

We first train \(K\) different versions of the same architecture, all identical in structure except for a modification to the hyperparameter of interest. We train all architectures on the same training dataset. We also initialize the weights and biases of all networks from identical normal distributions (i.e., identical means and variances). After the networks are trained, we "freeze" their parameters, and do not modify them further in our framework.
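As an illustration of this step, the sketch below builds ResNet-34 variants that differ only in the first-layer kernel size, with weights drawn from the same normal distribution. The helper name and the use of torchvision are our own assumptions rather than the authors' released code.

```python
import torch.nn as nn
from torchvision.models import resnet34

def make_variant(flks: int) -> nn.Module:
    """ResNet-34 whose only modification is the first-layer kernel size."""
    model = resnet34(num_classes=2)  # binary gender-classification head
    model.conv1 = nn.Conv2d(3, 64, kernel_size=flks, stride=2,
                            padding=flks // 2, bias=False)
    # identical initialization across variants; the paper sets variance 0.02
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.normal_(m.weight, mean=0.0, std=0.02 ** 0.5)
    return model

# K = 5 interventions on the first-layer kernel size (FLKS)
variants = {k: make_variant(k) for k in (3, 5, 7, 9, 11)}
```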
### OOD perturbations to test data

Given various trained architectures, our goal is to amplify biases across their learned features. One option is to run test samples coming from the same distribution as the training data through these networks, and measure performance across different protected attribute subgroups. The problem with this strategy is that deep neural networks are over-parameterized, so they are able to fit any training distribution nearly perfectly. Hence, even if a hyperparameter is altered from one network to another, both networks will likely yield similar performances on training data points. However, as demonstrated in past works (Wang et al., 2018; Wang et al., 2018), out-of-distribution (OOD) samples can paint a far different picture, with some models suffering in performance compared to others, thereby exposing differences across learned features. Therefore, a key step in our framework is to inject a test set of images with a subtle class of perturbations so that they become OOD.

Figure 2. **Method Framework.** The framework of our method consists of three parts. First, we perturb all the test images with out-of-distribution (OOD) perturbations (in this work, adversarial attack perturbations and frequency noise injection), yielding a new test dataset that contains the same images but with different noise injected. Second, we send the new test images to different models, where the models share most of the architecture design but differ only in some small part (e.g., convolutional kernel size). Last, we collect the results from the previous step, split them according to sensitive attributes, and apply our causal analysis to the results.

In our experiments, we focus on frequency-related implicit biases of CNNs, and so we consider two types of perturbations from the neural network literature: adversarial attacks, and frequency energy injections.

#### 3.2.1. Adversarial attacks

We consider two types of adversarial attacks in our experiments. The first, **FGSM** [(18)], applies gradient steps on the loss of the network's output with respect to the input image to "nudge" the image incrementally towards a direction that changes the network's prediction. The second, the **CW attack** [(7)], utilizes two separate losses: a gradient-based loss to make the classifier change its prediction (similar to FGSM), and a regularization to make the magnitude of the change to the image as small as possible. This makes the perturbation distance (i.e., the \(l_{2}\) norm of the difference between the perturbed and original images) of the CW attack a useful metric for measuring how difficult an image is to perturb. Fig. 3 shows an example of a CW and an FGSM attack for the same input image. Notice that the CW perturbation is an order of magnitude smaller due to the effect of its regularization.
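A minimal sketch of a single-step FGSM attack is shown below, assuming a PyTorch classifier and images scaled to \([0,1]\); the step size `epsilon` is illustrative. The CW attack additionally runs an inner optimization loop with an \(l_{2}\) penalty on the perturbation and is omitted here for brevity.

```python
import torch

def fgsm_attack(model, image, label, epsilon=0.01):
    """Perturb the image one step along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # move the image in the direction that increases the classification loss
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```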
#### 3.2.2. Frequency energy injection

We also experiment with injecting energy into a specific frequency band to obtain a more fine-grained link between frequency content and network features. Fig. 4 depicts our process. For each test image, we use the DFT to obtain a Fourier spectrum, and amplify the amplitudes of Fourier coefficients lying on an annulus in the spectrum. In particular, let \(F[\omega_{x},\omega_{y}]=|A|e^{-j\phi}\) represent a complex coefficient in the Fourier spectrum of an image at location \((\omega_{x},\omega_{y})\) (corresponding to \(x\) and \(y\) frequencies), with radius \(r=\sqrt{\omega_{x}^{2}+\omega_{y}^{2}}\), lying in the annulus defined by \((r-r_{0})^{2}\leq\Delta^{2}\). We increase the amplitude \(A\) by a factor of \(1+\delta\), yielding a modified coefficient \(F^{\prime}[\omega_{x},\omega_{y}]=(1+\delta)|A|e^{-j\phi}\). In our experiments, we set \(\Delta=2\) and \(\delta=15\), where \(r_{0}>0\) is the frequency radius into which we are injecting energy. If \(r_{0}\) is small (large), we are modifying low (high) frequency components of the image. Finally, we reconstruct the perturbed image using an inverse DFT.

Figure 3. **Example of adversarial attack perturbations.** By adding tiny noise-like perturbations (center image, amplified 100/1000 times for visualization) to a test image (left), a target neural network will output a wrong prediction. However, the perturbed image (right) has no perceptible differences from the original image to the human eye. We use the CW [(7)] and FGSM [(18)] attacks in our experiments.

Figure 4. **Examples of frequency energy injection perturbations.** Our goal with this perturbation is to "jitter" all frequencies with the same magnitude in an input image. To do this, we inject random noise into the Fourier coefficients of an image lying on an annulus of a particular radius in the Fourier spectrum (top row of annulus), according to: \((1+\delta)|A|e^{-j\phi}\).
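The energy injection itself reduces to a few lines of NumPy, sketched below under the defaults \(\Delta=2\) and \(\delta=15\) given above; the function name and the grayscale test image are our own illustrative choices.

```python
import numpy as np

def inject_energy(img, r0, delta=15.0, width=2.0):
    """Amplify Fourier amplitudes on the annulus (r - r0)^2 <= width^2."""
    F = np.fft.fftshift(np.fft.fft2(img))          # centered spectrum
    h, w = img.shape
    wy, wx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.sqrt(wx ** 2 + wy ** 2)
    mask = (r - r0) ** 2 <= width ** 2
    F[mask] *= 1.0 + delta                         # F' = (1 + delta)|A|e^{-j phi}
    return np.fft.ifft2(np.fft.ifftshift(F)).real  # reconstruct via inverse DFT

img = np.random.rand(224, 224)  # placeholder grayscale image
perturbed = inject_energy(img, r0=60.0)
```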
### Causal analysis

We run the test set of OOD images through the \(K\) networks, yielding \(K\) predictions per image. We assume each image also comes with annotations for various relevant semantic attributes (including protected attributes with which we may compute algorithmic bias measures), as well as perturbation attributes (e.g., the frequency of energy injection). Our goal is to measure the causal effects of the architectural hyperparameter of interest on model performance per protected attribute subgroup. To do so, we use a multivariable linear regression model that predicts a dependent variable from multiple independent variables. For test image \(i\) processed by network \(k\), let \(x_{k}\) be the corresponding hyperparameter and \(y_{ik}\) be a measure of network performance on image \(i\). Then we can specify the following regression equation:

\[y_{ik}=\beta x_{k}+\epsilon_{ik}^{0}, \tag{1}\]

where \(\epsilon_{ik}^{0}\) is an error term. Our coefficient of interest is \(\beta\). Under the assumption that \(E[x_{k}\cdot\epsilon_{ik}^{0}]=0\), we can interpret \(\beta\) as the causal effect of network architecture on performance. Of course, this independence assumption is unlikely to hold, as image attributes, including the OOD perturbation value, will generally affect a neural network's performance. We can weaken this assumption by leveraging a vector of image attributes \(\mathbf{Z_{i}}\), and augmenting equation (1) as follows:

\[y_{ik}=\beta x_{k}+\mathbf{Z_{i}}^{\prime}\mathbf{\gamma}+\epsilon_{ik}^{1}, \tag{2}\]

where \(\mathbf{\gamma}\) is a vector of coefficients, and \(\beta\) is now identified as the causal effect of network architecture on performance under the weaker assumption \(E[x_{k}\cdot\epsilon_{ik}^{1}|\mathbf{Z_{i}}]=0\). Moreover, we hypothesize that the effect of architecture hyperparameter \(x\) on performance may vary by protected attributes, a subset of all image attributes in \(\mathbf{Z_{i}}\). In order to allow for this possibility, we further augment equation (2) as follows:

\[y_{ik}=\mathbf{P_{i}}^{\prime}\mathbf{\beta}x_{k}+\mathbf{Z_{i}}^{\prime}\mathbf{\gamma}+\epsilon_{ik}^{1}, \tag{3}\]

where \(\mathbf{P_{i}}\) is a vector of protected image attributes, and \(\mathbf{\beta}\) is now a vector of coefficients.

We use a heuristic approach to choose the vectors \(\mathbf{Z_{i}}\) and \(\mathbf{P_{i}}\) that is commonly used for causal inference in the social sciences (Romomero et al., 2017). First, we incrementally add controls to \(\mathbf{Z_{i}}\) and test whether our estimates of \(\mathbf{\beta}\) change under alternative specifications (using an F-test with the null hypothesis that the estimates of \(\mathbf{\beta}\) are equal across specifications). This is a test of the exogeneity assumption: if \(\mathbf{Z_{i}}\) is a sufficiently rich vector of controls to satisfy \(E[x_{k}\cdot\epsilon_{ik}^{1}|\mathbf{Z_{i}}]=0\), then the assumption will also be satisfied conditional on an augmented vector of controls. Second, we start with a rich vector \(\mathbf{P_{i}}\) to allow the effect of network architecture on performance to be flexibly estimated. In our application, we begin by specifying \(\mathbf{P_{i}}\) as a fully saturated vector of dummy variables corresponding to all protected attribute combinations (e.g., White Male, White Female, etc.) and estimate \(\mathbf{\beta}\). We then test whether the elements of \(\mathbf{\beta}\) are equal to each other (using pairwise F-tests with null hypotheses that pairs of elements of \(\mathbf{\beta}\) are equal). To the extent that we are unable to reject equality of coefficients, we cannot reject that the effect of network architecture on performance is the same across those two groups. For instance, if the coefficients on the "White Male" dummy variables and the "Black Male" dummy variables are statistically indistinguishable from one another, then we cannot reject that the effect of network architecture on performance is the same for images of White males and images of Black males. \(\mathbf{\beta}\) encodes the joint causal effects of hyperparameter value \(x\) and the protected attributes in \(\mathbf{P_{i}}\) on output \(y\). In particular, \(\mathbf{\beta}_{g}\) is the expected change in \(y\) due to a unit change in \(x\) when feature \(g\) is "True" (set to 1). In our experiments, we compare the values in \(\mathbf{\beta}\) corresponding to different protected attribute subgroups to one another (see Table 2 and Fig. 9).
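As an illustration, equation (3) can be estimated with an off-the-shelf OLS routine. The sketch below uses the statsmodels formula interface on a small synthetic table (the column names and the data-generating values are hypothetical); the `flks` interaction terms with the saturated race-by-gender dummies play the role of \(\mathbf{\beta}\).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
race = rng.choice(["White", "Black", "East Asian", "Indian"], size=n)
gender = rng.choice(["Male", "Female"], size=n)
flks = rng.choice([3, 5, 7, 9, 11], size=n)
# hypothetical perturbation distances with a group-dependent FLKS slope
d_p = 0.02 * flks + 0.01 * flks * (race == "Black") + rng.normal(0, 0.05, n)
df = pd.DataFrame({"d_p": d_p, "flks": flks, "race": race, "gender": gender})

# Eq. (3): FLKS interacted with fully saturated race x gender dummies (P_i),
# with the dummies themselves included as controls (Z_i).
res = smf.ols("d_p ~ flks:C(race):C(gender) + C(race):C(gender)", data=df).fit()
print(res.params.filter(like="flks"))  # these interaction slopes are beta
```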
Our architectural hyperparameter of interest was convolutional kernel size, a factor known to be related to implicit frequency biases of CNNs (Cheng et al., 2018). We considered two different scenarios: changing only the kernel size of the first layer and changing the kernel size of all layers simultaneously. Interestingly, both scenarios yielded similar results, and so we leave results for the latter in Supplementary. We varied the first layer kernel size (FLKS) within the range \(\{3,5,7,9,11\}\), which encompasses the popular choices for this hyperparameter for nearly all CNNs in the literature. We initialize the weights and biases of all of our models randomly by drawing from a Normal distribution with variance set to 0.02. For each network and kernel size value, we trained 3 independent models and presented average results to mitigate the influence of random initialization factors. We first report our networks' accuracies for different race groups in Table 1 on _non-OOD_ test images, to demonstrate that they all achieve reasonably high accuracies on both datasets. The performances do not significantly vary with FLKS because the training and testing images are all from the same distribution. We now present our results separately for the two OOD perturbations described in Sec. 3.2: adversarial attacks and frequency energy injections. ### Adversarial attacks We present results in this section using the CW adversarial attack. We obtained similar results using FGSM (see Supplementary). #### 4.1.1. Analyzing Fourier spectra We first visualize the average Fourier spectra magnitudes of the adversarial perturbation images (e.g., center image in Fig. 3), split by race/gender groups and FLKS in Fig. 5 (for Fairface) and Fig. 6 (for UTKFace). Results on both datasets show similar trends. First, as FLKS increases, the spectral energy becomes more focused at low-frequencies (closer to center). Second, holding the FLKS value constant, we see that the spectrum for the Black group consistently contains less high-frequency energy compared to the spectra of other race groups. This result also holds for the Male group compared to the Female group. The difference between subgroups shrinks as FLKS increases, in line with findings from a previous study showing that low FLKS leads to higher implicit frequency bias (Beng et al., 2019). To quantitatively assess differences in the perturbation spectra, we also compute the \(f_{0.5}\) metric, known as "half power frequency," or the frequency below which half of the signal's power lies. \(f_{0.5}\) is a robust measure of energy concentration in a spectrum. We show the \(f_{0.5}\) scores for the spectra in Fig. 5 and Fig. 6 in Fig 7-top. The \(f_{0.5}\) scores confirm the trend we observed visually. Please refer to the caption of Fig. 7 for more details. #### 4.1.2. Perturbation distances We next present the average perturbation _distances_ of adversarial attacks across race groups and models in Fig 7-bottom. The perturbation distance \(d_{p}\) between an original test image \(I\) and the perturbed image \(I^{\prime}\) may be computed by simply taking an L2 norm: \(d_{p}(I,I^{\prime})=||I^{\prime}-I||_{2}\), and quantifies how close the perturbed image is to the original image. A larger distance indicates that more "work" must be done to fool the model, and it is a reflection of the robustness of the model to other OOD perturbations (Shi et al., 2019).
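Both quantities are simple to compute. Below is a minimal sketch under our reading of the definitions above, assuming square grayscale inputs and a radius normalized by image size (so the Nyquist limit sits at \(0.5\)); the function names are ours.

```python
import numpy as np

def perturbation_distance(orig, perturbed):
    """d_p(I, I') = ||I' - I||_2 over all pixels (Sec. 4.1.2)."""
    return float(np.linalg.norm((perturbed - orig).ravel()))

def half_power_frequency(perturbation):
    """f_0.5: the normalized radius below which half of the spectral power lies."""
    F = np.fft.fftshift(np.fft.fft2(perturbation))
    n = perturbation.shape[0]                      # assume a square image
    yy, xx = np.mgrid[:n, :n]
    r = (np.hypot(yy - n / 2.0, xx - n / 2.0) / n).ravel()
    power = (np.abs(F) ** 2).ravel()
    order = np.argsort(r)                          # sort coefficients by radius
    cum = np.cumsum(power[order])
    k = int(np.searchsorted(cum, cum[-1] / 2.0))   # first radius passing half power
    return float(r[order][k])
```

A spectrum concentrated near the center (low frequencies) yields a small \(f_{0.5}\); energy pushed outward raises it, which is the trend tracked in Fig. 7.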
Results show that perturbation distance increases with FLKS for all race groups, with an associated increase in variance. It is therefore harder to adversarially attack a model with a larger FLKS, likely because such a model focuses more of its energy on low-frequency image information (see Fig 5 and 6) and is therefore more robust. In addition, we see \begin{table} \begin{tabular}{|c|c c c c c c c c|} \hline Dataset (FLKS) & Overall & White & Black & East Asian & Indian & Southeast Asian & Latino & Mid. Eastern \\ \hline Fairface (3) & 0.947 & 0.950 & 0.894 & 0.942 & 0.945 & 0.894 & 0.957 & 0.977 \\ Fairface (5) & 0.949 & 0.947 & 0.896 & 0.939 & 0.957 & 0.896 & 0.957 & 0.980 \\ Fairface (7) & 0.946 & 0.943 & 0.895 & 0.947 & 0.956 & 0.895 & 0.963 & 0.978 \\ Fairface (9) & 0.947 & 0.946 & 0.895 & 0.937 & 0.951 & 0.895 & 0.960 & 0.979 \\ Fairface (11) & 0.946 & 0.949 & 0.892 & 0.937 & 0.949 & 0.885 & 0.967 & 0.979 \\ \hline UTKFace (3) & 0.929 & 0.949 & 0.905 & 0.931 & 0.942 & / & / & / \\ UTKFace (5) & 0.935 & 0.951 & 0.901 & 0.940 & 0.951 & / & / & / \\ UTKFace (7) & 0.934 & 0.955 & 0.905 & 0.939 & 0.953 & / & / & / \\ UTKFace (9) & 0.937 & 0.955 & 0.910 & 0.941 & 0.955 & / & / & / \\ UTKFace (11) & 0.936 & 0.950 & 0.901 & 0.943 & 0.955 & / & / & / \\ \hline \end{tabular} \end{table} Table 1. **Model performances on _unperturbed_ (non-OOD) images from the Fairface & UTKFace datasets.** We report the trained models’ performances on test sets split by race group. Each number is an average over 3 different trained models. The first column indicates the dataset name and model’s First Layer Kernel Size (FLKS). Fairface has 7 annotated race groups and UTKFace has 4. The performances are relatively constant across kernel sizes because the test and training images belong to the same distribution. that \(d_{p}\) for the Black group is significantly lower than that of other groups. Please refer to the caption of Fig 7 for more details. #### 4.1.3. Causal analysis Next, we quantitatively analyze the causal relationship between race, gender, and perturbation distance \(d_{p}\) by applying our causal analysis framework introduced in Sec. 3.3. Specifically, using Eq. 3, we set \(y\) to be \(d_{p}\), and set both \(\mathbf{P_{i}}\) and \(\mathbf{Z_{i}}\) to contain "dummy variables" corresponding to all race/gender combinations. We use the Fairface dataset for this analysis, and use the race groups East Asian, White, Latino Hispanic, Southeast Asian, Indian and Black, and gender groups of Male and Female. We ignore Middle Eastern since the numbers of samples for different genders are unbalanced in the test dataset. We use the _statsmodels_ package in Python to run this regression, and we present the resulting values of \(\beta\) and \(\gamma\) in Table 2. The results for \(\beta\) are also shown in Fig. 9(a). Based on the results, it is obvious that the coefficients for Black and Indian are significantly higher than those of other race groups, indicating that the impact of kernel size on perturbation distance is much more significant for these two groups. White, Black and Indian females have larger \(\beta\) values than their corresponding male groups. Figure 5. **Average spectra of adversarial perturbation images split by race and gender for Fairface.** Each row represents a model with a different first layer kernel size (FLKS). As FLKS increases, the spectra become more concentrated at low frequencies.
The spectra for the Black race group consistently have less energy at high frequencies compared to the spectra of other race groups. Male spectra also have lower high frequency information compared to Female spectra. These results demonstrate that changes to FLKS induce different feature biases for networks, which also vary by protected attribute subgroups. See Fig. 6 for the analogous spectra for the UTKFace dataset. \begin{table} \begin{tabular}{c c c c c|c c c c c} \hline \hline coef name & coef value & std err & t & \(P>|t|\) & coef name & coef value & std err & t & \(P>|t|\) \\ \hline \(\beta_{EM}\) & 0.0227 & 0.003 & 6.930 & 0.000 & \(\gamma_{EM}\) & 0.4251 & 0.025 & 17.308 & 0.000 \\ \(\beta_{EF}\) & 0.0274 & 0.003 & 8.323 & 0.000 & \(\gamma_{EF}\) & 0.4232 & 0.025 & 17.065 & 0.000 \\ \(\beta_{WM}\) & 0.0306 & 0.003 & 11.392 & 0.000 & \(\gamma_{WM}\) & 0.4453 & 0.020 & 21.969 & 0.000 \\ \(\beta_{WF}\) & 0.0254 & 0.003 & 8.614 & 0.000 & \(\gamma_{WF}\) & 0.4302 & 0.022 & 19.361 & 0.000 \\ \(\beta_{LM}\) & 0.0351 & 0.003 & 11.143 & 0.000 & \(\gamma_{LM}\) & 0.2088 & 0.012 & 17.566 & 0.000 \\ \(\beta_{LF}\) & 0.0269 & 0.003 & 8.521 & 0.000 & \(\gamma_{LF}\) & 0.2425 & 0.012 & 20.304 & 0.000 \\ \(\beta_{SM}\) & 0.0251 & 0.003 & 7.552 & 0.000 & \(\gamma_{SM}\) & 0.1856 & 0.013 & 14.783 & 0.000 \\ \(\beta_{SF}\) & 0.0189 & 0.004 & 5.982 & 0.000 & \(\gamma_{SF}\) & 0.2163 & 0.014 & 15.281 & 0.000 \\ \(\beta_{BM}\) & 0.0589 & 0.001 & 48.770 & 0.000 & \(\gamma_{BM}\) & 0.2088 & 0.012 & 17.505 & 0.000 \\ \(\beta_{BF}\) & 0.0659 & 0.001 & 48.588 & 0.000 & \(\gamma_{BF}\) & 0.2425 & 0.012 & 20.234 & 0.000 \\ \(\beta_{IM}\) & 0.0748 & 0.001 & 61.160 & 0.000 & \(\gamma_{IM}\) & 0.1856 & 0.013 & 14.783 & 0.000 \\ \(\beta_{IF}\) & 0.0846 & 0.001 & 65.294 & 0.000 & \(\gamma_{IF}\) & 0.2163 & 0.014 & 15.821 & 0.000 \\ \hline \hline \end{tabular} \end{table} Table 2: **Regression results of perturbation distance.** We report the coefficients \(\beta\) (left) and \(\gamma\) (right) for the regression described in Sec. 4.1.3. We also report the standard errors of the coefficient values and calculate their \(t\) values according to \(t=\frac{\text{coef value}}{\text{std err}}\), as well as \(P>|t|\). A \(P\leq 0.05\) indicates the value is significant. The coefficient names use subscripts corresponding to race (E: East Asian, W: White, B: Black, I: Indian, L: Latino, S: Southeast Asian), and gender (M: Male, F: Female). Some large disparities between \(\beta\) values across groups are obvious, e.g., the value for Black Female is \(\sim 100\%\) higher than that of East Asian Female. Refer to Fig. 9 for plots of \(\beta\). Figure 6: **Average spectra of adversarial perturbation images split by race and gender for UTKFace.** We see similar trends in these spectra to the ones shown for Fairface in Fig 5. ### Frequency Energy Injection We next perform the frequency-based OOD perturbation to the test images as described in Sec. 3.2.2 and visualize results in Fig 8. The results show that the accuracies of all models/groups are more influenced by perturbations at low-to-mid frequencies (\(0.02\)–\(0.20\) Hz), and less so at mid-to-high frequencies. The model with FLKS of 3 is less affected by frequency injections in the range \(0.08\)–\(0.15\) Hz. In general though, it is difficult to ascertain significant trends from the plots alone. #### 4.2.1. Causal analysis Similar to Sec.
4.1.3, we now perform a regression to measure the impact of kernel size, frequency of energy injection, and protected attribute subgroup on model error per image. Using Eq. 3, we set \(y\) to be the error rate of an image and set both \(\mathbf{P_{i}}\) and \(\mathbf{Z_{i}}\) to contain "dummy variables" corresponding to race/frequency (of injected energy) combinations. We use four frequency subgroups: \(\{(0.05,0.07),(0.09,0.11),(0.13,0.15),(0.17,0.19)\}\) Hz, which we label \(1,2,3,4\) for convenience. We report the resulting \(\beta\) coefficient values in Fig 9(b). It is clear that the coefficients for frequency group 4 are significantly smaller than those of the rest of the frequencies, indicating that changes to kernel size influence the performance less on the OOD samples under relatively high-frequency injections. The coefficient for Figure 7. \(f_{0.5}\) **measures for adversarial perturbation spectra & adversarial perturbation distance.** Each boxplot shows the median score (white/red line in the boxes) and the \(15\%-85\%\) confidence interval for a different protected attribute group. The x-axis for all plots indicates the models’ first layer convolutional kernel size (FLKS). **(a) and (b)** are \(f_{0.5}\) measures for adversarial perturbations using Fairface and UTKFace, respectively. The \(f_{0.5}\) score drops as FLKS increases for all demographic groups, which indicates that the adversarial attack focuses less on high-frequency information of the image for larger FLKS. **(c) and (d)** show the adversarial perturbation distances per race group using Fairface and UTKFace, respectively, where distance is simply the \(l_{2}\) norm of the perturbation image. As the FLKS increases, the perturbation distances generally also increase for all demographic groups. In addition, for each FLKS value, the perturbation distances for the Black group are always significantly lower than those for other demographic groups. the Black group in frequency group 1 is also significantly smaller than those of the other groups. As the frequency increases, this gap narrows. ## 5. Discussion and Conclusion Our results in Figs. 5, 6, 7, 8, 9 first demonstrate that smaller convolutional kernel sizes can cause a CNN to be biased towards high-frequency features, and increasing the kernel size mitigates this bias. We also see that such frequency bias differs significantly across race/gender subgroups. All models trained on the two datasets focused less on high-frequency features for the Black and Male subgroups. While different features do not necessarily indicate Figure 8. **Frequency energy injection result.** We show models’ performances with different FLKS for 6 race groups separately (we ignore Mid. Eastern since it has an unequal number of samples for different genders in the test dataset). In each individual figure, the x-axis is the frequency at which we inject energy and the y-axis is the accuracy of the different models. All the models suffer from low-to-mid-frequency energy injections, and are robust to mid-to-high-frequency noise. It is hard to directly tell which group is influenced more than the others, which calls for a quantitative analysis. performance bias on test samples, our results allow us to conclude that these differences do lead to performance biases on out-of-distribution (OOD) samples.
We observed that this is the case for two different types of OOD image perturbation operators: adversarial attacks and frequency domain energy injections. As expected, kernel size has no effect on model performance on within-distribution test samples (Table 1). Different population subgroups will have different image characteristics. For example, the Black group will likely have darker skin tones than other race groups, and Females will have more hair on average than Males. Hence, it is not surprising that there is some difference in how images from one race are processed by a network compared to another. However, our results indicate something more significant: that there is a fundamental difference in the frequency characteristics of the image features across groups used by the network to make its decision. This difference may also lead to a performance bias, depending on the type of OOD data the model is faced with. Our two OOD perturbations, while conceptually clear and well-motivated, are not associated with any real phenomena. It would be an interesting next step to relate frequency biases of features to disparate model performance on real-world OOD artifacts like shot noise, fog, and motion blur [66]. Our work has several limitations. We cannot draw broad conclusions about the nature of kernel size for general CNNs across all applications, as we focused on a single application of interest (gender classification). Our aim in this work was to demonstrate that a hyperparameter may have a disparate impact on the internal biases of a network, and to that end our experiments succeeded. A further evaluation on a wider set of application domains is an important next step. We also limited our causal analyses to a few key variables. However, causal analysis typically relies on the "no hidden confounders" assumption. An exhaustive set of image factors would help in computing more precise causal effects. We focused on convolutional kernel size of a network in this work due to past results establishing a clear link between this hyperparameter and frequency content [8]. However, our framework is agnostic to the nature of the hyperparameter. Indeed, next steps in this research space include similar analyses into a more comprehensive set of network hyperparameters, such as activation functions, depth of layers, weight initialization strategies, and even high-level designs (e.g., residual connections, transformer modules). We see our work as a first step in the important Figure 9. **Regression coefficient values for \(\beta\) for different race groups.** **(a)** \(\beta\) values for the regression in Sec. 4.1.3 linking kernel size to adversarial perturbation distance, also shown in Table 2. There are significant differences across protected groups, e.g., the Black and Indian groups have significantly higher values compared to the other groups. **(b)** \(\beta\) values for the regression in Sec. 4.2.1. The coefficients for Black are always lower than those of other race groups. In addition, the coefficient values for different frequencies within the same race group are also significantly different. direction of understanding how neural network design choices impact bias, and hence, the fairness of these systems in our society.
2306.00976
TopEx: Topic-based Explanations for Model Comparison
Meaningfully comparing language models is challenging with current explanation methods. Current explanations are overwhelming for humans due to large vocabularies or incomparable across models. We present TopEx, an explanation method that enables a level playing field for comparing language models via model-agnostic topics. We demonstrate how TopEx can identify similarities and differences between DistilRoBERTa and GPT-2 on a variety of NLP tasks.
Shreya Havaldar, Adam Stein, Eric Wong, Lyle Ungar
2023-06-01T17:59:10Z
http://arxiv.org/abs/2306.00976v2
# TopEx: Topic-based Explanations for Model Comparison ###### Abstract Meaningfully comparing language models is challenging with current explanation methods. Current explanations are overwhelming for humans due to large vocabularies or incomparable across models. We present TopEx, an explanation method that enables a level playing field for comparing language models via model-agnostic topics. We demonstrate how TopEx can identify similarities and differences between DistilRoBERTa and GPT-2 on a variety of NLP tasks. ## 1 How do we compare language models? Language models (LMs) often exhibit differences in behavior even when trained on the same dataset. Architecture, pre-training, and hyperparameter choices can all lead to varying behaviors in the LM. However, understanding these differences beyond comparing performance metrics is challenging. Existing post-hoc interpretability approaches primarily focus on explaining individual models as opposed to comparing models. These explanations can be generally categorized by how behavior is explained: (a) feature-based, using feature attributions (Shapley et al., 1953; Sundararajan et al., 2017); (b) example-based, using previously observed samples or generated counterfactuals; or (c) concept-based, using concepts extracted from a model's latent space (Madsen et al., 2022). Methods that fall under (b) and (c) cannot be used for comparison, as the examples and concepts are model-specific and not easily comparable across models. Methods under (a) can be used to compare models, but the tens of thousands of unique tokens render such comparisons uninterpretable. In order to meaningfully explain and compare LMs, we propose TopEx -- a topic-based explanation method. TopEx condenses feature attributions into a model-independent explanation using topic modeling, a popular statistical method that assigns words to meaningful categories. ## 2 Topic-based Explanations (TopEx) In this section, we outline our approach for generating topic-based explanations, which consists of two main steps: (1) calculation of feature-based scores followed by (2) aggregation into topics. Given two LMs trained on the same dataset, we first generate word-level importance scores for each model. We extract Shapley values1 (Lundberg and Lee, 2017) for all instances and aggregate these scores into global importance scores \(g_{w}\) for each word \(w\). We then map these word-level importance scores \(g_{w}\) to topic-level importance scores \(G_{t}\) as follows: Footnote 1: Note that our approach works with any feature-based explanation. \[G_{t}=\sum_{w\in\text{topic}_{t}}P(\text{topic}_{t}\mid w)\,g_{w} \tag{1}\] Specifically, for all words in a given topic \(t\), we sum over word importance scores, weighted by word membership in each topic \(P(\text{topic}_{t}|w)\).2 Here, the weight comes from the topic model, which could come from an existing topic lexicon such as LIWC (Pennebaker et al., 2001) or be automatically learned with e.g. Latent Dirichlet Allocation (LDA) (Blei et al., 2003). Details on token-to-word aggregation and topic weighting schemes are in Appendices B and C, respectively. Footnote 2: When a word in our vocabulary is not in any topics (e.g., punctuation, LDA stopwords, or words not in LIWC), we naively treat it as a different topic. We leave other approaches, such as clustering, for future work. Figure 1 demonstrates our method on an example sentence. The first step of TopEx computes word-level importance scores.
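To make Eq. (1) concrete, here is a minimal sketch of the aggregation (and of the \(G^{\Delta}\) comparison used in Section 3) with toy, hypothetical scores; it mirrors the description above rather than any released implementation, and unassigned words fall into a catch-all topic per footnote 2.

```python
from collections import defaultdict

def topic_scores(word_scores, topic_weights):
    """Aggregate global word importances g_w into topic importances G_t (Eq. 1).

    word_scores:   {word: g_w}, e.g., Shapley values averaged over instances
    topic_weights: {word: {topic: P(topic | word)}}, e.g., from LDA or LIWC
    """
    G = defaultdict(float)
    for w, g_w in word_scores.items():
        for t, p in topic_weights.get(w, {"<no_topic>": 1.0}).items():
            G[t] += p * g_w          # weight g_w by topic membership
    return dict(G)

# Toy comparison of two models via per-topic differences G_delta (Sec. 3)
g_bert = {"tasty": 1.01, "burgers": 0.80, "awful": 0.10}
g_gpt = {"tasty": 0.40, "burgers": 0.30, "awful": 0.90}
P = {"tasty": {"food": 1.0}, "burgers": {"food": 1.0}, "awful": {"negativity": 1.0}}
G_bert, G_gpt = topic_scores(g_bert, P), topic_scores(g_gpt, P)
G_delta = {t: G_bert[t] - G_gpt[t] for t in G_bert}
print(G_delta)  # approx {'food': 1.11, 'negativity': -0.8}: food matters more to model 1
```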
For example, the word "tasty" gets an aggregate score of \(1.01\) as an average of its feature attribution scores \(0.73\) and \(1.29\). The second step of TopEx computes topic-level importance scores. For example, scores for food-related words such as "burgers" and "fries" are aggregated with "tasty" to get the final "food" topic score. The resulting topic-based explanation is a concise summary of the model that can be used to directly compare with other models. ## 3 TopEx Explains Differences Between Models We compare fine-tuned DistilRoBERTa (Sanh et al., 2019) and GPT-2 (Radford et al., 2019) on the Yelp Reviews dataset (Zhang et al., 2015) and the GoEmotions dataset (Demszky et al., 2020). From our topic-based explanations of these models, \(G^{\text{BERT}}\) and \(G^{\text{GPT}}\), we calculate the difference between explanations as \(G^{\Delta}=G^{\text{BERT}}-G^{\text{GPT}}\). The two topics with the most different and most similar importance scores are highlighted in Figure 2. We can see that DistilRoBERTa focuses more than GPT-2 on descriptions of dining when classifying a 5-star rating, while GPT-2 looks more at negativity than DistilRoBERTa. In this case, GPT-2 may be determining bad reviews through negative words, while DistilRoBERTa has learned to better recognize descriptions of dining experiences characteristic of 5-star reviews. Experiment details and additional results are given in Appendices D and E. Conclusion. The vast array of possible LM architectures and training schemes motivates the need to understand differences in model behavior more deeply, beyond performance metrics. In this work, we present TopEx, a method that enables direct model comparisons via model-agnostic topics that can reveal _why_ and _how_ models behave differently. Figure 1: Generating a global explanation via TopEx. We extract an importance score for each token using Shapley values, aggregate to average global word importances, and map these importance scores to the corresponding topics for each word. Figure 2: We explain differences in behavior of DistilRoBERTa and GPT-2 via \(G^{\Delta}\). We show the two topics with most different (\(\max(|G^{\Delta}|)\)) and most similar (\(\min(|G^{\Delta}|)\)) importances between models, using LDA topics for Yelp and LIWC topics for GoEmotions. Topic visualizations in blue indicate \(G^{\Delta}_{t}>0\) (i.e. the topic is more important for DistilRoBERTa), while red indicates \(G^{\Delta}_{t}<0\) (i.e. the topic is more important to GPT-2). ### URM Statement The authors acknowledge that the first and second authors of this work meet the URM criteria of ICLR 2023 Tiny Papers Track.
2308.06673
Remarks on Greenberg's conjecture for Galois representations associated to elliptic curves
Let $E_{/\mathbb{Q}}$ be an elliptic curve and $p$ be an odd prime number at which $E$ has good ordinary reduction. Let $Sel_{p^\infty}(\mathbb{Q}_\infty, E)$ denote the $p$-primary Selmer group of $E$ considered over the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$. The (algebraic) \emph{$\mu$-invariant} of $Sel_{p^\infty}(\mathbb{Q}_\infty, E)$ is denoted $\mu_p(E)$. Denote by $\bar{\rho}_{E, p}:Gal(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow GL_2(\mathbb{Z}/p\mathbb{Z})$ the Galois representation on the $p$-torsion subgroup of $E(\bar{\mathbb{Q}})$. Greenberg conjectured that if $\bar{\rho}_{E, p}$ is reducible, then there is a rational isogeny $E\rightarrow E'$ whose degree is a power of $p$, and such that $\mu_p(E')=0$. In this article, we study this conjecture by showing that it is satisfied provided some purely Galois theoretic conditions hold that are expressed in terms of the representation $\bar{\rho}_{E,p}$. In establishing our results, we leverage a theorem of Coates and Sujatha on the algebraic structure of the fine Selmer group. Furthermore, in the case when $\bar{\rho}_{E, p}$ is irreducible, we show that our hypotheses imply that $\mu_p(E)=0$ provided the classical Iwasawa $\mu$-invariant vanishes for the splitting field $\mathbb{Q}(E[p]):=\bar{\mathbb{Q}}^{ker\bar{\rho}_{E,p}}$.
Anwesh Ray
2023-08-13T03:29:07Z
http://arxiv.org/abs/2308.06673v1
# Remarks on Greenberg's conjecture for Galois representations associated to elliptic curves ###### Abstract Let \(E_{/\mathbb{Q}}\) be an elliptic curve and \(p\) be an odd prime number at which \(E\) has good ordinary reduction. Let \(\operatorname{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\) denote the \(p\)-primary Selmer group of \(E\) considered over the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\). The (algebraic) \(\mu\)_-invariant_ of \(\operatorname{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\) is denoted \(\mu_{p}(E)\). Denote by \(\bar{\rho}_{E,p}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to \operatorname{GL}_{2}(\mathbb{Z}/p\mathbb{Z})\) the Galois representation on the \(p\)-torsion subgroup of \(E(\bar{\mathbb{Q}})\). Greenberg conjectured that if \(\bar{\rho}_{E,p}\) is reducible, then there is a rational isogeny \(E\to E^{\prime}\) whose degree is a power of \(p\), and such that \(\mu_{p}(E^{\prime})=0\). In this article, we study this conjecture by showing that it is satisfied provided some purely Galois theoretic conditions hold that are expressed in terms of the representation \(\bar{\rho}_{E,p}\). In establishing our results, we leverage a theorem of Coates and Sujatha on the algebraic structure of the _fine Selmer group_. Furthermore, in the case when \(\bar{\rho}_{E,p}\) is irreducible, we show that our hypotheses imply that \(\mu_{p}(E)=0\) provided the classical Iwasawa \(\mu\)-invariant vanishes for the splitting field \(\mathbb{Q}(E[p])\coloneqq\bar{\mathbb{Q}}^{\ker\bar{\rho}_{E,p}}\). Some results are proven in greater generality, for ordinary Galois representations satisfying further conditions. 2020 Mathematics Subject Classification: 11R23, 11G05 (primary) ## 1. Introduction Iwasawa studied growth questions of class groups in certain infinite towers of number fields. Given a number field \(F\) and a prime number \(p\), let \(F_{\infty}\) denote the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(F\). Letting \(F_{n}\subset F_{\infty}\) be such that \([F_{n}:F]=p^{n}\), denote by \(\operatorname{Cl}_{p}(F_{n})\) the \(p\)-Sylow subgroup of the class group of \(F_{n}\). Iwasawa showed that there exists \(n_{0}\in\mathbb{Z}_{\geq 0}\) such that for all \(n\geq n_{0}\), (1.1) \[|\operatorname{Cl}_{p}(F_{n})|=p^{\left(p^{n}\mu_{p}(F)+\lambda_{p}(F)n+\nu_{p }(F)\right)},\] where \(\mu_{p}(F),\lambda_{p}(F)\in\mathbb{Z}_{\geq 0}\) and \(\nu_{p}(F)\in\mathbb{Z}\). We shall refer to \(\mu_{p}(F)\) as the _classical \(\mu\)-invariant_ of \(F\). Iwasawa conjectured that \(\mu_{p}(F)=0\) for all prime numbers \(p\) and all number fields \(F\). This conjecture was proved by Ferrero and Washington [10] for all abelian number fields. The Iwasawa theory of Galois representations arising from motives leads to very deep questions. Let \(p\) be an odd prime number and \(E_{/\mathbb{Q}}\) be an elliptic curve. Mazur [14] initiated the Iwasawa theory of elliptic curves, and studied the algebraic structure of the Selmer group over the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\). The (algebraic) Iwasawa \(\mu\)-invariant \(\mu_{p}(E)\) is defined in terms of the algebraic structure of this Selmer group, when viewed as a module over the Iwasawa algebra \(\Lambda\). When \(E\) has good ordinary reduction at \(p\), these Selmer groups are known to be cotorsion as \(\Lambda\)-modules, thanks to the work of Kato [15]. This property is especially favorable when it comes to studying the properties of the \(\mu\)-invariant.
When the \(\mu\)-invariant vanishes, the Selmer group is cofinitely generated as a \(\mathbb{Z}_{p}\)-module. We recall a conjecture due to Greenberg [10, Conjecture 1.11] on the vanishing of the \(\mu\)-invariant. Let \(E[p]\subset E(\bar{\mathbb{Q}})\) be the \(p\)-torsion subgroup; we note that as an abstract abelian group, \(E[p]\) is isomorphic to \((\mathbb{Z}/p\mathbb{Z})^{2}\). Let \[\bar{\rho}_{E,p}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to \operatorname{GL}_{2}(\mathbb{Z}/p\mathbb{Z})\] be the representation of the absolute Galois group on \(E[p]\). The following conjecture is of paramount importance and has influenced major trends in Iwasawa theory. **Conjecture 1.1** (Greenberg).: _Let \(E_{/\mathbb{Q}}\) be an elliptic curve and \(p\) a prime number at which \(E\) has good ordinary reduction._ 1. _Suppose that the Galois representation_ \(\bar{\rho}_{E,p}\) _is reducible. Then,_ \(E\) _is_ \(\mathbb{Q}\)_-isogenous to an elliptic curve_ \(E^{\prime}\) _such that_ \(\mu_{p}(E^{\prime})=0\)_._ 2. _Suppose that the Galois representation_ \(\bar{\rho}_{E,p}\) _is irreducible. Then,_ \(\mu_{p}(E)=0\)_._ There are examples of elliptic curves \(E_{/\mathbb{Q}}\) for which \(\bar{\rho}_{E,p}\) is reducible and for which \(\mu_{p}(E)>0\). The first such example is due to Mazur, see [14, section]. ### Main results In this paper, we study Greenberg's conjecture from a new perspective. Before discussing the main results, let us introduce some further notation. Let \(L_{\infty}\) be the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(L:=\mathbb{Q}(E[p])\) and set \(G:=\operatorname{Gal}(L_{\infty}/\mathbb{Q}_{\infty})\). Choose an embedding \(\iota_{p}:\bar{\mathbb{Q}}\hookrightarrow\bar{\mathbb{Q}}_{p}\), and let \(\tilde{\beta}\) be the prime of \(\bar{\mathbb{Q}}\) that lies above \(p\), prescribed by this embedding. Let \(\beta\) be the prime of \(L_{\infty}\) that lies below \(\tilde{\beta}\), and \(I_{\beta}\) be the inertia group at \(\beta\). Let \(\Sigma\) be a finite set of prime numbers that contains \(p\) and the primes of bad reduction of \(E\), and let \(\mathbb{Q}_{\Sigma}\) be the maximal algebraic extension of \(\mathbb{Q}\) in which the primes \(\ell\notin\Sigma\) are unramified. Since \(E\) is assumed to be ordinary at \(p\), there is a unique \(\operatorname{G}_{p}\)-invariant line \(\bar{C}\) contained in \(E[p]\), such that the inertia group at \(p\) acts on \(\bar{C}\) via the mod-\(p\) cyclotomic character. Note that the Galois group \(\operatorname{Gal}(\mathbb{Q}_{\Sigma}/L_{\infty})\) acts trivially on \(E[p]\). At a prime \(\eta\) of \(L_{\infty}\) that lies above a prime in \(\Sigma\), let \(f_{\eta}\) be the restriction of \(f\) to the decomposition group at \(\eta\). The residual Selmer group over \(L_{\infty}\) is defined as follows \[\operatorname{Sel}(L_{\infty},E[p]):= \{f\in\operatorname{Hom}\big{(}\operatorname{Gal}(\mathbb{Q}_{ \Sigma}/L_{\infty}),E[p]\big{)}\mid\] \[f_{\eta}=0\text{ for all primes }\eta\nmid p\text{ that lie above a prime of }\Sigma,\] \[\text{ and }f(I_{\beta})\subseteq\bar{C}\}.\] A homomorphism \(f\in\operatorname{Hom}\big{(}\operatorname{Gal}(\mathbb{Q}_{\Sigma}/L_{ \infty}),E[p]\big{)}\) is \(G\)-equivariant if for any \(g\in G\), \[f(\tilde{g}x\tilde{g}^{-1})=gf(x),\] where \(\tilde{g}\) is a lift of \(g\) to \(\operatorname{Gal}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty})\).
Let \(\operatorname{Sel}(L_{\infty},E[p])^{G}\) be the subgroup of \(\operatorname{Sel}(L_{\infty},E[p])\) consisting of Selmer homomorphisms that are \(G\)-equivariant. **Conjecture 1.2**.: _Suppose that \(\bar{C}\) is not a \(\operatorname{Gal}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty})\)-submodule of \(E[p]\). Then, the image of \(\operatorname{Sel}(L_{\infty},E[p])^{G}\) in \(\operatorname{Hom}(I_{\beta},\bar{C})\) is finite._ The above is a Galois theoretic criterion expressed purely in terms of the residual representation, which as we shall see gives a different formulation of Greenberg's conjectures. It follows from our arguments that the above condition is in fact in many situations equivalent to Greenberg's conjecture. In establishing our result, we crucially leverage results of Coates and Sujatha on the vanishing of the \(\mu\)-invariant of the _fine Selmer group_. We prove the first part of Conjecture 1.1, provided Conjecture 1.2 holds for the \(\mathbb{Q}\)-isogeny class of \(E\). **Theorem 1**.: _Let \(E\) be an elliptic curve over \(\mathbb{Q}\) with good ordinary reduction at \(p\) such that \(E[p]\) is reducible. Assume that Conjecture 1.2 is true for all elliptic curves over \(\mathbb{Q}\) that are isogenous to \(E\). Then, there exists an elliptic curve \(E^{\prime}\) over \(\mathbb{Q}\) that is isogenous to \(E\), such that the \(\mu\)-invariant of \(\operatorname{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E^{\prime})\) is \(0\). Moreover, the isogeny \(E\to E^{\prime}\) has degree \(p^{\mu_{p}(E)}\)._ Moreover, we show that the property that \(\mu_{p}(E)\) vanishes can be detected precisely from the structure of the representation \(\bar{\rho}_{E,p}\), see Theorem 6.4. We prove that the second part of this conjecture follows from certain additional conditions. **Theorem 2**.: _Let \(E_{/\mathbb{Q}}\) be an elliptic curve with good ordinary reduction at an odd prime \(p\) and assume that_ 1. _the residual representation_ \(\bar{\rho}_{E,p}\) _is irreducible; set_ \(L:=\mathbb{Q}(E[p])\)_._ 2. _Conjecture_ 1.2 _holds for_ \(E\)_._ 3. _The classical Iwasawa_ \(\mu\)_-invariant_ \(\mu_{p}(L)\) _vanishes._ _Then, the \(\mu\)-invariant of the Greenberg Selmer group \(\operatorname{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\) is \(0\)._ The main novelty in the above results is that the algebraic structure of the fine Selmer group plays an important role in establishing them. Indeed, Conjecture 1.2 gives a precise criterion for the fine Selmer group and the Greenberg Selmer group to have the same \(\mu\)-invariant. We explain this in greater detail below. ### Method of proof The results are proven by establishing an explicit relationship between the Selmer group over \(\mathbb{Q}_{\infty}\) and the fine Selmer group. This latter Selmer group is defined by imposing strict conditions at the prime above \(p\). The algebraic properties of the fine Selmer group closely resemble those of class groups, and the \(\mu\)-invariants of these Selmer groups are always conjectured to vanish. Suppose that Conjecture 1.2 holds for all elliptic curves that are isogenous to \(E\). Then, we show that if either \(\bar{\rho}_{E,p}\) is irreducible, or if \(\bar{\rho}_{E,p}\) is reducible and satisfies some further conditions, then the \(\mu\)-invariant of the Greenberg Selmer group vanishes if and only if the \(\mu\)-invariant of the fine Selmer group vanishes.
We show that any elliptic curve \(E\) for which \(\bar{\rho}_{E,p}\) is reducible is isogenous to another elliptic curve \(E^{\prime}\) for which these additional conditions on \(\bar{\rho}_{E^{\prime},p}\) are satisfied. The results of Coates-Sujatha [10] and Ferrero-Washington [12] together imply that the \(\mu\)-invariant of the fine Selmer group vanishes when the residual representation is reducible (cf. Theorem 6.2). We leverage the vanishing of the \(\mu\)-invariant of the fine Selmer group to deduce that the \(\mu\)-invariant of \(E^{\prime}\) vanishes, thus proving Theorem 1. Theorem 2 is proven via a similar technique. It is shown that if \(\bar{\rho}_{E,p}\) is irreducible, then Greenberg's conjecture is equivalent to a conjecture of Coates and Sujatha on the vanishing of the \(\mu\)-invariant of the fine Selmer group (cf. Corollary 5.5). The results establishing the relationship between the Greenberg Selmer group and the fine Selmer group hold in a more general context, namely, for ordinary Galois representations (cf. Theorem 5.4). ### Outlook The methods developed in this paper should lead to interesting generalizations of Greenberg's conjecture for ordinary Galois representations associated to modular forms and abelian varieties. Such questions have not been pursued in this paper, however they would certainly be of interest to study in the future. It is of natural interest to ascertain if the statement of Theorem 2 generalizes in some sense to the case of Kobayashi's signed Selmer groups [10] associated to elliptic curves with supersingular reduction at \(p\). The main difficulty here is that Kobayashi's Selmer groups have not been defined over fields other than the rational numbers, and one would need a suitably defined primitive residual Selmer group considered over the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(L\). ### Related work Let us discuss related work towards Greenberg's conjecture in the residually reducible case. Greenberg and Vatsal [14] showed that if \(\bar{\rho}_{E,p}\) is reducible and \(E[p]\) contains a \(1\)-dimensional line which is \(\mathrm{G}_{\mathbb{Q}}\)-stable on which the action is via a character which is either 1. unramified at \(p\) and odd, or 2. ramified at \(p\) and even, then \(\mu_{p}(E)=0\). The case that remains is when the Galois representation \(\bar{\rho}_{E,p}\) is indecomposable of the form \(\left(\begin{array}{cc}\varphi_{1}&0\\ *&\varphi_{2}\end{array}\right)\), where \(\varphi_{2}\) is unramified at \(p\) and even. In this case, Greenberg conjectures that \(\mu_{p}(E)=0\). The conjecture has been studied for \(p=3\) by Hachimori in the case when \(E(\mathbb{Q})[3]\neq 0\) (i.e., \(\varphi_{2}=1\)), cf. [10, Theorem 1.1]. On the other hand, Trifkovic [13] constructs infinitely many examples for \(p=3,5\) of elliptic curves \(E_{/\mathbb{Q}}\) for which \(\bar{\rho}_{E,p}\) is of the form \(\left(\begin{array}{cc}\varphi_{1}&0\\ *&\varphi_{2}\end{array}\right)\) described above, and such that \(\mu_{p}(E)=0\). The case of interest falls under "type \(3\)" under the classification in section 6. We note here that the methods developed in this paper do not immediately generalize to elliptic curves over arbitrary number fields, since the arguments make use of the fact that there is only one prime of \(\mathbb{Q}_{\infty}\) that lies above \(p\). For a number field in which \(p\) splits into \(2\) or more primes, the same reasoning no longer applies.
In fact, Drinen [12] proved that there are large enough number fields over which the analogue of part (1) of Greenberg's conjecture does not hold. ### Organization Including the introduction, the article consists of \(6\) sections. In section 2, we introduce basic notation and conventions, and introduce the Greenberg Selmer group associated with an ordinary Galois representation. These Selmer groups are modules over the Iwasawa algebra, and the structure theory of such modules leads to the definition of the \(\mu\)-invariant. When the Selmer group is cotorsion over the Iwasawa algebra, the \(\mu\)-invariant vanishes precisely when it is cofinitely generated as a \(\mathbb{Z}_{p}\)-module. We conclude section 2 by discussing the relationship between the Bloch-Kato Selmer group and the Greenberg Selmer group. It turns out that for elliptic curves \(E_{/\mathbb{Q}}\), the Greenberg Selmer group coincides with the classical Selmer group for which the local conditions are defined via Kummer maps. The Greenberg Selmer groups are however more convenient to work with when employing Galois cohomological arguments. In section 3, we introduce a Selmer group associated to the residual representation. Such Selmer groups were considered by Greenberg and Vatsal in [10] in studying the role of congruences between elliptic curves in Iwasawa theory. In _loc. cit._, certain imprimitive residual Selmer groups are defined, i.e., the local conditions at a number of primes are not imposed on these Selmer groups. It is necessary to work with such imprimitive conditions when studying the effect of congruences on the \(\lambda\)-invariant. In this article however, we only study the \(\mu\)-invariant, and therefore work with primitive residual Selmer groups. The section ends with Proposition 3.3, which shows that the \(\mu\)-invariant vanishes for the Greenberg Selmer group of an ordinary representation if and only if the (primitive) residual Selmer group is finite. Such a result is well known to the experts; however, we include it for completeness. Section 4 is devoted to the definition and basic properties of the fine Selmer group associated to a continuous Galois representation. We discuss a conjecture of Coates and Sujatha on the vanishing of the \(\mu\)-invariant. We end section 4 by recalling a key result on the vanishing of this \(\mu\)-invariant. In section 5 we introduce a key assumption on the residual representation. This assumption allows us to relate the Greenberg Selmer group to the fine Selmer group. It is shown that the residual fine Selmer group is of finite index in the residual Greenberg Selmer group, provided the residual representation satisfies the additional hypothesis. At the end of section 5, we give a proof of Theorem 2. Finally, section 6 is devoted to the proof of Theorem 1. We give a classification of the residual representation into three types. It is shown that the \(\mu\)-invariant is positive when the residual representation is of type 1, and is zero when it is of type 2 or 3. It is in the case when the representation is of type 3 that the results from section 5 are applied. We then recall a theorem of Schneider, which describes the difference between the \(\mu\)-invariants of isogenous elliptic curves. We use this result, along with our classification theorem, to prove Theorem 1. ### Acknowledgments The author's research is supported by the CRM Simons postdoctoral fellowship. He thanks Jeffrey Hatley, Antonio Lei and Shaunak V. Deo for some helpful comments. ## 2.
Notation and Preliminaries ### Notation In this section, we introduce some standard notation. * Throughout \(p\) will denote an odd prime number and \(K/\mathbb{Q}_{p}\) a finite extension. * Let \(\mathbb{F}_{p}\) be the field with \(p\) elements. * Denote by \(\mathcal{O}\) the valuation ring in \(K\). Let \(\varpi\) be a uniformizer of \(\mathcal{O}\) and set \(\mathbb{F}:=\mathcal{O}/\varpi\). * Let \(\bar{\mathbb{Q}}\) be an algebraic closure of \(\mathbb{Q}\). For a subfield \(F\) of \(\bar{\mathbb{Q}}\), we set \(\mathrm{G}_{F}\) to denote the absolute Galois group \(\mathrm{Gal}(\bar{\mathbb{Q}}/F)\). We make the convention that all algebraic extensions of \(\mathbb{Q}\) considered are contained in \(\bar{\mathbb{Q}}\). * For each prime \(\ell\), set \(\mathrm{G}_{\ell}:=\mathrm{Gal}(\bar{\mathbb{Q}}_{\ell}/\mathbb{Q}_{\ell})\). Choose an embedding \(\iota_{\ell}:\bar{\mathbb{Q}}\hookrightarrow\bar{\mathbb{Q}}_{\ell}\); set \(\iota_{\ell}^{*}:\mathrm{G}_{\ell}\hookrightarrow\mathrm{G}_{\mathbb{Q}}\) to denote the induced inclusion. * Given a finite set of prime numbers \(\Sigma\), set \(\mathbb{Q}_{\Sigma}\) to be the maximal algebraic extension of \(\mathbb{Q}\) in which all primes \(\ell\notin\Sigma\) are unramified. * Let \(\mathcal{F}\) be an extension of \(\mathbb{Q}\) contained in \(\mathbb{Q}_{\Sigma}\), set \(H^{i}(\mathbb{Q}_{\Sigma}/\mathcal{F},\cdot):=H^{i}\left(\mathrm{Gal}(\mathbb{ Q}_{\Sigma}/\mathcal{F}),\cdot\right)\). * Let \(\mu_{p^{n}}\subset\bar{\mathbb{Q}}\) denote the \(p^{n}\)-th roots of unity, and \(\mathbb{Q}(\mu_{p^{n}})\) be the cyclotomic field generated by \(\mu_{p^{n}}\). Set \(\mathbb{Q}(\mu_{p^{\infty}})\) to denote the union of number fields \(\mathbb{Q}(\mu_{p^{n}})\) for \(n\in\mathbb{Z}_{\geq 1}\). * Set \(\mathcal{G}_{\infty}:=\operatorname{Gal}(\mathbb{Q}(\mu_{p^{\infty}})/ \mathbb{Q})\) and \(\chi:\mathcal{G}_{\infty}\xrightarrow{\sim}\mathbb{Z}_{p}^{\times}\) be the \(p\)-adic cyclotomic character. * Denote by \(\mathbb{Q}_{\infty}\) the unique \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\) that is contained in \(\mathbb{Q}(\mu_{p^{\infty}})\); this is the _cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\)_. * Set \(\Gamma:=\operatorname{Gal}(\mathbb{Q}_{\infty}/\mathbb{Q})\) and \(\Delta:=\operatorname{Gal}(\mathbb{Q}(\mu_{p})/\mathbb{Q})\), note that \(\mathcal{G}_{\infty}\simeq\Delta\times\Gamma\). * For \(n\in\mathbb{Z}_{\geq 1}\), let \(\mathbb{Q}_{n}\) be the subfield of \(\mathbb{Q}_{\infty}\) with \(\operatorname{Gal}(\mathbb{Q}_{\infty}/\mathbb{Q}_{n})=\Gamma^{p^{n}}\). Identify \(\operatorname{Gal}(\mathbb{Q}_{n}/\mathbb{Q})\) with \(\Gamma_{n}:=\Gamma/\Gamma^{p^{n}}\). * Given a prime \(\eta\) of \(\mathbb{Q}_{\infty}\), denote by \(\mathbb{Q}_{\infty,\eta}\) the union of completions of the finite layers \(\mathbb{Q}_{n}\) at \(\eta\). * Let \(\eta_{p}\) be the unique prime of \(\mathbb{Q}_{\infty}\) that lies above \(p\). * The _Iwasawa algebra_ \(\Lambda\) over \(\mathcal{O}\) is the completed group algebra \(\varprojlim_{n}\mathcal{O}[\Gamma_{n}]\). * Let \(\gamma\) be a topological generator of \(\Gamma\); setting \(T:=\gamma-1\), we identify \(\Lambda\) with the formal power series ring \(\mathcal{O}[\![T]\!]\). ### The Greenberg Selmer group We recall the definition of the Greenberg Selmer group associated to an ordinary Galois representation. These Selmer groups are considered over \(\mathbb{Q}_{\infty}\), and were introduced in [6, 89]. We follow the notation and conventions in [60].
Let \(n\geq 2\) be an integer, and \(M\) be a free \(\mathcal{O}\)-module of rank \(n\), equipped with a continuous \(\mathcal{O}\)-linear action of \(\mathrm{G}_{\mathbb{Q}}\). Choose an \(\mathcal{O}\)-basis for \(M\), and identify the group of \(\mathcal{O}\)-linear automorphisms of \(M\) with \(\operatorname{GL}_{n}(\mathcal{O})\). The Galois action on \(M\) is encoded by a continuous Galois representation \[\rho_{M}:\mathrm{G}_{\mathbb{Q}}\to\operatorname{GL}_{n}(\mathcal{O}).\] Tensoring with \(K\), we obtain the representation on the \(K\)-vector space \(M_{K}:=M\otimes_{\mathcal{O}}K\), denoted \[\rho_{M_{K}}:\mathrm{G}_{\mathbb{Q}}\to\operatorname{GL}_{n}(K).\] Set \(d:=\dim_{K}M_{K}\), and \(d^{\pm}\) to denote the \(K\)-dimensions of the plus and minus eigenspaces of \(M_{K}\) for the action of complex conjugation. We set \(A:=M_{K}/M\); we take note of the fact that \(A\simeq(K/\mathcal{O})^{d}\). **Assumption 2.1**.: _With respect to notation above, assume that_ 1. _there are finitely many prime numbers at which_ \(\rho_{M}\) _is ramified._ 2. _The representation_ \(\rho_{M_{K}}\) _is irreducible._ 3. _There exists a_ \(\mathrm{G}_{p}\)_-stable_ \(K\)_-subspace_ \(W\) _of_ \(M_{K}\) _of dimension_ \(d^{+}\) _such that the action of_ \(\mathrm{G}_{p}\) _on_ \(M_{K}/W\) _is unramified._ A Galois representation satisfying the conditions of [6, p.98] is referred to as _ordinary_ and satisfies the above assumption. Let \(C\) be the image of \(W\) in \(A\), and note that \(C\simeq(K/\mathcal{O})^{d^{+}}\). Setting \(D:=A/C\), we find that \(D\simeq(K/\mathcal{O})^{d^{-}}\). Following [60, section 2], we recall the definition of the Greenberg Selmer group associated with the pair \((A,C)\). For \(\ell\neq p\), set \[\mathcal{H}_{\ell}(\mathbb{Q}_{\infty},A):=\prod_{\eta|\ell}H^{1}(\mathbb{Q}_ {\infty,\eta},A),\] where \(\eta\) runs through the primes of \(\mathbb{Q}_{\infty}\) that lie above \(\ell\). The set of primes \(\eta\) of \(\mathbb{Q}_{\infty}\) that lie above any given rational prime is finite. Denote by \(I_{\eta_{p}}\) the inertia group of \(\operatorname{Gal}\left(\overline{\mathbb{Q}_{\infty,\eta_{p}}}/\mathbb{Q}_{ \infty,\eta_{p}}\right)\). We let \(L_{\eta_{p}}\) be defined as follows \[L_{\eta_{p}}:=\ker\left(H^{1}(\mathbb{Q}_{\infty,\eta_{p}},A)\stackrel{{ \kappa_{p}}}{{\longrightarrow}}H^{1}(I_{\eta_{p}},D)\right),\] where \(\kappa_{p}\) is the composite of the natural maps \[H^{1}(\mathbb{Q}_{\infty,\eta_{p}},A)\to H^{1}(I_{\eta_{p}},A)\] \[H^{1}(I_{\eta_{p}},A)\to H^{1}(I_{\eta_{p}},D).\] The first of the above maps is the restriction map and the second map is induced by the map \(A\to D\). The local condition at \(p\) is prescribed as follows \(\mathcal{H}_{p}(\mathbb{Q}_{\infty},A):=H^{1}(\mathbb{Q}_{\infty,\eta_{p}},A)/L_{\eta_{p}}\). The Selmer group over \(\mathbb{Q}_{\infty}\) is defined as follows \[S_{A}(\mathbb{Q}_{\infty}):=\ker\left(H^{1}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{ \infty},A)\to\bigoplus_{\ell\in\Sigma}\mathcal{H}_{\ell}(\mathbb{Q}_{\infty}, A)\right)\] where \(\Sigma\) is a finite set of prime numbers containing \(p\) and all prime numbers at which \(\rho_{M}\) is ramified. As is well known, the Selmer group defined above is independent of the choice of primes \(\Sigma\). Given a module \(\mathbf{M}\) over \(\Lambda\), set \(\mathbf{M}^{\vee}:=\operatorname{Hom}_{\mathbb{Z}_{p}}\left(\mathbf{M}, \mathbb{Q}_{p}/\mathbb{Z}_{p}\right)\) to denote its Pontryagin dual. We say that \(\mathbf{M}\) is cofinitely generated (resp.
cotorsion) over \(\Lambda\) if \(\mathbf{M}^{\vee}\) is finitely generated (resp. torsion) as a \(\Lambda\)-module. The Selmer group \(S_{A}(\mathbb{Q}_{\infty})\) is cofinitely generated as a module over \(\Lambda\). Throughout we make the following assumption. **Assumption 2.2** (Cotorsion hypothesis).: _The Selmer group \(S_{A}(\mathbb{Q}_{\infty})\) is cofinitely generated over \(\Lambda\)._ This assumption is known to hold in the main case of interest, namely for ordinary Galois representations associated with rational elliptic curves, cf. [1, Theorem 1.5]. ### The Iwasawa \(\mu\)-invariant Let \(\mathbf{M}\) be a cofinitely generated and cotorsion \(\Lambda\)-module. We recall the definition of the Iwasawa \(\mu\)-invariant associated to \(\mathbf{M}\). A map of \(\Lambda\)-modules \(\mathbf{M}_{1}\to\mathbf{M}_{2}\) is said to be a _pseudo-isomorphism_ if its kernel and cokernel are both finite. A polynomial \(f(T)\in\Lambda\) is _distinguished_ if it is a monic polynomial and all non-leading coefficients are divisible by \(\varpi\). According to the structure theorem of \(\Lambda\)-modules [10, Chapter 13], there is a pseudo-isomorphism of the form \[\mathbf{M}^{\vee}\longrightarrow\left(\bigoplus_{i=1}^{s}\Lambda/(\varpi^{ \mu_{i}})\right)\oplus\left(\bigoplus_{j=1}^{t}\Lambda/\left(f_{j}(T)^{\lambda _{j}}\right)\right), \tag{2.1}\] where \(\mu_{i}\) are positive integers and \(f_{j}(T)\) are irreducible distinguished polynomials. The \(\mu\)-invariant of \(\mathbf{M}\) is defined as follows \[\mu(\mathbf{M}):=\sum_{i=1}^{s}\mu_{i},\] where we set \(\mu(\mathbf{M})=0\) if \(s=0\). From the definition of the \(\mu\)-invariant, it is clear that \(\mu(\mathbf{M})=0\) if and only if \(\mathbf{M}^{\vee}\) is finitely generated as a \(\mathbb{Z}_{p}\)-module. **Proposition 2.3**.: _Let \(\mathbf{M}\) be a cofinitely generated and cotorsion \(\Lambda\)-module. Then, \(\mu(\mathbf{M})=0\) if and only if \(\mathbf{M}[\varpi]\) is finite._ Proof.: The result is a direct consequence of the structure theorem for \(\Lambda\)-modules. We have a pseudo-isomorphism \[\mathbf{M}^{\vee}\longrightarrow\left(\bigoplus_{i=1}^{s}\Lambda/(\varpi^{ \mu_{i}})\right)\oplus\left(\bigoplus_{j=1}^{t}\Lambda/\left(f_{j}(T)^{\lambda _{j}}\right)\right),\] as described in (2.1). Let \(\Omega\) denote the mod-\(\varpi\) reduction of \(\Lambda\). We identify \(\left(\mathbf{M}[\varpi]\right)^{\vee}\) with \(\mathbf{M}^{\vee}/\varpi\mathbf{M}^{\vee}\). The mod-\(\varpi\) reduction of the above map is a pseudo-isomorphism \[\left(\mathbf{M}[\varpi]\right)^{\vee}\longrightarrow\Omega^{s}\oplus\left( \bigoplus_{j=1}^{t}\Omega/(T^{d_{j}\lambda_{j}})\right),\] where \(d_{j}=\deg f_{j}(T)\). Clearly, \(\Omega/(T^{d_{j}\lambda_{j}})\) is a finite dimensional \(\mathbb{F}\)-vector space, and \(\Omega\) is infinite. Therefore, \(\mathbf{M}[\varpi]\) is finite if and only if \(s=0\). We note that \(s=0\) if and only if \(\mu(\mathbf{M})=0\); this proves the result. ### Selmer groups associated to modular forms and elliptic curves Let \(\tau\) be a variable in the complex upper half plane and set \(q:=\exp(2\pi i\tau)\). We take \(f=\sum_{n=1}^{\infty}a_{n}(f)q^{n}\) to be a normalized Hecke eigencuspform of weight \(k\geq 2\). We assume that with respect to the embedding \(\iota_{p}\), the modular form \(f\) is ordinary. This means that \(\iota_{p}(a_{p}(f))\) is a unit of \(\mathcal{O}\). Let \(K\) be the extension of \(\mathbb{Q}_{p}\) generated by \(\{\iota_{p}(a_{n}(f))\mid n\in\mathbb{Z}_{\geq 1}\}\).
We note that \(K\) is a finite extension of \(\mathbb{Q}_{p}\). Let \(\rho_{f,\iota_{p}}:\mathrm{G}_{\mathbb{Q}}\rightarrow\mathrm{GL}_{2}(K)\) be the Galois representation associated to \((f,\iota_{p})\). Let \(V=V_{f,\iota_{p}}\) be the underlying \(2\)-dimensional \(K\)-vector space on which \(\mathrm{G}_{\mathbb{Q}}\) acts by \(K\)-linear automorphisms. We choose a Galois stable \(\mathcal{O}\)-lattice \(M\) in \(V\), and set \(A:=V/M\). As a module over \(\mathrm{G}_{p}\), \(M\) fits into a short exact sequence \[0\to M_{0}\to M\to M_{1}\to 0.\] The modules \(M_{0}\) and \(M_{1}\) are uniquely determined by the property that \(M_{0}\simeq\mathcal{O}(\alpha\chi^{k-1})\) and \(M_{1}\simeq\mathcal{O}(\alpha^{\prime})\), where \(\alpha\) and \(\alpha^{\prime}\) are unramified characters. We set \(W:=(M_{0})\otimes_{\mathcal{O}}K\) and \(C:=\mathrm{image}\left(W\to A\right)\). Then the Greenberg Selmer group \(S_{A}(\mathbb{Q}_{\infty})\) associated to \((A,C)\) clearly satisfies Assumption 2.1. As is well known, in this context, the Greenberg Selmer group is pseudo-isomorphic to the Bloch-Kato Selmer group considered over \(\mathbb{Q}_{\infty}\) (cf. [1, Corollary 4.3] for further details). It follows from results of Kato [10] that these Selmer groups are cotorsion over \(\Lambda\), i.e., Assumption 2.2 is satisfied. We note that the Selmer group \(S_{A}(\mathbb{Q}_{\infty})\) depends on the choice of embedding \(\iota_{p}\) and the choice of Galois stable \(\mathcal{O}\)-lattice \(M\). Let us now consider Galois representations associated with elliptic curves over \(\mathbb{Q}\). Let \(E_{/\mathbb{Q}}\) be an elliptic curve with good ordinary reduction at \(p\), and let \(M:=T_{p}(E)\) be its \(p\)-adic Tate-module. The \(p\)-divisible Galois module \(A\) is identified with \(E[p^{\infty}]\), the \(p\)-power torsion points in \(E(\bar{\mathbb{Q}})\). Since \(E\) has ordinary reduction at \(p\), there is a unique \(\mathbb{Z}_{p}[\mathrm{G}_{p}]\)-submodule \(C\simeq\mathbb{Q}_{p}/\mathbb{Z}_{p}(\alpha\chi)\) of \(E[p^{\infty}]\), where \(\alpha:\mathrm{G}_{p}\rightarrow\mathbb{Z}_{p}^{\times}\) is an unramified character. The quotient \(D:=A/C\) is unramified. The Greenberg Selmer group associated to \((A,C)\) is then denoted \(\operatorname{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\). Since \(E\) arises from a Hecke eigencuspform of weight \(2\), it follows from results of Kato [10] that \(\operatorname{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)^{\vee}\) is a torsion \(\Lambda\)-module, i.e., Assumption 2.2 is satisfied. In this setting, there is no ambiguity in the definition of the Selmer group, since the field of coefficients equals \(\mathbb{Q}_{p}\), and the \(\mathbb{Z}_{p}\)-Galois module is prescribed to be the \(p\)-adic Tate module of \(E\). The Greenberg Selmer group coincides with the classical Selmer group, where the local conditions are defined via Kummer maps. We refer to [10, section 2] for further details. Throughout, we shall set \(\mu_{p}(E)\) to denote the \(\mu\)-invariant of the Selmer group \(\operatorname{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\). If \(E^{\prime}\) is another elliptic curve over \(\mathbb{Q}\) which is \(\mathbb{Q}\)-isogenous to \(E\), then \(T_{p}(E^{\prime})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\) is isomorphic to \(T_{p}(E)\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\) as a \(\mathbb{Q}_{p}[\operatorname{G}_{\mathbb{Q}}]\)-module; however, \(T_{p}(E)\) need not be isomorphic to \(T_{p}(E^{\prime})\).
It is possible that \(\mu_{p}(E^{\prime})=0\), while \(\mu_{p}(E)>0\), cf. [11, section 7]. ## 3. The residual Selmer group and the vanishing of the \(\mu\)-invariant ### The residual Selmer group Let \(M\) be a module over \(\operatorname{G}_{\mathbb{Q}}\) for which Assumption 2.1 is satisfied. Associated with \(S_{A}(\mathbb{Q}_{\infty})\) is the residual Selmer group associated to the pair \((A,C)\). Stipulate that the cotorsion Assumption 2.2 is also satisfied. Set \(A[\varpi^{n}]\) to denote the kernel of the multiplication by \(\varpi^{n}\) endomorphism of \(A\). We denote by \(\bar{A}:=A[\varpi]\), and refer to the representation \[\rho_{\bar{A}}:\operatorname{G}_{\mathbb{Q}}\to\operatorname{Aut}_{\mathbb{F}} (\bar{A})\xrightarrow{\sim}\operatorname{GL}_{n}(\mathbb{F})\] as the _residual representation_. This is because we may identify \(\bar{A}:=A[\varpi]\) with \(M/\varpi M\), and thus think of \(\rho_{\bar{A}}\) as the mod-\(\varpi\) reduction of \(\rho_{M}\). We let \(\bar{C}:=C[\varpi]\), and \(\bar{D}:=\bar{A}/\bar{C}\); note that the vector spaces \(\bar{A},\bar{D}\) and \(\bar{C}\) are \(\mathbb{F}[\operatorname{G}_{p}]\)-modules and they fit into a short exact sequence \[0\to\bar{C}\to\bar{A}\to\bar{D}\to 0.\] We now introduce the _residual Selmer group_ associated to \((\bar{A},\bar{C})\). For \(\ell\neq p\), set \[\mathcal{H}_{\ell}(\mathbb{Q}_{\infty},\bar{A}):=\prod_{\eta|\ell}H^{1}( \mathbb{Q}_{\infty,\eta},\bar{A}),\] where \(\eta\) runs over all primes of \(\mathbb{Q}_{\infty}\) that lie above \(\ell\). At \(p\), the local condition is defined by setting \(\mathcal{H}_{p}(\mathbb{Q}_{\infty},\bar{A}):=H^{1}(\mathbb{Q}_{\infty,\eta_{ p}},\bar{A})/\bar{L}_{\eta_{p}}\), where \[\bar{L}_{\eta_{p}}:=\ker\left(H^{1}(\mathbb{Q}_{\infty,\eta_{p}},\bar{A}) \xrightarrow{\bar{\kappa}_{p}}H^{1}(I_{\eta_{p}},\bar{D})\right);\] the map \(\bar{\kappa}_{p}\) is the mod-\(\varpi\) reduction of \(\kappa_{p}\). **Definition 3.1**.: _With respect to notation above, the residual Selmer group is defined as follows_ \[S_{\bar{A}}(\mathbb{Q}_{\infty}):=\ker\left(H^{1}(\mathbb{Q}_{\Sigma}/\mathbb{ Q}_{\infty},\bar{A})\to\bigoplus_{\ell\in\Sigma}\mathcal{H}_{\ell}( \mathbb{Q}_{\infty},\bar{A})\right).\] We note in passing that this Selmer group depends not only on the residual representation, but also on the choice of \(\bar{C}\). For an elliptic curve \(E_{/\mathbb{Q}}\) with good ordinary reduction at \(p\), the space \(\bar{A}\) is identified with \(E[p]\), and there is a unique one dimensional subspace \(\bar{C}\) on which the inertia group at \(p\) acts via the mod-\(p\) cyclotomic character. Therefore, there is no ambiguity in the definition when it is specialized to an elliptic curve with good ordinary reduction at \(p\). We now study the relationship between the residual Selmer group and the \(\mu\)-invariant of \(S_{A}(\mathbb{Q}_{\infty})\). **Lemma 3.2**.: _There is a natural map_ \[g:S_{\bar{A}}(\mathbb{Q}_{\infty})\to S_{A}(\mathbb{Q}_{\infty})[\varpi]\] _with finite kernel and cokernel._ Proof.: Recall that \(\bar{A}=A[\varpi]\); consider the Kummer sequence of \(\mathbb{Z}_{p}[\mathrm{G}_{\mathbb{Q}}]\)-modules \[0\to\bar{A}\to A\xrightarrow{\times\varpi}A\to 0. \tag{3.1}\] This induces an exact sequence \[0\to\left(\frac{H^{0}\left(\mathbb{Q}_{\infty},A\right)}{\varpi H^{0}\left( \mathbb{Q}_{\infty},A\right)}\right)\to H^{1}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_ {\infty},\bar{A})\xrightarrow{f}H^{1}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty},A)[\varpi]\to 0.
\tag{3.2}\] Let \(\ell\) be a prime, and \(\eta\) be a prime of \(\mathbb{Q}_{\infty}\) that lies above \(\ell\). From the Kummer sequence (3.1), we obtain an exact sequence \[0\to\left(\frac{H^{0}\left(\mathbb{Q}_{\infty,\eta},A\right)}{\varpi H^{0} \left(\mathbb{Q}_{\infty,\eta},A\right)}\right)\to H^{1}(\mathbb{Q}_{\infty, \eta},\bar{A})\xrightarrow{f_{\eta}}H^{1}(\mathbb{Q}_{\infty,\eta},A)[\varpi ]\to 0. \tag{3.3}\] For \(\ell\neq p\), we let \[f_{\ell}:\mathcal{H}_{\ell}(\mathbb{Q}_{\infty},\bar{A})\to\mathcal{H}_{\ell} (\mathbb{Q}_{\infty},A)[\varpi]\] be the product of the maps \(f_{\eta}\), where \(\eta\) ranges over the primes above \(\ell\). Since \(\ell\) is finitely decomposed in \(\mathbb{Q}_{\infty}\), it follows from (3.3) that the kernel of \(f_{\ell}\) is finite. Consider the commutative square relating \(\bar{\kappa}_{p}\) and \(\kappa_{p}\). We identify \(\mathcal{H}_{p}(\mathbb{Q}_{\infty},A)\) (resp. \(\mathcal{H}_{p}(\mathbb{Q}_{\infty},\bar{A})\)) with the image of \(\kappa_{p}\) (resp. \(\bar{\kappa}_{p}\)). From the commutativity of this square, we obtain a map \[f_{p}:\mathcal{H}_{p}(\mathbb{Q}_{\infty},\bar{A})\to\mathcal{H}_{p}(\mathbb{ Q}_{\infty},A)[\varpi].\] From the exact sequence of \(I_{\eta_{p}}\)-modules \[0\to D[\varpi]\to D\xrightarrow{\times\varpi}D\to 0,\] we obtain an exact sequence \[0\to\frac{H^{0}(I_{\eta_{p}},D)}{\varpi H^{0}(I_{\eta_{p}},D)}\to H^{1}(I_{ \eta_{p}},D[\varpi])\to H^{1}(I_{\eta_{p}},D)\to 0.\] We note that \(D\) is divisible and unramified at \(\eta_{p}\); hence, \(H^{0}(I_{\eta_{p}},D)=D\). Since \(D\) is divisible, it follows that the map \[H^{1}(I_{\eta_{p}},D[\varpi])\to H^{1}(I_{\eta_{p}},D)\] is injective, and hence \(f_{p}\) is injective. The map \(f\) restricts to a map \[g:S_{\bar{A}}(\mathbb{Q}_{\infty})\to S_{A}(\mathbb{Q}_{\infty})[\varpi],\] which fits into a commutative diagram with \(f\) and the map \(h\) induced by the \(f_{\ell}\) and \(f_{p}\) on the local conditions. Let \(h^{\prime}\) be the restriction of \(h\) to the image of \(\Phi_{V}\). From the snake lemma, we obtain an exact sequence \[0\to\ker g\to\ker f\to\ker h^{\prime}\to\operatorname{coker}g\to\operatorname {coker}f=0.\] From (3.2), we know that the kernel of \(f\) is finite, and hence, the kernel of \(g\) is finite. We have shown that the kernel of \(h\) is finite, and hence, \(\ker h^{\prime}\) is finite. Therefore, we find that both the kernel and cokernel of \(g\) are finite, and this completes the proof. **Proposition 3.3**.: _Suppose that the Assumptions 2.1 and 2.2 are satisfied. Then, the following conditions are equivalent._ 1. _The_ \(\mu\)_-invariant of_ \(S_{A}(\mathbb{Q}_{\infty})\) _is equal to_ \(0\)_._ 2. _The residual Selmer group_ \(S_{\bar{A}}(\mathbb{Q}_{\infty})\) _is finite._ Proof.: It follows from Proposition 2.3 that the \(\mu\)-invariant of \(S_{A}(\mathbb{Q}_{\infty})\) is \(0\) if and only if \(S_{A}(\mathbb{Q}_{\infty})[\varpi]\) is finite. Then it follows from Lemma 3.2 that \(S_{A}(\mathbb{Q}_{\infty})[\varpi]\) is finite if and only if \(S_{\bar{A}}(\mathbb{Q}_{\infty})\) is finite. This completes the proof. ## 4. The fine Selmer group In this section, we recall the definition of the fine Selmer group associated to \(A\). For further details, we refer to [1, 1]. We do _not_ insist that \(A\) satisfies the Assumption 2.1 in this section. Recall that \(\Sigma\) is a finite set of prime numbers containing \(p\) and the primes that are ramified in \(A\). Let \(F\) be a number field and \(F_{\infty}\) be the composite of \(F\) with \(\mathbb{Q}_{\infty}\). 
For any prime \(\ell\), set \(\mathcal{K}_{\ell}(F_{\infty},A):=\prod_{\eta\mid\ell}H^{1}(F_{\infty,\eta},A)\), where \(\eta\) runs over the primes of \(F_{\infty}\) that lie above \(\ell\). We note that for any prime \(\ell\neq p\), the local condition \(\mathcal{K}_{\ell}(\mathbb{Q}_{\infty},A)\) coincides with \(\mathcal{H}_{\ell}(\mathbb{Q}_{\infty},A)\); the difference lies at the prime \(p\). The _fine Selmer group_ is defined as follows \[S_{A}^{0}(F_{\infty}):=\ker\left(H^{1}(F_{\Sigma}/F_{\infty},A)\to\bigoplus_{ \ell\in\Sigma}\mathcal{K}_{\ell}(F_{\infty},A)\right).\] Here, \(F_{\Sigma}\) is the maximal extension of \(F\) in which all primes \(\ell\notin\Sigma\) are unramified. As is well known, the definition above is independent of the choice of \(\Sigma\). For further details, we refer to [1, Lemma 3.2]. Given an elliptic curve \(E\) over a number field \(F\), and a prime number \(p\), let \(\operatorname{Sel}_{p^{\infty}}^{0}(F_{\infty},E)\) denote the fine Selmer group associated to \(A=E[p^{\infty}]\) over \(F_{\infty}\). Define the _residual fine Selmer group_ by setting \[\mathcal{K}_{\ell}(F_{\infty},\bar{A}):=\prod_{\eta|\ell}H^{1}(F_{\infty,\eta}, \bar{A})\] for all prime numbers \(\ell\in\Sigma\), and setting \[S^{0}_{\bar{A}}(F_{\infty}):=\ker\left(H^{1}(F_{\Sigma}/F_{\infty},\bar{A}) \rightarrow\bigoplus_{\ell\in\Sigma}\mathcal{K}_{\ell}(F_{\infty},\bar{A}) \right).\] The fine Selmer group fits into a left exact sequence \[0\to S^{0}_{A}(\mathbb{Q}_{\infty})\to S_{A}(\mathbb{Q}_{\infty}) \to L_{\eta_{p}}.\] **Lemma 4.1**.: _There is a natural map_ \[g_{0}:S^{0}_{\bar{A}}(\mathbb{Q}_{\infty})\to S^{0}_{A}(\mathbb{Q}_{ \infty})[\varpi]\] _with finite kernel and cokernel._ Proof.: The proof is similar to that of Lemma 3.2; we provide a sketch of the details. The map \(g_{0}\) is induced by restricting \(f\) to \(S^{0}_{\bar{A}}(\mathbb{Q}_{\infty})\). It is easy to see that the image of this restriction lies in \(S^{0}_{A}(\mathbb{Q}_{\infty})[\varpi]\). The map \(g_{0}\) fits into a natural commutative diagram in which the horizontal maps \(\Phi^{\prime}_{\bar{A}}\) and \(\Phi^{\prime}_{A}\) are induced by restriction maps. The kernels of both vertical maps in the diagram are finite. By the same argument as in the proof of Lemma 3.2, it follows that the kernel and cokernel of \(g_{0}\) are finite. **Proposition 4.2**.: _With respect to notation above, the following conditions are equivalent._ 1. _The_ \(\mu\)_-invariant of_ \(S^{0}_{A}(\mathbb{Q}_{\infty})\) _is equal to_ \(0\)_._ 2. _The residual fine Selmer group_ \(S^{0}_{\bar{A}}(\mathbb{Q}_{\infty})\) _is finite._ Proof.: It follows from Proposition 2.3 that the \(\mu\)-invariant of \(S^{0}_{A}(\mathbb{Q}_{\infty})\) is \(0\) if and only if \(S^{0}_{A}(\mathbb{Q}_{\infty})[\varpi]\) is finite. Then it follows from Lemma 4.1 that \(S^{0}_{A}(\mathbb{Q}_{\infty})[\varpi]\) is finite if and only if \(S^{0}_{\bar{A}}(\mathbb{Q}_{\infty})\) is finite. This completes the proof. At this point, it is pertinent to recall a conjecture of Coates and Sujatha on the structure of the fine Selmer group associated with an elliptic curve. For further details, see [13, Conjecture A]. **Conjecture 4.3** (Coates-Sujatha).: _Let \(E\) be an elliptic curve over a number field \(F\) and \(p\) be a prime at which \(E\) has good reduction. 
Then, the fine Selmer group \(\operatorname{Sel}^{0}_{p^{\infty}}(F_{\infty},E)\) is cofinitely generated as a \(\mathbb{Z}_{p}\)-module._ Under some additional conditions, the above conjecture is known to hold. Let \(F(E_{p^{\infty}})\) be the Galois extension of \(F\) generated by \(E_{p^{\infty}}\). In greater detail, letting \(\rho_{E,p}:\mathrm{G}_{F}\to\mathrm{GL}_{2}(\mathbb{Z}_{p})\) be the Galois representation on the \(p\)-adic Tate module of \(E\), the extension \(F(E_{p^{\infty}}):=\bar{F}^{\ker\rho_{E,p}}\). Note that \(\rho_{E,p}\) induces an inclusion of \(\mathrm{Gal}(F(E_{p^{\infty}})/F)\) into \(\mathrm{GL}_{2}(\mathbb{Z}_{p})\). **Theorem 4.4** (Coates-Sujatha).: _Let \(E_{/F}\) be an elliptic curve and \(p\) an odd prime such that \(F(E_{p^{\infty}})\) is a pro-\(p\) extension of \(F\). Then, the following conditions are equivalent_ 1. _Conjecture_ 4.3 _is valid, i.e.,_ \(\mathrm{Sel}_{p^{\infty}}^{0}(F_{\infty},E)\) _is cofinitely generated as a_ \(\mathbb{Z}_{p}\)_-module._ 2. _The classical Iwasawa_ \(\mu\)_-invariant_ \(\mu_{p}(F)\) _vanishes._ Proof.: The above result is [13, Theorem 3.4]. ## 5. Structure of the residual Greenberg Selmer group In this section, we prove some of the main results of the article, which will be of key importance in the proof of Theorem 1. At the end of this section, we shall prove Theorem 2. We begin by proving an explicit relationship between the residual Selmer group and the residual fine Selmer group. These residual Selmer groups were introduced in the previous section. It is necessary to introduce an assumption on the Galois action on the residual representation \(\bar{A}\). **Assumption 5.1**.: _Assume that \(\bar{C}\) does not contain a non-zero \(\mathrm{G}_{\mathbb{Q}_{\infty}}\)-submodule of \(\bar{A}\)._ For Galois representations associated with elliptic curves, we characterize precisely when the above Assumption holds. **Proposition 5.2**.: _Let \(E_{/\mathbb{Q}}\) be an elliptic curve with good ordinary reduction at \(p\), and let \(\bar{\rho}_{E,p}:\mathrm{G}_{\mathbb{Q}}\to\mathrm{GL}_{2}(\mathbb{F}_{p})\) be the Galois representation on the torsion subgroup \(E[p]\subset E(\bar{\mathbb{Q}})\). Thus, \(A=E[p^{\infty}]\) and \(\bar{A}=E[p]\). Let \((e_{1},e_{2})\) be an ordered basis of \(E[p]\) such that \(\bar{C}=\mathbb{F}_{p}\cdot e_{1}\). The following assertions hold._ 1. _Assume that_ \(\bar{\rho}_{E,p}\) _is irreducible. Then, the Assumption_ 5.1 _holds._ 2. _Assume that_ \(\bar{\rho}_{E,p}\) _is reducible and indecomposable, and, with respect to the basis_ \((e_{1},e_{2})\)_, takes the form_ \(\left(\begin{array}{cc}\varphi_{1}&0\\ *&\varphi_{2}\end{array}\right)\)_. Then, the Assumption_ 5.1 _holds._ 3. _Assume that_ \(\bar{C}\) _is a_ \(\mathrm{G}_{\mathbb{Q}}\)_-submodule of_ \(\bar{A}\)_. Then, the Assumption_ 5.1 _does not hold._ Proof.: Let \(\mathcal{G}\) denote the image of \(\bar{\rho}_{E,p}\), and let \(\mathcal{H}\) denote the image of its restriction to \(\mathrm{G}_{\mathbb{Q}_{\infty}}\). Since \(\mathbb{Q}_{\infty}/\mathbb{Q}\) is a \(\mathbb{Z}_{p}\)-extension, it follows that \(\mathcal{H}\) is a normal subgroup of \(\mathcal{G}\) of index \(|\mathcal{G}/\mathcal{H}|=p^{t}\), where \(t\in\mathbb{Z}_{\geq 0}\). If \(\mathcal{G}=\mathcal{H}\), then \(\bar{\rho}_{E,p}\) remains irreducible when restricted to \(\mathrm{G}_{\mathbb{Q}_{\infty}}\). Therefore, in this case, \(\bar{C}\) is not a \(\mathrm{G}_{\mathbb{Q}_{\infty}}\)-submodule and the Assumption 5.1 holds. Therefore, we assume that \(p\) divides \(|\mathcal{G}/\mathcal{H}|\). Since \(p\) divides \(|\mathcal{G}|\), it follows that either \(\mathcal{G}\) contains \(\mathrm{SL}_{2}(\mathbb{F}_{p})\), or \(\mathcal{G}\) is contained in a Borel subgroup of \(\mathrm{GL}_{2}(\mathbb{F}_{p})\) (cf. [13, Proposition 3.1]). Since \(\bar{\rho}_{E,p}\) is irreducible, \(\mathcal{G}\) is not contained in a Borel subgroup. 
Hence, \(\mathcal{G}\) contains \(\mathrm{SL}_{2}(\mathbb{F}_{p})\). Suppose that \(\mathcal{H}\) is contained in a Borel subgroup; then we find that \(|\mathcal{H}|\) divides \(p(p-1)^{2}\). On the other hand, since \(\mathcal{G}\) contains \(\mathrm{SL}_{2}(\mathbb{F}_{p})\), it follows that \(|\mathcal{G}|\) is divisible by \(|\,\mathrm{SL}_{2}(\mathbb{F}_{p})|=(p^{2}-1)p\). Since \(|\mathcal{G}|=p^{t}|\mathcal{H}|\), it follows that \((p^{2}-1)\) divides \(p(p-1)^{2}\), which is a contradiction. Therefore, \(\mathcal{H}\) is not contained in a Borel subgroup and hence, the representation \(\bar{\rho}_{E,p}\) is irreducible when restricted to \(\mathrm{G}_{\mathbb{Q}_{\infty}}\). Therefore, \(\bar{A}\) does not contain any non-zero proper \(\mathrm{G}_{\mathbb{Q}_{\infty}}\)-submodules, and this completes the proof of part (1). For the proof of part (2), it suffices to show that \(\bar{\rho}_{E,p}\) remains indecomposable even after restriction to \(\mathrm{G}_{\mathbb{Q}_{\infty}}\). We define a function \(\beta:\mathrm{Gal}(\mathbb{Q}_{\Sigma}/\mathbb{Q})\to\mathbb{F}_{p}\) by setting \(\beta(g)\) to denote the lower left entry of \(\bar{\rho}_{E,p}(g)\). It is easy to see that \(\beta\) gives rise to a cocycle \[\beta\in Z^{1}\left(\mathbb{Q}_{\Sigma}/\mathbb{Q},\mathbb{F}_{p}(\varphi_{2} \varphi_{1}^{-1})\right).\] Let \([\beta]\in H^{1}\left(\mathbb{Q}_{\Sigma}/\mathbb{Q},\mathbb{F}_{p}(\varphi_{ 2}\varphi_{1}^{-1})\right)\) denote the corresponding cohomology class. Since it is assumed that \(\bar{\rho}_{E,p}\) is indecomposable, it follows that \([\beta]\) is a non-zero cohomology class. In order to show that the restriction \[\bar{\rho}_{E,p}:\mathrm{Gal}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty})\to \mathrm{GL}_{2}(\mathbb{F}_{p})\] is indecomposable, it suffices to show that the restriction of \([\beta]\) to \(H^{1}\left(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty},\mathbb{F}_{p}(\varphi_{ 2}\varphi_{1}^{-1})\right)\) is non-zero. From the inflation-restriction sequence, the kernel of the restriction map \[H^{1}\left(\mathbb{Q}_{\Sigma}/\mathbb{Q},\mathbb{F}_{p}(\varphi_{2}\varphi_{1 }^{-1})\right)\to H^{1}\left(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty},\mathbb{ F}_{p}(\varphi_{2}\varphi_{1}^{-1})\right) \tag{5.1}\] is \(H^{1}\left(\mathbb{Q}_{\infty}/\mathbb{Q},\left(\mathbb{F}_{p}(\varphi_{2} \varphi_{1}^{-1})\right)^{\mathrm{G}_{\mathbb{Q}_{\infty}}}\right)\). Since \(\varphi_{1}\) is ramified at \(p\) and \(\varphi_{2}\) is unramified at \(p\), we find that \(\varphi_{2}\varphi_{1}^{-1}\neq 1\). Since \(\mathbb{Q}_{\infty}/\mathbb{Q}\) is a pro-\(p\) extension and the character \(\varphi_{2}\varphi_{1}^{-1}\) takes values in \(\mathbb{F}_{p}^{\times}\), we find that the restriction of \(\varphi_{2}\varphi_{1}^{-1}\) to \(\mathrm{G}_{\mathbb{Q}_{\infty}}\) is non-trivial. Therefore, \(\left(\mathbb{F}_{p}(\varphi_{2}\varphi_{1}^{-1})\right)^{\mathrm{G}_{ \mathbb{Q}_{\infty}}}=0\) and the restriction map (5.1) is injective. Hence, the restriction of \([\beta]\) to \(H^{1}\left(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty},\mathbb{F}_{p}(\varphi_{2} \varphi_{1}^{-1})\right)\) is non-zero. This proves that \[\bar{\rho}_{E,p}|_{\mathrm{G}_{\mathbb{Q}_{\infty}}}:\mathrm{G}_{\mathbb{Q}_{ \infty}}\to\mathrm{GL}_{2}(\mathbb{F}_{p})\] is indecomposable. We therefore have shown that \(\bar{C}\) is not a \(\mathrm{G}_{\mathbb{Q}_{\infty}}\)-submodule of \(\bar{A}\). This completes the proof of part (2). For part (3), observe that \(\bar{C}\) is a \(\mathrm{G}_{\mathbb{Q}}\)-submodule of \(\bar{A}\). 
In particular, it is a \(\mathrm{G}_{\mathbb{Q}_{\infty}}\)-submodule of \(\bar{A}\), and the Assumption 5.1 is not satisfied. The residual fine Selmer group \(S^{0}_{\bar{A}}(\mathbb{Q}_{\infty})\) is contained in \(S_{\bar{A}}(\mathbb{Q}_{\infty})\); set \[\bar{S}_{\bar{A}}(\mathbb{Q}_{\infty}):=\frac{S_{\bar{A}}(\mathbb{Q}_{\infty})}{ S^{0}_{\bar{A}}(\mathbb{Q}_{\infty})}.\] The finiteness of this quotient will be established in Theorem 5.4 at the end of this section. First, we introduce some further notation. Let \(L=\mathbb{Q}(\bar{A})\) be the field cut out by the residual representation. In other words, \(L\) is the field \(\bar{\mathbb{Q}}^{\ker\rho_{\bar{A}}}\), the field fixed by the kernel of the residual representation \(\rho_{\bar{A}}\). We note that \(\mathbb{Q}(\bar{A})\) is a finite Galois extension of \(\mathbb{Q}\) and the Galois group \(\operatorname{Gal}(\mathbb{Q}(\bar{A})/\mathbb{Q})\) is naturally isomorphic to the image of \(\rho_{\bar{A}}\); the representation \(\rho_{\bar{A}}\) induces an isomorphism \[\operatorname{Gal}(\mathbb{Q}(\bar{A})/\mathbb{Q})\xrightarrow{\sim}\operatorname {image}\rho_{\bar{A}}.\] We observe that \(\operatorname{G}_{L}\) is the kernel of \(\rho_{\bar{A}}\), and hence acts trivially on \(\bar{A}\). Let \(L_{\infty}\coloneqq L\cdot\mathbb{Q}_{\infty}\) be the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(L\). Let \(\beta\) be the prime of \(L_{\infty}\) above \(p\) determined by the choice of embedding \(\iota_{p}\), and denote by \(I_{\beta}\) the inertia group at \(\beta\). Note that \(I_{\beta}\) is contained in the inertia group \(I_{\eta_{p}}\). We define a Selmer group \(S_{\bar{A}}(L_{\infty})\) associated to \((\bar{A},\bar{C})\) over \(L_{\infty}\). For each prime number \(\ell\), we define a local condition \(\mathcal{H}_{\ell}(L_{\infty},\bar{A})\). For \(\ell\neq p\), set \[\mathcal{H}_{\ell}(L_{\infty},\bar{A}):=\prod_{\eta|\ell}H^{1}(L_{\infty,\eta},\bar{A}),\] where \(\eta\) runs through all primes of \(L_{\infty}\) that lie above \(\ell\). We note that this is a finite set of primes. At the prime \(p\), we set \[\mathcal{H}_{p}(L_{\infty},\bar{A}):=\left(\frac{H^{1}(L_{\infty,\beta},\bar{ A})}{\bar{L}_{\beta}}\right),\] where \[\bar{L}_{\beta}:=\ker\left(H^{1}(L_{\infty,\beta},\bar{A})\to H^{1}(I_{\beta},\bar{D})\right).\] Note that \(\rho_{\bar{A}}\) is unramified outside \(\Sigma\); hence, \(L\) is contained in \(\mathbb{Q}_{\Sigma}\). With respect to notation above, the residual Selmer group over \(L_{\infty}\) is defined as follows \[S_{\bar{A}}(L_{\infty}):=\ker\left(H^{1}(\mathbb{Q}_{\Sigma}/L_{\infty},\bar{ A})\to\bigoplus_{\ell\in\Sigma}\mathcal{H}_{\ell}(L_{\infty},\bar{A}) \right).\] We relate the two residual Selmer groups \(S_{\bar{A}}(\mathbb{Q}_{\infty})\) and \(S_{\bar{A}}(L_{\infty})\). We shall set \(G:=\operatorname{Gal}(L_{\infty}/\mathbb{Q}_{\infty})\). We note that \(\operatorname{Gal}(\bar{\mathbb{Q}}/L)\) is the kernel of \(\rho_{\bar{A}}\), and therefore the Galois action of \(\operatorname{Gal}(\bar{\mathbb{Q}}/L)\) on \(\bar{A}\) is trivial. We identify \(H^{1}(\mathbb{Q}_{\Sigma}/L_{\infty},\bar{A})\) with the group of homomorphisms \(\operatorname{Hom}\left(\operatorname{Gal}(\mathbb{Q}_{\Sigma}/L_{\infty}), \bar{A}\right)\). For \(g\in G\), take \(\tilde{g}\in\operatorname{Gal}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty})\) to be a lift of \(g\). 
Take \(\psi\in\operatorname{Hom}\left(\operatorname{Gal}(\mathbb{Q}_{\Sigma}/L_{ \infty}),\bar{A}\right)\); we note that since \(\bar{A}\) is abelian, \(\psi(\tilde{g}x\tilde{g}^{-1})\) is independent of the choice of lift \(\tilde{g}\). Define an action of \(G\) on \(\operatorname{Hom}\left(\operatorname{Gal}(\mathbb{Q}_{\Sigma}/L_{\infty}), \bar{A}\right)\) by setting \[(g\cdot\psi)(x):=g^{-1}\psi(\tilde{g}x\tilde{g}^{-1}).\] Therefore, a homomorphism \(\psi\) in \(\operatorname{Hom}\left(\operatorname{Gal}(\mathbb{Q}_{\Sigma}/L_{\infty}), \bar{A}\right)^{G}\) is one which is \(G\)-equivariant, in the sense that \[\psi(\tilde{g}x\tilde{g}^{-1})=g\psi(x).\] Consider the inflation-restriction sequence \[0\to H^{1}(G,\bar{A})\xrightarrow{inf}H^{1}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{ \infty},\bar{A})\xrightarrow{res}\operatorname{Hom}\left(\operatorname{Gal}( \mathbb{Q}_{\Sigma}/L_{\infty}),\bar{A}\right)^{G}. \tag{5.2}\] The restriction map \[\operatorname{res}:H^{1}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{\infty},\bar{A})\to H^{1} (\mathbb{Q}_{\Sigma}/L_{\infty},\bar{A})\] induces a map \[\operatorname{res}:S_{\bar{A}}(\mathbb{Q}_{\infty})\to S_{\bar{A}}(L_{\infty}).\] Since \(G\) is finite, \(H^{1}(G,\bar{A})\) is finite, and thus the kernel of this restriction map is finite. We let \(S_{\bar{A}}^{\operatorname{nr}}(L_{\infty})\) be the subspace of \(S_{\bar{A}}(L_{\infty})\) consisting of the classes that are unramified at \(\beta\). Note that \(S_{\bar{A}}(L_{\infty})\) consists of homomorphisms \[\psi:\operatorname{Gal}(\mathbb{Q}_{\Sigma}/L_{\infty})\to\bar{A}\] that satisfy the following conditions 1. \(\psi\) is trivial when restricted to the decomposition group of any prime \(\eta\) of \(L_{\infty}\) that lies above a prime \(\ell\in\Sigma\backslash\{p\}\), 2. \(\psi(I_{\beta})\) is contained in \(\bar{C}\). The subset \(S_{\bar{A}}^{\operatorname{nr}}(L_{\infty})\) consists of those classes for which \(\psi(I_{\beta})=0\). **Conjecture 5.3**.: _Suppose that Assumption 5.1 holds. Then, the image of the restriction map_ \[S_{\bar{A}}(L_{\infty})^{G}\to\operatorname{Hom}\big{(}I_{\beta},\bar{C}\big{)}\] _is finite._ **Theorem 5.4**.: _Let \((A,C)\) be such that Assumption 5.1 holds for \((\bar{A},\bar{C})\). Furthermore, assume that the Conjecture 5.3 is also satisfied. Then, the following assertions hold_ 1. \(\bar{S}_{\bar{A}}(\mathbb{Q}_{\infty}):=\frac{S_{\bar{A}}(\mathbb{Q}_{\infty}) }{S_{\bar{A}}^{0}(\mathbb{Q}_{\infty})}\) _is finite._ 2. _The_ \(\mu\)_-invariant of_ \(S_{A}(\mathbb{Q}_{\infty})\) _vanishes if and only if the_ \(\mu\)_-invariant of_ \(S_{A}^{0}(\mathbb{Q}_{\infty})\) _vanishes._ Proof of Theorem 5.4.: We shall set \(S_{\bar{A}}^{\operatorname{nr}}(\mathbb{Q}_{\infty})\) to consist of all classes \(f\in S_{\bar{A}}(\mathbb{Q}_{\infty})\) that are unramified at \(\eta_{p}\). It is easy to see that \(S_{\bar{A}}^{0}(\mathbb{Q}_{\infty})\) is of finite index in \(S_{\bar{A}}^{\operatorname{nr}}(\mathbb{Q}_{\infty})\). We begin by proving part (1). We have an exact sequence \[0\to S_{\bar{A}}^{\operatorname{nr}}(\mathbb{Q}_{\infty})\to S_{\bar{A}}( \mathbb{Q}_{\infty})\to H^{1}(I_{\eta_{p}},\bar{C}). \tag{5.3}\] Consider the commutative square relating restriction to \(L_{\infty}\) with localization at \(p\). It follows from Conjecture 5.3 that the image of the composed map \[S_{\bar{A}}(\mathbb{Q}_{\infty})\to H^{1}(I_{\beta},\bar{C})\] is finite. 
Since \(I_{\beta}\) has finite index in \(I_{\eta_{p}}\), it follows that the kernel of the restriction map \[H^{1}(I_{\eta_{p}},\bar{C})\to H^{1}(I_{\beta},\bar{C})\] is finite. Therefore, we find that the image of \[S_{\bar{A}}(\mathbb{Q}_{\infty})\to H^{1}(I_{\eta_{p}},\bar{C})\] is finite. From the exact sequence (5.3), we deduce that \(S_{\bar{A}}^{\mathrm{nr}}(\mathbb{Q}_{\infty})\) is of finite index in \(S_{\bar{A}}(\mathbb{Q}_{\infty})\). Therefore, \(S_{\bar{A}}^{0}(\mathbb{Q}_{\infty})\) is of finite index in \(S_{\bar{A}}(\mathbb{Q}_{\infty})\), and the statement of part (1) follows from this. It follows from part (1) that \(S_{\bar{A}}(\mathbb{Q}_{\infty})\) is finite if and only if \(S_{\bar{A}}^{0}(\mathbb{Q}_{\infty})\) is finite. Proposition 3.3 asserts that \(S_{\bar{A}}(\mathbb{Q}_{\infty})\) is finite if and only if the \(\mu\)-invariant of \(S_{A}(\mathbb{Q}_{\infty})\) is \(0\). On the other hand, Proposition 4.2 asserts that \(S_{\bar{A}}^{0}(\mathbb{Q}_{\infty})\) is finite if and only if the \(\mu\)-invariant of \(S_{A}^{0}(\mathbb{Q}_{\infty})\) is \(0\). Hence, the \(\mu\)-invariant of \(S_{A}(\mathbb{Q}_{\infty})\) is \(0\) if and only if the \(\mu\)-invariant of \(S_{A}^{0}(\mathbb{Q}_{\infty})\) is \(0\). This proves part (2). **Corollary 5.5**.: _Let \(E_{/\mathbb{Q}}\) be an elliptic curve with good ordinary reduction at an odd prime \(p\). Assume that \(\bar{\rho}_{E,p}\) is irreducible and Conjecture 5.3 is satisfied. Then, the following are equivalent._ 1. _The_ \(\mu\)_-invariant of_ \(\mathrm{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\) _vanishes, i.e., Greenberg's conjecture holds._ 2. _The_ \(\mu\)_-invariant of_ \(\mathrm{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\) _vanishes, i.e., the Conjecture_ 4.3 _holds._ Proof.: Since \(\bar{\rho}_{E,p}\) is irreducible, Proposition 5.2 shows that the Assumption 5.1 is satisfied. The result therefore follows from part (2) of Theorem 5.4. Let \(E_{/\mathbb{Q}}\) be an elliptic curve and \(p\) a prime at which \(E\) has good ordinary reduction. Let \[\bar{\rho}_{E,p}:\mathrm{G}_{\mathbb{Q}}\to\mathrm{Aut}(E[p])\xrightarrow{ \sim}\mathrm{GL}_{2}(\mathbb{F}_{p})\] be the residual representation on \(E[p]\). The splitting field \(\mathbb{Q}(E[p])\) is the field extension of \(\mathbb{Q}\) which is fixed by the kernel of \(\bar{\rho}_{E,p}\). Proof of Theorem 2.: Since \(\bar{\rho}_{E,p}\) is irreducible, it follows from Corollary 5.5 that the \(\mu\)-invariant of \(\mathrm{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\) is equal to \(0\) if and only if the \(\mu\)-invariant of \(\mathrm{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\) is equal to \(0\). Consider the Galois representation \[\rho_{E,p}:\mathrm{G}_{\mathbb{Q}}\to\mathrm{GL}_{2}(\mathbb{Z}_{p})\] associated with the \(p\)-adic Tate module of \(E\). The restriction of \(\rho_{E,p}\) to \(\mathrm{G}_{L}\) is trivial modulo \(p\). This is because \(L\) is the splitting field \(\mathbb{Q}(E[p]):=\bar{\mathbb{Q}}^{\ker\bar{\rho}_{E,p}}\), and \(\mathrm{G}_{L}\) is the kernel of \(\bar{\rho}_{E,p}=\rho_{E,p}\mod p\). Therefore, the representation \(\rho_{E,p}\) identifies \(\mathrm{Gal}\left(L(E_{p^{\infty}})/L\right)\) with a subgroup of \[\widehat{\mathrm{GL}_{2}}(\mathbb{Z}_{p}):=\ker\{\mathrm{GL}_{2}(\mathbb{Z}_{p })\to\mathrm{GL}_{2}(\mathbb{Z}/p\mathbb{Z})\}.\] It is easy to see that \(\widehat{\mathrm{GL}_{2}}(\mathbb{Z}_{p})\) is a pro-\(p\) group. Hence, the Galois group \(\mathrm{Gal}\left(L(E_{p^{\infty}})/L\right)\) is a pro-\(p\) group. 
Since it is assumed that the classical Iwasawa \(\mu\)-invariant \(\mu_{p}(L)\) vanishes, it follows from Theorem 4.4 that \(\mathrm{Sel}_{p^{\infty}}^{0}(L_{\infty},E)\) is cofinitely generated as a \(\mathbb{Z}_{p}\)-module. In other words, \(\mathrm{Sel}_{p^{\infty}}^{0}(L_{\infty},E)\) is a cotorsion \(\Lambda\)-module whose \(\mu\)-invariant vanishes. It is easy to see that the kernel of the natural restriction map \[\mathrm{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\to\mathrm{Sel}_{p^{\infty }}^{0}(L_{\infty},E)\] is cofinitely generated as a \(\mathbb{Z}_{p}\)-module, and hence the \(\mu\)-invariant of \(\mathrm{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\) is \(0\). This completes the proof. ## 6. Residually reducible Galois representations arising from elliptic curves and Greenberg's conjecture Throughout this section, we fix an elliptic curve \(E_{/\mathbb{Q}}\) and an odd prime \(p\) at which \(E\) has good ordinary reduction. Let \(M\) denote the \(p\)-adic Tate module of \(E\). Recall that \(\bar{A}\) is the mod-\(p\) reduction of \(M\), which we may identify with \(E[p]\). The module \(\bar{C}\) is the \(1\)-dimensional \(\mathrm{G}_{p}\)-submodule which is ramified, and the quotient \(\bar{D}:=\bar{A}/\bar{C}\) is unramified. The residual representation \(\bar{\rho}_{E,p}=\rho_{\bar{A}}\) is the representation of \(\mathrm{G}_{\mathbb{Q}}\) on \(\bar{A}\). We shall assume throughout this section that \(\bar{A}\) is reducible as a Galois module. Choose a basis \((e_{1},e_{2})\) of \(\bar{A}\) such that \(\bar{C}=\mathbb{F}_{p}\cdot e_{1}\). Call such a basis _admissible_; note that for any other admissible basis \((e_{1}^{\prime},e_{2}^{\prime})\), there are constants \(c_{1},c_{2}\in\mathbb{F}_{p}^{\times}\) and \(d\in\mathbb{F}_{p}\) for which \[e_{1}^{\prime}=c_{1}e_{1}\text{ and }e_{2}^{\prime}=c_{2}e_{2}+de_{1}.\] With respect to an admissible basis \((e_{1},e_{2})\), the restriction of \(\bar{\rho}_{E,p}\) to the decomposition group at \(p\) takes the form \[\bar{\rho}_{E,p}|_{\mathrm{G}_{p}}=\left(\begin{array}{cc}\alpha\bar{\chi}& *\\ 0&\alpha^{-1}\end{array}\right),\] where \(\alpha:\mathrm{G}_{p}\to\mathbb{F}_{p}^{\times}\) is an unramified character and \(\bar{\chi}\) is the mod-\(p\) cyclotomic character. There are \(3\) possibilities for the representation \(\bar{\rho}_{E,p}\). These are described below, and all matrices are written with respect to an admissible basis \((e_{1},e_{2})\). **Type 1:**: The representation \(\bar{\rho}_{E,p}\) is upper triangular of the form \(\left(\begin{array}{cc}\varphi_{1}&*\\ 0&\varphi_{2}\end{array}\right)\), where \(\varphi_{1}\) is odd and \(\varphi_{2}\) is even. **Type 2:**: The representation \(\bar{\rho}_{E,p}\) is upper triangular of the form \(\left(\begin{array}{cc}\varphi_{1}&*\\ 0&\varphi_{2}\end{array}\right)\), where \(\varphi_{1}\) is even and \(\varphi_{2}\) is odd. **Type 3:**: The representation \(\bar{\rho}_{E,p}\) is indecomposable and lower triangular of the form \(\left(\begin{array}{cc}\varphi_{1}&0\\ *&\varphi_{2}\end{array}\right)\). In this context, to be indecomposable means that there is no admissible basis with respect to which \(\bar{\rho}_{E,p}\) is a direct sum of characters. We note that \(\varphi_{1}|_{\mathrm{G}_{p}}=\alpha\bar{\chi}\) and \(\varphi_{2}|_{\mathrm{G}_{p}}=\alpha^{-1}\). 
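We record the standard parity constraint underlying this trichotomy. The Weil pairing forces \(\det\bar{\rho}_{E,p}=\bar{\chi}\), so evaluating determinants at a complex conjugation \(c\) gives \[\varphi_{1}(c)\,\varphi_{2}(c)=\det\bar{\rho}_{E,p}(c)=\bar{\chi}(c)=-1,\] whence exactly one of \(\varphi_{1}\) and \(\varphi_{2}\) is odd; this is why types 1 and 2 pair an odd character with an even one.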
Note that the Conjecture 5.3 specializes to the Conjecture 1.2. The vanishing of the \(\mu\)-invariant of \(\mathrm{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\) shall be detected by the structure of the residual representation. We shall first recall a result of Schneider on isogenies between elliptic curves. Given a finite Galois stable submodule \(\alpha\) of \(E[p^{\infty}]\), set \(\alpha^{+}:=C\cap\alpha\); set \[\delta(\alpha):=\mathrm{ord}_{p}|\alpha^{+}|-\mathrm{ord}_{p}|H^{0}(\mathbb{R},\alpha)|.\] We note that since \(p\) is assumed to be odd, the above definition coincides with that of [1, Definition 2.1]. In particular, it is easy to see that the quantities \(\epsilon_{v}\) from _loc. cit._ vanish. **Theorem 6.1** (Schneider).: _Let \(E\) and \(E^{\prime}\) be elliptic curves with good ordinary reduction at \(p\), and let \(\phi:E\to E^{\prime}\) be an isogeny with kernel \(\alpha\). Then the difference between \(\mu\)-invariants is given by_ \[\mu_{p}(E)-\mu_{p}(E^{\prime})=\delta(\alpha).\] _In particular, it follows that \(\mu_{p}(E)\geq\delta(\alpha)\)._ Proof.: We refer to [14] or [15, Theorem 2.2] for the proof of the above result. We recall a result of Coates and Sujatha which will be of key importance in the proof of Greenberg's conjecture in the residually reducible case. **Theorem 6.2** (Coates and Sujatha).: _Let \(E\) be an elliptic curve over \(\mathbb{Q}\) such that \(\rho_{\bar{A}}\) is a reducible Galois representation. Then, the \(\mu\)-invariant of the fine Selmer group \(\operatorname{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\) is equal to \(0\)._ Proof.: We write \(\rho_{\bar{A}}=\left(\begin{array}{cc}\varphi_{1}&0\\ *&\varphi_{2}\end{array}\right)\) with respect to some basis of \(\bar{A}\). Let \(\mathcal{K}=\mathbb{Q}(\varphi_{1},\varphi_{2})\) be the abelian extension of \(\mathbb{Q}\) generated by \(\varphi_{1}\) and \(\varphi_{2}\). Let \(\mathcal{K}(E_{p^{\infty}})\) be the extension generated by the \(p\)-primary torsion points of \(E\). In other words, \(\mathcal{K}(E_{p^{\infty}})\) is the field extension of \(\mathcal{K}\) which is fixed by the kernel of \(\rho_{M}|_{\mathrm{G}_{\mathcal{K}}}\). Let \(I\) be the subgroup of \(\operatorname{GL}_{2}(\mathbb{Z}_{p})\) consisting of all matrices \(A\) for which the mod-\(p\) reduction is a unipotent lower triangular matrix \(\left(\begin{array}{cc}1&0\\ *&1\end{array}\right)\). Via \(\rho_{M}:\operatorname{G}_{\mathbb{Q}}\to\operatorname{GL}_{2}(\mathbb{Z}_{p})\), the Galois group \(\operatorname{Gal}(\mathcal{K}(E_{p^{\infty}})/\mathcal{K})\) is identified with a subgroup of \(I\). Since \(I\) is a pro-\(p\) group, so is the Galois group \(\operatorname{Gal}(\mathcal{K}(E_{p^{\infty}})/\mathcal{K})\). Recall that by the celebrated result of Ferrero and Washington [13], the classical Iwasawa \(\mu\)-invariant \(\mu_{p}(\mathcal{K})\) vanishes, since \(\mathcal{K}\) is an abelian extension of \(\mathbb{Q}\). It then follows from Theorem 4.4 that \(\operatorname{Sel}_{p^{\infty}}^{0}(\mathcal{K}_{\infty},E)\) is cofinitely generated as a \(\mathbb{Z}_{p}\)-module. The kernel of the restriction map \[\operatorname{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\to\operatorname{ Sel}_{p^{\infty}}^{0}(\mathcal{K}_{\infty},E)\] is contained in \(H^{1}(H,E(\mathcal{K}_{\infty})[p^{\infty}])\), where \(H=\operatorname{Gal}(\mathcal{K}_{\infty}/\mathbb{Q}_{\infty})\). Since \(H\) has order prime to \(p\), it follows that this cohomology group vanishes. 
Therefore, \(\operatorname{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\) is cofinitely generated as a \(\mathbb{Z}_{p}\)-module. In particular, the \(\mu\)-invariant of \(\operatorname{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\) is \(0\). **Theorem 6.3**.: _Let \(E_{/\mathbb{Q}}\) be an elliptic curve with good ordinary reduction at an odd prime \(p\) for which the following conditions hold._ 1. _The residual representation is reducible._ 2. _The Assumption_ 5.1 _holds._ 3. _The Conjecture_ 1.2 _holds._ _Then, we find that \(\mu_{p}(E)=0\)._ Proof.: By hypothesis, the Assumption 5.1 holds, and therefore, by part (2) of Theorem 5.4, the \(\mu\)-invariant of \(\operatorname{Sel}_{p^{\infty}}(\mathbb{Q}_{\infty},E)\) vanishes if and only if the \(\mu\)-invariant of \(\operatorname{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\) vanishes. Since the residual representation is reducible, it follows from Theorem 6.2 that the \(\mu\)-invariant of \(\operatorname{Sel}_{p^{\infty}}^{0}(\mathbb{Q}_{\infty},E)\) is \(0\), and the result follows from this. Assuming that Conjecture 1.2 holds for the isogeny class of \(E\), we have a complete description for the \(\mu=0\) condition based purely on the residual representation \(\bar{\rho}_{E,p}\). **Theorem 6.4** (\(\mu=0\) condition).: _Let \(E_{/\mathbb{Q}}\) be an elliptic curve and \(p\) an odd prime at which \(E\) has good ordinary reduction. Assume that the Conjecture 1.2 holds for all elliptic curves that are defined over \(\mathbb{Q}\) and are \(\mathbb{Q}\)-isogenous to \(E\). Then, \(\mu_{p}(E)=0\) if and only if \(\bar{\rho}_{E,p}\) is of type 2 or 3. Equivalently, \(\mu_{p}(E)>0\) if and only if it is of type 1._ Proof.: First, we consider the case when \(\bar{\rho}_{E,p}\) is of type 1. Note that since the representation \(\bar{\rho}_{E,p}\) is upper triangular, \(\alpha:=\bar{C}\) is a \(\mathrm{G}_{\mathbb{Q}}\)-submodule of \(E[p]\). Since \(\varphi_{1}\) is odd, \(H^{0}(\mathbb{R},\alpha)=0\). Then, we find that \(\delta(\alpha)=1\), and it follows from Theorem 6.1 (or [1, Theorem 2.1]) that \(\mu_{p}(E)\geq\delta(\alpha)\geq 1\). Next, we consider type 2 representations. Greenberg and Vatsal [13] showed that if \(E[p]\) contains a \(1\)-dimensional \(\mathrm{G}_{\mathbb{Q}}\)-stable subspace which is ramified at \(p\) and even, or unramified at \(p\) and odd, then \(\mu_{p}(E)=0\). In this case, \(\bar{C}\) is a subspace which is \(\mathrm{G}_{\mathbb{Q}}\)-stable, ramified at \(p\) and even, and therefore their result applies to show that \(\mu_{p}(E)=0\). Finally, consider the type 3 representations. Note that \(\varphi_{2}\) is unramified at \(p\). Thus, if \(\varphi_{2}\) is odd, then the aforementioned result of Greenberg and Vatsal applies to show that \(\mu_{p}(E)=0\). For type 3 representations for which \(\varphi_{2}\) is even, however, \(\mu_{p}(E)=0\) was expected to hold but had not been proved. We complete the proof by noting that Proposition 5.2 implies that when \(\bar{\rho}_{E,p}\) is of type 3, the Assumption 5.1 holds. Then it follows from Theorem 6.3 that \(\mu_{p}(E)=0\). We now give the proof of our main theorem. Proof of Theorem 1.: If \(\mu:=\mu_{p}(E)=0\), then the result is vacuously true, setting \(E^{\prime}:=E\). Therefore, assume without loss of generality that \(\mu>0\). Thus, it follows from Theorem 6.4 that \(\bar{\rho}_{E,p}\) is of type 1, i.e., \(\bar{C}\) is an odd \(\mathrm{G}_{\mathbb{Q}}\)-submodule of \(E[p]\). 
In this case, setting \(\alpha:=\bar{C}\), we observe that \(\delta(\alpha)=1\) (see the first paragraph in the proof of Theorem 6.4). We set \(E_{1}:=E/\alpha\). It follows from Theorem 6.1 that \[\mu_{p}(E_{1})=\mu_{p}(E)-\delta(\alpha)=\mu-1.\] Iterating this construction (Theorem 6.4 applies at each stage, since \(\mu_{p}(E_{i})>0\) for \(i<\mu\)), we obtain a sequence of elliptic curves \(E=E_{0},E_{1},E_{2},\ldots,E_{\mu}\) over \(\mathbb{Q}\) along with isogenies \(\phi_{i}:E_{i-1}\to E_{i}\) such that \(\mu_{p}(E_{i})=\mu-i\). Set \(E^{\prime}:=E_{\mu}\) and consider the composite isogeny \[E\xrightarrow{\phi_{1}}E_{1}\xrightarrow{\phi_{2}}E_{2}\xrightarrow{\phi_{3} }\cdots\to E^{\prime}.\] We find that \(\mu_{p}(E^{\prime})=\mu-\mu=0\). This completes the proof.
2310.13752
Excited-State Downfolding Using Ground-State Formalisms
Downfolding coupled cluster (CC) techniques are powerful tools for reducing the dimensionality of many-body quantum problems. This work investigates how ground-state downfolding formalisms can target excited states using non-Aufbau reference determinants, paving the way for applications of quantum computing in excited-state chemistry. This study focuses on doubly excited states, which canonical equation-of-motion CC approaches struggle to describe unless one includes higher-than-double excitations. The downfolding technique results in state-specific effective Hamiltonians that, when diagonalized in their respective active spaces, provide ground- and excited-state total energies (and therefore excitation energies) comparable to high-level CC methods. The performance of this procedure is examined with doubly excited states of H$_{2}$, Methylene, Formaldehyde, and Nitroxyl.
Nicholas P. Bauman
2023-10-20T18:27:29Z
http://arxiv.org/abs/2310.13752v1
# Excited-State Downfolding Using Ground-State Formalisms ###### Abstract Downfolding coupled cluster (CC) techniques are powerful tools for reducing the dimensionality of many-body quantum problems. This work investigates how ground-state downfolding formalisms can target excited states using non-Aufbau reference determinants, paving the way for applications of quantum computing in excited-state chemistry. This study focuses on doubly excited states, which canonical equation-of-motion CC approaches struggle to describe unless one includes higher-than-double excitations. The downfolding technique results in state-specific effective Hamiltonians that, when diagonalized in their respective active spaces, provide ground- and excited-state total energies (and therefore excitation energies) comparable to high-level CC methods. The performance of this procedure is examined with doubly excited states of H\({}_{2}\), Methylene, Formaldehyde, and Nitroxyl. ## I Introduction Calculating excited states remains an important challenge in quantum chemistry despite decades of strong theoretical development. The reasons why it is challenging to describe excited states are many, and they compound one another. First, there is the chemical nature of the problem, which includes the range of excitation energies, the density of states, and the nature of the excited state, to name a few factors.[1; 2] Then there is the task of determining an appropriate methodology to accurately describe the state of interest. For higher-than-singly excited states, reliable high-level theories are required to accurately capture these states, which can be prohibitively expensive. After determining a desirable methodology, one is subject to the available eigensolver algorithms and their corresponding operation count, memory requirements, and convergence issues. Given these hurdles, it seems fitting that excited state calculations can greatly benefit from incorporating modern technology and unconventional approaches. In chemistry, quantum computing has the potential to revolutionize the field by enabling the simulation of molecular systems that are beyond the capabilities of classical computers.[3; 4; 5; 6] The development of quantum algorithms has been largely devoted to describing the ground state. However, there are a handful of algorithms for excited states. The quantum phase estimation (QPE) algorithm[7; 8] is a powerful tool for calculating excited states, as the probability of sampling a particular state is directly proportional to the square of the overlap of the target state and a trial state. In earlier work, we demonstrated the ease of use of the QPE algorithm in calculating a variety of high-energy excited states using simple postulated initial states.[9] The variational quantum eigensolver, usually applied to ground-state problems, can be extended to calculate excited states through the multistate-contracted,[10; 11] folded spectrum,[12; 13] and Lagrangian-based approaches.[14; 15; 16] Other methods for excited states involve the imaginary-time variational quantum simulator,[17; 18; 19] the Peeters-Devreese-Soldatov energy functional,[20; 21] the quantum subspace expansion,[22; 23] and the quantum equation-of-motion approach,[24] to name a few. However, until quantum algorithms and hardware mature, the application of these techniques is greatly limited. 
In order to perform meaningful quantum calculations, methods for calculating excited states must be coupled with formalisms to reduce the dimensionality of the problem (see discussion in Ref. [25]). Mathematically rigorous formulations for reducing the dimensionality/cost of quantum formulations are necessary for expanding the envelope of system sizes tractable with current and near-term quantum simulators and hardware. There have been a variety of approximations and techniques in recent years for reducing the dimensionality and the complexity of quantum calculations.[26; 27; 28; 29; 30; 31; 32; 33; 34] One of the most promising formalisms is the downfolding technique based on the double unitary coupled cluster (DUCC) ansatz,[25; 35; 36; 37; 38; 39; 40] which constructs effective (or downfolded) Hamiltonians in a small-dimensionality subspace of the entire Hilbert space, which is commonly defined as an active space. The resulting downfolded Hamiltonians integrate out the external (out-of-active-space) Fermionic degrees of freedom from the internal (in-the-active-space) parameters of the wave function, which can be determined as components of the eigenvectors of the downfolded Hamiltonians in the active space. In earlier work, we introduced an extension of the DUCC approach for excited states.[37] The method combined excited-state equation-of-motion CC (EOM-CC) theory operators with ground-state cluster operators to construct a state-selective downfolded Hamiltonian for the targeted excited state, which worked well for singly excited states. Since this approach requires an underlying EOM-CC calculation, it is computationally expensive to extend to describe higher-than-singly excited states and is subject to all the hurdles described earlier. So, it would seem beneficial to circumvent any EOM-CC calculation and find a technique that utilizes the well-defined ground-state downfolding procedure to construct excited-state effective Hamiltonians. One of the simplest yet most attractive approaches to approximating excited states has been to use self-consistent field (SCF) solutions.[41; 42; 43; 44; 45; 46] In this way, excitation energies are given by the energy difference between two distinct SCF solutions. This approach, often referred to as the \(\Delta\)-SCF approximation, brings the advantage of orbital relaxation. One can also improve the treatment of electron-electron correlation effects by carrying out many-body approaches built on these distinct SCF solutions. This means robust high-level ground-state methodologies can be applied to study excited states while avoiding all of the stumbling blocks of conventional excited-state many-body approaches. The single-reference coupled cluster approach [47; 48; 49; 50; 51; 52] is well known for accurately describing ground state properties in molecular systems. However, the nature of the non-linear CC equations is sometimes undervalued and underutilized. One of the appealing features of the CC equations is the ability to converge to excited states with the same symmetry as the reference function. [53] Convergence to a targeted solution is aided by starting with an SCF reference solution corresponding to the targeted state. The difference between ground- and excited-state solutions defines the \(\Delta\)-CC approach, which has been strongly demonstrated and investigated.[54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67]
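As a minimal sketch of this two-calculation workflow, the following PySCF-based example converges a non-Aufbau SCF solution with the maximum overlap method (MOM) and then runs CCSD on both references; the molecule, geometry, and occupation pattern (the \((1\sigma_{g})^{2}\rightarrow(1\sigma_{u})^{2}\) promotion in H\({}_{2}\) discussed below) are illustrative assumptions rather than a prescription of the calculations performed in this work.

```python
# Sketch of the Delta-SCF / Delta-CC workflow. Illustrative assumptions:
# molecule, geometry, and the HOMO->LUMO double promotion are chosen to
# mimic the H2 example discussed later, not to reproduce its numbers.
from pyscf import gto, scf, cc

HARTREE_TO_EV = 27.211386

mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="cc-pvtz")

# Ground state: Aufbau SCF reference, then CCSD on top of it.
mf_gs = scf.UHF(mol).run()
e_gs = cc.UCCSD(mf_gs).run().e_tot

# Excited state: non-Aufbau occupation for (1sigma_g)^2 -> (1sigma_u)^2,
# i.e., both electrons promoted from the HOMO to the LUMO.
occ = mf_gs.mo_occ.copy()
homo, lumo = mol.nelec[0] - 1, mol.nelec[0]
for spin in (0, 1):
    occ[spin][homo], occ[spin][lumo] = 0.0, 1.0

# MOM biases every SCF iteration toward the chosen occupation, preventing
# variational collapse back to the Aufbau solution.
mf_es = scf.addons.mom_occ(scf.UHF(mol), mf_gs.mo_coeff, occ)
dm = mf_es.make_rdm1(mf_gs.mo_coeff, occ)
mf_es.kernel(dm)                      # excited-state SCF solution
e_es = cc.UCCSD(mf_es).run().e_tot    # "ES-CCSD" total energy

print("Delta-CC excitation energy: %.3f eV" % ((e_es - e_gs) * HARTREE_TO_EV))
```

For this two-electron system, CCSD is equivalent to FCI, so the resulting \(\Delta\)-CC gap is exact up to the convergence of each calculation to the intended state.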
It is also worth mentioning that the ability of the CC equations to converge to excited states has been studied in the context of multi-reference CC methods, which has led to the understanding of the 'intruder' state problem. [68] Starting with an SCF solution for an excited state, one can utilize ground-state downfolding techniques based on the DUCC formalism to construct a state-specific effective Hamiltonian for the target excited state. In this work, I investigate the utility of this procedure by applying it to the study of doubly excited states, for which conventional many-body methods struggle to provide an accurate description. Within the CC framework, one often turns to the excited-state equation-of-motion (EOM) extensions [69; 70; 71; 72; 73; 74; 75; 76; 77; 78] with higher-than-double excitations to describe these states, such as the EOM approach with single, double, and triple excitations (EOM-CCSDT) [72; 73; 74; 75; 76] or up to quadruple excitations (EOM-CCSDTQ) [72; 73; 77; 78]. The full treatment of triple and quadruple excitations with the EOM-CCSDT and EOM-CCSDTQ approaches requires \(\mathcal{N}^{8}\) and \(\mathcal{N}^{10}\) computational steps, respectively, where \(\mathcal{N}\) is a measure of the system size. In contrast, the DUCC approaches in this paper have \(\mathcal{N}^{6}\) steps when the ground-state CCSD approach is used as a source of amplitudes while avoiding complications associated with excited-state algorithms. The final reduced dimensionality of each problem, expressed by the downfolded Hamiltonian, represents system sizes amenable to current and near-term quantum computing architecture. ## II Theory ### Canonical CC Theory Determining the excitation energy of an excited state using a non-Aufbau reference determinant requires two separate calculations. The first uses a standard Aufbau reference corresponding to the ground state, denoted \(\ket{\Phi_{0}}\). The second calculation uses the non-Aufbau reference determinant that correlates with the \(\mu\)-th excited state, denoted \(\ket{\Phi_{\mu}}\). One can take the energy difference at this stage, which is referred to as the \(\Delta\)-SCF approximation. These states' descriptions are then improved by post-SCF calculations using suitable methods that treat these states separately. The exponential ansatz of coupled cluster theory is particularly well-suited for such calculations. The single-reference CC theory utilizes the exponential representation of the wave function \(\ket{\Psi_{\mu}}\) \[\ket{\Psi_{\mu}}=e^{T_{\mu}}\ket{\Phi_{\mu}}\,, \tag{1}\] where \(T_{\mu}\) is the cluster operator and \(\ket{\Phi_{\mu}}\) is the single-determinantal reference function corresponding to either the ground state (\(\mu=0\)) or an excited state (\(\mu>0\)). The cluster operator can be represented through its many-body components \(T_{\mu,n}\) \[T_{\mu}=\sum_{n=1}^{M}T_{\mu,n}\,, \tag{2}\] where the individual components \(T_{\mu,n}\) take the form \[T_{\mu,n}=\frac{1}{(n!)^{2}}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{n}\\ a_{1},\ldots,a_{n}\end{subarray}}t_{\mu,a_{1}\ldots a_{n}}^{i_{1}\ldots i_{n}}E _{i_{1}\ldots i_{n}}^{a_{1}\ldots a_{n}}\,. \tag{3}\] In the expressions, indices \(i_{1},i_{2},\ldots\) (\(a_{1},a_{2},\ldots\)) refer to occupied (unoccupied) spin orbitals in the reference function \(\ket{\Phi_{\mu}}\). 
The excitation operators \(E_{i_{1}\ldots i_{n}}^{a_{1}\ldots a_{n}}\) are defined through strings of standard creation (\(a_{p}^{\dagger}\)) and annihilation (\(a_{p}\)) operators \[E_{i_{1}\ldots i_{n}}^{a_{1}\ldots a_{n}}=a_{a_{1}}^{\dagger}\ldots a_{a_{n}}^ {\dagger}a_{i_{n}}\ldots a_{i_{1}}\,, \tag{4}\] where creation and annihilation operators satisfy conventional anti-commutation rules. When \(M\) in the summation in Eq. 2 is equal to the number of correlated electrons (\(N_{e}\)), then the corresponding CC formalism is equivalent to the exact, full configuration interaction (FCI) solution, while truncations (\(M<N_{e}\)) lead to the hierarchy of standard CC approximations such as CC with singles and doubles (CCSD, \(M=2\)), CC with singles, doubles, and triples (CCSDT, \(M=3\)), and so on. The amplitudes \(t_{\mu,a_{1}\ldots a_{n}}^{i_{1}\ldots i_{n}}\) in Eq. 3 are determined by solving a coupled system of energy-independent non-linear algebraic equations \[\left\langle\Phi_{\mu,i_{1}\ldots i_{n}}^{a_{1}\ldots a_{n}}\Big{|}\overline{ H}_{\mu}\Big{|}\Phi_{\mu}\right\rangle=0\,, \tag{5}\] where \(n=1,\ldots,M\) runs over the amplitudes that are being solved for, \[\overline{H}_{\mu}=e^{-T_{\mu}}He^{T_{\mu}}=(He^{T_{\mu}})_{C}\,, \tag{6}\] is the similarity-transformed Hamiltonian, \(C\) designates the connected part of the operator expression, and \(\ket{\Phi_{\mu,i_{1}\ldots i_{n}}^{a_{1}\ldots a_{n}}}=a_{a_{1}}^{\dagger} \ldots a_{a_{n}}^{\dagger}a_{i_{n}}\ldots a_{i_{1}}\ket{\Phi_{\mu}}\) are the \(n\)-tuply excited determinants relative to the reference \(\ket{\Phi_{\mu}}\). Once the amplitudes are solved, the CC energy is determined using the equation \[E_{\mu}=\left\langle\Phi_{\mu}\big{|}\overline{H}_{\mu}\big{|}\Phi_{\mu} \right\rangle\,. \tag{7}\] The excitation energy for the \(\mu\)-th excited state \(\Delta E_{\mu}\) is the difference between the excited state energy \(E_{\mu}\) (\(\mu>0\)) and the ground state energy \(E_{0}\), \[\Delta E_{\mu}=E_{\mu}-E_{0}\,. \tag{8}\] This procedure is referred to as the \(\Delta\)CC method. ### DUCC Approach The DUCC formalism was developed for meaningful quantum chemistry calculations on limited quantum computing resources. The method is based on constructing effective (or downfolded) Hamiltonians in a small-dimensionality subspace of the entire Hilbert space. While the DUCC approach is based on coupled cluster theory, one cannot utilize the standard CC ansatz, given by Eq. 1, as it leads to similarity-transformed Hamiltonians, Eq. 6, that are non-Hermitian. Instead, an active space is used to introduce the DUCC ansatz that explicitly decouples excitations describing correlation effects inside (internal) and outside (external) of an active space, i.e., \[\left|\Psi_{\mu,\text{DUCC}}\right>=e^{\sigma_{\mu,\text{ext}}}e^{\sigma_{\mu, \text{int}}}\left|\Phi_{\mu}\right>\,, \tag{9}\] where \(\sigma_{\mu,\text{int}}\) and \(\sigma_{\mu,\text{ext}}\) are anti-Hermitian cluster operators \[\sigma_{\mu,\text{int}}^{\dagger} = -\sigma_{\mu,\text{int}}\;, \tag{10}\] \[\sigma_{\mu,\text{ext}}^{\dagger} = -\sigma_{\mu,\text{ext}}\;. \tag{11}\] The algebraic form of the exact \(\sigma_{\mu,\text{int}}\) and \(\sigma_{\mu,\text{ext}}\) operators can effectively be approximated using the UCC formalism, i.e., \[\sigma_{\mu,\text{int}} \simeq T_{\mu,\text{int}}-T_{\mu,\text{int}}^{\dagger}\;, \tag{12}\] \[\sigma_{\mu,\text{ext}} \simeq T_{\mu,\text{ext}}-T_{\mu,\text{ext}}^{\dagger}\;. 
\tag{13}\] Once again, the \(\mu\) notation is used because the DUCC approach can describe excited states with a proper choice of reference determinant despite starting as a ground-state formalism. The DUCC formalism is described in greater detail in Refs. [25; 35], and [40], but the foundation of the downfolding approach is that the energy \(E_{\mu}\) can be obtained by diagonalizing an effective Hamiltonian \(\tilde{H}_{\mu}^{\text{eff(DUCC)}}\) in the corresponding active space (defined by the projection operator \(P+Q_{\text{int}}\), where \(P\) and \(Q_{\text{int}}\) are projection operators onto the reference function and electron-promoted determinants in the active space, respectively) \[\tilde{H}_{\mu}^{\text{eff(DUCC)}}e^{\sigma_{\mu,\text{int}}}\left|\Phi_{\mu} \right>=E_{\mu}e^{\sigma_{\mu,\text{int}}}\left|\Phi_{\mu}\right>\;, \tag{14}\] where \[\tilde{H}_{\mu}^{\text{eff(DUCC)}}=(P+Q_{\mu,\text{int}})\tilde{H}_{\mu,\text {ext}}^{\text{DUCC}}(P+Q_{\mu,\text{int}}) \tag{15}\] and \[\tilde{H}_{\mu,\text{ext}}^{\text{DUCC}}=e^{-\sigma_{\mu,\text{ext}}}He^{ \sigma_{\mu,\text{ext}}}\;. \tag{16}\] In practice, one constructs approximate many-body forms of \(\tilde{H}_{\mu}^{\text{eff(DUCC)}}\). Three considerations go into the approximation of \(\tilde{H}_{\mu}^{\text{eff(DUCC)}}\): (1) the rank of many-body effects included in \(\tilde{H}_{\mu}^{\text{eff(DUCC)}}\), (2) the approximate representation of \(\sigma_{\mu,\text{ext}}\) (\(T_{\mu,\text{ext}}\)), and (3) the length of the commutator expansion for \(e^{-\sigma_{\mu,\text{ext}}}He^{\sigma_{\mu,\text{ext}}}\). In this paper, I limit the effective Hamiltonian to one- and two-body elements. I also employ traditional CC theory to source the approximate \(T_{\mu,\text{ext}}\) operators that parameterize \(\sigma_{\mu,\text{ext}}\) (\(\sigma_{\mu,\text{ext}}\simeq T_{\mu,\text{ext}}-T_{\mu,\text{ext}}^{\dagger}\)). Finally, in a previous study, we showed that approximating the expansion of \(e^{-\sigma_{\mu,\text{ext}}}He^{\sigma_{\mu,\text{ext}}}\) so as to be consistent through second-order perturbation theory, labeled DUCC(2) and given by \[\tilde{H}_{\mu,\text{ext}}^{\text{DUCC}(2)}\simeq H+[H_{N},\sigma_{\mu,\text{ext}}]+\frac{1}{2!}[[F_{N},\sigma_{\mu, \text{ext}}],\sigma_{\mu,\text{ext}}]\;, \tag{17}\] performs well in describing the system, with the third-order-consistent expression, labeled DUCC(3) and given by \[\tilde{H}_{\mu,\text{ext}}^{\text{DUCC}(3)}\simeq H+[H_{N},\sigma_{\mu,\text{ext}}]+\frac{1}{2!}[[H_{N},\sigma_{\mu, \text{ext}}],\sigma_{\mu,\text{ext}}]+\frac{1}{3!}[[[F_{N},\sigma_{\mu,\text{ ext}}],\sigma_{\mu,\text{ext}}],\sigma_{\mu,\text{ext}}]\;, \tag{18}\] further improving upon the results. In Eqs. 17 and 18, \(F_{N}\) and \(H_{N}\) are the normal-product Fock and Hamiltonian operators. ## III Results and Discussion I investigated the doubly excited states of four molecules: H\({}_{2}\), Methylene, Formaldehyde, and Nitroxyl. For H\({}_{2}\), the cc-pVTZ basis set was employed, while aug-cc-pVDZ was used for Methylene, Formaldehyde, and Nitroxyl. The \(T_{\mu,\text{ext}}\) amplitudes defining \(\sigma_{\mu,\text{ext}}\) are sourced from traditional CCSD (\(\sigma_{\mu,\text{ext}}\simeq T_{\mu,\text{ext}}^{\text{(CCSD)}}-T_{\mu,\text{ ext}}^{\text{(CCSD)}\dagger}\)). Any method that utilizes a non-Aufbau reference determinant to describe an excited state is given the prefix "ES-". 
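To make the construction in Eqs. (14)-(18) concrete, the following matrix-level sketch uses random Hermitian matrices as stand-ins for \(H\) and \(F_{N}\) (the normal-product operators differ from these only by constants, which drop out of commutators) and a random anti-Hermitian \(\sigma\) coupling the active and external blocks; the dimensions, amplitude scale, and matrices are all toy assumptions, and the sketch illustrates only how the truncated commutator expansions are assembled and diagonalized over the \((P+Q_{\text{int}})\) block, not the tensor-level implementation used in this work.

```python
# Toy matrix-level assembly of DUCC(2)/DUCC(3) effective Hamiltonians
# (Eqs. 17-18) and their diagonalization over the active block (Eqs. 14-15).
# Random matrices stand in for H, F_N, and sigma_ext; purely illustrative.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, n_act = 8, 3                      # toy full-space / active-space sizes

H = rng.normal(size=(n, n))
H = 0.5 * (H + H.T)                  # Hermitian stand-in for H (and H_N)
F = np.diag(np.diag(H))              # diagonal stand-in for F_N

T_ext = np.zeros((n, n))             # external amplitudes couple blocks
T_ext[n_act:, :n_act] = 0.1 * rng.normal(size=(n - n_act, n_act))
sigma = T_ext - T_ext.T              # sigma_ext = T_ext - T_ext^dagger

comm = lambda A, B: A @ B - B @ A

# A commutator of a Hermitian with an anti-Hermitian matrix is Hermitian,
# so each truncated expansion below stays Hermitian term by term.
H_ducc2 = H + comm(H, sigma) + 0.5 * comm(comm(F, sigma), sigma)
H_ducc3 = (H + comm(H, sigma) + 0.5 * comm(comm(H, sigma), sigma)
           + comm(comm(comm(F, sigma), sigma), sigma) / 6.0)
H_exact = expm(-sigma) @ H @ expm(sigma)   # untruncated transformation

for label, Heff in (("bare", H), ("DUCC(2)", H_ducc2),
                    ("DUCC(3)", H_ducc3), ("exact", H_exact)):
    e0 = np.linalg.eigvalsh(Heff[:n_act, :n_act])[0]  # (P + Q_int) block
    print(f"{label:8s} lowest active-block eigenvalue: {e0: .6f}")
```

For small \(\sigma\), the truncated expansions approach the exact similarity-transformed Hamiltonian, mirroring the ordering of the bare, DUCC(2), and DUCC(3) accuracies reported in the results below.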
Excitation energies computed as the difference between "ES-" excited-state energies and the canonical ground-state energies using their corresponding reference determinants are labeled with the prefix "\(\Delta\)-". Core orbitals were frozen in all post-SCF calculations of Formaldehyde and Nitroxyl. In the case of the DUCC methods, core orbitals were frozen after forming the effective Hamiltonian from all-electron calculations. ### H\({}_{2}\) We start our discussion by investigating the \((1\sigma_{g})^{2}\rightarrow(1\sigma_{u})^{2}\) excitation along the \(H\)-\(H\) bond-breaking coordinate. In this case, CCSD is equivalent to FCI, so using CCSD as a source of amplitudes to define \(\sigma_{\mu,\text{ext}}\) removes any concern about the role of higher-order excitations. An active space constructed from the four lowest-energy orbitals was used for the downfolding procedure, where the full space consists of 28 orbitals. The results are shown in Fig. 1. The \(\Delta\)-SCF approach is praised for providing excitation energies similar to FCI near equilibrium, but that is because of fortuitously balanced errors in the total energies for the ground and excited state, which is not the case for other geometries along the potential energy surface. Diagonalizing the bare Hamiltonian in the active space provides a much-needed improvement, especially at stretched bond lengths, but it is not balanced for both states. When correlation is downfolded with the DUCC(2) approach, total energies for the ground state agree with FCI to within 0.16 eV along the whole potential energy surface and improve upon the bare Hamiltonian for the excited state with the exception of significantly compressed bond lengths. When the DUCC(3) method is utilized, all energies are improved compared to the DUCC(2) approach, and one gets both total and excitation energies that agree excellently with FCI results by diagonalizing the effective Hamiltonian in an active space of only four orbitals. ### Methylene The first excited state (1 \({}^{1}A_{1}\)) and third excited state (2 \({}^{1}A_{1}\)) of Methylene have considerable mixing of the configurations \((1a_{1})^{2}(2a_{1})^{2}(1b_{1})^{2}(3a_{1})^{2}\) and \((1a_{1})^{2}(2a_{1})^{2}(1b_{1})^{2}(1b_{2})^{2}\). The (1 \({}^{1}A_{1}\)) state can be computed using ground state methods as it is the lowest-energy singlet state. The (2 \({}^{1}A_{1}\)) state can then be calculated as an excited state formed from a double excitation out of the (1 \({}^{1}A_{1}\)) state. The multiconfigurational character of the (1 \({}^{1}A_{1}\)) state, while non-negligible, is minor enough that ground-state methods such as CCSD can provide an accurate description of the state. However, the mixing of configurations is much more prominent in the (2 \({}^{1}A_{1}\)) state. When calculated as a doubly excited state from (1 \({}^{1}A_{1}\)), one needs to include higher-than-double excitations to capture this state accurately. As shown in Fig. 2, while CCSD and CCSDT provide accurate results for the (1 \({}^{1}A_{1}\)) state, EOM-CCSD struggles to describe the (2 \({}^{1}A_{1}\)) state, with an error of 1.67 eV. This error vanishes with EOM-CCSDT, which is nearly exact for this system. For the downfolding procedure, several active spaces were explored consisting of 5-20 of the lowest-energy molecular orbitals of the Aufbau and non-Aufbau reference determinants (denoted as \(n\)), whereas the full orbital space consists of 41 orbitals. 
The DUCC(2) approximation substantially improves the total energies over the bare Hamiltonian in the same-size active space. Both states' DUCC(2) energies are well-balanced, leading to excitation energies that agree well with the benchmark EOM-CCSDTQ result. However, there is still room for improvement in the total energies with the DUCC(2) approach. Higher-order terms in the commutator expansion play a crucial role in providing accurate total energies for the two states, as shown with the DUCC(3) approach. For the (1 \({}^{1}A_{1}\)) state, the DUCC(3) approach provides total energies between CCSD and CCSDT, while for the (2 \({}^{1}A_{1}\)) state it reduces the errors to less than 0.44 eV for any active space. The state-specific nature of the downfolding procedure means that the errors are not always balanced between given states, which is why the DUCC(3) approach has larger errors in the excitation energy than the DUCC(2) approximation. Still, there is a substantial improvement with the DUCC(3) procedures over EOM-CCSD in describing this excitation, with a steady improvement toward the EOM-CCSDTQ benchmark as the active-space size increases. ### Formaldehyde Formaldehyde possesses a low-lying doubly excited state of [\((2^{1}B_{1})^{2}\rightarrow(2^{1}B_{2})^{2}\)] character. When the corresponding non-Aufbau reference determinant is used, the ES-CCSD method provides excited-state total energy comparable to canonical EOM-CCSDT, while the ES-CCSDT method provides canonical EOM-CCSDTQ quality results. The agreement of the ES-CCSD and ES-CCSDT methods reinforces the notion of using the [\((2^{1}B_{1})^{2}\rightarrow(2^{1}B_{2})^{2}\)] determinant as the reference for the downfolding procedure. For the downfolding procedure, an active space consisting of the 15 (13 after freezing the core orbitals) lowest-energy molecular orbitals for both the Aufbau and non-Aufbau reference determinants was utilized. The full calculation involves 64 orbitals. Through the maximum overlap method, I confirmed that all of the orbitals in the Aufbau determinant have corresponding analogous orbitals in the non-Aufbau reference determinant and that different orbitals are not introduced into the active space during the SCF procedure. As illustrated in Fig. 3, the bare Hamiltonian provides excellent excitation energy, but at the cost of over 9 eV errors in both the ground and excited state. With only 13 orbitals, the DUCC(2) approach greatly improved both total energies and provided EOM-CCSDT quality excitation energy. The DUCC(3) approach further improves the total energies, while a slight imbalance in errors leads to a small increase in the error of the excitation energy. Both the DUCC(2) and DUCC(3) approaches provide an impressive improvement compared to EOM-CCSD in describing the excited state, with the DUCC(3) method providing total energies for both states comparable to high-level CC approaches. Figure 3: Errors relative to CCSDTQ and EOMCCSDTQ for the ground state (Top Panel), [\((2^{1}B_{1})^{2}\rightarrow(2^{1}B_{2})^{2}\)] excited state (Middle Panel), and the corresponding excitation energy (Bottom Panel) of the formaldehyde molecule. The equilibrium geometry was taken from Ref. [79]. Figure 2: Errors relative to CCSDTQ and EOMCCSDTQ for the \((1\ ^{1}A_{1})\) state (Top Panel), \((2\ ^{1}A_{1})\) excited state (Middle Panel), and the corresponding excitation energy (Bottom Panel) of the methylene molecule. The \(C-H\) bond length was 1.107Å, and the \(H-C-H\) angle was 102.4 degrees. ### Nitroxyl Nitroxyl is a weak acid that has a low-lying [(\(7^{1}a^{\prime}\))\({}^{2}\)\(\rightarrow\) (\(2^{1}a^{\prime\prime}\))\({}^{2}\)] excited state. 
In a similar fashion to Formaldehyde, the EOM-CCSD method overestimates the excitation energy by 4 eV. The EOM-CCSDT method reduces the excitation energy error to 0.3 eV, and once again, quadruple excitations through EOM-CCSDTQ are needed to get nearly exact results. For the EOM-CCSD and EOM-CCSDT approaches, the errors in the excited state are approximately 10 times greater than in the ground state, which contributes to the excitation energy errors. However, when the non-Aufbau reference determinant is used, once again, the ES-CCSD result is comparable to canonical EOM-CCSDT, and the ES-CCSDT method provides canonical EOM-CCSDTQ-level results. So the [(\(7^{1}a^{\prime}\))\({}^{2}\)\(\rightarrow\) (\(2^{1}a^{\prime\prime}\))\({}^{2}\)] reference determinant was used in the ground-state DUCC pipeline. An active space consisting of the 17 (15 after freezing the core orbitals) lowest-energy molecular orbitals for both the Aufbau and non-Aufbau reference determinants was used, whereas the full calculation contains 55 molecular orbitals. Through the maximum overlap method, we confirmed that all of the orbitals in the Aufbau determinant have corresponding analogous orbitals in the non-Aufbau reference determinant. As shown in Fig. 4, diagonalizing the truncated bare Hamiltonian gives a nearly exact excitation energy, but the ground and excited states have total energy errors over 9 eV. Both the DUCC(2) and DUCC(3) approaches greatly improve the total energies in the active space compared to the bare Hamiltonian. However, this improvement is not always balanced and can lead to excitation energies with undesired errors. Since either approach significantly improves the EOM-CCSD total energy for the excited state, the DUCC(2) and DUCC(3) approximations still provide a better excitation energy than EOM-CCSD despite the imbalanced total energy errors. Compared to Formaldehyde, Nitroxyl exhibits a greater difference between CCSD and CCSDT errors for the ground state and ES-CCSD and ES-CCSDT errors for the excited state. Between ES-CCSD and ES-CCSDT, the energy difference is 0.66 eV. Upon investigation, there is a \(T_{2}\) amplitude in the ES-CCSD calculation with a value of 0.61, which drops below 0.1 in the ES-CCSDT calculation. Given this large fluctuation in the cluster amplitudes, we performed the downfolding procedure with \(T_{1}\) and \(T_{2}\) amplitudes from the corresponding CCSDT and ES-CCSDT calculations, since terms with explicit \(T_{3}\) contributions have not yet been implemented. Results in Fig. 5 show that higher-order excitations play a non-negligible role in defining the effective Hamiltonian for Nitroxyl, changing total energies by up to 2 eV. In the case of the excited state, we notice a positive improvement in total energies, but the same cannot be said for the ground state. However, for the time being, this story is left incomplete because if triple excitations play such a significant role, it stands to reason that explicit \(T_{3}\) (and higher-order) contributions, longer commutator expansions, and higher-order many-body elements of the Hamiltonian could all further change the description of the ground and excited states.

### Overall

Through the four examples, several trends can be observed. First, downfolding procedures with non-Aufbau references can accurately describe excited states in compact active spaces, with substantial improvements over the active-space bare Hamiltonian and even EOM-CCSD for the representative difficult cases studied.
Second, higher-order terms in the commutator expansion play a significant role in improving the accuracy of total energies, as seen with the difference between the DUCC(2) and DUCC(3) approaches. Third, since the downfolding procedure is state-specific, the description of two states is not always balanced. It is important to reiterate that accurate excitation energies in any method may be fortuitous at the expense of total energies. The downfolding procedures in this paper aim to provide accurate total energies and consequently excitation energies. To further improve upon the total energies and provide a balanced description, the role of longer commutator expansions, higher-order excitations, and higher-body terms in the downfolded Hamiltonian needs to be investigated. As seen with methylene, the description of the two states systematically improves as the active space size increases. The active spaces of the remaining molecules represent system sizes well-suited for current and near-term quantum computing architectures. Larger active spaces can be explored as quantum computers grow in capacity. This means that the approximations in the downfolding procedure become better justified, and the description of states becomes more accurate and balanced. One can imagine reliably downfolding 100s or 1000s of orbitals to active spaces that are orders of magnitude smaller, greatly expanding the envelope of problems one can tackle with quantum computers.

Figure 4: Errors relative to CCSDTQ and EOM-CCSDTQ for the ground state (Top Panel), [(\(7^{1}a^{\prime}\))\({}^{2}\)\(\rightarrow\) (\(2^{1}a^{\prime\prime}\))\({}^{2}\)] excited state (Middle Panel), and the corresponding excitation energy (Bottom Panel) of the nitroxyl molecule. The equilibrium geometry was taken from Ref. [79].

## IV Conclusion

In this paper, we explored how ground-state downfolding coupled cluster techniques are efficient tools that can target excited states using non-Aufbau reference determinants. The double unitary coupled cluster ansatz employed produces state-specific Hermitian effective Hamiltonians of reduced dimensionality corresponding to an active space used to partition the excitations in the ansatz. The final reduced dimensionality of each problem, expressed by the downfolded Hamiltonian, represents system sizes amenable to current and near-term quantum computing architectures. We focused on doubly excited states of H\({}_{2}\), Methylene, Formaldehyde, and Nitroxyl, and demonstrated that downfolding techniques result in state-specific effective Hamiltonians that, when diagonalized in their respective active spaces, provide ground- and excited-state total energies (and therefore excitation energies) comparable to high-level CC methods. The downfolding procedures consistently improve the accuracy of total energies compared to the bare Hamiltonian within the same active space size. The downfolding procedures also provide excited state energies that outperform EOM-CCSD, leading to better total and excitation energies. This technique of combining non-Aufbau reference determinants with downfolding procedures allows for accurate and reliable investigation of excited states with a significant reduction in dimensionality. Together with excited-state quantum computing techniques and algorithms, this approach greatly broadens the envelope of excited-state quantum chemistry problems for quantum computers.
## V Acknowledgement

This work was supported by the Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. Department of Energy (under FWP 76213) and the Laboratory Directed Research and Development (LDRD) Program at PNNL. This work used resources from the Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for the U.S. Department of Energy under Contract DE-AC05-76RL01830.
2301.05671
Renormalized electric and magnetic charges for $O(r^n)$ large gauge symmetries
In this work we present the construction of a renormalized symplectic form on an extended phase space where the higher order large gauge transformations act canonically. The expressions of the sub$^n$-leading electric charges associated to each $O(r^n)$ LGT are then obtained, in agreement with the expressions previously proposed in arXiv:1810.04619 by means of the tree-level sub$^n$-leading formulas. We also present the duality extension of the extended phase space, computing the full electromagnetic charge algebra, showing a tower of central extensions.
Javier Peraza
2023-01-13T17:30:31Z
http://arxiv.org/abs/2301.05671v3
# Renormalized electric and magnetic charges for \(O(r^{n})\) large gauge symmetries

###### Abstract

In this work we present the construction of a renormalized symplectic form on an extended phase space where the higher order large gauge transformations (LGT) act canonically. The expressions of the sub\({}^{n}\)-leading electric charges associated to each \(O(r^{n})\) LGT are then obtained, in agreement with the expressions previously proposed in [1] by means of the tree-level sub\({}^{n}\)-leading formulas. We also present the duality extension of the extended phase space, computing the full electromagnetic charge algebra, showing a tower of central extensions.

## 1 Introduction

Over the past years, the understanding of asymptotic symmetries in gravity and gauge theories has been deepened due to several results that relate them to soft theorems in field theory. The seminal works of Strominger and collaborators (e.g., [2; 3; 4; 5; 6; 7; 8; 9]) showed that the well-known soft theorem of Weinberg [10] can be understood as a Ward identity associated to an infinite dimensional symmetry group. The group is constructed via large gauge transformations (LGT) at null infinity, and implies an infinite number of conservation laws in the scattering process from the past to the future asymptotic regions. In the case of Quantum Electrodynamics (QED), it was shown in [11] and [12] that for tree level amplitudes there exists an infinite number of soft theorems, each of them implying a conservation law for the tree level scattering process. Weinberg's soft photon theorem corresponds to the first level in the hierarchy, while Low's sub-leading soft photon theorem [13; 14] corresponds to the second level. The conserved quantities found in [2] for the S-matrix constitute the first level in an infinite hierarchy of soft theorems. A first approach towards higher orders was done by Seraj in [15], where an infinite number of conserved quantities are shown at spatial infinity, proportional to the multipole moments, and generated by specific large gauge transformations of order \(O(r^{n})\). At null infinity, in [1] Campiglia and Laddha showed that for tree level scattering, and restricting the radiative data space to a suitable subset, there exists an infinite tower of conservation laws such that at each level there is an infinite dimensional family of conserved charges \(Q_{\epsilon}^{n}\), labelled by functions on the sphere. The authors also presented evidence that the Ward identities associated to the level \(n\) charges are equivalent to sub-\(n\) soft photon theorems, along with the conservation laws within the classical theory. The non-abelian case is substantially harder, since the charges up to level \(n\) of the hierarchy do not form a closed algebra, unlike in the abelian case. In [16], a first step towards a classical derivation of the charge hierarchy in the non-abelian case is suggested. Some recent developments in celestial holography using Operator Product Expansion (OPE) tools [17; 18; 19] seem to be promising avenues in the study of asymptotic symmetries and the role of CCFT in flat holography for Yang-Mills and gravity. Working in terms of retarded coordinates \((u,r,x^{1},x^{2})\), the massless fields in the asymptotic region are determined by the limit \(t:=r+u\to+\infty\) at constant \(u\), where \(t\) is the usual Minkowski time. This limit moves the Cauchy slices to a well-defined manifold, called future null infinity and denoted by \(\mathcal{I}^{+}\).
1 The topology of \(\mathcal{I}^{+}\) is that of \(\mathbb{R}\times S^{2}\), and its boundaries at \(u=\pm\infty\) are denoted \(\mathcal{I}^{+}_{\pm}\) and are diffeomorphic to \(S^{2}\). Footnote 1: This convergence is point-wise equivalent to the limit \(r\to+\infty\) at \(u=cnt\), but taking \(t\to+\infty\) is more natural since we are defining the charges in terms of Cauchy slices. The \(r\)-expansion of the LGT's in the bulk establishes a hierarchy of charges in the asymptotic region. \(O(1)\) LGT's correspond to leading charges (for instance, by imposing a constant LGT we obtain the total electric charge of the system, [20]), while \(O(r)\) LGT's correspond to sub-leading charges, see [6; 21]. The canonical derivation of conserved quantities at null infinity within the classical theory at leading and sub-leading order raises the following question: can the infinite tower of charges associated to sub\({}^{n}\)-leading soft theorems be canonically derived within the classical theory? One of the main issues that arises when studying \(O(r)\) LGT's is the divergence of the formulas for the charges when calculated from the usual phase space structure, both at null (e.g. [21]) and spatial (e.g. [15]) infinities. In particular, the expressions for the symplectic form evaluated on a LGT at level \(n\) (and therefore the charges) diverge both in the \(t\to+\infty\) and \(u\to-\infty\) limits. In this paper we provide a renormalization procedure that removes both divergences. Following ideas from [22], we show that there exist suitable boundary and corner terms for the symplectic form that renormalize the divergences, while not changing the dynamics of the fields. We define a subset of the radiative space and an extended phase space that contains all LGT's up to arbitrary order. This extended space is provided with a symplectic structure that allows us to calculate the electric-type charges. Finally, allowing the duality symmetry to act and extending the phase space with extra boundary gauge fields (e.g. [23; 24; 25]), the magnetic analogue of the electric hierarchy is also presented, as well as the full electromagnetic charge algebra. The paper is organized as follows. In section 2 we review the asymptotic structure of Maxwell theory at null infinity. For simplicity, the charged matter consists of a massless complex scalar field coupled to the \(U(1)\) gauge field. We also review the structure of the LGT's at arbitrary order. In section 3 we revisit the derivation of asymptotic charges associated to leading and sub-leading soft photon theorems, defining an extended phase space and calculating the leading and sub-leading charges. Our derivation is along the lines of [1], but we place special emphasis on the symplectic structure, which will be used later. Section 4 contains the main result: we can renormalize the symplectic potential in order to have a finite value for every \(O(r^{n})\) LGT. Section 5 contains the derivation of the magnetic charges, and the algebra of electromagnetic charges is presented. Finally, in section 6 we discuss the results and possible future directions.

## 2 Preliminaries

In this section we review previous results on the asymptotic expansion of Maxwell fields at null infinity.

### Radiative phase space

Consider retarded coordinates \((u,r,x^{a})\), in terms of which the Minkowski metric is \[ds^{2}=-du^{2}-2dudr+r^{2}q_{ab}dx^{a}dx^{b}. \tag{1}\] Indices \(a,b,c,...\) indicate sphere coordinates, while Greek indices \(\mu,\nu,\sigma,...\) indicate spacetime coordinates.
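As a quick consistency check of (1), one can pull the flat metric in spherical coordinates back through the change of variables \(t=u+r\) and read off the retarded-coordinate components; the following sympy sketch is our own illustrative verification, not part of the original derivation.

```python
import sympy as sp

u, r, th, ph = sp.symbols('u r theta phi')
# (t, r, theta, phi) written as functions of the retarded coordinates (u, r, theta, phi)
X = sp.Matrix([u + r, r, th, ph])
g = sp.diag(-1, 1, r**2, r**2 * sp.sin(th)**2)   # Minkowski metric in spherical coordinates
J = X.jacobian(sp.Matrix([u, r, th, ph]))
g_ret = sp.simplify(J.T * g * J)
print(g_ret)  # g_uu = -1, g_ur = g_ru = -1, g_rr = 0, angular block r^2 q_ab, matching (1)
```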
The metric \(q_{ab}\) is the standard round metric with constant curvature on the sphere \(S^{2}\), with connection \(D\). The limit \(r+u=:t\rightarrow+\infty\) at fixed \(u\) defines \(\mathcal{I}^{+}\), 'scri plus', a null hypersurface with the topology of \(\mathbb{R}\times S^{2}\). Its boundaries are defined by the limits \(u\rightarrow\pm\infty\), denoted by \(\mathcal{I}^{+}_{\pm}\) respectively, and have the topology of a sphere. We consider a massless charged scalar field \(\phi\) coupled to the Maxwell field \(\mathcal{A}_{\mu}\) in Minkowski spacetime, with Lagrangian \[\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\mathcal{D}_{\mu}\phi\overline{ \mathcal{D}^{\mu}\phi}, \tag{2}\] and satisfying the field equations, \[\nabla^{\nu}F_{\mu\nu} = j_{\mu}, \tag{3}\] \[\mathcal{D}_{\mu}\mathcal{D}^{\mu}\phi = 0, \tag{4}\] where \(j_{\mu}=ie\phi\overline{\mathcal{D}_{\mu}\phi}+c.c.\), with \(\mathcal{D}_{\mu}\phi:=\partial_{\mu}\phi-ie\mathcal{A}_{\mu}\phi\) the gauge covariant derivative and \(\nabla\) the metric covariant derivative. In retarded coordinates, Maxwell equations are \[r^{2}j_{r} = -\partial_{r}(r^{2}F_{ru})+D^{a}F_{ra}, \tag{5}\] \[r^{2}j_{u} = -\partial_{r}(r^{2}F_{ru})+\partial_{u}(r^{2}F_{ru})+D^{a}F_{ua},\] (6) \[j_{a} = \partial_{r}(F_{ua}-F_{ra})+\partial_{u}F_{ra}+\frac{1}{r^{2}}D^{ b}F_{ab}. \tag{7}\] Bianchi identities, \(0=\partial_{[\mu}F_{\nu\sigma]}\), are the integrability conditions for the electromagnetic strength tensor: there exists a one-form \(\mathcal{A}_{\mu}\) such that \(F_{\mu\nu}=\partial_{[\mu}\mathcal{A}_{\nu]}\). We will work in the harmonic gauge, \(\nabla^{\mu}\mathcal{A}_{\mu}=0\)2, which in these coordinates implies Footnote 2: We leave for future work the study of the renormalization procedure in other gauges. In particular, lightcone gauge in the self-dual sector of Yang-Mills theory seems a promising avenue to extend the present results to non-abelian theories, [26]. \[r^{2}\partial_{u}\mathcal{A}_{u}+\partial_{r}(r^{2}\mathcal{A}_{r})+r^{2}D^{a }\mathcal{A}_{a}=0. \tag{8}\] We are interested in the symplectic structure and charges at \(\mathcal{I}\), so we will need to take the \(t\to+\infty\) limit at fixed \(u\). Therefore, we need the \(1/r\)-expansion of the fields. The usual fall-offs for the electromagnetic tensor are (see [20] and [16]): \[F_{ru}=\frac{1}{r^{2}}F_{ru}^{(-2)}+o(r^{-2}),\quad F_{ar}=o(r^{-1}),\quad F_ {au}=O(1),\quad F_{ab}=F_{ab}^{(0)}+o(1) \tag{9}\] where it is understood that all the coefficients in the expansions are functions of \(u\) and \(x^{a}\). The fall-off for the scalar field is, \[\phi=\frac{\phi^{(-1)}}{r}+o(r^{-1}). \tag{10}\] These expressions imply the following fall-offs for the charge current: \[j_{u}=\frac{j_{u}^{(-2)}}{r^{2}}+o(r^{-2}),\quad j_{a}=\frac{j_{a}^{(-2)}}{r^ {2}}+o(r^{-2}),\quad j_{r}=\frac{j_{r}^{(-4)}}{r^{4}}+o(r^{-4}). \tag{11}\] Fall-offs for \(\mathcal{A}_{\mu}\) compatible with the expansion above and the harmonic gauge condition are: \[\mathcal{A}_{a}=\mathcal{A}_{a}^{(0)}+o(1),\quad\mathcal{A}_{u}=\mathcal{A}_{ u}^{(-1,\ln r)}\frac{\ln r}{r}+o(r^{-1}),\quad\mathcal{A}_{r}=o(r^{-1}). \tag{12}\] The previous asymptotic behaviours are consistent with the field equations and the harmonic gauge condition.
Using Maxwell equations, the scalar field equation, Bianchi identities and the harmonic gauge condition, we can solve for all the components of the electromagnetic tensor and the scalar field in terms of \(\mathcal{A}_{a}^{(0)}\) and \(\phi^{(-1)}\) (see appendix A of [16] for the Yang-Mills case). These functions are the free data for the Maxwell field and the scalar field, respectively. To simplify notation, we will denote \(\mathcal{A}_{a}^{(0)}\) by \(A_{a}\) and \(\phi^{(-1)}\) by \(\varphi\). The hypothesis of "tree-level" decays for \(A_{a}\) in the limits \(u\to\pm\infty\), \[\partial_{u}A_{a}(u,x^{1},x^{2})=O(1/|u|^{\infty}), \tag{13}\] that is, a decay faster than that of any power \(1/|u|^{n}\), implies the following fall-offs for the radiative data of a generic solution of Maxwell's equations [27], [1], \[F_{ru}^{(-2)}(u,x^{1},x^{2})=F_{ru}^{(-2,0)}(x^{1},x^{2})+O(1/|u|^{\infty}), \tag{14}\] For the massless field we assume no "soft" charged particles, \[\varphi(u,x^{1},x^{2})=O(1/|u|^{\infty}). \tag{15}\] Our radiative phase space is thus defined in terms of the functions \(A_{a}\) and \(\varphi\), \[\mathcal{F}_{0}=\{(A_{a}(u,x^{a}),\varphi(u,x^{a})):\partial_{u}A_{a}(u,x^{1}, x^{2}),\varphi(u,x^{1},x^{2})=O(1/|u|^{\infty})\}. \tag{16}\]

### \(u\)-expansions for fields

From Maxwell equations and Bianchi identities, we can obtain recursive formulas for the coefficients in both the \(F_{ru}\) and \(\epsilon^{ab}F_{ab}\) expansions in \(r\) and \(u\), where \(\epsilon^{ab}\) is the area form of the sphere. By the Bianchi identity \(\partial_{[a}F_{ru]}=0\), contracting with \(D^{a}\) and using the first two Maxwell equations, we have \[\Delta F_{ru}+\partial_{r}(\partial_{r}(r^{2}F_{ru})-2r^{2}\partial_{u}F_{ru })=r^{2}\partial_{u}j_{r}-\partial_{r}(r^{2}j_{u}) \tag{17}\] where \(\Delta\) denotes the Laplacian operator on the sphere, with metric \(q_{ab}\). We assume that \(F_{ru}\) can be expanded in an \(r\)-series, \(F_{ru}=\frac{1}{r^{2}}\sum_{k=0}^{\infty}\frac{F_{ru}^{(-2-k)}}{r^{k}}\), and substituting in (17), \[2(k+1)\partial_{u}F_{ru}^{(-3-k)}+(\Delta+k(k+1))\,F_{ru}^{(-2-k)}=\partial_ {u}j_{r}^{(-2-k)}+kj_{r}^{(-2-k)}. \tag{18}\] From the assumed fall-off (14) and equation (18), it is clear that the behaviour of \(F_{ru}^{(-2-n)}\) in the limit \(u\rightarrow-\infty\) is \[F_{ru}^{(-2-n)}=\sum_{j=0}^{n}u^{j}F_{ru}^{(-2-n,j)}(x^{a})+r_{n}(u,x^{a}), \tag{19}\] where each of the \(F_{ru}^{(-2-n,j)}(x^{a})\) is a function on the sphere, and \(r_{n}\) is some function with an \(O(1/|u|^{\infty})\) decay (an analogous expansion can be done in the limit \(u\rightarrow+\infty\)). We can solve order by order recursively in terms of the current and these free functions. As a reference, the full expression for \(F_{ru}\) is \[r^{2}F_{ru}=\sum_{k\geq 0}\frac{1}{r^{k}}\sum_{j=0}^{k}u^{j}F_{ru}^{(-2-k,j)}(x^ {a})+... \tag{20}\] The same analysis can be carried out for the function \(\epsilon^{ab}F_{ab}\), obtaining the following equation, \[2\epsilon^{ab}D_{a}j_{b}=2\partial_{u}\partial_{r}\epsilon^{ab}F_{ab}- \partial_{r}\partial_{r}\epsilon^{ab}F_{ab}-\frac{1}{r^{2}}(\Delta\epsilon^{ ab}F_{ab}+4\epsilon^{ab}F_{ab}), \tag{21}\] and by performing the \(r\)- and \(u\)-expansions we obtain the recursive formula for the coefficients of \(\epsilon^{ab}F_{ab}\). In section 5 we will use these results.

### Variation space

We now turn to the action of large gauge transformations (LGT) on the variation space.
The usual formulas for the gauge symmetries, \[\mathcal{A}_{\mu}\mapsto\mathcal{A}_{\mu}+\partial_{\mu}\epsilon,\quad\phi\mapsto e ^{ie\epsilon}\phi \tag{22}\] establish the following action for variations of the fields, \[\delta_{\epsilon}\mathcal{A}_{\mu}=\partial_{\mu}\epsilon,\quad\delta_{ \epsilon}\phi=ie\epsilon\phi. \tag{23}\] The variations allowed in our radiative phase space are those that are tangent to \(\mathcal{F}_{0}\), i.e., that maintain the fall-offs of the fields. By the definition of the finite symmetry, given a gauge symmetry generator \(\epsilon\) we see that \(\partial_{\mu}\epsilon\) must have the same fall-offs as \(\mathcal{A}_{\mu}\): \[\partial_{a}\epsilon=O(1),\quad\partial_{u}\epsilon=o(1),\quad\partial_{r} \epsilon=o(r^{-1}) \tag{24}\] We study the global symmetries as arising from the residual LGT's, which, by the choice of harmonic gauge, are solutions to the wave equation, \[\Box\epsilon=0. \tag{25}\] This equation can be solved up to order \(O(r^{-1})\) (see Appendix A in [21]), \[\epsilon(u,r,x^{1},x^{2})=\epsilon_{0}(x^{1},x^{2})+O(\ln(r)/r). \tag{26}\]

### Higher order LGT

We are interested in relating higher-order-in-\(r\) LGT's to the charges that arise from sub\({}^{n}\)-leading soft photon theorems. The usual mode expansion reasoning in the soft theorem derivation suggests that for a sub\({}^{n}\)-leading soft photon we need to look for a LGT \(\Lambda\) whose \(O(1)\) term in the \(r\)-expansion behaves as \(u^{n}\). This asymptotic behaviour of the gauge generator must be compatible with the harmonic gauge, and therefore implies an \(O(r^{n})\) leading behaviour, as we show below by solving \(\Box\Lambda=0\). Consider the following \(r\)-expansion for an \(O(r^{n})\) large gauge parameter, \[\Lambda(u,x^{a})=r^{n}\epsilon^{(n)}+\sum_{k=0}^{n-1}r^{k}\epsilon^{(k)}+ \frac{\ln r}{r}\epsilon^{(ln)}+O(r^{-1}), \tag{27}\] where \(\epsilon^{(i)}=\epsilon^{(i)}(u,x^{1},x^{2})\). We have \(\Box\Lambda=0\), which in retarded coordinates reads, \[0 = -6r^{n-1}\partial_{u}\epsilon^{(n)}+\sum_{k=-1}^{n-2}r^{k}\left( \Delta\epsilon^{(k+2)}-2(k+2)\partial_{u}\epsilon^{(k+1)}+(k+2)(k+3)\epsilon ^{(k+2)}\right) \tag{28}\] \[+\frac{\ln r}{r^{3}}\Delta\epsilon^{(ln)}+\frac{2}{r^{2}}( \Delta\epsilon^{(0)}-\partial_{u}\epsilon^{(ln)})+\frac{1}{r^{3}}\epsilon^{( ln)}+....\] The first term in (28) implies that \(\epsilon^{(n)}\) is a free function on the sphere. Next, we have a recursive equation between the successive coefficients: \[2(k+1)\partial_{u}\epsilon^{(k)}=\Delta\epsilon^{(k+1)}+(k+1)(k+2)\epsilon^{(k+1)} \tag{29}\] Integrating (29) and fixing the integration constant to zero at each step gives a LGT of order \(O(r^{n})\) generated by \(\epsilon\equiv\epsilon^{(n)}\), which we will call \(\Lambda_{\epsilon}^{n}\). If the integration constants are non-zero, each one of them will be a free \(S^{2}\) function that contributes linearly with a LGT of the corresponding order: \[\Lambda_{\alpha}=\Lambda_{\epsilon_{n}}^{n}+\Lambda_{\epsilon_{(n-1)}}^{n-1}+..., \tag{30}\] where \(\alpha=\{\epsilon_{j}\}_{j}\) is the sequence of integration constants \(\epsilon_{j}\) in equation (29), which are free \(S^{2}\)-functions, each one generating an \(O(r^{j})\) LGT. We will call a LGT "pure" if there is only one free function generating it. When using the notation \(\Lambda_{f}^{m}\), subscripts indicate the generating function or sequence of functions, and superscripts indicate the leading term in the \(r\)-expansion, if the generating function is not a sequence.
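The recursion (29) can be iterated explicitly. The following sympy sketch, an illustration of our own rather than part of the original derivation, solves it for a single spherical-harmonic mode, on which \(\Delta\) acts as \(-l(l+1)\), and makes the \(u\)-growth pattern of the coefficients visible.

```python
import sympy as sp

u, l = sp.symbols('u l')
n = 2                                  # order of the pure LGT (illustrative choice)
lap = -l*(l + 1)                       # Laplacian eigenvalue on a Y_{lm} mode

eps = {n: sp.Integer(1)}               # the free function eps^{(n)}: take its Y_{lm} mode to be 1
for k in range(n - 1, -1, -1):
    # recursion (29): 2(k+1) d_u eps^{(k)} = [Delta + (k+1)(k+2)] eps^{(k+1)}
    rhs = (lap + (k + 1)*(k + 2)) * eps[k + 1]
    eps[k] = sp.integrate(rhs, u) / (2*(k + 1))   # integration constants set to zero ("pure" LGT)

for k in sorted(eps, reverse=True):
    print(k, sp.expand(eps[k]))        # eps^{(k)} is a degree-(n-k) polynomial in u
```

The printed coefficients grow as \(u^{n-k}\), which is precisely the property (31) discussed next.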
Some remarks are in order. First, one implication of equation (29) for a pure \(O(r^{n})\) LGT is the following property: \[\epsilon^{(n-1)}=O(u),\quad...,\quad\epsilon^{(k)}=O(u^{n-k}) \tag{31}\] This shows that the order \(O(r^{n})\) is necessary for a \(u^{n}\) asymptotic behaviour at order \(O(r^{0})\) of the LGT, as was stated at the beginning of the section. Second, the term \(\ln(r)/r\) is needed for the \(O(r^{0})\) equation to be consistent; otherwise we would get \(\Delta\epsilon^{(0)}=0\), and since we are on a sphere, that would force \(\epsilon^{(0)}\) to be a constant function. Third, note the non-trivial fact that equation (29) resembles the form of equation (18), though with crucial differences in the constants multiplying the functions. This similarity between the recursive expressions is useful when showing the equivalence of the Ward identities with the sub\({}^{n}\)-leading soft theorems.

## 3 Leading and Subleading charges

In this section we review the phase space construction and the symplectic charges in the case where the large gauge transformations are \(O(r)\). We leave the renormalization procedure for the next section, focusing exclusively on the first step of the phase space extension and on the recovery of the charges.

### Extended phase space

The usual phase space, (16), contains the physical information regarding the leading order charges, restricted to \(O(r^{0})\) LGT's. Their usual expressions are ([2],[20]): \[Q_{\epsilon_{0}}=\int_{S^{2}}\sqrt{q}\epsilon_{0}\int_{\mathbb{R}}\partial_{u} F_{ru}^{(-2)}du, \tag{32}\] where \(\epsilon_{0}\) is a function on the sphere. As soon as we lift the condition on the LGT order, the fall-offs (12) are not preserved by an \(O(r^{1})\) LGT (through its action (22)), and therefore the variations are no longer tangent to the radiative phase space \(\mathcal{F}_{0}\), but rather acquire a new direction. We expand the phase space in this direction by first defining an extended version of the potential sector in (16). Consider the following space: \[\mathcal{F}_{1}=\mathcal{F}_{0}\times\{\psi(x^{1},x^{2}):\psi\in C^{\infty}(S^{2})\} \tag{20}\] We define the electromagnetic potential as \(\hat{\mathcal{A}}_{\mu}=\mathcal{A}_{\mu}+\partial_{\mu}\Lambda^{1}_{\psi}\), where \(\mathcal{A}_{\mu}\) is the vector potential that has \(A_{a}\) as initial data (from section 2) and \(\Lambda^{1}_{\psi}\) is the pure \(O(r)\) LGT generated by \(\psi\). Observe that this definition is indeed consistent, since \(\partial_{[\mu}\partial_{\nu]}\Lambda^{1}_{\psi}=0\) and thus the extension makes no contribution to the electromagnetic tensor, i.e. \(\hat{F}=F\). 3 Observe also that the harmonic gauge condition is trivially satisfied for the extended electromagnetic potential.
Footnote 3: This feature of the abelian case is in sharp contrast to the non-abelian case, where the linear extension was studied in [16]. Given a general \(O(r)\) LGT, \(\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}\), the variations generated by it on \(\mathcal{F}_{1}\) are split in terms of the \(S^{2}\) free functions \(\epsilon_{1}\) and \(\epsilon_{0}\), corresponding to order \(O(r)\) and order \(O(1)\) in the \(r\)-expansion respectively (see (30)): \[\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}=r\epsilon_{1}+(\epsilon_{0}+u\frac{1} {2}(\Delta+2)\epsilon_{1})+o(r^{0}) \tag{21}\] The action on the phase space \(\mathcal{F}_{1}\) comes from the identity \(\delta_{\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}}\hat{\mathcal{A}}_{\mu}= \partial_{\mu}\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}\), which after the splitting reads: \[\delta_{\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}}A_{a} = \partial_{a}\epsilon_{0}, \tag{22}\] \[\delta_{\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}}\psi = \epsilon_{1}. \tag{23}\] In the massless field sector, allowing an \(O(r)\) LGT also implies a change in the massless field \(\phi\). The equations of motion are invariant under the simultaneous change \[\mathcal{A}_{\mu}\mapsto\hat{\mathcal{A}}=\mathcal{A}_{\mu}+\partial_{\mu} \Lambda^{1}_{\psi},\quad\phi\mapsto\hat{\phi}=e^{ie\Lambda^{1}_{\psi}}\phi \tag{24}\] Since the finite gauge symmetry involves a product \(e^{ie\Lambda^{1}_{\psi}}\phi\), we can define an extended field \(\hat{\phi}=e^{ie\Lambda^{1}_{\psi}}\phi\), where \(\psi\) is the free \(S^{2}\) function now generating a phase for the scalar field, while \(\phi\) is the massless field with the usual fall-off, with \(\varphi\in\mathcal{F}_{0}\) as free data. The covariant gauge derivative is given by \[\hat{\mathcal{D}}_{\mu}:=\partial_{\mu}-ie\hat{\mathcal{A}}_{\mu}, \tag{25}\] from which we see that the new current \(\hat{j}_{\mu}\) maintains its original form, \[\hat{j}_{\mu}=ie\hat{\phi}(\hat{\mathcal{D}}_{\mu}\hat{\phi})^{*}+c.c.=ie\phi( \mathcal{D}_{\mu}\phi)^{*}+c.c., \tag{26}\] The consistency of the \(O(r)\) LGT action on \(\hat{\phi}\) with the splitting of the extended phase space implies \[\delta_{\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}}\varphi=ie\epsilon_{0}\varphi. \tag{27}\] This type of extension of the phase space and dressing of the fields is part of a more general procedure, using "Goldstone modes" on the boundary, that has been introduced both in the context of gauge theories and gravity (see [16; 24; 28; 29] and references therein).

### Calculation of leading and subleading charges

In this subsection, we review the covariant phase space procedure for the calculation of charges associated to a gauge transformation generated by \(\epsilon\). Consider the Lagrangian (2); in our extended phase space we have the usual symplectic potential current, \[\theta^{\mu}(\delta)=\sqrt{g}\left(\hat{F}^{\mu\nu}\delta\hat{\mathcal{A}}_{\nu }+\hat{\mathcal{D}}^{\mu}\hat{\phi}\delta\bar{\hat{\phi}}+c.c.\right), \tag{20}\] and the symplectic current is obtained by taking the exterior derivative in the phase space, \[\omega^{\mu}(\delta,\delta^{\prime})=\delta\theta^{\mu}(\delta^{\prime})- \delta^{\prime}\theta^{\mu}(\delta)-\theta([\delta,\delta^{\prime}]) \tag{21}\] The symplectic form is obtained by integrating the symplectic current over \(\Sigma_{t}\), \(\Omega(\delta,\delta^{\prime})=\int_{\Sigma_{t}}\omega^{\mu}(\delta,\delta^{ \prime})dS_{\mu}\).
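The logic of extracting charges from the symplectic form via \(\delta Q=\Omega(\delta,\delta_{\text{sym}})\), used below, can be illustrated in a finite-dimensional toy model. The following sketch is our own illustrative analogue, not part of the paper's construction: on a single phase-space pair \((q,p)\), the charge generating translations comes out as the momentum.

```python
import sympy as sp

# Toy phase space (q, p) with Omega(d1, d2) = d1(p) d2(q) - d2(p) d1(q).
dq, dp, eps = sp.symbols('delta_q delta_p epsilon')

def Omega(d1, d2):
    return d1[1]*d2[0] - d2[1]*d1[0]

d_arb = (dq, dp)    # an arbitrary variation (delta q, delta p)
d_eps = (eps, 0)    # the symmetry variation: translation q -> q + eps

print(Omega(d_arb, d_eps))   # eps*delta_p = delta(eps*p), so the charge is Q = eps*p
```

The same logic applies verbatim to the field-theoretic symplectic form \(\Omega\) just defined.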
We evaluate it on a variation generated by a general LGT (\(\Lambda_{\epsilon_{1},\epsilon_{0}}\)) and an admissible variation (denoted by \(\delta\)), obtaining an expression for the charge, \[\delta Q_{\Lambda_{\epsilon_{1},\epsilon_{0}}}=\Omega(\delta,\delta_{\Lambda_ {\epsilon_{1},\epsilon_{0}}})=\int_{\Sigma_{t}}\omega^{\mu}(\delta,\delta_{ \Lambda_{\epsilon_{1},\epsilon_{0}}})dS_{\mu} \tag{22}\] where the integrals are taken over a \(t=cnt\) surface. As was shown in [21], one could find the leading and subleading charges (consistent with the Ward identities) by taking the limit \(t=r+u\rightarrow+\infty\) at constant \(u\), \[Q_{\Lambda_{\epsilon_{1},\epsilon_{0}}}=\lim_{t\rightarrow\infty}\int_{ \Sigma_{t}}(\partial_{r}-\partial_{u})(r^{2}\Lambda_{\epsilon_{1},\epsilon_{0} }\hat{F}_{ru})dx^{2}du, \tag{23}\] and considering the finite part in the limit. By counting orders in \(t\), it is straightforward to see that the expression (23) contains divergent terms, and therefore, in general, the charge in the limit is ill-defined. In what follows we drop the hat \(\hat{}\) on \(F_{ru}\), since it is the same field as in the radiative space. As we previously mentioned, the main result of this paper is that we can define a procedure to renormalize the symplectic potential and get rid of the divergent terms in (23), for any arbitrary higher order \(O(r^{n})\). This will be the content of the next section, while in the remainder of this section we motivate the renormalization in the particular case of the extension for \(n=1\). Since the divergences can be traced back to the symplectic potential, through the variation with \(\delta_{\Lambda_{\epsilon_{1},\epsilon_{0}}}\), our starting point is to compute the symplectic potential on the hypersurfaces \(\Sigma_{t}\), \[\theta^{t}(\delta)=\sqrt{q}\left(r^{2}F_{ru}(\delta A_{r}-\delta A_{u})+q^{bc} F_{ub}\delta A_{c}\right)+\sqrt{q}(\partial_{r}-\partial_{u})(r^{2}F_{ru} \delta\Lambda_{\psi}^{1}), \tag{24}\] where we did not write the total derivative \(r^{2}D_{c}(\sqrt{q}q^{bc}F_{ub}\delta\Lambda_{\psi}^{1})\), since it vanishes after integration on \(\Sigma_{t}\). The first term can be regarded as the radiative phase space symplectic potential, \(\theta^{t}_{0}\), while the second term is the new extended term, which we will call \(\theta^{t}_{1}\). The term \(\theta^{t}_{0}(\delta)\) will contribute to the symplectic form (by integrating by parts and using the equations of motion) as usual, \[\omega^{t}_{0}(\delta,\delta^{\prime})=\sqrt{q}q^{bc}\delta F_{ub}\wedge \delta^{\prime}\mathcal{A}_{c}+\sqrt{q}r^{2}\delta F_{ru}\wedge\delta^{\prime }(\mathcal{A}_{r}-\mathcal{A}_{u}), \tag{25}\] The term \(\theta_{1}^{t}(\delta)\) presents the divergence: the action of \(\partial_{u}\) on \(\delta\Lambda^{1}_{\psi}\) leaves an \(O(r)\) term, which in turn implies a factor of \(t\) when changing variables from \((u,r,x^{1},x^{2})\) to \((t,u,x^{1},x^{2})\). In the next section we give a systematic approach to the renormalization of such terms. For now, we assume that we can discard the divergent term and that the expression we obtain also has a finite limit as \(u\to-\infty\). Assuming the above, we find the following expression for the renormalization of \(\theta_{1}^{t}(\delta)\), \[\theta_{1}^{ren,t}(\delta)=\sqrt{q}\left(D^{a}j_{a}^{(0)}-\frac{u}{2}\Delta \partial_{u}F_{ru}^{(-2)}\right)\delta\psi, \tag{3.16}\] where \(ren\) stands for "renormalized".
The symplectic current then splits as, \[\omega^{ren,t}(\delta,\delta^{\prime})=\omega_{0}^{t}(\delta,\delta^{\prime}) +\omega_{1}^{ren,t}(\delta,\delta^{\prime}), \tag{3.17}\] where the last term comes from the exterior derivative of \(\theta_{1}^{ren,t}(\delta)\), and the total symplectic form on \(\mathcal{I}^{+}\) is well-defined (by taking \(t\to+\infty\)), \[\Omega^{ren}(\delta,\delta^{\prime})=\int_{\mathcal{I}}\omega^{ren}(\delta, \delta^{\prime})=\int_{\mathcal{I}}\omega_{0}(\delta,\delta^{\prime})+\int_{S ^{2}}\sqrt{q}(\delta F_{ru}^{(-3,0)}\wedge\delta^{\prime}\psi). \tag{3.18}\] The last term comes from the value of \(F_{ru}^{(-3,0)}\) in (2.19), which can be seen as the value of the following limit (see [1] for details): \[F_{ru}^{(-3,0)}=\lim_{u\to-\infty}F_{ru}^{(-3)}-uF_{ru}^{(-3,1)}=\int_{\mathbb{ R}}\left(D^{a}j_{a}^{(0)}-\frac{u}{2}\Delta\partial_{u}F_{ru}^{(-2)}\right)du, \tag{3.19}\] where the contribution from \(u=+\infty\) to the integral is zero due to the absence of massive charges (\(F_{ru}^{(-2)}(u=+\infty,x^{1},x^{2})=0\)). Since \(\partial_{u}F_{ru}^{(-2)}\) decays faster than any polynomial in \(u\), the above integral is convergent. Observe that \(F_{ru}^{(-3,0)}\) is the canonical conjugate to \(\psi\). Next, we compute the leading and subleading charges. Taking \(\delta^{\prime}\) to be a large gauge transformation, and \(\delta\) any arbitrary admissible variation (compatible with \(\mathcal{F}_{1}\)), we calculate the charge associated to any LGT \(\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}\) by the equation (3.12). Since \(F_{\mu\nu}\) is invariant under \(\delta_{\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}}\) and \(\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}\) is not affected by \(\delta\)5, the calculation is straightforward, Footnote 5: This again is in contrast with the non-abelian case, where the harmonic gauge condition implies field-dependent LGT's. \[Q_{\Lambda_{\{\epsilon_{1},\epsilon_{0}\}}}=\int_{S^{2}}\sqrt{q}\left(\epsilon _{0}F_{ru}^{(-2,0)}+\epsilon_{1}F_{ru}^{(-3,0)}\right)dx^{2}=:Q_{\epsilon_{0}} ^{0}+Q_{\epsilon_{1}}^{1}, \tag{3.20}\] where we also used (2.19) in the radiative space sector, and \(Q_{\epsilon_{i}}^{i}\), with \(i=0,1\), denotes the leading and subleading charges, respectively. The charge \(Q_{\epsilon_{0}}^{0}\) is the usual one for an \(O(1)\) gauge transformation, while the second term is the one obtained in [21] and [1]. In both cases, we obtained "corner" charges, in the sense that they depend on the values of the fields at the boundary of \(\mathcal{I}\), which is itself the boundary of the domain we started with (as in [24; 28]).

## 4 Tower of asymptotic charges

In this section, we derive an infinite hierarchy of charges from a symplectic form in an extended phase space that contains sufficient degrees of freedom to allow for \(O(r^{n})\) LGT's, for arbitrary \(n\). Certain difficulties in the definition of the symplectic potential arise, in particular the appearance of several divergent integrals, as was shown in the previous section. The renormalization procedure we apply is based on [22]. First, we define the extended phase space and show the type of divergences we have, both in the \(t\to+\infty\) and \(u\to-\infty\) limits, inside the expression (3.13). Then, we proceed to prescribe a renormalization of the symplectic potential that will lead to the correct expression for the charges, while the symplectic form remains finite.
### Extended phase space and charges

Let \(\mathcal{S}\) be the space of sequences \(\{\psi_{i}\}_{i>0}\) of functions \(\psi_{i}:S^{2}\to\mathbb{R}\) such that only finitely many are non-zero 6. Given a sequence \(\Psi\in\mathcal{S}\), we define the LGT associated to the sequence as Footnote 6: In what follows we assume that the sequences of functions have this property, unless stated otherwise. \[\Lambda_{\Psi}:=\sum_{i>0}\Lambda^{i}_{\psi_{i}} \tag{4.1}\] where each \(\Lambda^{i}_{\psi_{i}}\) is a pure \(O(r^{i})\) LGT associated to \(\psi_{i}\), in the sense of section 2. Observe that the sum is finite for every \(\Psi\in\mathcal{S}\). We define the extended phase space as the following set, \[\mathcal{F}_{\infty}=\mathcal{F}_{0}\times\mathcal{S}, \tag{4.2}\] with the extended electromagnetic potential and scalar field defined as \[\hat{\mathcal{A}}_{\mu}=\mathcal{A}_{\mu}+\partial_{\mu}\Lambda_{\Psi},\quad \hat{\phi}=e^{ie\Lambda_{\Psi}}\phi, \tag{4.3}\] where \(\mathcal{A}_{\mu}\) and \(\phi\) are the vector potential and the scalar field generated by the free data \(A_{a}\) and \(\varphi\) from the space \(\mathcal{F}_{0}\), respectively. The admissible variations \(\delta\) of this phase space are such that, when acting on the degrees of freedom parametrized by \(\Psi\), they satisfy \(\delta\Psi\in\mathcal{S}\). This property is not restrictive regarding the variations, as we will see below. Given a sequence \(\varepsilon=\{\epsilon_{0},\epsilon_{1},...,\epsilon_{i},...\}\) of free \(S^{2}\) functions, such that \(\{\epsilon_{i}\}_{i>0}\in\mathcal{S}\), consider the LGT associated to it, \(\Lambda_{\varepsilon}=\Lambda^{0}_{\epsilon_{0}}+\sum_{i>0}\Lambda^{i}_{ \epsilon_{i}}\). The variation generated by this LGT acts on \(\mathcal{F}_{\infty}\) by acting on \(\mathcal{A}_{\mu}\) with its \(O(r^{0})\) free function and on \(\Psi\) term by term, \[\delta_{\Lambda_{\varepsilon}}A_{a}=\partial_{a}\epsilon_{0},\quad\delta_{ \Lambda_{\varepsilon}}\varphi=ie\epsilon_{0}\varphi,\quad\delta_{\Lambda_{ \varepsilon}}\Psi=\{\epsilon_{i}\}_{i>0} \tag{4.4}\] This structure is the same as in the previous section, extended to contain any order in the \(r\)-expansion. We can write the full symplectic potential, equation (3.10), and proceed in the same way as in the previous section, obtaining the expression (3.14), but with \(\Lambda_{\Psi}\) in place of \(\Lambda_{\psi}^{1}\), and split the symplectic potential into the radiative phase space contribution and the extended part, given by \[\theta_{\infty}^{t}(\delta)=\sqrt{q}(\partial_{r}-\partial_{u})(r^{2}F_{ru} \delta\Lambda_{\Psi}), \tag{4.5}\] where the \(\infty\) stands for the extension to all orders in \(r\). Given \(\delta\) and \(\Lambda_{\Psi}\), let us calculate the symplectic potential evaluated at \(\delta\). Consider the integral, \[\Theta_{t,\infty}(\delta)=\int_{\Sigma_{t}}\sqrt{q}(\partial_{r}-\partial_{u}) (r^{2}F_{ru}\delta\Lambda_{\Psi})dx^{2}du, \tag{4.6}\] and observe that the term inside the integral is divergent in the limit \(t\rightarrow+\infty\), with the same order as the highest power of \(r\) in \(\delta\Lambda_{\Psi}\). Our aim in this section is to better understand this integral. For brevity, let us write \[\rho_{k}(\delta)=\sum_{i=k}^{+\infty}F_{ru}^{(-2+k-i)}\delta\Lambda_{\Psi}^{( i)}, \tag{4.7}\] where \(\delta\Lambda_{\Psi}^{(i)}\) is the coefficient corresponding to \(r^{i}\) in the \(r\)-expansion of \(\delta\Lambda_{\Psi}\).
\(\rho_{k}(\delta)\) is thus the \(O(r^{k})\) coefficient in the expansion of the term inside the brackets. Upon direct computation, we have \[\Theta_{t,\infty}(\delta)=\int_{\Sigma_{t}}\sqrt{q}\sum_{k=0}^{\infty}\left( kr^{k-1}\rho_{k}(\delta)-r^{k}\partial_{u}\rho_{k}(\delta)\right)dx^{2}du, \tag{4.8}\] which, after substituting \(r=t-u\), gives \[\Theta_{t,\infty}(\delta)=\sum_{j=0}^{\infty}t^{j}\int_{\Sigma_{t}}\theta_{j} ^{t}(\delta)dx^{2}du, \tag{4.9}\] for some \(t\)-independent functions \(\theta_{j}^{t}(\delta)\). This gives us a \(t\)-expansion of the symplectic potential. In the next subsection we show that these divergences can be renormalized by adding total variation and total derivative (corner) terms to the symplectic potential. Assuming such a procedure can be done, we are left with the \(O(t^{0})\) term, which satisfies the identity \[\Theta_{\infty}^{\mathcal{I}}(\delta) := \lim_{t\rightarrow+\infty}\Theta_{t,\infty}(\delta)=\int_{ \mathcal{I}}\sqrt{q}\sum_{k=0}^{\infty}\left(k(-u)^{k-1}\rho_{k}(\delta)-(-u)^ {k}\partial_{u}\rho_{k}(\delta)\right)dx^{2}du \tag{4.10}\] \[= -\int_{\mathcal{I}}\partial_{u}\left(\sqrt{q}\sum_{k=0}^{\infty} (-u)^{k}\rho_{k}(\delta)\right)dx^{2}du,\] which gives us a boundary term. The charges associated to higher order LGT's can be directly computed using the identity \(\delta Q_{\Lambda_{\varepsilon}}=\Omega_{\infty}^{\mathcal{I}}(\delta,\delta _{\Lambda_{\varepsilon}})\), \[Q_{\epsilon}=\int_{\mathcal{I}}\partial_{u}\left(\sum_{k=0}^{\infty}(-u)^{k} \rho_{k}(\delta_{\Lambda_{\varepsilon}})\right)dud^{2}x. \tag{4.11}\] When evaluating the term in the brackets in the last line of (4.10) at \(u=+\infty\), we use the hypothesis that \(F_{ru}=0\) at \(\mathcal{I}^{+}_{+}\). When evaluating at \(\mathcal{I}^{+}_{-}\), we run into divergences. Since the general behaviour of \(\rho_{k}(\delta)\) admitted by (14) and (17) near spatial infinity is polynomial in \(u\) plus an \(O(1/|u|^{\infty})\) remainder, we have that the above expression for \(\Theta_{t}(\delta)\) is not well-defined. By the renormalization procedure of the next subsection, we will be able to regularize the above expression, keeping only the \(O(u^{0})\) part of \(\rho_{0}(\delta)\), \[\Theta^{\mathcal{I}}_{\infty}(\delta)=\int_{S^{2}}\sqrt{q}\sum_{i=1}^{\infty} F^{(-2-i,0)}_{ru}\delta\psi_{i}dx^{2}, \tag{4.12}\] where \(F^{(-2-i,0)}_{ru}\) are the \(O(u^{0})\) coefficients of \(F^{(-2-i)}_{ru}\). The renormalization procedure of the next subsection has to address the two divergences above: the \(t\) divergence from the limit to \(\mathcal{I}\), and the \(u\) divergences in the integrals over \(\mathcal{I}\). We leave for future work the physical interpretation of the boundary and corner terms in the context of covariant phase space quantities. We end this subsection with some remarks regarding previous works. The idea in [21] is to relate the divergent terms to the conserved quantities, therefore obtaining a "projected out" charge equal to the \(t^{0}\) term, while the discarded terms are proportional to lower order charges. While this is the case for the \(O(r)\) subleading charge (the \(O(t^{1})\) part of the charge is proportional to \(Q_{\epsilon_{0}}\)), there is however a remaining divergence at \(O(r^{2})\) that leads to an unresolved tension; in particular, the \(O(t^{1})\) term is not proportional to any lower order charge. This tension is solved once we renormalize the symplectic potential.
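Equations (4.8)-(4.10) amount to the algebraic statement that, after substituting \(r=t-u\), the \(t\)-independent part of the integrand is the total \(u\)-derivative appearing in (4.10). The following sympy check is our own illustration of this identity with a truncated sum (including the \(k=0\) term); the \(\rho_{k}\) are left as unspecified functions of \(u\).

```python
import sympy as sp

t, u = sp.symbols('t u')
N = 3                                   # truncation order of the sum (illustrative)
rho = [sp.Function(f'rho_{k}')(u) for k in range(N + 1)]

# integrand of (4.8): sum_k [ k r^{k-1} rho_k - r^k d_u rho_k ] with r = t - u
integrand = sum(k*(t - u)**(k - 1)*rho[k] - (t - u)**k*sp.diff(rho[k], u)
                for k in range(N + 1))

O_t0 = sp.expand(integrand).coeff(t, 0)                           # t-independent piece
target = -sp.diff(sum((-u)**k*rho[k] for k in range(N + 1)), u)   # last line of (4.10)
print(sp.expand(O_t0 - target))                                   # 0
```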
Regarding the concrete expressions of the charges, the order \(O(u^{0})\) of \(Q_{\Lambda_{\varepsilon}}\) in equation (4.11) is exactly what was presented in [1]. This is equivalent to proving that the \(O(u^{0})\) coefficient of \(Q_{\Lambda_{\varepsilon}}\) is \(\sum_{k=1}^{+\infty}\int_{S^{2}}\epsilon_{k}F^{(-2-k,0)}_{ru}d^{2}x\) (we are considering \(\epsilon_{0}=0\), i.e. only higher order charges). As it stands, (4.11) diverges, because of the powers \(u^{n}\) involved in the integral. If we want to write the charge as a corner integral on the sphere at \(u\to-\infty\), we should inspect the \(O(u^{0})\) term, corresponding to the finite part of the limit. Here, we take the \(u\)-decay of the remainder functions \(r_{i}\) in equation (2.19) to be faster than any polynomial decay. Therefore, inspecting the expressions for \(\Lambda_{\varepsilon}^{(k)}\) and \(F^{(-2+k-i)}_{ru}\), we see that each \(\rho_{k}(\delta_{\Lambda_{\varepsilon}})\) is at least of order \(u^{0}\), and therefore the corresponding term in the sum contributes at least at order \(u^{k}\). The only term with a possible \(u^{0}\) order is thus \(\rho_{0}(\delta_{\Lambda_{\varepsilon}})\), \[\rho_{0}(\delta_{\Lambda_{\varepsilon}})=\sum_{k=1}^{\infty}\Lambda_{ \varepsilon}^{(k)}F^{(-2-k)}_{ru}. \tag{4.13}\] Again, a close inspection of the \(u\)-expansion of the functions shows that the order \(u^{0}\) is given by the sum of the products \(\epsilon_{k}F^{(-2-k,0)}_{ru}\).

### Regularization procedure

In this subsection, following [22], we will renormalize the symplectic potential for QED in the extended phase space, in order to eliminate the divergences. The idea is to write the higher order terms in the \(t\) component of the symplectic potential as boundary plus corner terms, and to subtract them from the original, obtaining a finite expression in the \(t\to\infty\) limit. From the first variation of the Lagrangian (2), we have \[\delta\mathcal{L}=E^{\mu}\delta\hat{\mathcal{A}}_{\mu}+E\delta\hat{\phi}+ \partial_{\mu}\theta^{\mu}(\delta) \tag{4.14}\] where \(E^{\mu}\) and \(E\) are the field equations for \(\hat{\mathcal{A}}_{\mu}\) and the massless scalar, respectively. By taking the retarded coordinates \(u,t,x^{1},x^{2}\) on Minkowski spacetime, we write the previous equation on-shell and obtain an equation for \(\partial_{t}\theta^{t}(\delta)\) \[\partial_{t}\theta^{t}(\delta)=\delta\mathcal{L}-\partial_{u}\theta^{u}( \delta)-D_{a}\theta^{a}(\delta) \tag{4.15}\] We will assume that all the functions have \(t\) and \(u\) expansions around \(t=+\infty\) and \(u=\pm\infty\), as is the case for \(F_{ru}^{(-2)}\), \(A_{a}\) and \(\varphi\) (equations (2.14), (2.12), and (2.10)). Consider the derivation of the divergent part of the symplectic potential done in the previous section, but now applied to our extended phase space: \[\theta^{\mu}(\delta)=\sqrt{q}\,r^{2}\left(\hat{F}^{\mu\nu}\delta\hat{\mathcal{ A}}_{\nu}+\hat{\mathcal{D}}^{\mu}\hat{\phi}\,\delta\bar{\hat{\phi}}+c.c. \right). \tag{4.16}\] Remember that we use \((u,r,x^{1},x^{2})\) coordinates to integrate, and then take the limit \(t\to+\infty\) at fixed \(u\). The general form of the symplectic potential is,7 Footnote 7: In the following equations we write the explicit dependence of the functions on variations and coordinates. \[\theta^{t}(\delta)=Y_{0}(\delta)(u,t,x^{a})+\sum_{i=1}^{\infty}t^{i}Y_{i}( \delta)(u,x^{a}). \tag{4.17}\] where \(Y_{0}(\delta)(u,t,x^{a})\) is such that \(\lim_{t\to+\infty}Y_{0}(\delta)(u,t,x^{a})\) is a well-defined function, \(Y_{0}(\delta)(u,x^{a})\).
We introduce the renormalized symplectic potential as \(\theta^{t}_{ren}:=\theta^{t}-H_{ren}\), where \(H_{ren}\) is such that \[\partial_{t}\theta^{t}(\delta)-\partial_{t}H_{ren}(\delta)=K( \delta)(u,t,x^{a}), \tag{4.18}\] where \(K\) is such that its limit when \(t\to+\infty\) vanishes. In general, \(K\) and \(H_{ren}\) are not uniquely determined by the previous equation. The natural prescription for \(H_{ren}\) to resolve the divergences is the following, \[H_{ren}(\delta)=\sum_{i=1}^{+\infty}t^{i}Y_{i}(\delta)(u,x^{a})+ C(\delta)(u,x^{a}), \tag{4.19}\] where \(C(u,x^{a})\) is a function to be determined. Observe that \(H_{ren}\) has the same order as \(\theta^{t}\) in the \(t\) expansion, and that the divergences in the \(t\) parameter are cancelled, so \(\theta^{t}_{ren}\) converges in the limit \(t\to+\infty\). The coefficients \(Y_{i}\) are obtained from the integration of the terms in the variation of the Lagrangian and the total derivative of the symplectic potential in equation (4.15), on \(\{t=cnt\}\) surfaces, directly from the \(t\) expansion. Therefore, we can prescribe \[Y_{i}(\delta)=\text{Finite part}\left(\lim_{t\to+\infty}\frac{1}{t^{i}}\left( \delta\mathcal{L}-\partial_{u}\theta^{u}(\delta)-D_{a}\theta^{a}(\delta)\right) \right), \tag{4.20}\] for each \(i\). Observe in (4.20) that each \(Y_{i}\) can be written as a total derivative plus a total variation. By taking the free function \(C\) to be a total derivative, \(C=\partial_{u}X^{u}+D_{a}X^{a}\), we can add the last term in (4.20) to obtain a new total derivative term. Then, the renormalized symplectic potential has the form \[\theta^{t}_{ren}(\delta):=\theta^{t}(\delta)+\partial_{\nu}\Upsilon^{t\nu}( \delta)+\delta\Xi^{t}=P(\delta)(u,t,x^{a}) \tag{4.21}\] where \(\Upsilon\) and \(\Xi\) are calculated from \(Y_{i}\), \(X^{u}_{i}\) and \(X^{a}_{i}\) directly, and \(P\) is at most \(O(t^{0})\) in the \(t\)-expansion. This symplectic potential does not contain divergences in the limit \(t\to\infty\). The general form of the renormalized symplectic potential is obtained by changing the upper index \(t\) to a 4d index \(\mu\). We have that \(\Upsilon^{\mu\nu}=-\Upsilon^{\nu\mu}\), by definition of "corner terms" (see [22]). Without any loss of generality, we can define \(\Upsilon^{jl}=0\), for \(j,l\) running in the set \(\{u,x^{a}\}\), since these terms are not uniquely defined and do not affect the renormalization of \(\theta^{t}\). Therefore, we have a well-defined limit \[\theta^{\mathcal{I}}_{ren}(\delta)(u,x^{a}):=\lim_{t\to+\infty} \theta^{t}_{ren}(\delta)(t,u,x^{a})=Y_{0}(\delta)(u,x^{a})-C(\delta)(u,x^{a}) \tag{4.22}\] We still have at our disposal the function \(C(u,x^{1},x^{2})\) (the only condition we have imposed so far is that it is a total derivative), which can be determined by imposing a finite limit when \(u\to-\infty\) for the symplectic potential. As was shown in the previous subsection, under general LGT's the \(O(t^{0})\) part of the symplectic potential has \(O(u^{N})\) terms, and therefore \(\theta^{t}_{ren}\) will in general have an expansion in powers of \(u\), starting at some \(u^{N}\) (corresponding to the highest power in \(\delta\) or \(\Psi\)), with the coefficients of the expansion depending in general on which limit we are computing, \(u\to\pm\infty\).
We consider the following \(u\)-expansion for \(Y_{0}(\delta)\) near \(u=\pm\infty\), \[Y_{0}(\delta)(u,x^{a})\stackrel{{ u\to\pm\infty}}{{=}}R_{Y_{0}}( \delta)(u,x^{a})+\sum_{k=1}^{\infty}u^{k}Y_{0,k}^{\pm}(\delta)(x^{a}), \tag{4.23}\] where \(\partial_{u}R_{Y_{0}}(u,x^{a})=O(1/|u|^{\infty})\). This condition comes from the tree-level assumption on the soft theorems, and implies in particular that the limits when \(u\to\pm\infty\) are in principle different, \[R^{\pm}_{Y_{0}}(\delta)(x^{a}):=\lim_{u\to\pm\infty}R_{Y_{0}}( \delta)(u,x^{a}). \tag{4.24}\] By inserting (4.23) in (4.22), we have \[\theta^{\mathcal{I}}_{ren}(\delta)=R_{Y_{0}}(\delta)(u,x^{a})+ \sum_{k=1}^{\infty}u^{k}Y_{0,k}^{\pm}(\delta)(x^{a})-\partial_{u}X^{u}( \delta)(u,x^{a})-D_{a}X^{a}(\delta)(u,x^{a}), \tag{4.25}\] and immediately we can find functions \(X^{u},X^{a}\) such that their expansions around \(u=\pm\infty\) renormalize the limits of the symplectic potential. For \(X^{u}\) we find, \[X^{u}_{\pm}(\delta)(u,x^{a})=\sum_{k=1}^{\infty}\frac{1}{k+1}u^{k+1}Y^{\pm}_{0,k }(\delta)(x^{a}), \tag{4.26}\] while for \(X^{a}\) we have, \[D_{a}X^{a}(\delta)(u,x^{a})=\left\{\begin{array}{ll}R^{-}_{Y_{0}}(\delta)(x^ {a})+O(1/|u|^{\infty})\ \text{when}\ u\rightarrow-\infty\\ R^{+}_{Y_{0}}(\delta)(x^{a})+O(1/|u|^{\infty})\ \text{when}\ u\rightarrow+\infty \end{array}\right. \tag{4.27}\] Finally, the symplectic potential density gives a finite result upon integration on \(\mathcal{I}\), due to the fall-offs of \(R_{Y_{0}}\).

### Electric-like charge algebra

The previous renormalization procedure removes exactly all the divergences, while maintaining the convergent terms discussed in subsection 4.1. The expression for the renormalized symplectic potential is therefore: \[\Theta_{ren}(\delta)=\int_{\mathcal{I}^{+}}\theta_{0}(\delta)dudx^{2}+\int_{ S^{2}}\sum_{i=1}^{\infty}F^{(-2-i,0)}_{ru}\delta\psi_{i}dx^{2} \tag{4.28}\] where \(\theta_{0}\) is the usual symplectic potential on \(\mathcal{F}_{0}\). The symplectic form is the exterior derivative (in the extended phase space) of the symplectic potential: \[\Omega_{ren}(\delta,\delta^{\prime})=\int_{\mathcal{I}}\omega_{0}(\delta, \delta^{\prime})dudx^{2}+\int_{S^{2}}\sum_{i=1}^{\infty}\delta F^{(-2-i,0)}_{ ru}\wedge\delta^{\prime}\psi_{i}dx^{2} \tag{4.29}\] Now, all three ingredients in the charge calculation are well-defined and finite: the limit \(t\rightarrow+\infty\), the integration on \(\mathcal{I}\), and the series. We are now in position to show the full hierarchy of charges for arbitrary \(O(r^{n})\) LGT's in QED. The electric charges associated to a LGT \(\Lambda_{\varepsilon}\) can be calculated from (4.29), substituting the sequence coordinates \(\{\epsilon_{i}\}\). By the equation \[\delta Q_{\varepsilon}=\Omega_{ren}(\delta,\delta_{\Lambda_{\varepsilon}}), \tag{4.30}\] we have \[Q_{\varepsilon}=\sum_{j=0}^{\infty}\int_{S^{2}}\sqrt{q}\epsilon_{j}F^{(-2-j,0)} _{ru}dx^{2} \tag{4.31}\] where we are using that \(F_{\mu\nu}\) is invariant under \(\delta_{\Lambda_{\varepsilon}}\). This expression is the same as the one obtained in [1]. Observe that the full algebra of charges is abelian: \[\{Q_{\varepsilon_{1}},Q_{\varepsilon_{2}}\}=0,\quad\forall\varepsilon_{1}, \varepsilon_{2} \tag{4.32}\]

## 5 Duality extension of the tower of asymptotic charges

In the previous sections we treated only the electric part of Maxwell theory, renormalizing the symplectic potential in the extended phase space to contain the sub\({}^{n}\)-leading charges in a natural framework.
In this section, we extend the phase space (again) in order to include the magnetic freedom, _a la_ Freidel-Pranzetti, as in [24]. This type of extension has been thoroughly studied in recent years in several contexts: electromagnetic duality (e.g. [23; 30]), BF theories ([25]), and more general structures ([29]). Throughout this section we use form notation, without writing indices explicitly, in order to ease the notation. Also, we consider no extra matter fields. Electromagnetism possesses a _duality symmetry_, which can be characterized as follows: the Lagrangian for the theory is \[\mathcal{L}[F]=\frac{1}{2}*F\wedge F, \tag{10}\] where \(\wedge\) is the wedge product in the space of \(p\)-forms on Minkowski space \(M\) and \(*\) is the Hodge dual operator, \(*:\Omega^{p}(M)\rightarrow\Omega^{4-p}(M)\). This operator satisfies \[**\alpha=(-1)^{p(4-p)+1}\alpha,\quad\alpha\in\Omega^{p}(M), \tag{11}\] where the extra \(+1\) in the exponent comes from the signature of the metric in Minkowski space. Therefore, taking \(p=2\) and applying \(*\) to \(F\) in (10), we have \[\mathcal{L}[*F]=\frac{1}{2}*F\wedge F, \tag{12}\] which shows the duality symmetry. The first step in the inclusion of the duality symmetry is to consider the duality extension in the standard radiative phase space. On each \(\Sigma_{t}\), we have the Freidel-Pranzetti extension of the symplectic form, [24], \[\Omega(\delta,\delta^{\prime})=\int_{\Sigma_{t}}\delta A\wedge\delta^{\prime} \star F+\int_{S^{2}}\delta a_{0}\wedge\delta^{\prime}B_{0} \tag{13}\] where \(\star\) is the Hodge dual on the hypersurface, \(a_{0}\overset{S^{2}}{=}A+d\phi_{0}\) is the electric boundary gauge field, and \(B_{0}\) is the magnetic boundary gauge field. \(\phi_{0}\) is the _edge mode_ that extends the phase space to \((A,a_{0})\), which now contains this boundary field. We see that the symplectic form now contains a corner term living on \(\partial\Sigma_{t}\). To make the connection with the definition of \(A\) used in the previous sections, we have \[A_{new}+d\phi=A_{old}, \tag{14}\] where \(old\) refers to the \(A\) used in the previous sections, and \(new\) to the one in the present section. In particular, the expressions for the curvature tensor and the charges are still valid. Observe that \(\phi\) can be thought of as a zero-order extension, using the same idea as in the previous sections: extending the vector potential with a large gauge symmetry. We distinguish between symmetries that leave fixed the bulk variable \(A\), and symmetries that act only on the boundary. In the previous sections, we made use of this difference when defining the extension to higher order LGT's, where \(\delta_{\Lambda_{\varepsilon}}\) only acts on \(A_{a}\) through the first component. In the present section, as was done in [24], we isolate the bulk from the boundary action on the \(\epsilon_{0}\) variation, in order to have a well-defined canonical action that includes the duality symmetry, and such that the symplectic potential is invariant under the gauge transformation of the fields. We are working on \(\mathcal{I}^{+}\), so in (13) we take \(t\to+\infty\). The "bulk" part is now \(A\) along \(\mathcal{I}\), while the boundary is \(\mathcal{I}^{+}_{-}\), with topology \(S^{2}\). The values at the boundary are not independent, since the boundary symmetries act simultaneously on both \(\mathcal{I}^{+}_{\pm}\) (i.e., they are independent of \(u\)).
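Before proceeding, the signature identity (11) used above can be checked directly in components. The following numpy sketch is our own illustrative verification, under the convention \(\epsilon_{0123}=+1\) and \(\eta=\mathrm{diag}(-1,1,1,1)\), and confirms \(**F=-F\) for a 2-form.

```python
import numpy as np
from itertools import permutations

def parity(p):
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric
eps = np.zeros((4, 4, 4, 4))             # Levi-Civita symbol with eps_{0123} = +1
for p in permutations(range(4)):
    eps[p] = parity(p)

def hodge(F):
    # (*F)_{mn} = (1/2) eps_{mnab} g^{ar} g^{bs} F_{rs}
    return 0.5 * np.einsum('mnab,ar,bs,rs->mn', eps, eta, eta, F)

rng = np.random.default_rng(1)
F = rng.normal(size=(4, 4))
F = F - F.T                              # a random 2-form

print(np.allclose(hodge(hodge(F)), -F))  # True: ** = (-1)^{p(4-p)+1} = -1 for p = 2
```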
Under a gauge transformation generated by \(G\), both the bulk and the corner fields transform, \[\delta_{G}(A,a_{0},B_{0})=(dG,-dG,0), \tag{5.6}\] so the variation \(\delta_{G}\) is indeed gauge, in the sense that it has a vanishing charge, \(\Omega(\delta,\delta_{G})=0\), on shell. The electric (magnetic) symmetry \(\delta_{\epsilon_{0}}(\delta_{\lambda_{0}})\) acts only on the electric (magnetic) boundary field, \[\delta_{\epsilon_{0}}(A,a_{0},B_{0})=(0,d\epsilon_{0},0),\quad\delta_{\lambda_{0}}(A,a_{0},B_{0})=(0,0,d\lambda_{0}), \tag{5.7}\] where \(d\lambda_{0}\) is locally but not globally exact (such as in the standard example of a charge on the \(z\)-axis; see section V in [24]). Observe that on shell, upon acting with \(G\), we obtain the identity \[dB_{0}=\star F, \tag{5.8}\] which on \(\mathcal{I}^{+}\) gives us \(dB_{0}=F_{ru}^{(-2,0)}\). Our extended phase space of section 4.1 adapts well to the duality extension construction given above. The gauge transformation \(\Lambda_{\alpha}\) is the "bulk" potential, generated by the boundary fields in the sequence \(\alpha\), in a hierarchy graded by the corresponding power of \(r\). Therefore, we can extend directly as \[\Omega_{ren}(\delta,\delta^{\prime})=\int_{\mathcal{I}}\left[\delta A\wedge\delta^{\prime}\star F\right]_{ren}+\int_{S^{2}}\delta a_{0}\wedge\delta^{\prime}B_{0}+\int_{S^{2}}\sum_{k=1}^{\infty}\delta a_{k}\delta^{\prime}dB_{k}, \tag{5.9}\] where \(a_{k}\) are functions on the sphere, \(B_{k}\) are 1-forms on the sphere that are locally (but not necessarily globally) exact, \(a_{0}\) is a \(1\)-form\({}^{8}\), and \(ren\) indicates the renormalized term, given by (4.29). We define the action of a gauge transformation \(G\) (of arbitrary order \(r^{n}\)) as Footnote 8: \(a_{0}\) is not generally a gradient. \[\delta_{G}A=dG,\quad\delta_{G}a_{0}=dG_{0},\quad\delta_{G}a_{j}=G_{j},\quad\delta_{G}B_{j}=0,\quad j\geq 1. \tag{5.10}\] Evaluating the symplectic form on \(\delta_{G}\), \[\Omega_{ren}(\delta,\delta_{G})=-\delta\left(\int_{\mathcal{I}}\left[dG\wedge\star F\right]_{ren}+\int_{S^{2}}dG_{0}\wedge B_{0}+\int_{S^{2}}\sum_{k=0}^{\infty}G_{k}\,dB_{k}\right), \tag{5.11}\] on shell and after integrating by parts we obtain (after the renormalization, allowing variations \(\delta\) such that \(\delta A\) has order higher than \(r^{0}\) before taking the limit \(t\to+\infty\)) \[dB_{k}=F_{ru}^{(-2-k,0)},\quad k\geq 0. \tag{5.12}\] This equality fixes the values of the magnetic boundary gauge fields in terms of the field strength functions. Finally, we will denote the magnetic variations acting on the \(B_{k}\)'s as \(\lambda=\{\lambda_{i}\}_{i\geq 0}\), in the same fashion as we defined the LGT generators. Electric (magnetic) variations act as follows on the extended phase space variables, \[\delta_{\epsilon_{k}}A=0,\quad\delta_{\epsilon_{k}}a_{0}=\delta_{0k}d\epsilon_{k},\quad\delta_{\epsilon_{k}}a_{j}=\delta_{kj}\epsilon_{k},\quad\delta_{\epsilon_{k}}B_{j}=0,\quad k\geq 0,\,j\geq 1 \tag{5.13}\] \[\delta_{\lambda_{k}}A=0,\quad\delta_{\lambda_{k}}a_{j}=0,\quad\delta_{\lambda_{k}}B_{j}=\delta_{kj}d\lambda_{k},\quad k,j\geq 0, \tag{5.14}\] where \(d\lambda_{k}\) is locally but not globally defined, and \(\delta_{ij}\) is the Kronecker delta.
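The failure of \(d\lambda_{k}\) to be globally exact is what makes the magnetic sector nontrivial, and it can be made concrete with a minimal illustration (our example, in the spirit of the charge-on-the-\(z\)-axis configuration of [24]): assume \(d\lambda_{k}=g\,d\varphi\) away from the poles of \(S^{2}\), with \(\varphi\) the azimuthal angle and \(g\) a constant. Then \(d^{2}\lambda_{k}\) is supported at the poles \(N,S\), and for any smooth function \(\epsilon\) on the sphere \[\int_{S^{2}}\epsilon\,d^{2}\lambda_{k}=-\int_{S^{2}}d\epsilon\wedge g\,d\varphi=-g\int_{0}^{2\pi}d\varphi\int_{0}^{\pi}\partial_{\theta}\epsilon\,d\theta=2\pi g\left[\epsilon(N)-\epsilon(S)\right],\] which is generically nonzero. This is the mechanism behind the nonvanishing mixed brackets computed in the next subsection.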
### Charges and dual charges and their algebra

By computing \(\Omega_{ren}(\delta,\delta_{\Lambda_{\varepsilon}})\) and \(\Omega_{ren}(\delta,\delta_{\Lambda_{\lambda}})\), we obtain the electric (denoted as \(Q\)) and magnetic (denoted as \(\tilde{Q}\)) charges, \[Q_{\varepsilon} =\sum_{k=0}^{\infty}\int_{S^{2}}\epsilon_{k}dB_{k} \tag{5.15}\] \[\tilde{Q}_{\lambda} =\sum_{k=0}^{\infty}\int_{S^{2}}a_{k}d^{2}\lambda_{k}, \tag{5.16}\] where the first integral gives directly (4.31), thanks to (5.12), and the last integral does not vanish, due to the failure of \(d\lambda\) to be globally exact. Finally, we can compute the charge algebra. Like the electric charges, the magnetic charges \(\tilde{Q}_{\lambda}\) are abelian, \[\{\tilde{Q}_{\lambda},\tilde{Q}_{\lambda^{\prime}}\}=\delta_{\lambda}\sum_{k=0}^{\infty}\int_{S^{2}}a_{k}d^{2}\lambda^{\prime}_{k}=0. \tag{5.17}\] A non-trivial component of the algebra is given by the mixed Poisson bracket, \[\{Q_{\varepsilon},\tilde{Q}_{\lambda}\}=\delta_{\varepsilon}\sum_{k=0}^{\infty}\int_{S^{2}}a_{k}d^{2}\lambda_{k}=\sum_{k=0}^{\infty}\int_{S^{2}}\epsilon_{k}d^{2}\lambda_{k}=:\sum_{k=0}^{\infty}c_{k}. \tag{5.18}\] This term shows that the boundary duality symmetry algebra possesses a hierarchy of central charges, \(\{c_{k}\}_{k\geq 0}\). We leave a detailed analysis of this fact, in the context of soft theorems and Ward identities, to future work.

## 6 Outlook

In this work we obtain a well-defined symplectic form on \(\mathcal{I}^{+}\) for the extended phase space of classical QED through a renormalization procedure starting from the original symplectic form, giving a derivation from first principles. With this symplectic form, the higher order LGT can be associated with the \(\mathrm{sub}^{n}\)-leading electric charges acting canonically on the phase space. The expressions of the charges associated with the \(O(r^{n})\) LGT are then obtained, in agreement with the expressions previously proposed in [1] by means of the tree-level sub\({}^{n}\)-leading formulas. Using the duality symmetry extension, we compute the full electromagnetic charge algebra, showing a hierarchy of central extensions. Several future directions are possible within the framework of our work. First, within the abelian theory, it would be interesting to extend the analysis to include loop corrections to the soft photon factorization formulas ([31; 32]). This could lead to new structure within the charge hierarchy, and between electric and magnetic charges. One of the main difficulties in this line of work is the appearance of infrared divergences. New advances in celestial CFT methods (see [17; 19; 33; 34; 35], and references therein) seem to be well suited for the incorporation of these effects. It would also be interesting, given the recent developments in the study of electromagnetic asymptotic charges at spatial infinity, such as consistently accommodating \(\ln(r)\) terms ([36]) and the study in higher dimensions ([37]), to establish a connection between the symplectic structure at null infinity and that at spatial infinity. Second, the extension to non-abelian gauge theories. In particular, in Yang-Mills theory, extending the renormalization procedure would allow us to construct a well-defined symplectic structure on an extended phase space, and to compute the subleading charges and their algebra. As was shown in [16], a first step in this direction is to consider a linearized extension of the phase space and to restrict the charges up to \(O(r)\) terms. Some progress is being made in this direction [26].
Finally, it would be interesting to study possible extensions of this renormalization procedure in the context of gravity. The study of higher order diffeomorphisms seems to be a key ingredient in the extension of phase spaces for gravity, as recent works suggest. In [38] higher order multipole moments generated via specific diffeomorphisms were studied and shown to be Noether charges. In the null infinity sector it has been proposed in [39; 40], and worked out more recently in [41; 42], that asymptotic diffeomorphisms generated by certain \(O(r)\) sphere-vector fields are behind the sub-subleading soft graviton factorization [5]. An idea similar to the one presented here could be used to identify an extended space supporting these singular transformations.

## Acknowledgements

I would like to thank Miguel Campiglia for his comments, discussions and feedback on the process and the final manuscript, and also the participants of the Workshop on Celestial Symmetries, held in Montevideo in March 2022, with whom the contents of this paper were discussed, in particular Laurent Freidel, Marc Geiller, Alok Laddha, Silvia Nagy, Daniele Pranzetti and Celine Zwickel. I would also like to thank Ali Seraj and Oscar Fuentealba for their comments on the first version of the paper. This work was partly supported by a CAP Ph.D. fellowship, by the ANII project FCE 2019-155865, and by PEDECIBA.

## Appendix A Recursive formula for \(\epsilon^{ab}F_{ab}\)

In this appendix we prove the recursive formula (A.12) for \(\epsilon^{ab}F_{ab}\). By writing the Bianchi identities \[\partial_{r}F_{ab}+\partial_{a}F_{br}+\partial_{b}F_{ra} = 0, \tag{A.1}\] \[\partial_{u}F_{ab}+\partial_{a}F_{bu}+\partial_{b}F_{ua} = 0, \tag{A.2}\] \[D_{c}F_{ab}+D_{a}F_{bc}+D_{b}F_{ca} = 0, \tag{A.3}\] and taking the \(u\) and \(r\) derivatives of the first equation, the \(r\) derivative of the second one, and the \(D_{d}\) derivative of the third one, and contracting with \(\epsilon^{ab}\), we obtain \[\partial_{u}\partial_{r}\epsilon^{ab}F_{ab} = 2\partial_{u}\epsilon^{ab}D_{a}F_{rb} \tag{A.4}\] \[\partial_{r}\partial_{r}\epsilon^{ab}F_{ab} = 2\partial_{r}\epsilon^{ab}D_{a}F_{rb} \tag{A.5}\] \[\partial_{u}\partial_{r}\epsilon^{ab}F_{ab} = 2\partial_{r}\epsilon^{ab}D_{a}F_{ub} \tag{A.6}\] \[D_{d}\epsilon^{ab}D_{c}F_{ab} = -2\epsilon^{ab}D_{d}D_{a}F_{bc}. \tag{A.7}\] In equations (A.4), (A.5) and (A.6) we substituted \(\partial_{i}\) by \(D_{i}\) for every \(i=a,b\), since we are contracting with \(\epsilon^{ab}\). Using the identities \[D_{a}D_{b}F_{cd}=D_{b}D_{a}F_{cd}-q^{ef}R_{ecab}F_{fd}-q^{ef}R_{edab}F_{cf},\quad R_{abcd}=\frac{R}{2}(q_{ac}q_{bd}-q_{ad}q_{bc}), \tag{A.8}\] we have \[D^{d}D_{a}F_{bd}=D_{a}D^{d}F_{bd}+RF_{ab}. \tag{A.9}\] By contracting (A.7) with \(q^{cd}\) and using the previous equation, \[\Delta\epsilon^{ab}F_{ab}=-2\epsilon^{ab}D_{a}D^{d}F_{bd}-2R\epsilon^{ab}F_{ab}. \tag{A.10}\] Next, consider Maxwell equation (7), take the \(D_{d}\) derivative and contract with \(\epsilon^{da}\), \[\epsilon^{da}D_{d}j_{a}=-\partial_{r}\epsilon^{da}D_{d}(F_{ua}-F_{ra})+\partial_{u}\epsilon^{da}D_{d}F_{ra}+\frac{1}{r^{2}}\epsilon^{da}D_{d}D^{b}F_{ab}. \tag{A.11}\] Substituting the previous equations, we arrive at \[2\epsilon^{ab}D_{a}j_{b}=2\partial_{u}\partial_{r}\epsilon^{ab}F_{ab}-\partial_{r}\partial_{r}\epsilon^{ab}F_{ab}-\frac{1}{r^{2}}(\Delta\epsilon^{ab}F_{ab}+2R\epsilon^{ab}F_{ab}), \tag{A.12}\] where \(R=2\) is the scalar curvature of \(q_{ab}\).
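To complement the appendix, the following is a small symbolic sketch (our addition, not part of the original derivation) verifying the two curvature facts used in (A.8): the round unit-sphere metric \(q_{ab}=\mathrm{diag}(1,\sin^{2}\theta)\) has scalar curvature \(R=2\), and its Riemann tensor takes the maximally symmetric form \(R_{abcd}=\frac{R}{2}(q_{ac}q_{bd}-q_{ad}q_{bc})\).

```python
import sympy as sp

# Unit round sphere: coordinates (theta, phi), metric q_ab = diag(1, sin^2 theta)
th, ph = sp.symbols('theta phi')
x = [th, ph]
q = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
qinv = q.inv()

# Christoffel symbols Gamma^a_{bc} of the metric q
def Gamma(a, b, c):
    return sp.simplify(sum(qinv[a, d]*(sp.diff(q[d, b], x[c])
                                       + sp.diff(q[d, c], x[b])
                                       - sp.diff(q[b, c], x[d]))/2
                           for d in range(2)))

# Riemann tensor R^a_{bcd}
def Riem(a, b, c, d):
    expr = sp.diff(Gamma(a, b, d), x[c]) - sp.diff(Gamma(a, b, c), x[d])
    expr += sum(Gamma(a, c, e)*Gamma(e, b, d) - Gamma(a, d, e)*Gamma(e, b, c)
                for e in range(2))
    return sp.simplify(expr)

# Ricci tensor R_ab = R^c_{acb} and Ricci scalar
Ricci = [[sp.simplify(sum(Riem(c, a, c, b) for c in range(2)))
          for b in range(2)] for a in range(2)]
Rscalar = sp.simplify(sum(qinv[a, b]*Ricci[a][b]
                          for a in range(2) for b in range(2)))
print(Rscalar)  # prints 2, i.e. R = 2 as stated below (A.12)

# Check R_{abcd} = (R/2)(q_ac q_bd - q_ad q_bc), the form used in (A.8)
for a in range(2):
    for b in range(2):
        for c in range(2):
            for d in range(2):
                lhs = sum(q[a, e]*Riem(e, b, c, d) for e in range(2))
                rhs = Rscalar/2*(q[a, c]*q[b, d] - q[a, d]*q[b, c])
                assert sp.simplify(lhs - rhs) == 0
```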
2307.01799
Multiparameter universality and intrinsic diversity of critical phenomena in weakly anisotropic systems
Recently a unified hypothesis of multiparameter universality for the critical behavior of bulk and confined anisotropic systems has been formulated [V. Dohm, Phys. Rev. E {\bf 97}, 062128 (2018)]. We prove the validity of this hypothesis on the basis of the principle of two-scale-factor universality for isotropic systems. We introduce an angular-dependent correlation vector and a generalized shear transformation that transforms weakly anisotropic systems to isotropic systems. As examples we consider the $O(n)$-symmetric $\varphi^4$, Gaussian, and $n$-vector model. We determine the structure of the bulk order-parameter correlation function, of the singular bulk part of the critical free energy, and of critical bulk amplitude relations of anisotropic systems. It is shown that weakly anisotropic systems exhibit a high degree of intrinsic diversity due to $d(d+1)/2-1$ independent parameters. Exact results are derived for the $d=2$ Ising universality class and for the spherical and Gaussian universality classes. For the $d=3$ Ising universality class we identify the universal scaling function of the isotropic bulk correlation function from the nonuniversal result of the functional renormalization group. A proof is presented for the validity of multiparameter universality of the exact critical Casimir amplitude in a rectangular geometry of weakly anisotropic systems with periodic boundary conditions in the Ising universality class. This confirms the validity of recent predictions of self-similar structures of finite-size effects at $T=T_c$ derived from conformal field theory. This also substantiates the previous notion of an effective shear transformation for anisotropic two-dimensional Ising models. Our theory paves the way for a quantitative theory of nonuniversal critical Casimir forces in anisotropic superconductors.
Volker Dohm
2023-07-04T16:08:20Z
http://arxiv.org/abs/2307.01799v1
# Multiparameter universality and intrinsic diversity of critical phenomena in weakly anisotropic systems

###### Abstract

Recently a unified hypothesis of multiparameter universality for the critical behavior of bulk and confined anisotropic systems has been formulated [V. Dohm, Phys. Rev. E **97**, 062128 (2018)]. We prove the validity of this hypothesis in \(d\geq 2\) dimensions on the basis of the principle of two-scale-factor universality for isotropic systems. We introduce an angular-dependent correlation vector and a generalized shear transformation that transforms weakly anisotropic systems to isotropic systems. As examples we consider the \(O(n)\)-symmetric \(\varphi^{4}\) model, Gaussian model, and \(n\)-vector model. By means of the inverse of the shear transformation we determine the general structure of the bulk order-parameter correlation function, of the singular bulk part of the critical free energy, and of critical bulk amplitude relations of anisotropic systems at and away from \(T_{c}\). It is shown that weakly anisotropic systems exhibit a high degree of intrinsic diversity due to \(d(d+1)/2-1\) independent parameters that cannot be determined by thermodynamic measurements. Exact results are derived for the \(d=2\) Ising universality class and for the spherical and Gaussian universality classes in \(d\geq 2\) dimensions. For the \(d=3\) Ising universality class we identify the universal scaling function of the isotropic bulk correlation function from the nonuniversal result of the functional renormalization group. A proof is presented for the validity of multiparameter universality of the exact critical free energy and critical Casimir amplitude in a finite rectangular geometry of weakly anisotropic systems with periodic boundary conditions in the Ising universality class. This confirms the validity of recent predictions of self-similar structures of finite-size effects in the (\(d=2,n=1\)) universality class at \(T=T_{c}\) derived from conformal field theory [V. Dohm and S. Wessel, Phys. Rev. Lett. **126**, 060601 (2021)]. This also substantiates the previous notion of an effective shear transformation for anisotropic two-dimensional Ising models. Our theory paves the way for a quantitative theory of nonuniversal critical Casimir forces in anisotropic superconductors for which experiments have been proposed by G.A. Williams, Phys. Rev. Lett. **92**, 197003 (2004).

## I Introduction

Spatial anisotropy is a fundamental property that is omnipresent in condensed matter physics where it is the origin of a wide variety of nonuniversal effects. Substantial evidence has emerged over the last two decades [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] that part of this nonuniversal diversity persists near ordinary critical points and in the near-critical Goldstone regime of so-called weakly anisotropic systems. This includes magnetic materials, superconductors, alloys, solids with structural phase transitions, liquid crystals, compressible solids, and ultracold gases in anisotropic optical lattices. Recently an unexpected complex form of self-similarity of anisotropy effects in finite weakly anisotropic systems with periodic boundary conditions has been found [15] near the instability where weak anisotropy breaks down. Ordinary bulk critical phenomena can be divided into universality classes characterized by the dimension \(d\) and the symmetry properties of the ordered state [17; 18].
As an example we consider \(O(n)\)-symmetric systems with short-range interactions and with an \(n\)-component order parameter. Within each \((d,n)\) universality class, all systems have the same universal quantities (critical exponents, amplitude ratios, and scaling functions); this includes isotropic and weakly anisotropic systems, since spatial anisotropy is only a marginal perturbation in the renormalization-group sense [17; 19; 20; 21; 22; 23]. The principle of two-scale-factor universality (or hyperuniversality) [17; 20; 22; 24; 25; 26; 27; 28] predicts that, once the universal quantities of a universality class are known, the asymptotic critical behavior of any particular system of this universality class is known completely provided that only two nonuniversal amplitudes are given. This principle was stated to be valid for all systems in a universality class [17; 29; 30]. Furthermore it was asserted [31; 32; 33; 34; 35; 36; 37] that asymptotic isotropy can be restored in weakly anisotropic systems by a suitable anisotropic scale transformation and that universality can be restored [33], reintroduced [36], or repaired [17] in some cases. However, the question was left unanswered how the various nonuniversal effects due to non-cubic anisotropy in bulk [19; 21; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52] and confined [32; 33; 34; 46; 47; 48; 49; 50; 51; 52] systems could be reconciled with the principle of two-scale-factor universality. So far the issue of weak anisotropy has not been addressed in the approach based on the nonperturbative functional renormalization group (FRG) [53; 54; 55; 56; 57; 58; 59] where an extended universality was claimed to be valid for the isotropic bulk order-parameter correlation function in the nonasymptotic critical region with only two nonuniversal parameters [54; 55]. A systematic study of the effect of non-cubic spatial anisotropy on the critical behavior in bulk and confined systems was begun [1] by introducing a nondiagonal anisotropy matrix \(\mathbf{A}\) into the \(O(n)\)-symmetric \(d\)-dimensional \(\varphi^{4}\) field theory. An anisotropy-induced nonuniversality was discovered for the bulk correlation function as well as for the critical Casimir amplitude [60; 61; 62; 63; 64] and the critical Binder cumulant ratio [17] which were the hallmarks of finite-size universality. The analytic results [1] were based on the exact large-\(n\) limit and on the \(\varphi^{4}\) theory in the minimal subtraction scheme in three dimensions [65; 66]. These results demonstrated that restoring isotropy relates the critical behavior of anisotropic systems to that of isotropic systems but does not restore two-scale-factor universality. The violation of two-scale-factor universality in weakly anisotropic bulk and confined systems was subsequently confirmed and substantiated [3; 4; 5; 6; 8; 9; 10; 12; 13; 14], most recently by exact analytic results for the critical free energy and Casimir amplitude in two dimensions [15] based on conformal field theory [67], as well as by Monte Carlo simulations [16].
The main analytical and conceptual progress was achieved within the anisotropic \(\varphi^{4}\) theory in three [13] and two [14; 15] dimensions. A central role is played by a shear transformation between anisotropic and isotropic systems which is formulated in terms of eigenvectors and eigenvalues of the anisotropy matrix \(\mathbf{A}\). It was shown that weakly anisotropic \(\varphi^{4}\) models do not have a unique single bulk correlation length but rather an angular-dependent correlation length [14] with \(d\) independent nonuniversal amplitudes in the \(d\) principal directions, which are determined by \(d(d-1)/2\) nonuniversal angles. As a consequence, two-scale-factor universality is absent in the anisotropic \(\varphi^{4}\) theory and is replaced by multiparameter universality which allows for up to \(d(d+1)/2+1\) independent nonuniversal parameters. This implies a revised notion of a universality class: it must be divided into subclasses [6; 7] of isotropic and weakly anisotropic systems with different scaling forms for the bulk correlation function and for the free energy of confined systems within the same universality class. These anisotropic scaling forms are governed by a reduced anisotropy matrix \(\mathbf{\tilde{A}}\) which has a universal structure in terms of principal correlation lengths and principal directions. It was hypothesized [13] that this revised picture of a universality class holds not only for the subclass of anisotropic \(\varphi^{4}\) models but is valid quite generally for all other weakly anisotropic systems beyond the \(\varphi^{4}\) theory. So far no general proof has been given for this hypothesis. It is the aim of this paper to present a general proof for the validity of multiparameter universality of the bulk critical behavior of weakly anisotropic \(d\)-dimensional systems at and away from \(T_{c}\) for \(d\geq 2\). Our strategy is to perform a generalized shear transformation from the anisotropic to an isotropic system, then to invoke two-scale-factor universality for the isotropic system and to derive the anisotropic structures by inverting the shear transformation. It is shown that among the \(d(d+1)/2+1\) nonuniversal parameters of weakly anisotropic systems there are only two parameters that can be determined by thermodynamic measurements whereas the \(d(d+1)/2-1\) independent parameters contained in \(\mathbf{\tilde{A}}\) cause a high degree of intrinsic diversity arising from the nonuniversal angular dependence of the critical correlations. In most cases their principal directions depend in a generically unknown way on the anisotropic interactions [14; 46]. This implies that in most cases, e.g., in anisotropic \(n\)-vector models and for real systems, it is unknown _a priori_ how to perform the anisotropic scale transformation invoked in the literature [31; 32; 33; 34; 35; 36; 37], with the exception of anisotropic \(\varphi^{4}\) models with known eigenvectors of the anisotropy matrix \(\mathbf{A}\). Special attention is paid to the bulk order-parameter correlation function whose universal scaling function \(\Psi_{\pm}\) in the asymptotic critical region is the same for both isotropic and weakly anisotropic systems in the same universality class. This function is known exactly for the \(d=2\) Ising universality class [38; 14]. For the \(d=3\) Ising universality class we determine the universal Fourier transform \(\hat{\Psi}_{\pm}\) from an approximate nonasymptotic scaling form that has been derived within the FRG approach [54; 55].
We refute the claim [54; 55] that an extended universality is valid in the nonasymptotic region. We also reanalyze the finite-size effects on the critical free energy and Casimir amplitude at \(T_{c}\) of anisotropic two-dimensional systems on a rectangle with periodic boundary conditions [15; 16]. We prove the validity of multiparameter universality for these quantities of the \(d=2\) Ising universality class including the feature of self-similarity on the basis of conformal field theory [67] without recourse to the anisotropic \(\varphi^{4}\) theory. The outline of this paper is as follows. In Sec. II we give basic definitions within the anisotropic \(\varphi^{4}\) and \(n\)-vector models. In Sec. III we recall important facts with regard to two-scale-factor universality of isotropic systems. Secs. IV and V are the central sections of this paper. In Sec. IV we introduce the notion of principal correlation vectors and a generalized shear transformation that depends on a free parameter and that is applicable to all weakly anisotropic systems, in contrast to the special shear transformation for the \(\varphi^{4}\) model introduced earlier [1]. In particular we demonstrate the universality of the structure of the reduced \(d\times d\) anisotropy matrix \(\mathbf{\tilde{A}}\) as a function of the nonuniversal angles of the principal axes and of the nonuniversal ratios of the principal correlation lengths. In Sec. V we prove that multiparameter universality is generally valid for the bulk order-parameter correlation function in the scaling limit of weakly anisotropic systems in \(d\geq 2\) dimensions. As applications we consider in Sec. V the anisotropic correlation functions of the Gaussian universality class for \(d\geq 2\) and of the spherical universality class for \(d>2\). The latter universality class includes the exactly solvable large-\(n\) limit of the anisotropic \(\varphi^{4}\) and \(n\)-vector models as well as the exactly solvable spherical and mean spherical models [68; 69; 70; 71; 72]. We also reanalyze the nonuniversal scaling form of the isotropic bulk correlation function derived within the \(d=3\) FRG approach [54; 55] and invoke an exact sum rule that leads to an identification of the universal part of this scaling form in the asymptotic region. Our proof of multiparameter universality is extended in Sec. VI to the bulk part of the free energy and to critical bulk amplitude relations of anisotropic systems for \(d\geq 2\). As a major application we discuss in Sec. VII the anisotropic bulk correlation function of the \((d=2,n=1)\) universality class which includes both anisotropic Ising and scalar \(\varphi^{4}\) models. The exactly solvable triangular-lattice Ising model [14; 39; 73] is discussed as an example. In Sec. VIII we present the shear transformation of the two-dimensional angular-dependent correlation vector and derive a general transformation formula that is applicable to both bulk and confined systems. An application is given in Sec. IX where we take up the analysis of finite-size effects on the critical free energy and Casimir amplitude of the anisotropic two-dimensional Ising model in [15]. We show that our generalized shear transformation substantiates and specifies the notion of the "effective shear transformation" mentioned in [15]. We also discuss the critical Casimir force in anisotropic superconducting films. Sec. X contains a summary.

## II Basic definitions

We consider isotropic and weakly anisotropic systems near their critical points.
In order to discuss the issue of multiparameter universality it is appropriate to analyze and compare different types of anisotropic models that belong to the same universality classes. As examples we consider (i) the \(O(n)\)-symmetric anisotropic "soft-spin" \(\varphi^{4}\) lattice model and (ii) the \(O(n)\)-symmetric anisotropic \(n\)-vector model which is a fixed-length spin model. We assume short-range interactions in a range of couplings where the systems have an isotropic or weakly anisotropic critical point. Although the models (i) and (ii) have the same critical exponents and therefore belong to the same \((d,n)\) universality classes, the analytic description of their anisotropy properties requires different approaches. The reason is that the principal directions of the anisotropic \(\varphi^{4}\) model are well defined at the outset through the anisotropy matrix \({\bf A}\) whereas these axes are generically unknown for the \(n\)-vector model. We also consider the \(O(n)\)-symmetric Gaussian model which is defined only for \(T\geq T_{c}\).

### \(\varphi^{4}\) model

The \(\varphi^{4}\) lattice Hamiltonian divided by \(k_{B}T\) reads [4; 6] \[H = v\Bigg{[}\sum_{i=1}^{N}\Big{(}\frac{r_{0}}{2}\varphi_{i}^{2}+u_{0}(\varphi_{i}^{2})^{2}-{\bf h}\cdot\varphi_{i}\Big{)}+\sum_{i,j=1}^{N}\frac{K_{i,j}}{2}(\varphi_{i}-\varphi_{j})^{2}\Bigg{]}, \tag{2.1}\] with \(u_{0}>0\) and an ordering field \({\bf h}=h\;{\bf e}_{h}\) with a unit vector \({\bf e}_{h}\). The variables \(\varphi_{i}\equiv\varphi({\bf x}_{i})\) are \(n\)-component vectors on \(N\) lattice points \({\bf x}_{i}\equiv(x_{i1},x_{i2},\ldots,x_{id})\) of a \(d\)-dimensional Bravais lattice of volume \(V=Nv\) where \(v\) is the volume of the primitive cell. We assume periodic boundary conditions. The components \(\varphi_{i}^{(\mu)}\), \(\mu=1,2,\ldots,n\) of \(\varphi_{i}\) vary in the continuous range \(-\infty\leq\varphi_{i}^{(\mu)}\leq\infty\). For an appropriate class of pair interactions \(K_{i,j}\) the model undergoes a phase transition in the bulk limit at a finite bulk critical temperature \(T_{c}\) for \(n=1,d>1\) and \(n\geq 2,d>2\). For \(n=2,d=2\) this is a Kosterlitz-Thouless transition. No finite \(T_{c}\) exists for \(n>2\) in \(d\leq 2\) dimensions. For \(n=1,2,3,\infty\) the \(\varphi^{4}\) model belongs to the Ising, \(XY\), Heisenberg and spherical universality classes [17; 18], respectively. The distance from bulk criticality is described by the variable \[r_{0}(T)-r_{0c}=a_{0}t,\;\;\;t=(T-T_{c})/T_{c} \tag{2.2}\] with \(a_{0}>0\) where \(T_{c}\) is the bulk critical temperature. The critical value \(r_{0}(T_{c})=r_{0c}=r_{0c}(v,u_{0},K_{i,j},d,n)\) is determined by the divergence of the bulk susceptibility at \(h=0\) [6] and depends on the lattice structure, on \(v\), on \(u_{0}\), and on all couplings \(K_{i,j}\). If \(u_{0}=0\) the model (2.1) with \(r_{0c}=0\) and \(r_{0}=a_{0}t\geq 0\) belongs to the Gaussian universality class with a critical point at \(r_{0}=0,h=0\) in \(d>0\) dimensions. The large-distance anisotropy is described by a dimensionless symmetric anisotropy matrix \({\bf A}(\{K_{i,j}\})\) [1] whose matrix elements \[A_{\alpha\beta}(\{K_{i,j}\})=N^{-1}\sum_{i,j=1}^{N}(x_{i\alpha}-x_{j\alpha})(x_{i\beta}-x_{j\beta})\;K_{i,j} \tag{2.3}\] are determined by the second moments of the microscopic pair interactions \(K_{i,j}\) [4]. A characteristic feature of weakly anisotropic systems is that they have the same critical exponents as the isotropic system [1].
This requires \[\det{\bf A}(\{K_{i,j}\})>0 \tag{2.4}\] as a necessary (but not yet sufficient [6; 13]) condition. The condition for isotropy in the large-distance regime is \[{\bf A}(\{K_{i,j}\})=c_{0}^{\rm iso}{\bf 1} \tag{2.5}\] with \(c_{0}^{\rm iso}>0\) where \({\bf 1}\) is the unity matrix. Eq. (2.3) and the criteria (2.4) and (2.5) are valid for general \(d\) and \(n\) [6; 13] including the large-\(n\) limit and the Gaussian model. The dimensionless partition function \(Z\), the total free energy \({\cal F}_{\rm tot}\) (divided by \(k_{B}T\)), and the total free-energy density \(f\) of the \(\varphi^{4}\) model are defined by \[Z = \Big{[}\prod_{i=1}^{N}\frac{\int d^{n}\varphi_{i}}{v^{n(2-d)/(2d)}}\Big{]}\exp{(-H)}, \tag{2.6}\] \[{\cal F}_{\rm tot} = -\ln Z, \tag{2.7}\] \[f = {\cal F}_{\rm tot}/V, \tag{2.8}\] and the bulk parts above (\(+\)) and below (\(-\)) \(T_{c}\) by \[f_{b,\pm} = \lim_{V\rightarrow\infty}{\cal F}_{\rm tot}/V, \tag{2.9}\] \[{\cal F}_{b,\pm} = Vf_{b,\pm}. \tag{2.10}\] Near \(T_{c}\) we use the decompositions \[{\cal F}_{\rm tot} = {\cal F}_{s}+{\cal F}_{ns}, \tag{2.11}\] \[f = f_{s}+f_{ns}, \tag{2.12}\] \[f_{b,\pm} = f_{b,s,\pm}+f_{b,ns,\pm} \tag{2.13}\] into singular and nonsingular parts. The singular bulk part \(f_{b,s,\pm}(t,h)\) has the scaling form of Eq. (1) of [6] with the two nonuniversal constants \(A_{1}\) and \(A_{2}\), and it vanishes at \(t=0,h=0\). We shall discuss the leading singular bulk part at \(h=0\) \[{\cal F}_{b,s,\pm} = Vf_{b,s,\pm} \tag{2.14}\] of \({\cal F}_{\rm tot}\) for \(d\geq 2\) and, for \(d=2\), the finite critical free energy \[{\cal F}_{c}=\lim_{h\to 0,T\to T_{c}}{\cal F}_{s} \tag{2.15}\] which is identical with the singular finite-size part of \({\cal F}_{\rm tot}\) at \(T_{c}\), \(h=0\) studied in [15]. The bulk order-parameter correlation function is \[G_{\pm}({\bf x_{i}}-{\bf x_{j}},t,h)=\lim_{V\to\infty}\big{[}<\varphi_{i}\cdot\varphi_{j}>-{\cal M}^{2}\big{]} \tag{2.16}\] where \({\cal M}^{2}=\lim_{|{\bf x_{i}}-{\bf x_{j}}|\to\infty}<\varphi_{i}\cdot\varphi_{j}>\) is the square of the bulk order parameter. The static version of the dissipation-fluctuation theorem [74; 75] yields the exact sum rule [6] \[v\sum_{\bf x}\ G_{\pm}({\bf x},t,h)=\chi_{\pm}(t,h)=\partial^{2}f_{b,\pm}(t,h)/\partial h^{2} \tag{2.17}\] where \(\chi_{\pm}(t,h)\) is the bulk susceptibility. Here we have assumed general \(n\geq 1\) for \(T\geq T_{c}\) and \(n=1\) for \(T<T_{c}\). For \(n>1,T<T_{c}\) it is necessary to distinguish between longitudinal and transverse correlation functions [17]. For the description of the asymptotic critical behavior on a long-distance scale it suffices to study the continuum version of this model in terms of the vector field \(\varphi({\bf x})\). The continuum Hamiltonian reads [1; 6] \[H^{\rm field} = \int_{V}d^{d}x\Bigg{[}\frac{r_{0}}{2}\varphi^{2}+\sum_{\alpha,\beta=1}^{d}\frac{A_{\alpha\beta}}{2}\frac{\partial\varphi}{\partial x_{\alpha}}\frac{\partial\varphi}{\partial x_{\beta}}+u_{0}(\varphi^{2})^{2}-h\varphi\Bigg{]} \tag{2.18}\] with a finite anisotropic cutoff in momentum space where now the sum rule has the form \[\int d^{d}{\bf x}\ \ G_{\pm}(|{\bf x}|,t,h)=\chi_{\pm}(t,h)=\partial^{2}f^{\rm field}_{b,\pm}(t,h)/\partial h^{2}. \tag{2.19}\] The sum rules (2.17) and (2.19) have nothing to do with the existence of a critical point and therefore are exactly valid in the entire range of \(t\) and \(h\), not only in the asymptotic critical region.
In particular the sum rule (2.19) remains exactly valid for any choice of a finite cutoff and for both isotropic and anisotropic \(\varphi^{4}\) models. The susceptibility plays an important role for the structure of the Fourier transform \(\hat{G}_{\pm}({\bf k},t,h)\) of the correlation function. It can be uniquely divided into a thermodynamic part \(\chi_{\pm}(t,h)\) and a correlation part \(\hat{D}({\bf k},t,h)\) in the Fisher-Aharony scaling form [76; 77; 27], \[\hat{G}_{\pm}({\bf k},t,h) = \chi_{\pm}(t,h)\ \hat{D}_{\pm}({\bf k},t,h) \tag{2.20}\] with the normalization \[\hat{D}_{\pm}({\bf 0},t,h)=1. \tag{2.21}\] Here \(\chi_{\pm}(t,h)\) is determined entirely by the bulk free-energy density defined at \({\bf k}={\bf 0}\) whereas \(\hat{D}({\bf k},t,h)\) requires \({\bf k}\)-dependent investigations. Experimentally this means that \(\chi_{\pm}(t,h)\) is determined by thermodynamic measurements whereas \(\hat{D}({\bf k},t,h)\) requires spatially resolved scattering experiments. This physical distinction is of fundamental importance in the asymptotic critical region where \(\hat{D}_{\pm}({\bf k},t,h)\) becomes a universal function whereas \(\chi_{\pm}(t,h)\) still contains a nonuniversal thermodynamic amplitude that is independent of the correlation-length amplitude. Thus near criticality the relevance of (2.20) is that it provides a unique decomposition into a nonuniversal and a universal part [14; 17; 27; 77]. Right at criticality with \(t=0,h=0\), \(\chi_{\pm}(0,0)\) does not exist and two alternative decompositions have been introduced [14; 22] that remain applicable to the asymptotic critical region including the critical point \(t=0,h=0\), as discussed in Sec. III.C. These general considerations are valid for both isotropic and weakly anisotropic systems. They are of relevance for a discussion in Sec. V.E of the claims with regard to the universality properties of the correlation function of the isotropic \(\varphi^{4}\) model derived in the framework of the FRG [54; 55] where \(\hat{G}_{\pm}({\bf k},t,0)\) was divided into two nonuniversal parts, as we shall see. In the anisotropic case, the principal correlation lengths and the principal axes of \(G_{\pm}\) are determined by the eigenvalues and the eigenvectors of \({\bf A}\) [1; 4], thus they are well-defined functions of the couplings and the lattice structure, as shown explicitly for \(d=2\) in [14]. This differs fundamentally from the \(n\)-vector model.

### Fixed-length spin model

We consider an anisotropic \(O(n)\)-symmetric fixed-length spin model defined on the same lattice with the same boundary conditions as the \(\varphi^{4}\) model. It is called \(n\)-vector model and has the Hamiltonian [17; 69] \[H^{\rm sp}=-\sum_{i,j}E_{i,j}{\bf S}_{i}\cdot{\bf S}_{j}-\sum_{i}{\bf h}^{\rm sp}\cdot{\bf S}_{i} \tag{2.22}\] with the statistical weight \(\exp(-H^{\rm sp}/(k_{B}T))\). The classical dimensionless spin variables \({\bf S}_{i}\) are \(n\)-component vectors with a fixed length \({\bf S}_{i}^{2}=1\). They have components \(S_{i}^{(\mu)}\), \(\mu=1,2,...,n\) which are continuous variables for \(n>1\). For \(n=2,3\) the model is called \(XY\) and Heisenberg model, respectively. For \(n\to\infty\) the model belongs to the spherical universality class. For \(n=1\) (2.22) is the anisotropic Ising model with the discrete variables \(S_{i}\equiv\sigma_{i}=\pm 1\).
The dimensionless partition function \(Z^{\rm sp}\), the total free energy \({\cal F}^{\rm sp}_{\rm tot}\) (divided by \(k_{B}T\)), and the bulk correlation function are defined by \[Z^{\rm sp}=\Big{[}\prod_{i=1}^{N}\int d^{n}{\bf S}_{i}\Big{]}\exp\left(-\beta H^{\rm sp}\right), \tag{2.23}\] \[{\cal F}^{\rm sp}_{\rm tot}=-\ln Z^{\rm sp}, \tag{2.24}\] \[G^{\rm sp}_{\pm}({\bf x}_{i}-{\bf x}_{j},t,{\bf h}^{\rm sp})=\lim_{V\to\infty}\big{[}<{\bf S}_{i}\cdot{\bf S}_{j}>-({\cal M}^{\rm sp})^{2}\big{]} \tag{2.25}\] with \(\beta=1/(k_{B}T)\) and the constraint \({\bf S}_{i}^{2}=1\). The definitions of \(f^{\rm sp}\), \({\cal F}_{b,\pm}^{\rm sp}\), \({\cal F}_{c}^{\rm sp}\), \(\chi_{\pm}^{\rm sp}\), etc. are analogous to (2.8)-(2.17). The counterpart of the sum rule (2.17) in the \(n\)-vector model is formulated for the example of the Ising model in Eq. (2.43) of [38]. If the variables \(S_{i}^{(\mu)}\) have an unrestricted range, \(-\infty\leq S_{i}^{(\mu)}\leq\infty\), the model (2.22) with \(T\geq T_{c}\) belongs to the Gaussian universality class. The general considerations with regard to the structure of the correlation functions \(G_{\pm}\) and \(\hat{G}_{\pm}\) remain valid also for \(G_{\pm}^{\rm sp}\) and \(\hat{G}_{\pm}^{\rm sp}\). In view of the generalized shear transformation which is a pure coordinate transformation to be introduced in Sec. IV we make the following general remark on the fixed-length spin model (2.22). The definition of the Hamiltonian \(H^{\rm sp}\) and the partition function \(Z^{\rm sp}\) in terms of the dimensionless variables \({\bf S}_{i}\) implies that \(Z^{\rm sp}\) and the total free energy (2.24) do not depend on the details of the position of the lattice points but only on the value and the topology of the pair interactions \(E_{i,j}\). This means that, for given \(n\), given boundary conditions, and given number of lattice points, two \(n\)-vector models \(H^{\rm sp}\) with the same coupling constants \(E_{i,j}\) on two different lattices with the same topology of the pair interactions have the same partition function and free energy. Thus, if a transformation is performed at fixed boundary conditions that involves only a smooth change of the coordinates of the lattice points without changing the value and topology of the pair interactions \(E_{i,j}\) and without changing the amplitudes of the spin variables \({\bf S}_{i}\), the partition function and total free energy are not affected and remain invariant under such a transformation. This implies that also the decomposition into singular and nonsingular parts \[{\cal F}_{\rm tot}^{\rm sp}={\cal F}_{s}^{\rm sp}+{\cal F}_{ns}^{\rm sp} \tag{2.26}\] remains unchanged and that \({\cal F}_{s}^{\rm sp}\) remains invariant. In particular the critical free energy \[{\cal F}_{c}^{\rm sp}=\lim_{{\bf h}^{\rm sp}\rightarrow{\bf 0},T\to T_{c}}{\cal F}_{s}^{\rm sp} \tag{2.27}\] of the finite system remains invariant. Furthermore such a pure coordinate transformation changes only the spatial argument of the correlation function \(G_{\pm}^{\rm sp}\) but leaves its amplitude invariant. No general approach to an analytic construction of the principal axes and correlation lengths has been developed so far for the model (2.22). Furthermore, no analytic condition for criticality is known for general couplings \(E_{i,j}\), and general criteria for weak anisotropy and isotropy analogous to the general conditions (2.4) and (2.5) for \(\varphi^{4}\) models are as yet unknown for the \(n\)-vector model.
Apart from anisotropic \(d=2\) Ising models, very little is known with regard to the anisotropy properties of \(n\)-vector models for \(d>2\) as compared to those of \(\varphi^{4}\) models where quantitative renormalized perturbation calculations can be performed within the minimal subtraction scheme at fixed dimension \(d\) [65; 66; 13; 6]. Nevertheless it is possible to derive general structural properties of the anisotropic critical behavior of the model (2.22) for general \(d\) and \(n\) in the scaling limit, as will be shown below. In the remainder of this paper we consider only the case of vanishing external field.

## III Isotropic case: two-scale-factor universality

For any given Bravais lattice of the models (2.1) and (2.22) the couplings \(K_{i,j}\) and \(E_{i,j}\) can be chosen such that the bulk system has isotropic correlations in the large-distance scaling regime near \(T_{c}\). Since the bulk critical behavior of isotropic systems plays a fundamental role for the development of the theory of weakly anisotropic systems we recall and reformulate important definitions and facts in an appropriate way such that they can be extended to weakly anisotropic systems. A detailed exposition of the universal isotropic reformulation is also necessary for a comparison with the nonuniversal formulation for the correlation function of the FRG [54; 55] discussed in Sec. V.E.

### Isotropic bulk correlation function

In the isotropic case, the correlation function (2.16) has the established Privman-Fisher scaling form in the asymptotic critical scaling region above (\(+\)) and below (\(-\)) \(T_{c}\) below \(d=4\) dimensions [22] \[G_{\pm}^{\rm iso}(|{\bf x}|,t) = \frac{D_{1}^{\rm iso}}{|{\bf x}|^{d-2+\eta}}\;\Phi_{\pm}\Big{(}\frac{|{\bf x}|}{\xi_{\pm}^{\rm iso}(t)}\Big{)}\;, \tag{3.1}\] \[\xi_{\pm}^{\rm iso}(t) = \xi_{0\pm}^{\rm iso}\;|t|^{-\nu} \tag{3.2}\] with the universal critical exponents \(\eta\) and \(\nu\) and the universal scaling function \(\Phi_{\pm}\). For an alternative form see (3.37). The constant \(D_{1}^{\rm iso}\) is the same above, at, and below \(T_{c}\). The scaling region is defined by \(|{\bf x}|\gg v^{1/d}\) and \(\xi_{\pm}^{\rm iso}\gg v^{1/d}\) at fixed finite ratio \(|{\bf x}|/\xi_{\pm}^{\rm iso}\). The function \(G_{\pm}^{\rm iso}\) is finite and continuous at \(T_{c}\) with \(\Phi_{+}(0)=\Phi_{-}(0)\). The correlation lengths \(\xi_{\pm}^{\rm iso}\) may be chosen to be either the "true" (exponential) [14; 18; 22] or the second-moment [6] correlation lengths. The ratios \(Q_{\xi}^{+}\) and \(Q_{\xi}^{-}\) of these correlation lengths above and below \(T_{c}\) defined in Table 2 of [18] are universal. In this paper we employ the exponential correlation length defined by the exponential decay \(\sim\exp(-|{\bf x}|/\xi_{\pm}^{\rm iso})\) for large but finite \(|{\bf x}|/\xi_{\pm}^{\rm iso}\) within the scaling region. We recall, however, that the universal scaling form (3.1) is not valid for \(|{\bf x}|/\xi_{\pm}^{\rm iso}\gg 1\) where two-scale-factor universality and scaling in the sense of (3.1) is violated even arbitrarily close to \(T_{c}\) [6; 11; 78; 79; 80], see the shaded region in Fig. 2 of [6]. Among the three nonuniversal amplitudes \(D_{1}^{\rm iso}\), \(\xi_{0+}^{\rm iso}\), and \(\xi_{0-}^{\rm iso}\) there are only two independent amplitudes since the amplitudes \(\xi_{0\pm}^{\rm iso}\) are universally related by \[\xi_{0+}^{\rm iso}/\xi_{0-}^{\rm iso}=X_{\xi}={\rm universal}.
\tag{3.3}\] In [18] \(X_{\xi}\) is denoted by \(U_{\xi_{\rm sp}}\) for the exponential correlation length. It corresponds to \(1/X_{-}(0)\) in [6; 13] (denoted by \(U_{\xi}\) in [18]) where the second-moment correlation length is employed. We shall also consider the asymptotic susceptibility above and below \(T_{c}\) \[\chi^{\rm iso}_{\pm}(t)=\Gamma^{\rm iso}_{\pm}|t|^{-\gamma}, \tag{3.4}\] with \(\gamma=(2-\eta)\nu\) and the bulk order parameter \[{\cal M}^{\rm iso}(t)=B^{\rm iso}|t|^{\beta} \tag{3.5}\] below \(T_{c}\) with the nonuniversal amplitudes \(\Gamma^{\rm iso}_{\pm}\) and \(B^{\rm iso}\) with the universal ratio [18] \[\Gamma^{\rm iso}_{+}/\Gamma^{\rm iso}_{-}=U_{2}\ =\ {\rm universal}. \tag{3.6}\] The amplitude \(B^{\rm iso}\) can be expressed in terms of \(\Gamma^{\rm iso}_{+}\) and \(\xi^{\rm iso}_{0+}\) through the universal relation [14; 17; 18] \[(B^{\rm iso})^{2}(\Gamma^{\rm iso}_{+})^{-1}(\xi^{\rm iso}_{0+})^{d}=Q_{c}={\rm universal}, \tag{3.7}\] for general \(n\) with the universal constant \(Q_{c}\). For \(n>1,T<T_{c}\) the transverse bulk correlation function of isotropic systems has the algebraic large-distance behavior for \(d>2\) [17] \[G^{\rm iso}_{\rm T}(|{\bf x}|,t) = {\cal C}_{\rm T}[{\cal M}^{\rm iso}(t)]^{2}\big{[}\xi^{\rm iso}_{\rm T}(t)/|{\bf x}|\big{]}^{d-2}, \tag{3.8}\] \[{\cal C}_{\rm T} = \Gamma(d/2)/[2\pi^{d/2}(d-2)]. \tag{3.9}\] The bulk transverse correlation length near \(T_{c}\) is \[\xi^{\rm iso}_{\rm T}(t)=\xi^{\rm iso}_{0\rm T}|t|^{-\nu} \tag{3.10}\] with the nonuniversal amplitude \(\xi^{\rm iso}_{0\rm T}\) and the universal amplitude ratio \[\xi^{\rm iso}_{0+}/\xi^{\rm iso}_{0\rm T}=X_{\rm T\xi}={\rm universal}. \tag{3.11}\]

### Singular part of the bulk free-energy density

The following relations apply to isotropic systems of both the \((d,n)\) universality class (including \(n\)-vector and \(\varphi^{4}\) models) and the Gaussian universality class for \(t>0\) (with \(\nu=1/2\) for general \(d\) and \(n\)). The singular part of the bulk free-energy density has the asymptotic form [17; 18] \[f^{\rm iso}_{b,s,\pm}(t)=\left\{\begin{array}{rl}A^{\rm iso}_{\pm}|t|^{d\nu}&\quad{\rm for}\ 2<d<4\\ \frac{1}{2}A^{\rm iso}_{\pm}|t|^{2\nu}\ln|t|&\quad{\rm for}\ d=2.\end{array}\right. \tag{3.12}\] The nonuniversal amplitudes \(A^{\rm iso}_{\pm}\) have the universal ratio \[f^{\rm iso}_{b,s+}(t)/f^{\rm iso}_{b,s-}(t)=A^{\rm iso}_{+}/A^{\rm iso}_{-}=\ \ {\rm universal}. \tag{3.13}\] Due to two-scale-factor universality [17; 20; 22; 24] these amplitudes are universally related to the correlation-length amplitude \(\xi^{\rm iso}_{0+}\) through \[\left(\xi^{\rm iso}_{0+}\right)^{d}A^{\rm iso}_{+}=\ Q_{1}=\ \ {\rm universal},\ \ d\geq 2 \tag{3.14}\] with the universal constant \(Q_{1}\), with \(Q_{1}=1/(2\pi)\) for the \(d=2\) Ising universality class (called \((R_{\xi}^{+})^{2}\) in Eq. (6.31) of [17]) for the case that the true correlation length is used. The validity of (3.14) has been established by the renormalization-group theory [20; 25; 26; 27].
Thus we obtain in \(2<d<4\) dimensions \[f^{\rm iso}_{b,s,+}(t) = Q_{1}\left(\xi^{\rm iso}_{0+}\right)^{-d}t^{d\nu}, \tag{3.15}\] \[f^{\rm iso}_{b,s,-}(t) = \frac{A^{\rm iso}_{-}}{A^{\rm iso}_{+}}Q_{1}\left(\xi^{\rm iso}_{0+}\right)^{-d}|t|^{d\nu}, \tag{3.16}\] whereas for the \((d=2,n=1)\) universality class \(f^{\rm iso}_{b,s,\pm}\) depends on \(|t|\) rather than \(t\) \[f^{\rm iso}_{b,s,\pm}(t) = \frac{1}{2}Q_{1}\left(\xi^{\rm iso}_{0+}\right)^{-2}\ t^{2}\ln|t|\,,\ \ d=2, \tag{3.17}\] because of \[A^{\rm iso}_{+}/A^{\rm iso}_{-}=1,\ \ \nu=1,\ \ d=2 \tag{3.18}\] for this universality class [17; 18]. We note that the amplitude \(\xi^{\rm iso}_{0+}\) defined above \(T_{c}\) appears in the above relations for both \(t>0\) and \(t<0\). For the purpose of a structural analysis of critical bulk amplitude relations and a later extension to weakly anisotropic systems it is advantageous to express the relations (3.15)-(3.17) for the bulk free-energy density \(f^{\rm iso}_{b,s,\pm}\) in terms of the singular bulk part \({\cal F}^{\rm iso}_{b,s,\pm}\) of the free energy in an arbitrary finite volume \(V^{\rm iso}\) according to the definition (2.14), \[{\cal F}^{\rm iso}_{b,s,\pm}(t,V^{\rm iso})\ =\ V^{\rm iso}f^{\rm iso}_{b,s,\pm}(t). \tag{3.19}\] This leads in a natural way to the scaling variable \[\widetilde{x}\ =\ t[V^{\rm iso}/(\xi^{\rm iso}_{0+})^{d}]^{1/(d\nu)} \tag{3.20}\] appearing in the scaling form obtained from (3.15)-(3.19) in \(2<d<4\) dimensions \[{\cal F}^{\rm iso}_{b,s,+}(t,V^{\rm iso}) = Q_{1}\ \widetilde{x}^{d\nu}, \tag{3.21}\] \[{\cal F}^{\rm iso}_{b,s,-}(t,V^{\rm iso}) = \frac{A^{\rm iso}_{-}}{A^{\rm iso}_{+}}Q_{1}\ |\widetilde{x}|^{d\nu}, \tag{3.22}\] and for the \((d=2,n=1)\) universality class \[{\cal F}^{\rm iso}_{b,s,\pm}(t,V^{\rm iso})\ =\ \frac{1}{2}Q_{1}\ |\widetilde{x}|^{2}\ln|t|, \tag{3.23}\] with a nonscaling logarithmic factor. Unlike \(f^{\rm iso}_{b,s,\pm}\), both \({\cal F}^{\rm iso}_{b,s,\pm}\) and \(\widetilde{x}\) satisfy important invariance properties with respect to the shear transformations between isotropic and anisotropic systems, as we shall see in Sec. VI. The singular part of the Gaussian bulk free-energy density per component divided by \(k_{B}T\) has the asymptotic behavior above \(T_{c}\) [10] \[f^{\rm G,iso}_{b,s,+}(t)=\left\{\begin{array}{rl}A^{\rm G,iso}_{+}t^{d/2}&\quad{\rm for}\ 2<d<4\;,\\ \frac{1}{2}A^{\rm G,iso}_{+}t\ln t&\quad{\rm for}\ d=2.\end{array}\right. \tag{3.24}\] Due to two-scale-factor universality the amplitude \(A^{\rm G,iso}_{+}\) is universally related to the correlation-length amplitude \(\xi^{\rm G,iso}_{0+}\) through \[\left(\xi^{\rm G,iso}_{0+}\right)^{d}A^{\rm G,iso}_{+}\ =\ Q^{\rm G}_{1}\ \ ={\rm universal} \tag{3.25}\] where the universal constant is [10]
\tag{3.28}\] For the singular bulk part \[{\cal F}_{b,s,+}^{\rm G,iso}(t,V^{\rm iso})\ =\ V^{\rm iso}f_{b,s,+}^{\rm G, iso}(t) \tag{3.29}\] of the free energy of the isotropic Gaussian model in a finite volume \(V^{\rm iso}\) this leads to the scaling form \[{\cal F}_{b,s,+}^{\rm G,iso}(t,V^{\rm iso})=\left\{\begin{array}{cc}Q_{1}^ {\rm G}\left(\widetilde{x}^{\rm G}\right)^{d/2}&\quad\mbox{for $d>2$,}\\ &\frac{1}{2}Q_{1}^{\rm G}\ \widetilde{x}^{\rm G}\ln t&\quad\mbox{for $d=2$,}\end{array}\right. \tag{3.30}\] with a nonscaling logarithmic factor for \(d=2\) and with the Gaussian scaling variable \[\widetilde{x}^{\rm G}\ =\ t[V^{\rm iso}/(\xi_{0+}^{\rm G,iso})^{d}]^{2/d}\ \ \mbox{for $d\geq 2$.} \tag{3.31}\] ### Sum rule and thermodynamic amplitudes of isotropic systems Two-scale-factor universality implies that the scaling form of the isotropic order-parameter correlation function is fully determined once two independent nonuniversal parameters are given. In the correlation function \(G_{+}^{\rm iso}({\bf x},t)\), (3.1), these parameters were chosen to be the correlation-length amplitude \(\xi_{0+}^{\rm iso}\) (or \(\xi_{0-}^{\rm iso}\)) and the amplitude \(D_{1}^{\rm iso}\). However, in view of the sum rule (2.19) \[\chi_{\pm}^{\rm iso}(t)=\int d^{d}{\bf x}\ \,G_{\pm}^{\rm iso}(|{\bf x}|,t) \tag{3.32}\] it is advantageous to express \(D_{1}^{\rm iso}\) in terms of the amplitude \(\Gamma_{\pm}^{\rm iso}\) of the directly obervable isotropic susceptibility (3.4). From (3.1),(3.4), and (3.32) we obtain two relations \[D_{1}^{\rm iso} = \Gamma_{+}^{\rm iso}(\xi_{0+}^{\rm iso})^{-2+\eta}\ \ \widetilde{\Phi}_{+}^{-1}\ \ \mbox{for general}\ n, \tag{3.33}\] \[D_{1}^{\rm iso} = \Gamma_{-}^{\rm iso}(\xi_{0-}^{\rm iso})^{-2+\eta}\ \ \widetilde{\Phi}_{-}^{-1}\ \ \mbox{for}\ n=1, \tag{3.34}\] with the two universal constants \[\widetilde{\Phi}_{\pm} = 2\pi^{d/2}\Gamma(d/2)^{-1}\int_{0}^{\infty}dss^{1-\eta}\Phi_{\pm} (s). \tag{3.35}\] Thus the susceptibility can be expressed as \[\chi_{\pm}^{\rm iso}(t)=D_{1}^{\rm iso}[\xi_{\pm}^{\rm iso}(t)]^{2-\eta} \widetilde{\Phi}_{\pm}. \tag{3.36}\] This shows that \(\chi_{\pm}^{\rm iso}(t)\) and \(\xi_{\pm}^{\rm iso}(t)\) are not universally related. Equations (3.1) and (3.33) yield the alternative representation of the correlation function above and below \(T_{c}\) for general \(n\) in terms of the observable amplitudes \(\Gamma_{+}^{\rm iso}\) and \(\xi_{0+}^{\rm iso}\), \[G_{\pm}^{\rm iso}(|{\bf x}|,t)=\frac{\Gamma_{+}^{\rm iso}(\xi_{0+}^{\rm iso}) ^{-2+\eta}}{|{\bf x}|^{d-2+\eta}}\ \Psi_{\pm}\Big{(}\frac{|{\bf x}|}{\xi_{\pm}^{\rm iso}(t)}\Big{)}\, \tag{3.37}\] with the universal scaling function \(\Psi_{\pm}\) above and below \(T_{c}\), \[\Psi_{+}(y_{+})=\frac{\Phi_{+}(y_{+})}{\widetilde{\Phi}_{+}},\ \ \ \Psi_{-}(y_{-})=\frac{\Phi_{-}(y_{-})}{\widetilde{\Phi}_{+}}. \tag{3.38}\] The value of \(\Psi_{\pm}\) at \(T_{c}\) is a universal constant [6] \[\Psi_{+}(0)=\Psi_{-}(0)=\widetilde{Q}_{3}=\mbox{universal}, \tag{3.39}\] compare Eqs. (3.15) and (A16) of [6] and Eq. (5.32) of [13]. The sum rule (3.32) implies the normalization \[2\pi^{d/2}\big{[}\Gamma(d/2)\big{]}^{-1}\int_{0}^{\infty}dyy^{1-\eta}\Psi_{\pm }(y)=1 \tag{3.40}\] which provides a unique separation of the universal and nonuniversal parts of the correlation function. The scaling function \(\Psi_{\pm}\) depends on the universality class. Its exact analytic form is known for the \(d=2\) Ising universality class [38; 14; 39] and is presented in Secs. V.C and V.D. 
for the spherical universality class corresponding to the large-\(n\) limit and for the Gaussian universality class. It is remarkable that, although no finite bulk correlation length and bulk susceptibility exist at \(T_{c}\), the isotropic bulk correlation function at \(T_{c}\) \[G_{\pm}^{\rm iso}(|{\bf x}|,0)\ =\ \widetilde{Q}_{3}\ \frac{\Gamma_{+}^{\rm iso}(\xi_{0+}^{\rm iso})^{-2+\eta}}{|{\bf x}|^{d-2+\eta}} \tag{3.41}\] is expressed in terms of the two independent bulk amplitudes \(\xi_{0+}^{\rm iso}\) and \(\Gamma_{+}^{\rm iso}\) defined _above_ \(T_{c}\) for general \(n\). (According to (3.34), \(G^{\rm iso}({\bf x},0)\) can also be expressed in terms of the bulk amplitudes \(\xi_{0-}^{\rm iso}\) and \(\Gamma_{-}^{\rm iso}\) _below_ \(T_{c}\) but this is applicable only for \(n=1\).) The representation (3.41) demonstrates that, together with \(\Gamma_{+}^{\rm iso}\), the amplitude \(\xi_{0+}^{\rm iso}\) constitutes an important reference length for the spatial decay of the correlation function at \(T_{c}\) which is not apparent in the form (3.1) with the single amplitude \(D_{1}^{\rm iso}\). The physical significance of \(\xi_{0\pm}^{\rm iso}\) right at \(T_{c}\) is relevant for the applicability of our generalized shear transformation at \(T_{c}\). We shall also need the Fourier transform \[\hat{G}_{\pm}^{\rm iso}(|{\bf k}|,t) = \int d^{d}{\bf x}\ e^{-i{\bf k}\cdot{\bf x}}G_{\pm}^{\rm iso}(|{\bf x}|,t) \tag{3.42}\] \[= \frac{\Gamma_{+}^{\rm iso}}{\left(|{\bf k}|\ \xi_{0+}^{\rm iso}\right)^{2-\eta}}\ \hat{\Psi}_{\pm}\Big{(}|{\bf k}|\xi_{\pm}^{\rm iso}(t)\Big{)} \tag{3.43}\] with the universal scaling function \[\hat{\Psi}_{\pm}(y_{\pm})=\frac{2\pi^{(d-1)/2}}{\Gamma\big{(}(d-1)/2\big{)}}\int\limits_{0}^{\infty}ds\ s^{1-\eta}\times\int\limits_{-1}^{1}d(\cos\vartheta)(\sin\vartheta)^{d-3}\ e^{-is\cos\vartheta}\Psi_{\pm}(s/y_{\pm}). \tag{3.44}\]
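As a simple illustration of these scaling forms (our example, anticipating the Gaussian universality class of Sec. V.D): for the Ornstein-Zernike form \(\hat{G}_{+}^{\rm iso}(|{\bf k}|,t)=\Gamma_{+}^{\rm iso}t^{-\gamma}/[1+(|{\bf k}|\xi_{+}^{\rm iso}(t))^{2}]\) with \(\eta=0\), \(\nu=1/2\), \(\gamma=1\), one reads off from (3.43), using \(t^{-1}=(\xi_{+}^{\rm iso}/\xi_{0+}^{\rm iso})^{2}\), \[\hat{\Psi}_{+}(y_{+})=\frac{y_{+}^{2}}{1+y_{+}^{2}},\] so that \(\hat{\Psi}_{+}(\infty)=Q_{3}=1\) in accord with (3.46) below, while (3.50) gives \(\hat{D}_{+}(y_{+})=1/(1+y_{+}^{2})\), which indeed satisfies the normalization \(\hat{D}_{+}(0)=1\) of (3.52).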
Using (3.43), (3.45), (3.3), (3.4)and (3.6) we obtain the representation for \(t\neq 0\) \[\hat{G}^{\rm iso}_{\pm}(|{\bf k}|,t) = \Gamma^{\rm iso}_{\pm}|t|^{-\gamma}\;\hat{D}_{\pm}\Big{(}|{\bf k}| \xi^{\rm iso}_{\pm}(t)\Big{)}, \tag{3.49}\] where the universal single-parameter scaling functions \(\hat{D}_{\pm}\) are universally related to the universal functions \(\hat{\Psi}_{\pm}\) as \[\hat{D}_{+}(y_{+}) = y_{+}^{-2+\eta}\hat{\Psi}_{+}(y_{+}), \tag{3.50}\] \[\hat{D}_{-}(y_{-}) = U_{2}\;(X_{\xi})^{-2+\eta}\;y_{-}^{-2+\eta}\hat{\Psi}_{-}(y_{-}), \tag{3.51}\] with the universal constants \(U_{2}\) and \(X_{\xi}\). These scaling functions satisfy the normalization condition \[\hat{D}_{\pm}(0) = 1 \tag{3.52}\] in agreement with the scaling form (2.20 and (2.21). This condition is equivalent to our normalization condition (3.40). The same scaling form was derived within the framework of two-scale-factor universality by Hohenberg et al. [27] where \(\hat{D}_{\pm}\) was denoted by \(\widetilde{Z}\). Thus, in the asymptotic region where \(\hat{D}_{\pm}\) is calculated at infinite cutoff, \(\hat{G}^{\rm iso}_{\pm}(|{\bf k}|,t)\) depends on two independent nonuniversal amplitudes \(\Gamma^{\rm iso}_{+}\) and \(\xi^{\rm iso}_{0+}\). Since (3.49) does not exist at \(t=0\) our representations (3.37) and (3.43) have the advantage that they are applicable to the asymptotic region including the critical point \(t=0\). In particular our scaling functions \(\Psi_{\pm}\) and \(\hat{\Psi}_{\pm}\) immediately capture the universal constants \(\widetilde{Q}_{3}=\Psi_{\pm}(0)\) and \(Q_{3}=\hat{\Psi}_{\pm}(\infty)\), in contrast to the scaling functions \(\hat{D}_{\pm}\) and its inverse Fourier transformed counterpart \(D_{\pm}\) in real space. The advantage of the representations (3.37) and (3.40) over (3.1) is that, unlike the amplitude \(D^{\rm iso}_{1}\), both the amplitude \(\Gamma^{\rm iso}_{+}\) of the bulk susceptibility \(\chi^{\rm iso}_{+}\) above \(T_{c}\) and \(\xi^{\rm iso}_{0+}\) can be determined directly from independent thermodynamic measurements as follows. The bulk susceptibility \[\chi^{\rm iso}_{+}(t) = -\lim_{h\to 0}\partial^{2}f^{\rm iso}_{b,+}(t,h)/\partial h^{2} \tag{3.53}\] \[= \lim_{h\to 0}\partial m^{\rm iso}(t,h)/\partial h \tag{3.54}\] can be determined via a measurement of the order parameter \(m^{\rm iso}(t,h)\) for small \(h\to 0\). Furthermore, owing to two-scale-factor universality, the bulk correlation length \[\xi^{\rm iso}_{0+} = (Q_{1}/A^{\rm iso}_{+})^{1/d} \tag{3.55}\] is determined by the amplitude \(A^{\rm iso}_{+}\) of the free energy density according to (3.14). The latter amplitude can be measured via the singular part of the bulk specific heat per unit volume (divided by \(k_{B}\)) above \(T_{c}\) \[C^{\rm iso}_{b,s,+}(t) = -\partial^{2}f^{\rm iso}_{b,s,+}(t)/\partial t^{2} \tag{3.56}\] of isotropic systems where \(f^{\rm iso}_{b,s,+}(t)\) is given by (3.12). This yields the dependence on \(\xi^{\rm iso}_{0+}\) \[C^{\rm iso}_{b,s+}(t) = \frac{\big{(}R^{+}_{\xi}\big{)}^{d}}{\alpha\;(\xi^{\rm iso}_{0+})^ {d}}\;t^{-\alpha},\;\;\;d>2, \tag{3.57}\] \[C^{\rm iso}_{b,s+}(t) = -\;\frac{Q_{1}}{\big{(}\xi^{\rm iso}_{0+}\big{)}^{2}}\;\ln t,\;\; d=2, \tag{3.58}\] with the universal constant \[\frac{(R^{+}_{\xi})^{d}}{\alpha(1-\alpha)(2-\alpha)}=-\;Q_{1},\;\;\;d>2. 
\tag{3.59}\] Thus the important consequence of two-scale-factor universality for isotropic bulk systems is that all nonuniversal parameters of the isotropic bulk correlation function can be completely determined by the thermodynamic amplitudes of the bulk susceptibility and the bulk specific heat without involvement of measurements of the spatial dependence of the correlation function via scattering experiments. We shall show in Sec. V that weak anisotropy destroys this important feature due to the intrinsic nonuniversality of the correlation function. ### Critical free energy in finite isotropic systems In Sec. IX we shall also consider the singular part of the free energy at \(T_{c}\) of a finite anisotropic system on the basis of its structure in the isotropic case. For finite isotropic systems, e.g., in a \(d\)-dimensional parallelepiped of volume \(V^{\rm iso}\) with given aspect ratios and given angles between the confining surfaces, the hypothesis of two-scale-factor universality for finite systems [17; 22] predicts that, for given boundary conditions, the singular part \({\cal F}_{s}^{\rm iso}\) of the free energy \({\cal F}_{\rm tot}^{\rm iso}\) has a universal value at \(T=T_{c}\) with the finite critical amplitude \[{\cal F}_{c}^{\rm iso}=\lim_{T\to T_{c}}{\cal F}_{s}^{\rm iso}={\rm universal} \tag{3.60}\] which depends on \(d\) and \(n\) and is a universal function of the aspect ratios and angles. The universal structure of (3.1), (3.7), (3.37), (3.43), and (3.57)-(3.60) as well as the scaling relations (3.20)-(3.23) and (3.30) are a consequence of the principle of two-scale-factor universality for isotropic systems. In the subsequent sections we address the question of how weak anisotropy affects the structure of these results. We shall show that multiparameter universality with up to \(d(d+1)/2+1\) independent nonuniversal parameters replaces two-scale-factor universality not only within the anisotropic \(\varphi^{4}\) theory and the anisotropic Gaussian model [6; 13; 14] but quite generally in all weakly anisotropic systems. ## IV Shear transformations of anisotropic systems Although the analysis of this section is formulated primarily for \(O(n)\)-symmetric systems with general \(n\geq 1,T\geq T_{c}\) and \(n=1,T<T_{c}\), all definitions and results can be extended to systems with \(n>1,T<T_{c}\) where transverse correlation lengths come into play [13]. Our strategy is to perform a shear transformation from the anisotropic to an isotropic system, then to invoke two-scale-factor universality for the isotropic system with the known structures of the correlation function, of the free energy, and of universal amplitude relations, and finally to derive the anisotropic structures by inverting the shear transformation. However, the choice of the shear transformation is not unique. ### Special shear transformation of the \(\varphi^{4}\) theory For the subsequent development of the anisotropic theory it is indispensable to complement and reformulate the special shear transformation that is most suitable for the weakly anisotropic \(\varphi^{4}\) model [1; 4; 6; 13]. In the absence of an external field this transformation is defined by \[{\bf x}^{\prime} = {\mathbf{\lambda}}^{-1/2}{\bf U}{\bf x}, \tag{4.1}\] \[\varphi^{\prime}({\bf x}^{\prime}) = (\det{\bf A})^{1/4}\varphi({\bf x}), \tag{4.2}\] \[u_{0}^{\prime} = (\det{\bf A})^{-1/2}u_{0}, \tag{4.3}\] where \({\bf A}={\bf A}(\{K_{i,j}\})\) is given by (2.3).
This \(T\)-independent transformation is defined at fixed couplings \(K_{i,j}\) for arbitrary temperatures above, at, and below \(T_{c}\) and leaves the distance \(r_{0}-r_{0c}\) from criticality invariant [6]. It consists of a rotation of the lattice points \({\bf x}\) provided by the orthogonal matrix \({\bf U}\) and a subsequent spatial rescaling by the diagonal rescaling matrix \({\mathbf{\lambda}}\) such that the \(O({\bf k^{2}})\) part of the Fourier transform \(\delta\hat{K}({\bf k})\) of the anisotropic interaction of the Hamiltonian (2.1) [4; 6] is brought into the isotropic form \[\delta\hat{K}({\bf k}) = \delta\hat{K}({\bf U}^{-1}{\mathbf{\lambda}}^{-1/2}{\bf k}^{\prime}) \equiv\delta\hat{K}^{\prime}({\bf k}^{\prime})={\bf k}^{\prime}\cdot{\bf k}^{\prime}, \tag{4.4}\] \[{\bf k}^{\prime} = {\mathbf{\lambda}}^{1/2}{\bf U}{\bf k}. \tag{4.5}\] Note, however, that \(O({\bf k^{4}})\) parts remain anisotropic in general and give rise to anisotropic corrections outside the isotropic scaling regime of the transformed system (see Fig. 2 of [6]). The matrix elements of \[{\bf U}={\bf U}\big{(}\{{\bf e}^{(\alpha)}\}\big{)} \tag{4.6}\] are \(U_{\alpha\beta}=e_{\beta}^{(\alpha)}\) which are determined by the \(d\) eigenvectors \({\bf e}^{(\alpha)}\) of the matrix \({\bf A}\) whose Cartesian components are denoted by \(e_{\beta}^{(\alpha)}\). We call these vectors principal unit vectors. They satisfy the eigenvalue equation \[{\bf A}(\{K_{i,j}\})\;{\bf e}^{(\alpha)}\;=\;\lambda_{\alpha}\;{\bf e}^{(\alpha)},\;\alpha=1,2,...,d, \tag{4.7}\] with \({\bf e}^{(\alpha)}\cdot{\bf e}^{(\beta)}=\delta_{\alpha\beta}\). The matrix \({\bf U}\) diagonalizes the matrix \({\bf A}\) according to \[{\mathbf{\lambda}}(\{K_{i,j}\})={\bf U}{\bf A}(\{K_{i,j}\}){\bf U}^{-1} \tag{4.8}\] whose diagonal elements are the eigenvalues \(\lambda_{\alpha}>0\) of \({\bf A}\). Thus we have \[\det{\bf A}=\det{\mathbf{\lambda}}=\prod_{\alpha=1}^{d}\lambda_{\alpha}. \tag{4.9}\] The eigenvalue equation (4.7) determines the eigenvalues \(\lambda_{\alpha}\) and the principal unit vectors \({\bf e}^{(\alpha)}\) as a function of the couplings \(K_{i,j}\). The latter determine the directions of the principal axes of the large-distance correlations of the anisotropic system above, at, and below \(T_{c}\) [1; 6; 13]. For the formulation of bulk and finite-size properties of anisotropic systems it will be necessary to define the reduced anisotropy matrix \[\bar{\bf A}(\{K_{i,j}\})\;=\;{\bf A}/(\det{\bf A})^{1/d}. \tag{4.10}\] An important alternative definition is \[\bar{\bf A}\;=\;{\bf U}^{-1}\bar{\bf\lambda}{\bf U} \tag{4.11}\] where \(\bar{\bf\lambda}\) is the reduced rescaling matrix \[\bar{\mathbf{\lambda}}={\mathbf{\lambda}}/(\det{\mathbf{\lambda}})^{1/d} \tag{4.12}\] as follows from (4.8). According to (4.11) and (4.12), \(\bar{\bf A}\) is known if the _ratios_ of the eigenvalues \(\lambda_{\alpha}\) of \({\bf A}\) and the principal unit vectors \({\bf e}^{(\alpha)}\) are given. This makes it possible to determine \(\bar{\bf A}\) even if the matrix \({\bf A}\) is not known. It is the definition (4.11) rather than (4.10) that is used in deriving the shear transformation of the bulk correlation functions in Sec. V [compare (5.19) and (5.37)].
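As a numerical illustration of (4.7)-(4.12), the following minimal numpy sketch (the sample matrix is arbitrary and not tied to any particular lattice model) diagonalizes a two-dimensional anisotropy matrix \({\bf A}\), builds \({\bf U}\) from the principal unit vectors, and verifies (4.8), the equivalence of the definitions (4.10) and (4.11) of \(\bar{\bf A}\) with \(\det\bar{\bf A}=1\), and the isotropization (4.4)-(4.5).

```python
import numpy as np

# Arbitrary symmetric, positive-definite sample anisotropy matrix
A = np.array([[1.0, 0.4],
              [0.4, 2.0]])
d = A.shape[0]

# Eigenvalue equation (4.7): A e^(alpha) = lambda_alpha e^(alpha)
lam, vecs = np.linalg.eigh(A)   # columns of vecs are the eigenvectors e^(alpha)
U = vecs.T                      # rows of U are the principal unit vectors, U_ab = e^(a)_b

# (4.8): U A U^{-1} is diagonal with entries lambda_alpha
assert np.allclose(U @ A @ U.T, np.diag(lam))

# (4.10) versus (4.11)-(4.12): two equivalent forms of the reduced matrix
Abar_i = A / np.linalg.det(A)**(1.0 / d)
lam_bar = np.diag(lam / np.prod(lam)**(1.0 / d))
Abar_ii = U.T @ lam_bar @ U
assert np.allclose(Abar_i, Abar_ii)
assert np.isclose(np.linalg.det(Abar_i), 1.0)

# (4.4)-(4.5): k' = lambda^{1/2} U k turns k.A k into k'.k'
k = np.array([0.3, -1.1])
kp = np.diag(np.sqrt(lam)) @ U @ k
assert np.isclose(kp @ kp, k @ A @ k)
print("all checks passed")
```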
The anisotropic continuum Hamiltonian (2.18) is invariant under the special shear transformation (4.1)-(4.3) and is transformed to the Hamiltonian \(H^{\prime}_{\rm field}\) which has the standard isotropic form \[H_{\rm field} = H^{\prime}_{\rm field} \tag{4.13}\] \[= \int_{V^{\prime}}d^{d}x^{\prime}\Big{[}\frac{r_{0}}{2}\varphi^{\prime}({\bf x}^{\prime})^{2}+\frac{1}{2}(\nabla^{\prime}\varphi^{\prime})^{2}+u^{\prime}_{0}(\varphi^{\prime 2})^{2}\Big{]},\] with the transformed volume \[V^{\prime} = \int_{V^{\prime}}d^{d}x^{\prime}=(\det{\mathbf{\lambda}})^{-1/2}\ \int_{V}d^{d}x \tag{4.14}\] \[= (\det{\mathbf{\lambda}})^{-1/2}\ V=(\det{\bf A})^{-1/2}\ V \tag{4.15}\] and with a transformed cutoff which in general is still anisotropic. The anisotropic bulk correlation function (2.16), its Fourier transform, and the bulk order parameter \({\cal M}=<\varphi>\) are transformed as \[G^{\prime}_{\pm}({\bf x}^{\prime},t) = (\det{\bf A})^{1/2}G_{\pm}({\bf x},t), \tag{4.16}\] \[\hat{G}^{\prime}_{\pm}({\bf k}^{\prime},t) = \hat{G}_{\pm}({\bf k},t), \tag{4.17}\] \[{\cal M}^{\prime}(t) = (\det{\bf A})^{1/4}{\cal M}(t). \tag{4.18}\] The bulk susceptibility remains invariant as follows from the sum rules for the bulk correlation functions \[\chi^{\prime}_{\pm}(t)=\int d^{d}{\bf x}^{\prime}\ \ G^{\prime}_{\pm}({\bf x}^{\prime},t) \tag{4.19}\] \[=\int d^{d}{\bf x}\ \ G_{\pm}({\bf x},t)=\chi_{\pm}(t). \tag{4.20}\] The partition function and the total free energy are not invariant but are transformed as [6] \[Z^{\prime} = (\det{\bf A})^{nN/(2d)}\ Z, \tag{4.21}\] \[{\cal F}^{\prime}_{\rm tot} = {\cal F}_{\rm tot}\ -\ [nN/(2d)]\ln(\det{\bf A}) \tag{4.22}\] where the last term is a nonsingular bulk contribution (which is absent in our generalized shear transformation of the \(n\)-vector model to be defined below). Thus the singular part \({\cal F}_{s}\) of the total free energy is invariant under the shear transformation, \[{\cal F}^{\prime}_{s}={\cal F}_{s}, \tag{4.23}\] and the singular part \(f_{b,s}(t)\) of the bulk free energy density is transformed as \[f^{\prime}_{b,s,\pm}(t) = \lim_{V^{\prime}\to\infty}{\cal F}^{\prime}_{s}/V^{\prime}=(\det{\bf A})^{1/2}f_{b,s,\pm}(t). \tag{4.24}\] This implies that the singular bulk part \({\cal F}_{b,s,\pm}\) of the free energy \[{\cal F}^{\prime}_{b,s,\pm}=V^{\prime}f^{\prime}_{b,s}=Vf_{b,s,\pm}={\cal F}_{b,s,\pm} \tag{4.25}\] is invariant under the shear transformation. All of the above relations are exact and a consequence of the special shear transformation (4.1)-(4.3). They are valid for arbitrary \(t\) above, at, and below \(T_{c}\). So far the quantities \({\bf e}^{(\alpha)}\), \({\mathbf{\lambda}}\), \(\bar{\mathbf{\lambda}}\), \({\bf A}\), and \(\bar{\bf A}\) governing the transformations are nonuniversal quantities which are defined as a function of the couplings \(K_{i,j}\) according to (2.3). This was called parametrization (i) in [13]. They are independent of \(n\) and \(u_{0}\) and remain valid in the large-\(n\) limit and for the Gaussian model. For the purpose of a later extension of the anisotropic theory to systems other than the \(\varphi^{4}\) model we confine ourselves in the following to the asymptotic critical scaling region where this parametrization in terms of \(K_{i,j}\) can be reformulated in favor of a parametrization in terms of correlation lengths. This was called parametrization (ii) in Sec. V. of [13].
This critical scaling region is defined by large \(|{\bf x}^{\prime}|\), large \(\xi^{\prime}_{\pm}(t)\), but finite \(|{\bf x}^{\prime}|/\xi^{\prime}_{\pm}(t)\geq 0\), where \[\xi^{\prime}_{\pm}(t)=\xi^{\prime}_{0\pm}|t|^{-\nu} \tag{4.26}\] is the asymptotic isotropic correlation length defined through the exponential decay of the correlation function \(G^{\prime}_{\pm}({\bf x}^{\prime},t)\) of the Hamiltonian (4.13). In this scaling region \(G^{\prime}_{\pm}({\bf x}^{\prime},t)\) depends only on \(|{\bf x}^{\prime}|\) rather than \({\bf x}^{\prime}\), thus we shall make the replacement \[G^{\prime}_{\pm}({\bf x}^{\prime},t)\longrightarrow G^{\prime}_{\pm}(|{\bf x}^{\prime}|,t) \tag{4.27}\] in the remainder of this paper, and correspondingly \[\hat{G}^{\prime}_{\pm}({\bf k}^{\prime},t)\longrightarrow\hat{G}^{\prime}_{\pm}(|{\bf k}^{\prime}|,t). \tag{4.28}\] The amplitudes \(\xi^{\prime}_{0+}\) and \(\xi^{\prime}_{0-}\) are universally related by \[\xi^{\prime}_{0+}/\xi^{\prime}_{0-}=X_{\xi} \tag{4.29}\] where \(X_{\xi}\) is the same universal constant as in (3.3). The principal correlation lengths above and below \(T_{c}\) have been determined as [1] \[\xi^{(\alpha)}_{\pm}(t) = \xi^{(\alpha)}_{0\pm}|t|^{-\nu}={\lambda_{\alpha}}^{1/2}\xi^{\prime}_{\pm}(t). \tag{4.30}\] As a consequence their amplitude ratios are independent of the direction \(\alpha\) and are universal quantities given by \[\xi^{(\alpha)}_{0+}/\xi^{(\alpha)}_{0-}=\xi^{\prime}_{0+}/\xi^{\prime}_{0-}=X_{\xi}\ \ \mbox{for each}\ \ \alpha \tag{4.31}\] where \(X_{\xi}\) is independent of \(\alpha\) and the same universal constant as in (3.3). This holds for arbitrary short-range interactions and lattice structures of weakly anisotropic \(\varphi^{4}\) models. Together with the fact that the critical exponents above, at, and below \(T_{c}\) of weakly anisotropic systems are the same as those of isotropic systems in the same universality class, we consider the result (4.31) for the amplitude ratio \(\xi^{(\alpha)}_{0+}/\xi^{(\alpha)}_{0-}\) to be an additional characteristic feature of weakly anisotropic systems. According to (4.30) the eigenvalues can be expressed in terms of ratios of correlation lengths as \[{\lambda_{\alpha}}^{1/2} = \xi^{(\alpha)}_{0+}/\xi^{\prime}_{0+}=\xi^{(\alpha)}_{0-}/\xi^{\prime}_{0-} \tag{4.32}\] which are the same above and below \(T_{c}\). This implies \[(\det{\mathbf{\lambda}})^{1/2} = \prod_{\alpha=1}^{d}\big{(}\xi^{(\alpha)}_{0\pm}/\xi^{\prime}_{0\pm}\big{)}=\big{(}\bar{\xi}_{0\pm}/\xi^{\prime}_{0\pm}\big{)}^{d} \tag{4.33}\] with the mean correlation length \[\bar{\xi}_{\pm}(t) = \big{[}\prod_{\alpha=1}^{d}\xi_{\pm}^{(\alpha)}(t)\big{]}^{1/d}=\bar{\xi}_{0\pm}|t|^{-\nu}, \tag{4.34}\] \[\bar{\xi}_{0\pm} = \big{[}\prod_{\alpha=1}^{d}\xi_{0\pm}^{(\alpha)}\big{]}^{1/d}, \tag{4.35}\] \[\bar{\xi}_{0+}/\bar{\xi}_{0-} = X_{\xi}. \tag{4.36}\] Equations (4.33) and (4.9) yield the desired representation of \(\det{\bf A}\) in terms of correlation lengths as \[(\det{\bf A})^{1/2} = \prod_{\alpha=1}^{d}\big{(}\xi_{0+}^{(\alpha)}/\xi_{0+}^{\prime}\big{)}=\big{(}\bar{\xi}_{0+}/\xi_{0+}^{\prime}\big{)}^{d} \tag{4.37}\] \[= \prod_{\alpha=1}^{d}\big{(}\xi_{0-}^{(\alpha)}/\xi_{0-}^{\prime}\big{)}=\big{(}\bar{\xi}_{0-}/\xi_{0-}^{\prime}\big{)}^{d}.
\tag{4.38}\] This enables us to reformulate the special shear transformations (4.14)-(4.18) and (4.24) in terms of correlation lengths as \[V^{\prime} = \big{(}\bar{\xi}_{0\pm}/\xi_{0\pm}^{\prime}\big{)}^{-d}\;V, \tag{4.39}\] \[G^{\prime}_{\pm}(|{\bf x}^{\prime}|,t) = \big{(}\bar{\xi}_{0\pm}/\xi_{0\pm}^{\prime}\big{)}^{d}\;G_{\pm}({\bf x},t), \tag{4.40}\] \[{\cal M}^{\prime}(t) = \big{(}\bar{\xi}_{0\pm}/\xi_{0\pm}^{\prime}\big{)}^{d/2}\;{\cal M}(t), \tag{4.41}\] \[f^{\prime}_{b,s}(t) = \big{(}\bar{\xi}_{0\pm}/\xi_{0\pm}^{\prime}\big{)}^{d}\;f_{b,s}(t), \tag{4.42}\] with the \(T\)-independent transformation factor \(\big{(}\bar{\xi}_{0\pm}/\xi_{0\pm}^{\prime}\big{)}^{d}\). This reformulation is needed for the derivation of the anisotropic physical properties within the \(\varphi^{4}\) theory in the subsequent sections. The factor \(\big{(}\bar{\xi}_{0\pm}/\xi_{0\pm}^{\prime}\big{)}^{d}\) can be interpreted as the ratio of the ellipsoidal correlation volume \[V_{\text{cor},\pm}=\prod_{\alpha=1}^{d}\xi_{0\pm}^{(\alpha)}=\big{(}\bar{\xi}_{0\pm}\big{)}^{d} \tag{4.43}\] of the anisotropic system and the spherical correlation volume \[V^{\prime}_{\text{cor},\pm}=\big{(}\xi_{0\pm}^{\prime}\big{)}^{d} \tag{4.44}\] of the transformed isotropic system, with the ratio \[V_{\text{cor},+}/V_{\text{cor},-}=V^{\prime}_{\text{cor},+}/V^{\prime}_{\text{cor},-}=(X_{\xi})^{d}. \tag{4.45}\] Eq. (4.39) can be rewritten as \[\frac{V^{\prime}}{V^{\prime}_{\text{cor},+}}=\frac{V}{V_{\text{cor},+}},\;\;\;\frac{V^{\prime}}{V^{\prime}_{\text{cor},-}}\;=\;\frac{V}{V_{\text{cor},-}}. \tag{4.46}\] This means that the ratios of the geometric volumes and correlation volumes remain invariant under the special shear transformation (4.1) above and below \(T_{c}\). This implies that the quantity \[t[V^{\prime}/(\xi_{0+}^{\prime})^{d}]^{1/(d\nu)}=t[V/(\bar{\xi}_{0+})^{d}]^{1/(d\nu)}, \tag{4.47}\] appearing as the scaling variable in the bulk and finite-size theory of the \(\varphi^{4}\) theory, remains invariant under the special shear transformation. Eqs. (4.37) and (4.38) can be rewritten as \[\big{(}\xi_{0\pm}^{\prime}\big{)}^{d} = (\det{\bf A})^{-1/2}\big{(}\bar{\xi}_{0\pm}\big{)}^{d}, \tag{4.48}\] which describes the shear transformation of the anisotropic correlation volume to the isotropic correlation volume. The invariance of the susceptibility (4.19) yields in the asymptotic critical region \[\chi_{\pm}(t) = \Gamma_{\pm}|t|^{-\gamma}=\chi_{\pm}^{\prime}(t)=\Gamma_{\pm}^{\prime}|t|^{-\gamma}, \tag{4.49}\] \[\Gamma_{\pm}^{\prime} = \Gamma_{\pm}. \tag{4.50}\] This implies the invariance of the combination \[\big{(}\xi_{0\pm}^{\prime}\big{)}^{d}G_{\pm}^{\prime}(|{\bf x}^{\prime}|,t)/\Gamma^{\prime}_{\pm} = \big{(}\bar{\xi}_{0\pm}\big{)}^{d}G_{\pm}({\bf x},t)/\Gamma_{\pm} \tag{4.51}\] under the special shear transformation. We shall show that the invariance of both the combination (4.51) and the volume ratio (4.46) are universal features of weakly anisotropic systems. The two independent nonuniversal parameters \(\xi_{0\pm}^{\prime}\) and \(\Gamma_{\pm}^{\prime}\) will not be specified further. They can be determined from the correlation function and the susceptibility as functions of the parameter \(a_{0}\) in (2.2) and the coupling \(u_{0}^{\prime}\) of the isotropic Hamiltonian (4.13) where, according to (4.3), \(u_{0}^{\prime}\) depends on the four-point coupling \(u_{0}\) and on \(K_{i,j}\) through the anisotropy matrix \({\bf A}\) of the original anisotropic \(\varphi^{4}\) Hamiltonian.
We note that \(\xi_{0\pm}^{\prime}\) plays the role of a reference length in the framework of renormalized perturbation theory [6; 13] based on the transformed \(\varphi^{4}\) Hamiltonian (4.13). It is important to note that the ratios of the eigenvalues can be expressed in terms of the ratio of the principal correlation lengths \[\lambda_{\alpha}/\lambda_{\beta}=\Big{(}\xi_{0\pm}^{(\alpha)}/\xi_{0\pm}^{(\beta)}\Big{)}^{2} \tag{4.52}\] as follows from (4.30). This makes it possible to reexpress the diagonal elements of the reduced rescaling matrix (4.12) in terms of ratios of principal correlation lengths as \[\bar{\lambda}_{\alpha}\big{(}\{\xi_{0\pm}^{(\alpha)}\}\big{)} = \prod_{\beta=1,\;\beta\neq\alpha}^{d}\Big{(}\frac{\xi_{0+}^{(\alpha)}}{\xi_{0+}^{(\beta)}}\Big{)}^{2/d}=\Big{(}\frac{\xi_{0+}^{(\alpha)}}{\bar{\xi}_{0+}}\Big{)}^{2} \tag{4.53}\] \[= \prod_{\beta=1,\;\beta\neq\alpha}^{d}\Big{(}\frac{\xi_{0-}^{(\alpha)}}{\xi_{0-}^{(\beta)}}\Big{)}^{2/d}=\Big{(}\frac{\xi_{0-}^{(\alpha)}}{\bar{\xi}_{0-}}\Big{)}^{2}\,. \tag{4.54}\] Here the isotropic correlation length \(\xi_{0\pm}^{\prime}\) has been canceled. The rescaling and reduced rescaling matrices are related by \[\mathbf{\lambda}^{1/2}\xi_{0\pm}^{\prime} = \bar{\mathbf{\lambda}}^{1/2}\bar{\xi}_{0\pm}. \tag{4.55}\] From the alternative definition (4.11) and from (4.53) and (4.54) we then obtain the reduced anisotropy matrix in terms of ratios of principal correlation lengths as \[\bar{\bf A}=\bar{\bf A}\big{(}\big{\{}\xi_{0\pm}^{(\alpha)},{\bf e}^{(\alpha)}\big{\}}\big{)}={\bf U}(\{{\bf e}^{(\alpha)}\})^{-1}\bar{\boldsymbol{\lambda}}\big{(}\{\xi_{0\pm}^{(\alpha)}\}\big{)}{\bf U}(\{{\bf e}^{(\alpha)}\}) \tag{4.56}\] which is the parametrization (ii) of \(\mathbf{\bar{A}}\) as given in Eqs. (5.12) and (5.30) of [13]. Both \(\bar{\boldsymbol{\lambda}}\) and \(\mathbf{\bar{A}}\) are nonuniversal quantities since they depend on nonuniversal parameters \(\xi^{(\alpha)}_{0\pm}\) and \(\mathbf{e}^{(\alpha)}\). We shall show in Sec. IV.C, however, that in this parametrization (ii) the dependence of the matrices \(\mathbf{\bar{\lambda}}\big{(}\{\xi^{(\alpha)}_{0\pm}\}\big{)}\) and \(\mathbf{\bar{A}}\big{(}\{\xi^{(\alpha)}_{0\pm},\mathbf{e}^{(\alpha)}\}\big{)}\) on these parameters attains a universal structure, unlike the parametrization (i) in terms of the couplings. All of the relations presented above for the \(\varphi^{4}\) theory at finite \(n\) remain applicable also in the large-\(n\) limit. Furthermore they remain applicable to the Gaussian lattice model for \(T\geq T_{c}\), \[H^{G} = v\Bigg{[}\sum_{i=1}^{N}\frac{r_{0}}{2}\varphi_{i}^{2}+\sum_{i,j=1}^{N}\frac{K_{i,j}}{2}(\varphi_{i}-\varphi_{j})^{2}\Bigg{]}, \tag{4.57}\] \(r_{0}(T)=a_{0}t\geq 0\), which is obtained from (2.1) by setting \(u_{0}=0\) and \(r_{0c}=0\). We shall consider the continuum version of this model in the form \[H^{G}_{\rm field}=\int_{V}d^{d}x\Big{[}\frac{r_{0}}{2}\varphi^{2}+\sum_{\alpha,\beta=1}^{d}\frac{A_{\alpha\beta}}{2}\frac{\partial\varphi}{\partial x_{\alpha}}\frac{\partial\varphi}{\partial x_{\beta}}\Big{]} \tag{4.58}\] with the same anisotropy matrix \(\mathbf{A}\) (2.3) as for the \(\varphi^{4}\) model. Correspondingly the isotropic Gaussian Hamiltonian obtained after the special shear transformation (4.1) and (4.2) reads \[H^{\prime G}_{\rm field} = \int_{V^{\prime}}d^{d}x^{\prime}\Big{[}\frac{r_{0}}{2}\varphi^{\prime}(\mathbf{x}^{\prime})^{2}+\frac{1}{2}(\nabla^{\prime}\varphi^{\prime})^{2}\Big{]}.
\tag{4.59}\] The advantage of the special shear transformation (4.1)-(4.3) is that the transformed Hamiltonian (4.13) has the form of the standard isotropic field-theoretic Landau-Ginzburg-Wilson Hamiltonian. This implies that the same renormalization factors (\(Z\) factors) can be employed for the transformed system as for the established isotropic \(\varphi^{4}\) field theory. This has permitted us to perform renormalized perturbation theory [6; 13] for the transformed isotropic system in \(2<d<4\) dimensions. The disadvantage of this shear transformation is, however, that it is not directly applicable to other weakly anisotropic systems such as the \(n\)-vector model where it is unknown how to construct an appropriate continuum Hamiltonian with a large-distance anisotropy matrix as a function of the couplings \(E_{i,j}\) that plays the same role as \(\mathbf{A}\) in the \(\varphi^{4}\) theory. In the following we introduce a generalized shear transformation that does not need an anisotropy matrix and that is applicable to all weakly anisotropic systems. ### Generalized shear transformation of the \(\varphi^{4}\) theory We aim at introducing a shear transformation of the \(\varphi^{4}\) theory in the scaling region that differs from the special shear transformation in that the explicit use of the anisotropy matrix \(\mathbf{A}\) is avoided. The principal correlation lengths of the \(\varphi^{4}\) model above, at, and below \(T_{c}\) are directed along the \(T\)-independent principal unit vectors \(\mathbf{e}^{(\alpha)}\). Accordingly we introduce the orthogonal set of \(d\)_principal correlation vectors_ \[\mathbf{\xi}^{(\alpha)}_{\pm}(t)=\mathbf{\xi}^{(\alpha)}_{0\pm}|t|^{-\nu}=\xi^{( \alpha)}_{0\pm}|t|^{-\nu}\mathbf{e}^{(\alpha)},\ \ \alpha=1,...,d \tag{4.60}\] where \(\xi^{(\alpha)}_{0\pm}\) are the amplitudes of the principal correlation lengths. We use a Cartesian coordinate system with orthogonal unit vectors \(\mathbf{\epsilon}^{(\alpha)}\), \(\alpha=1,...,d\) along the \(d\) Cartesian axes. The shear transformation is achieved by a rotation of the vectors \(\mathbf{\xi}^{(\alpha)}_{0\pm}\) by means of the orthogonal matrix \(\mathbf{U}\big{(}\{\mathbf{e}^{(\alpha)}\}\big{)}\) with matrix elements \(U_{\alpha\beta}=e^{(\alpha)}_{\beta}\) (which are the same as in the special shear transformation) such that these vectors point along the direction of the Cartesian axes, \[\mathbf{U}\mathbf{e}^{(\alpha)}=\mathbf{\epsilon}^{(\alpha)},\ \ \mathbf{U}\mathbf{\xi}^{( \alpha)}_{0\pm}=\xi^{(\alpha)}_{0\pm}\mathbf{\epsilon}^{(\alpha)}, \tag{4.61}\] and by a subsequent rescaling of their lengths by means of a diagonal matrix \(\mathbf{\widetilde{\lambda}}^{-1/2}\). This matrix can be chosen to be independent of the temperature, i.e., it can be chosen to be the same above, at, and below \(T_{c}\) as will be specified below. This yields the \(d\) transformed correlation vectors \[\mathbf{\widetilde{\xi}}^{(\alpha)}_{0\pm}=\mathbf{\widetilde{\lambda}}^{-1/2} \mathbf{U}\mathbf{\xi}^{(\alpha)}_{0\pm}=\widetilde{\lambda}^{-1/2}_{\alpha}\xi^{ (\alpha)}_{0\pm}\mathbf{\epsilon}^{(\alpha)} \tag{4.62}\] where \(\widetilde{\lambda}_{\alpha}\) are the diagonal elements of \(\mathbf{\widetilde{\lambda}}\). 
To obtain a system that is isotropic in the scaling region it is necessary and sufficient that \(\widetilde{\lambda}^{1/2}_{\alpha}\) is proportional to \(\xi^{(\alpha)}_{0\pm}\), \[\widetilde{\lambda}^{1/2}_{\alpha}=\widetilde{c}_{\pm}\ \xi^{(\alpha)}_{0\pm}, \tag{4.63}\] where \(\widetilde{c}_{+}>0\) and \(\widetilde{c}_{-}>0\) are independent of the direction \(\alpha\). This rescaling guarantees that the lengths \(|\mathbf{\widetilde{\xi}}^{(\alpha)}_{0\pm}|\) of the transformed correlation vectors (4.62) \[|\mathbf{\widetilde{\xi}}^{(\alpha)}_{0\pm}|=|\mathbf{\widetilde{\lambda}}^{-1/2}\mathbf{U}\mathbf{\xi}^{(\alpha)}_{0\pm}|=\widetilde{\lambda}^{-1/2}_{\alpha}\xi^{(\alpha)}_{0\pm}=\widetilde{c}^{-1}_{\pm} \tag{4.64}\] become independent of the direction \(\alpha\). This implies that isotropic correlations are obtained in the large-distance scaling regime where the isotropic lengths \(\widetilde{c}^{-1}_{\pm}\) are as yet unspecified. The transformation (4.1)-(4.3) corresponds to the special choice \(\widetilde{c}_{\pm}=(\xi^{\prime}_{0\pm})^{-1}\) where \(\xi^{\prime}_{0\pm}\) is the correlation-length amplitude of the isotropic Hamiltonian \(H^{\prime}_{\rm field}\), (4.13), and thus depends on the parameters \(a_{0}\) and \((\det\mathbf{A})^{1/2}u_{0}\) of the original anisotropic \(\varphi^{4}\) model through (4.3). For the explicit expression of \(\xi^{\prime}_{0\pm}\) see Eq. (2.82) of [12]. Here we make a more general and conceptually different choice of \(\widetilde{c}_{\pm}\) _that is independent of any parameter of the original anisotropic system_. We choose \[\widetilde{c}^{-1}_{+} = \xi^{\rm iso}_{0+},\ \ \widetilde{c}^{-1}_{-}=\xi^{\rm iso}_{0-}, \tag{4.65}\] thus \[\mathbf{\widetilde{\xi}}^{(\alpha)}_{0\pm} = \xi^{\rm iso}_{0\pm}\mathbf{\epsilon}^{(\alpha)},\ \ \ |\mathbf{\widetilde{\xi}}^{(\alpha)}_{0\pm}|=\xi^{\rm iso}_{0\pm},\ \ \alpha=1,...,d \tag{4.66}\] where \(\xi^{\rm iso}_{0+}\) and \(\xi^{\rm iso}_{0-}\) are free parameters, together with the requirement that \[\xi^{\rm iso}_{0+}/\xi^{\rm iso}_{0-}=X_{\xi}={\rm universal}. \tag{4.67}\] This requirement is necessary in order to comply with the fact implied by two-scale-factor universality that the ratio of the correlation lengths above and below \(T_{c}\) of any isotropic system near \(T_{c}\) must satisfy the universal relation (3.3). According to (4.31) this implies \(\xi^{(\alpha)}_{0+}/\xi^{(\alpha)}_{0-}=\xi^{\rm iso}_{0+}/\xi^{\rm iso}_{0-}\) or \[\xi^{(\alpha)}_{0-}/\xi^{\rm iso}_{0-}=\xi^{(\alpha)}_{0+}/\xi^{\rm iso}_{0+}, \tag{4.68}\] thus we obtain from (4.63)-(4.68) \(d\) different constants \[\widetilde{\lambda}^{1/2}_{\alpha} = \xi^{(\alpha)}_{0-}/\xi^{\rm iso}_{0-}=\xi^{(\alpha)}_{0+}/\xi^{\rm iso}_{0+} \tag{4.69}\] which are the same above and below \(T_{c}\). By continuity, this \(T\)-independent identification of \(\widetilde{\lambda}_{\alpha}\) is applicable also at \(T=T_{c}\). This property is analogous to the relation (4.32) where the \(T\)-independent eigenvalues \(\lambda_{\alpha}(K_{i,j})\) of the anisotropy matrix \({\bf A}(K_{i,j})\) can be expressed in terms of \(\xi^{(\alpha)}_{0\pm}/\xi^{\prime}_{0\pm}\), thus both diagonal matrices \(\widetilde{\boldsymbol{\lambda}}\) and \(\boldsymbol{\lambda}\) are \(T\)-independent and are related by \[\widetilde{\boldsymbol{\lambda}} = \left(\frac{\xi^{\prime}_{0-}}{\xi^{\rm iso}_{0-}}\right)^{2}\boldsymbol{\lambda}=\left(\frac{\xi^{\prime}_{0+}}{\xi^{\rm iso}_{0+}}\right)^{2}\boldsymbol{\lambda}.
\tag{4.70}\] We note that the mean correlation length can be used to express \(\det\widetilde{\boldsymbol{\lambda}}\) as \[\det\widetilde{\boldsymbol{\lambda}}=(\bar{\xi}_{0\pm}/\xi^{\rm iso}_{0\pm})^{2d}. \tag{4.71}\] We apply this generalized shear transformation to the lattice vectors \({\bf x}\) of the anisotropic system \[\widetilde{\bf x}=\widetilde{\boldsymbol{\lambda}}^{-1/2}{\bf U}{\bf x}, \tag{4.72}\] with \(\widetilde{\boldsymbol{\lambda}}\) given by (4.69). This generates lattice vectors \(\widetilde{\bf x}\) of an isotropic system with the correlation-length amplitudes \(\xi^{\rm iso}_{0\pm}\) in the scaling region. This transformation is applicable not only in the presence of finite correlation lengths \(\xi^{\rm iso}_{\pm}(t)\) for \(T\neq T_{c}\) but also right at \(T=T_{c}\). We may also consider the inverse shear transformation \[{\bf x}={\bf U}^{-1}\widetilde{\boldsymbol{\lambda}}^{1/2}\widetilde{\bf x} \tag{4.73}\] of the isotropic system to the anisotropic system with given principal axes and given principal correlation lengths. Eq. (4.73) describes a rescaling of the coordinates of the isotropic system in the direction of the Cartesian axes followed by a rotation of these axes into the direction of the principal axes of the anisotropic system. Our generalized shear transformation differs significantly from the special transformation (4.1)-(4.3) in that our transformation is a pure coordinate transformation without transforming the field \(\varphi\) and the coupling \(u_{0}\). As a consequence, this transformation leaves the amplitude of the order-parameter correlation function \(G^{\rm sp}\) invariant (as discussed in Sec. II. B) but not of the susceptibility. Within this generalized transformation, we are primarily interested in determining the reduced anisotropy matrix \(\bar{\bf A}^{\rm gen}={\bf U}^{-1}\bar{\boldsymbol{\lambda}}^{\rm gen}{\bf U}\) via the alternative definition given in (4.11) in terms of a reduced rescaling matrix \(\bar{\boldsymbol{\lambda}}^{\rm gen}\). In this transformation the reduced rescaling matrix is defined by \[\bar{\boldsymbol{\lambda}}^{\rm gen} = \frac{\widetilde{\boldsymbol{\lambda}}}{(\det\widetilde{\boldsymbol{\lambda}})^{1/d}}\;\;, \tag{4.74}\] \[\det\widetilde{\boldsymbol{\lambda}} = \prod_{\alpha=1}^{d}\widetilde{\lambda}_{\alpha}, \tag{4.75}\] where the diagonal elements \(\widetilde{\lambda}_{\alpha}\) of \(\widetilde{\boldsymbol{\lambda}}\) are given by (4.69). The rescaling matrix \(\widetilde{\boldsymbol{\lambda}}\) and the reduced rescaling matrix \(\bar{\boldsymbol{\lambda}}^{\rm gen}\) are related by \[\widetilde{\boldsymbol{\lambda}}^{1/2}\xi^{\rm iso}_{0\pm} = \big{(}\bar{\boldsymbol{\lambda}}^{\rm gen}\big{)}^{1/2}\bar{\xi}_{0\pm}. \tag{4.76}\] We see that the dependence on \(\xi^{\rm iso}_{0\pm}\) is canceled in the diagonal matrix (4.74). This implies that the matrix (4.74) is the same as that defined in (4.12), \[\bar{\boldsymbol{\lambda}}^{\rm gen}=\bar{\boldsymbol{\lambda}}, \tag{4.77}\] with the same diagonal elements as given in (4.53) and (4.54). As a consequence, the reduced anisotropy matrix in the generalized shear transformation \[\bar{\bf A}^{\rm gen}=\bar{\bf A}\big{(}\{\xi^{(\alpha)}_{0\pm},{\bf e}^{(\alpha)}\}\big{)} \tag{4.78}\] is identical with that given in (4.56) within the special shear transformation (4.1)-(4.3).
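The cancellation behind (4.74)-(4.78) is easily checked numerically; a short sketch (with arbitrary sample amplitudes \(\xi^{(\alpha)}_{0+}\)) showing that the reduction removes the free parameter \(\xi^{\rm iso}_{0+}\) and reproduces the diagonal elements (4.53):

```python
import numpy as np

xi = np.array([0.8, 1.5, 2.3])   # sample principal correlation-length amplitudes
d = xi.size
xi_mean = np.prod(xi)**(1.0 / d) # mean correlation length amplitude, cf. (4.35)

def lam_bar_gen(xi_iso):
    lam_tilde = (xi / xi_iso)**2                        # (4.69)
    return lam_tilde / np.prod(lam_tilde)**(1.0 / d)    # (4.74)

# (4.77): the result does not depend on the free parameter xi_iso ...
assert np.allclose(lam_bar_gen(0.37), lam_bar_gen(4.2))
# ... and agrees with the diagonal elements (4.53), (xi^(alpha)/xi_mean)^2
assert np.allclose(lam_bar_gen(1.0), (xi / xi_mean)**2)
print("lambda_bar^gen = lambda_bar confirmed")
```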
This result demonstrates that the generalized shear transformation is capable of deriving the structure of the reduced anisotropy matrix \(\bar{\bf A}\) of the anisotropic \(\varphi^{4}\) model without explicit knowledge of the anisotropy matrix \({\bf A}\). Furthermore the generalized shear transformation is simpler than the special shear transformation in that it is a pure coordinate transformation. For this reason this transformation is applicable also to other weakly anisotropic systems. The generalized shear transformation introduced above for the \(\varphi^{4}\) theory at finite \(n\) remains applicable also in the large-\(n\) limit of the \(\varphi^{4}\) theory and to the Gaussian model for \(T\geq T_{c}\). We note, however, that there is no advantage within the \(\varphi^{4}\) theory to work with the generalized shear transformation (4.72) since the corresponding transformed Hamiltonian \(\widetilde{H}_{\rm field}\) would not have the form of a standard isotropic \(\varphi^{4}\) theory. For this reason it is more convenient within the \(\varphi^{4}\) theory to employ the special shear transformation (4.1)-(4.3) which allows one to use ordinary renormalized isotropic perturbation theory within the transformed system on the basis of the standard isotropic Landau-Ginzburg-Wilson Hamiltonian (4.13). ### Generalized shear transformation of the n-vector model As a representative of weakly anisotropic systems other than the \(\varphi^{4}\) model we take the fixed-length \(n\)-vector Hamiltonian \(H^{\rm sp}\), (2.22), for general \(d\) and \(n\). We consider only the scaling region. Our only assumptions for this model are (i) that there exist \(d\) principal unit vectors \({\bf e}^{{\rm sp}(\alpha)}\) in the directions of \(d\) principal axes together with \(d\) principal correlation lengths above and below \(T_{c}\) \[\xi_{\pm}^{{\rm sp}(\alpha)}(t)=\xi_{0\pm}^{{\rm sp}(\alpha)}|t|^{-\nu} \tag{4.79}\] along these directions where \(\nu\) is the critical exponent of the isotropic \(n\)-vector model, and (ii) that the amplitude ratios satisfy \[\xi_{0+}^{{\rm sp}(\alpha)}/\xi_{0-}^{{\rm sp}(\alpha)}=X_{\xi}\;\;{\rm for\ each}\;\;\alpha \tag{4.80}\] where \(X_{\xi}\) is the same universal constant as in (3.3) for isotropic systems of the \((d,n)\) universality class. These assumptions are supported by our exact results (4.31) for the \(\varphi^{4}\) model for general \((d,n)\) and for the anisotropic two-dimensional Ising model [14], as discussed in Sec. VIII. B. We shall show that for deriving the universal structures of the correlation function and of bulk amplitude relations it is not necessary to know the dependence of \(\xi_{0\pm}^{{\rm sp}(\alpha)}\) and of the principal unit vectors \({\bf e}^{{\rm sp}(\alpha)}\) on the microscopic couplings \(E_{i,j}\). For most of our results and conclusions the assumption (4.80) is not needed. Our introduction of the generalized shear transformation of the \(n\)-vector model is parallel to (4.60)-(4.73).
Accordingly we introduce the orthogonal set of \(d\) principal correlation vectors \[{\bf\xi}_{0\pm}^{{\rm sp}(\alpha)}=\xi_{0\pm}^{{\rm sp}(\alpha)}{\bf e}^{{\rm sp }(\alpha)},\;\;\;\alpha=1,...,d \tag{4.81}\] and perform a rotation by means of \[\widehat{\bf U}={\bf U}\big{(}\big{\{}{\bf e}^{{\rm sp}(\alpha)}\big{\}}\big{)} \tag{4.82}\] with matrix elements \(\widehat{U}_{\alpha\beta}=e_{\beta}^{{\rm sp}(\alpha)}\) such that these vectors point along the direction of the Cartesian axes, \[\widehat{\bf U}{\bf e}^{{\rm sp}(\alpha)}={\bf\epsilon}^{(\alpha)},\;\; \widehat{\bf U}{\bf\xi}_{0\pm}^{{\rm sp}(\alpha)}=\;\xi_{0\pm}^{{\rm sp}( \alpha)}{\bf\epsilon}^{(\alpha)}. \tag{4.83}\] As for the \(\varphi^{4}\) model, the orthogonal matrix \(\widehat{\bf U}\) contains \(d(d-1)/2\) independent matrix elements which are needed to specify the directions \({\bf e}^{{\rm sp}(\alpha)}\) of the principal axes. These \(d\) mutually orthogonal axes can be described by \(d(d-1)/2\) independent angles \(\Omega_{i}\), \(i=1,2,...,d(d-1)/2\), i.e., one angle \(\Omega\) in two dimensions, three angles in three dimensions, six angles in four dimensions, etc., \[{\bf U}\big{(}\{{\bf e}^{{\rm sp}(\alpha)}\}\big{)}={\bf U}\big{(}\Omega_{1},\Omega_{2},\Omega_{3},...\big{)}. \tag{4.84}\] A subsequent rescaling of the lengths \(\xi_{0\pm}^{{\rm sp}(\alpha)}\) by means of a diagonal matrix \(\widehat{\bf\lambda}^{-1/2}\) yields the \(d\) transformed vectors \[\widehat{\bf\xi}_{0\pm}^{(\alpha)} = \widehat{\bf\lambda}^{-1/2}\widehat{\bf U}{\bf\xi}_{0\pm}^{{\rm sp }(\alpha)}=\widehat{\lambda}_{\alpha}^{-1/2}\xi_{0\pm}^{{\rm sp}(\alpha)}{ \bf\epsilon}^{(\alpha)}. \tag{4.85}\] Here we make the choice, similar to (4.63), \[\widehat{\lambda}_{\alpha}^{1/2}=\widetilde{c}_{\pm}\;\xi_{0\pm}^{{\rm sp}( \alpha)}, \tag{4.86}\] together with the same choice for \(\widetilde{c}_{\pm}\) as specified in (4.65) and (4.67), with free parameters \(\xi_{0\pm}^{{\rm iso}}\). Because of (4.67) and (4.80) this implies \(\xi_{0+}^{{\rm sp}(\alpha)}/\xi_{0-}^{{\rm sp}(\alpha)}=\xi_{0+}^{{\rm iso}}/ \xi_{0-}^{{\rm iso}}\) or \[\xi_{0-}^{{\rm sp}(\alpha)}/\xi_{0-}^{{\rm iso}}=\xi_{0+}^{{\rm sp}(\alpha)}/ \xi_{0+}^{{\rm iso}}, \tag{4.87}\] thus the diagonal elements of \(\widehat{\bf\lambda}\) are defined to be \(T\)-independent parameters given by \[\widehat{\lambda}_{\alpha} = \Big{(}\xi_{0-}^{{\rm sp}(\alpha)}/\xi_{0-}^{{\rm iso}}\Big{)}^{2 }=\Big{(}\xi_{0+}^{{\rm sp}(\alpha)}/\xi_{0+}^{{\rm iso}}\Big{)}^{2}, \tag{4.88}\] similar to (4.32) and (4.69). This transforms the \(d\) different principal correlation vectors (4.81) to the \(d\) vectors \(\widehat{\bf\xi}_{0\pm}^{(\alpha)}=\xi_{0\pm}^{{\rm iso}}{\bf\epsilon}^{(\alpha)}\) with the angular-independent lengths \(\xi_{0+}^{{\rm iso}}\) and \(\xi_{0-}^{{\rm iso}}\) above and below \(T_{c}\), respectively, representing an isotropic system. Correspondingly we define the \(T\)-independent shear transformation of the lattice points \({\bf x}\to\widehat{\bf x}\) at fixed couplings \(E_{i,j}\) of the anisotropic \(n\)-vector model by \[\widehat{\bf x}=\widehat{\bf\lambda}^{-1/2}\widehat{\bf U}{\bf x}, \tag{4.89}\] with \(\widehat{\bf\lambda}\) given by (4.88) and \(\widehat{\bf U}\) defined by (4.82). This generates an \(n\)-vector model on a transformed lattice with lattice points \(\widehat{\bf x}\) and with isotropic correlations characterized by the correlation length \(\xi_{0\pm}^{{\rm iso}}\) for which two-scale-factor universality can be invoked. 
In constructing this transformation no assumption has been made other than the existence of the principal correlation lengths (4.79) and, for the application to \(T<T_{c}\), the universality of the amplitude relation (4.80). Now we have arrived at the position to derive the structure of the reduced anisotropy matrix \(\bar{\bf A}^{\rm sp}\) that will enter the anisotropic correlation function \(G^{\rm sp}({\bf x},t)\) of the \(n\)-vector model to be derived in Sec. V. B. We define the reduced rescaling matrix \[\bar{\boldsymbol{\lambda}}^{\rm sp}\big{(}\{\xi_{0\pm}^{{\rm sp}(\alpha)}\}\big{)} = \widehat{\bf\lambda}/\big{(}\det\widehat{\bf\lambda}\big{)}^{1/d} \tag{4.90}\] which has the diagonal elements \[\bar{\lambda}_{\alpha}^{\rm sp}=\prod_{\beta=1,\;\beta\neq\alpha}^{d}\Big{(}\frac{\xi_{0\pm}^{{\rm sp}(\alpha)}}{\xi_{0\pm}^{{\rm sp}(\beta)}}\Big{)}^{2/d}=\Big{(}\frac{\xi_{0\pm}^{{\rm sp}(\alpha)}}{\bar{\xi}_{0\pm}^{{\rm sp}}}\Big{)}^{2} \tag{4.91}\] with the amplitude \(\bar{\xi}_{0\pm}^{{\rm sp}}\) of the mean correlation length \[\bar{\xi}_{\pm}^{{\rm sp}}(t)=\bar{\xi}_{0\pm}^{{\rm sp}}|t|^{-\nu},\;\;\bar{\xi}_{0\pm}^{{\rm sp}}=\big{[}\prod_{\alpha=1}^{d}\xi_{0\pm}^{{\rm sp}(\alpha)}\big{]}^{1/d}, \tag{4.92}\] \[\bar{\xi}_{0+}^{{\rm sp}}/\bar{\xi}_{0-}^{{\rm sp}}=X_{\xi}. \tag{4.93}\] In the ratios (4.91) the dependence on the free parameters \(\xi_{0\pm}^{{\rm iso}}\) is canceled. Similar to (4.71) and (4.76), we have the relations \[\det\widehat{\bf\lambda}=(\bar{\xi}_{0\pm}^{{\rm sp}}/\xi_{0\pm}^{{\rm iso}})^{2d} \tag{4.94}\] and \[\widehat{\bf\lambda}^{1/2}\xi_{0\pm}^{{\rm iso}} = \big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{1/2}\bar{\xi}_{0\pm}^{{\rm sp}}. \tag{4.95}\] Finally we obtain the reduced anisotropy matrix via the definition \[\bar{\bf A}^{\rm sp} = \widehat{\bf U}^{-1}\bar{\boldsymbol{\lambda}}^{\rm sp}\widehat{\bf U} \tag{4.96}\] \[= \bar{\bf A}\big{(}\{\xi^{\rm sp(\alpha)}_{0\pm},{\bf e}^{\rm sp(\alpha)}\}\big{)} \tag{4.97}\] \[= {\bf U}\big{(}\{{\bf e}^{\rm sp(\alpha)}\}\big{)}^{-1}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{(}\{\xi^{\rm sp(\alpha)}_{0\pm}\}\big{)}{\bf U}\big{(}\{{\bf e}^{\rm sp(\alpha)}\}\big{)} \tag{4.98}\] where \(\bar{\bf A}^{\rm sp}\) has the same structure as the matrix \(\bar{\bf A}\) in (4.56) for the \(\varphi^{4}\) model but with the arguments \(\xi^{(\alpha)}_{0\pm}\) and \({\bf e}^{(\alpha)}\) replaced by \(\xi^{\rm sp(\alpha)}_{0\pm}\) and \({\bf e}^{\rm sp(\alpha)}\). This can be extended to any weakly anisotropic system beyond the \(\varphi^{4}\) theory without explicit knowledge of an anisotropy matrix \({\bf A}\) as a function of the couplings. This means that we have identified a temperature-independent universal structure of the reduced anisotropy matrix \(\bar{\bf A}(\{\xi^{(\alpha)}_{0\pm},{\bf e}^{(\alpha)}\})\) whose dependence either on the nonuniversal ratios \(\xi^{(\alpha)}_{0\pm}/\xi^{(\beta)}_{0\pm}\) and nonuniversal principal unit vectors \({\bf e}^{(\alpha)}\) or on \(\xi^{\rm sp(\alpha)}_{0\pm}/\xi^{\rm sp(\beta)}_{0\pm}\) and \({\bf e}^{\rm sp(\alpha)}\) has a universal functional form that is the same for all weakly anisotropic systems including the \(\varphi^{4}\) model and the \(n\)-vector model. We have shown that for the construction of this reduced anisotropy matrix the existence and knowledge of an anisotropy matrix \({\bf A}\) is not required, thus our construction of the structure of \(\bar{\bf A}\) is applicable to any weakly anisotropic system.
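For \(d=2\) the structure (4.96)-(4.98) becomes fully explicit: \(\bar{\bf A}^{\rm sp}\) is parametrized by one ratio \(q=\xi^{{\rm sp}(1)}_{0\pm}/\xi^{{\rm sp}(2)}_{0\pm}\) of principal correlation lengths and one angle \(\Omega\), cf. (4.84). A minimal sketch (the values of \(q\) and \(\Omega\) are sample inputs chosen purely for illustration):

```python
import numpy as np

def Abar_sp_2d(q, Omega):
    """Reduced anisotropy matrix (4.96)-(4.98) in d = 2 from the ratio
    q = xi^sp(1)/xi^sp(2) and the angle Omega of the first principal axis."""
    U = np.array([[ np.cos(Omega), np.sin(Omega)],
                  [-np.sin(Omega), np.cos(Omega)]])  # rows = principal unit vectors
    lam_bar = np.diag([q, 1.0 / q])   # (4.91) with xi_mean = sqrt(xi1 * xi2)
    return U.T @ lam_bar @ U          # U^{-1} lam_bar U, cf. (4.96)

Abar = Abar_sp_2d(q=2.0, Omega=np.pi / 6)
print(Abar)
print(np.linalg.det(Abar))            # = 1 by construction
```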
This is an important ingredient for the general validity of multiparameter universality of weakly anisotropic systems. ## V Multiparameter universality of the anisotropic correlation function Besides the bulk free energy density the most fundamental physical quantity describing the critical fluctuations is the bulk order-parameter correlation function. However, in the traditional theory of critical phenomena [17; 18; 22; 27; 82; 83; 84] including the nonperturbative functional renormalization group [54; 55; 57] the issue of the general universality properties of weakly anisotropic correlation functions was not addressed. Only special anisotropic models were studied whose bulk correlation functions [19; 41; 42; 43; 44; 45; 46] were presented in a nonuniversal form. Recently a general scaling form of the anisotropic bulk correlation function of the \(O(n)\)-symmetric \(d\)-dimensional \(\varphi^{4}\) model has been presented [13; 14] in terms of a reduced anisotropy matrix \(\mathbf{\bar{A}}\) which was found to violate two-scale-factor universality in both two and three dimensions. Instead, within the \(\varphi^{4}\) theory, these results exhibit the feature of multiparameter universality [13] and it was hypothesized that this structure is valid quite generally for weakly anisotropic systems beyond \(\varphi^{4}\) models. In this section we prove the validity of this hypothesis for the bulk correlation function and point to the intrinsic diversity of this scaling structure. ### Anisotropic correlation function of the \(\varphi^{4}\) theory In [6; 13] the representation (3.1) of the isotropic correlation function was used in order to derive the anisotropic correlation function \(G_{\pm}(\mathbf{x},t)\) of the \(\varphi^{4}\) theory in the form presented in Eq. (1.6) of [13]. In the following we shall work with the alternative representation (3.37) and its Fourier transform (3.43) and present a more transparent derivation. Application of (3.37) to the isotropic correlation function \(G^{\prime}_{\pm}(|\mathbf{x}^{\prime}|,t)\) of the transformed isotropic Hamiltonian (4.13) yields \[G^{\prime}_{\pm}(|\mathbf{x}^{\prime}|,t)=\frac{\Gamma^{\prime}_{+}(\xi^{\prime}_{0+})^{-2+\eta}}{|\mathbf{x}^{\prime}|^{d-2+\eta}}\,\Psi_{\pm}\Big{(}\frac{|\mathbf{x}^{\prime}|}{\xi^{\prime}_{\pm}(t)}\Big{)} \tag{5.1}\] which can be rewritten as \[G^{\prime}_{\pm}(|\mathbf{x}^{\prime}|,t)=\frac{\Gamma^{\prime}_{+}}{(\xi^{\prime}_{0+})^{d}}\Big{(}\frac{\xi^{\prime}_{0+}}{|\mathbf{x}^{\prime}|}\Big{)}^{d-2+\eta}\,\Psi_{\pm}\Big{(}\frac{|\mathbf{x}^{\prime}|}{\xi^{\prime}_{\pm}(t)}\Big{)}\,. \tag{5.2}\] Now we derive the anisotropic correlation function from the isotropic correlation function (5.2) by inverting (4.40), \[G_{\pm}(\mathbf{x},t)=\left(\xi^{\prime}_{0\pm}/\bar{\xi}_{0\pm}\right)^{d}G^{\prime}_{\pm}(|\mathbf{x}^{\prime}|,t), \tag{5.3}\] and expressing \(|\mathbf{x}^{\prime}|\) in terms of \(\mathbf{x}\).
Using the inverse of (4.11) \[\mathbf{\bar{\lambda}}^{-1}=\mathbf{U}\mathbf{\bar{A}}^{-1}\mathbf{U}^{-1} \tag{5.4}\] we first rewrite \(|\mathbf{x}^{\prime}|\) as \[|\mathbf{x}^{\prime}|=\big{[}\mathbf{x}^{\prime}\cdot\mathbf{\bar{\lambda}}^{1/2}\mathbf{\bar{\lambda}}^{-1}\mathbf{\bar{\lambda}}^{1/2}\mathbf{x}^{\prime}\big{]}^{1/2} \tag{5.5}\] \[=\big{[}\mathbf{x}^{\prime}\cdot\mathbf{\bar{\lambda}}^{1/2}\mathbf{U}\mathbf{\bar{A}}^{-1}\mathbf{U}^{-1}\mathbf{\bar{\lambda}}^{1/2}\mathbf{x}^{\prime}\big{]}^{1/2} \tag{5.6}\] \[=\big{[}(\mathbf{\bar{\lambda}}^{1/2}\mathbf{U})^{T}\mathbf{x}^{\prime}\cdot\mathbf{\bar{A}}^{-1}\mathbf{U}^{-1}\mathbf{\bar{\lambda}}^{1/2}\mathbf{x}^{\prime}\big{]}^{1/2} \tag{5.7}\] \[=\big{[}\mathbf{U}^{-1}\mathbf{\bar{\lambda}}^{1/2}\mathbf{x}^{\prime}\cdot\mathbf{\bar{A}}^{-1}\mathbf{U}^{-1}\mathbf{\bar{\lambda}}^{1/2}\mathbf{x}^{\prime}\big{]}^{1/2} \tag{5.8}\] where we have used \[(\mathbf{\bar{\lambda}}^{1/2}\mathbf{U})^{T}=\mathbf{U}^{T}(\mathbf{\bar{\lambda}}^{1/2})^{T}=\mathbf{U}^{-1}\mathbf{\bar{\lambda}}^{1/2}. \tag{5.9}\] Then we express \(\mathbf{\bar{\lambda}}^{1/2}\) in terms of \(\mathbf{\lambda}^{1/2}\) according to (4.55), \[\mathbf{\bar{\lambda}}^{1/2}=\frac{\xi^{\prime}_{0\pm}}{\bar{\xi}_{0\pm}}\,\mathbf{\lambda}^{1/2} \tag{5.10}\] which yields the identity \[|\mathbf{x}^{\prime}|\ =\ \frac{\xi^{\prime}_{0\pm}}{\bar{\xi}_{0\pm}}\,\big{[}\mathbf{U}^{-1}\mathbf{\lambda}^{1/2}\mathbf{x}^{\prime}\cdot\mathbf{\bar{A}}^{-1}\mathbf{U}^{-1}\mathbf{\lambda}^{1/2}\mathbf{x}^{\prime}\big{]}^{1/2}. \tag{5.11}\] From the original shear transformation (4.1) we obtain the inverse shear transformation \[\mathbf{U}^{-1}\mathbf{\lambda}^{1/2}\mathbf{x}^{\prime}=\mathbf{x}. \tag{5.12}\] This leads to the relation \[\frac{\mathbf{x}^{\prime}{}^{2}}{\big{(}\xi^{\prime}_{0\pm}\big{)}^{2}}\ =\ \frac{\mathbf{x}\cdot\mathbf{\bar{A}}^{-1}\mathbf{x}}{\big{(}\bar{\xi}_{0\pm}\big{)}^{2}}. \tag{5.13}\] Here the representation (4.56) of the reduced anisotropy matrix \(\bar{\bf A}\) is to be inserted as a function of the principal unit vectors and principal correlation lengths. Eq. (5.13) describes how the isotropic structure is transferred to the anisotropic structure by means of the inverse shear transformation. Using (5.3) together with the invariance (4.50) and substituting (5.13) into (5.2) we arrive at the anisotropic bulk order-parameter correlation function above, at, and below \(T_{c}\) of the \(\varphi^{4}\) model for \(2\leq d<4\) \[G_{\pm}({\bf x},t)=\frac{\Gamma_{+}(\bar{\xi}_{0+})^{-2+\eta}}{({\bf x}\cdot\bar{\bf A}^{-1}{\bf x})^{(d-2+\eta)/2}}\Psi_{\pm}\Big{(}\frac{[{\bf x}\cdot\bar{\bf A}^{-1}{\bf x}]^{1/2}}{\bar{\xi}_{\pm}(t)}\Big{)} \tag{5.14}\] with the universal scaling function \(\Psi_{\pm}\), (3.38), of the isotropic system. At \(T_{c}\) we obtain \[G_{\pm}({\bf x},0)=\widetilde{Q}_{3}\frac{\Gamma_{+}\big{(}\bar{\xi}_{0+}\big{)}^{-2+\eta}}{({\bf x}\cdot\bar{\bf A}^{-1}{\bf x})^{(d-2+\eta)/2}} \tag{5.15}\] with the universal constant \(\widetilde{Q}_{3}\) defined in (3.48). These results are equivalent to the results of the \(\varphi^{4}\) theory in Eqs. (1.6) and (5.32) of [13]. An analogous anisotropic representation can be derived in Fourier space based on the shear transformation (4.5).
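The identity (5.13), and its Fourier-space counterpart (5.20) derived below, can be verified numerically for an arbitrary positive-definite sample matrix \({\bf A}\); the following sketch uses (4.48) to relate \(\bar{\xi}_{0\pm}\) to \(\xi^{\prime}_{0\pm}\).

```python
import numpy as np
rng = np.random.default_rng(0)

d = 3
M = rng.normal(size=(d, d))
A = M @ M.T + d * np.eye(d)     # arbitrary positive-definite sample matrix
lam, vecs = np.linalg.eigh(A)
U = vecs.T
Abar = A / np.linalg.det(A)**(1.0 / d)

xi_p = 0.7                                            # sample amplitude xi'_{0+}
xi_mean = xi_p * np.linalg.det(A)**(1.0 / (2 * d))    # from (4.48)

# (5.13) with x = U^{-1} lambda^{1/2} x', cf. (5.12)
xp = rng.normal(size=d)
x = U.T @ (np.sqrt(lam) * xp)
assert np.isclose((xp @ xp) / xi_p**2,
                  x @ np.linalg.solve(Abar, x) / xi_mean**2)

# (5.20) with k' = lambda^{1/2} U k, cf. (4.5)
k = rng.normal(size=d)
kp = np.sqrt(lam) * (U @ k)
assert np.isclose((kp @ kp) * xi_p**2, (k @ Abar @ k) * xi_mean**2)
print("identities (5.13) and (5.20) confirmed")
```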
Application of (3.43) to the Fourier transformed isotropic correlation function \(\hat{G}^{\prime}_{\pm}(|{\bf k}^{\prime}|,t)\) of the transformed isotropic Hamiltonian (4.13) yields \[\hat{G}^{\prime}_{\pm}(|{\bf k}^{\prime}|,t) = \frac{\Gamma^{\prime}_{+}}{\big{(}|{\bf k}^{\prime}|\,\xi^{\prime}_{0+}\big{)}^{2-\eta}}\;\hat{\Psi}_{\pm}\Big{(}|{\bf k}^{\prime}|\xi^{\prime}_{\pm}(t)\Big{)}, \tag{5.16}\] \[\hat{G}^{\prime}_{\pm}(|{\bf k}^{\prime}|,0) = Q_{3}\frac{\Gamma^{\prime}_{+}}{\big{(}|{\bf k}^{\prime}|\,\xi^{\prime}_{0+}\big{)}^{2-\eta}}\;. \tag{5.17}\] Because of the sum rule (3.45) we have \[\lim_{|{\bf k}^{\prime}|\to 0}\hat{G}^{\prime}_{\pm}(|{\bf k}^{\prime}|,t)=\chi^{\prime}_{\pm}(t)=\Gamma^{\prime}_{\pm}|t|^{-\gamma}. \tag{5.18}\] Using (4.11) in the form \[\bar{\bf\lambda}={\bf U}\bar{\bf A}{\bf U}^{-1} \tag{5.19}\] we rewrite \(|{\bf k}^{\prime}|\) in a way analogous to \(|{\bf x}^{\prime}|\) and obtain the relation \[|{\bf k}^{\prime}|^{2}(\xi^{\prime}_{0\pm})^{2} = ({\bf k}\cdot\ \bar{\bf A}{\bf k})(\bar{\xi}_{0\pm})^{2}. \tag{5.20}\] Using the invariance (4.17) together with the invariance (4.50) and substituting (5.20) into (5.16) we obtain the anisotropic correlation function of the \(\varphi^{4}\) model in \({\bf k}\) space \[\hat{G}_{\pm}({\bf k},t) = \Gamma_{+}\;\frac{\hat{\Psi}_{\pm}\Big{(}[{\bf k}\cdot\ \bar{\bf A}{\bf k}]^{1/2}\bar{\xi}_{\pm}(t)\Big{)}}{\big{(}[{\bf k}\cdot\ \bar{\bf A}{\bf k}]^{1/2}\ \bar{\xi}_{0+}\big{)}^{2-\eta}} \tag{5.21}\] with \[\hat{G}_{\pm}({\bf k},0)=Q_{3}\frac{\Gamma_{+}}{\big{(}[{\bf k}\cdot\ \bar{\bf A}{\bf k}]^{1/2}\bar{\xi}_{0+}\big{)}^{2-\eta}} \tag{5.22}\] at \(T_{c}\), with the universal constant \(Q_{3}\), (3.46). A discussion of these results for the \(\varphi^{4}\) theory is given in Sec. V.B. in the context of analogous results for the \(n\)-vector model. ### Proof of multiparameter universality of the anisotropic correlation function Our proof is formulated for the anisotropic \(n\)-vector model as an example for a system other than the anisotropic \(\varphi^{4}\) model. It is based on the following properties: (a) The universal structure of the scaling form (3.1) or (3.37) of the isotropic correlation function, (b) the general applicability of the generalized shear transformations (4.72) or (4.89) to any weakly anisotropic system including the \(n\)-vector model, (c) the universality of the structure of the reduced anisotropy matrix \(\bar{\bf A}^{\rm sp}\), (4.98), (d) the sum rule for the anisotropic correlation function near \(T_{c}\) \[\chi^{\rm sp}_{\pm}(t)=\int d^{d}{\bf x}\ \ G^{\rm sp}_{\pm}({\bf x},t)=\Gamma^{\rm sp}_{\pm}|t|^{-\gamma} \tag{5.23}\] where \(\Gamma^{\rm sp}_{\pm}\) is the nonuniversal critical amplitude of the susceptibility above and below \(T_{c}\), respectively, and (e) the invariance of the bulk correlation function \(G^{\rm sp}_{\pm}({\bf x},t)\) under our shear transformation. The latter issue is due to the fact that, unlike the special shear transformation of the \(\varphi^{4}\) theory [1; 6] where the transformation (4.2) of the variable \(\varphi\) is involved, our generalized shear transformation (4.89) transforms only the spatial coordinates \({\bf x}\rightarrow\widehat{\bf x}\) of the lattice points without changing the topology of the interactions. As discussed in Sec. II. B, this leaves the amplitude of the order-parameter correlation function \(G^{\rm sp}_{\pm}({\bf x},t)\) invariant.
Thus our generalized shear transformation implies that the special transformations (4.16), (4.40), and (5.3) are replaced by the invariance \[G^{\rm sp}_{\pm}({\bf x},t)=G^{\rm sp,iso}_{\pm}(|\widehat{\bf x}|,t)=G^{\rm sp,iso}_{\pm}(|\widehat{\bf\lambda}^{-1/2}\widehat{\bf U}{\bf x}|,t) \tag{5.24}\] for the \(n\)-vector model where the transformed isotropic correlation function \(G^{\rm sp,iso}_{\pm}\) has the isotropic structure given in (3.37). The susceptibility, however, is not invariant under the generalized shear transformation. Eq. (5.23) can be combined with the sum rule of the isotropic system and with (4.89), (4.94), and (5.24) to obtain the susceptibility of the isotropic system as \[\chi^{\rm sp,iso}_{\pm}(t) = \int d^{d}\widehat{\bf x}\ \ G^{\rm sp,iso}_{\pm}(|\widehat{\bf x}|,t)=\Gamma^{\rm sp,iso}_{\pm}\ |t|^{-\gamma} \tag{5.25}\] \[= (\det\widehat{\bf\lambda})^{-1/2}\int d^{d}{\bf x}\ \ G^{\rm sp}_{\pm}({\bf x},t) \tag{5.26}\] \[= (\xi^{\rm iso}_{0\pm}/\bar{\xi}^{\rm sp}_{0\pm})^{d}\chi^{\rm sp}_{\pm}(t) \tag{5.27}\] where \(\bar{\xi}^{\rm sp}_{0\pm}\) is the amplitude of the mean correlation length. This implies the relation between the amplitudes of the susceptibilities of the isotropic and anisotropic systems \[\Gamma^{\rm sp,iso}_{\pm}=\left(\xi^{\rm iso}_{0\pm}/\bar{\xi}^{\rm sp}_{0\pm}\right)^{d}\Gamma^{\rm sp}_{\pm} \tag{5.28}\] which can be interpreted as the invariance of the ratio of the correlation volume and the susceptibility amplitude \[\left(\xi^{\rm iso}_{0\pm}\right)^{d}/\Gamma^{\rm sp,iso}_{\pm}=\ \left(\bar{\xi}^{\rm sp}_{0\pm}\right)^{d}/\Gamma^{\rm sp}_{\pm} \tag{5.29}\] under the generalized shear transformation. We note that also the combination \[\big{(}\xi_{0\pm}^{\rm iso}\big{)}^{d}G_{\pm}^{\rm sp,iso}(|\widehat{\bf x}|,t)/\Gamma_{\pm}^{\rm sp,iso}=\ \big{(}\bar{\xi}_{0\pm}^{\rm sp}\big{)}^{d}G_{\pm}^{\rm sp}({\bf x},t)/\Gamma_{\pm}^{\rm sp} \tag{5.30}\] remains invariant which shows the universality of the same feature (4.51) of the special shear transformation of the \(\varphi^{4}\) theory. From (3.37) we have the representation of the isotropic bulk correlation function \[G_{\pm}^{\rm sp,iso}(|\widehat{\bf x}|,t)=\frac{\Gamma_{+}^{\rm sp,iso}\big{(}\xi_{0+}^{\rm iso}\big{)}^{-2+\eta}}{|\widehat{\bf x}|^{d-2+\eta}}\ \Psi_{\pm}\Big{(}\frac{|\widehat{\bf x}|}{\xi_{\pm}^{\rm iso}(t)}\Big{)} \tag{5.31}\] or \[G_{\pm}^{\rm sp,iso}(|\widehat{\bf x}|,t)=\frac{\Gamma_{+}^{\rm sp,iso}}{\big{(}\xi_{0+}^{\rm iso}\big{)}^{d}}\bigg{(}\frac{\xi_{0+}^{\rm iso}}{|\widehat{\bf x}|}\bigg{)}^{d-2+\eta}\ \Psi_{\pm}\Big{(}\frac{|\widehat{\bf x}|}{\xi_{\pm}^{\rm iso}(t)}\Big{)}. \tag{5.32}\] The derivation of the anisotropic bulk correlation function \(G_{\pm}^{\rm sp}({\bf x},t)\), (5.24), from (5.32) is analogous to that in (5.2)-(5.14) for the \(\varphi^{4}\) model.
Thus we rewrite \[|\widehat{\bf x}|=\big{[}\widehat{\bf x}\cdot\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{1/2}\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{-1}\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{1/2}\widehat{\bf x}\big{]}^{1/2} \tag{5.33}\] \[=\big{[}\widehat{\bf x}\cdot\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{1/2}\widehat{\bf U}\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}\widehat{\bf U}^{-1}\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{1/2}\widehat{\bf x}\big{]}^{1/2} \tag{5.34}\] \[=\big{[}\widehat{\bf U}^{-1}\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{1/2}\widehat{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}\widehat{\bf U}^{-1}\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{1/2}\widehat{\bf x}\big{]}^{1/2} \tag{5.35}\] \[=\frac{\xi_{0\pm}^{\rm iso}}{\bar{\xi}_{0\pm}^{\rm sp}}\ \big{[}\widehat{\bf U}^{-1}\widehat{\bf\lambda}^{1/2}\widehat{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}\widehat{\bf U}^{-1}\widehat{\bf\lambda}^{1/2}\widehat{\bf x}\big{]}^{1/2} \tag{5.36}\] where we have used \[\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{-1}=\widehat{\bf U}\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}\widehat{\bf U}^{-1} \tag{5.37}\] which is the inverse of (4.96), and \[\big{(}\bar{\boldsymbol{\lambda}}^{\rm sp}\big{)}^{1/2}=\frac{\xi_{0\pm}^{\rm iso}}{\bar{\xi}_{0\pm}^{\rm sp}}\ \widehat{\bf\lambda}^{1/2} \tag{5.38}\] according to (4.95). Finally we employ the inverse of the transformation (4.89), \[\widehat{\bf U}^{-1}\widehat{\bf\lambda}^{1/2}\widehat{\bf x}={\bf x} \tag{5.39}\] and obtain from (5.36) the relation \[\frac{|\widehat{\bf x}|}{\xi_{0\pm}^{\rm iso}}\ =\ \frac{[{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}{\bf x}]^{1/2}}{\bar{\xi}_{0\pm}^{\rm sp}}\,, \tag{5.40}\] compare (5.13). A similar transformation holds for \[\frac{\xi_{\rm T}^{\rm iso}(t)}{|\widehat{\bf x}|}=\frac{\bar{\xi}_{\rm T}^{\rm sp}(t)}{\big{[}{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}{\bf x}\big{]}^{1/2}} \tag{5.41}\] in (3.8). Together with (5.28) this leads to the scaling form of the anisotropic bulk order-parameter correlation function (2.25) above, at, and below \(T_{c}\) of the \(n\)-vector model (2.22) for \(2\leq d<4\) \[G_{\pm}^{\rm sp}({\bf x},t)=\frac{\Gamma_{+}^{\rm sp}\big{(}\bar{\xi}_{0+}^{\rm sp}\big{)}^{-2+\eta}}{\big{[}{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}{\bf x}\big{]}^{(d-2+\eta)/2}}\Psi_{\pm}\bigg{(}\frac{\big{[}{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}{\bf x}\big{]}^{1/2}}{\bar{\xi}_{\pm}^{\rm sp}(t)}\bigg{)} \tag{5.42}\] with the universal scaling function \(\Psi_{\pm}\), (3.38), of the isotropic system. Here \(\bar{\bf A}^{\rm sp}\) is to be inserted in the form of (4.98) as a function of the principal unit vectors and principal correlation lengths. Right at \(T_{c}\) we have the purely algebraic behavior \[G_{\pm}^{\rm sp}({\bf x},0)=\widetilde{Q}_{3}\frac{\Gamma_{+}^{\rm sp}\big{(}\bar{\xi}_{0+}^{\rm sp}\big{)}^{-2+\eta}}{\big{[}{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}{\bf x}\big{]}^{(d-2+\eta)/2}}. \tag{5.43}\] Eqs. (5.42) and (5.43) are the anisotropy-dependent generalizations of Eqs. (3.37) and (3.41).
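One consequence of (5.42)-(5.43) worth spelling out: \(G^{\rm sp}_{\pm}\) is constant on the correlation ellipsoid spanned by the principal correlation lengths, because \({\bf x}\cdot(\bar{\bf A}^{\rm sp})^{-1}{\bf x}=\big(\bar{\xi}^{\rm sp}_{\pm}\big)^{2}\) for \({\bf x}=\xi^{{\rm sp}(\alpha)}_{\pm}{\bf e}^{{\rm sp}(\alpha)}\). A minimal numerical check (with sample two-dimensional parameters):

```python
import numpy as np

Omega = 0.4                        # sample principal angle
xi = np.array([1.8, 0.6])          # sample principal correlation lengths
U = np.array([[ np.cos(Omega), np.sin(Omega)],
              [-np.sin(Omega), np.cos(Omega)]])   # rows = principal unit vectors
xi_mean = np.prod(xi)**0.5
lam_bar = np.diag((xi / xi_mean)**2)              # (4.91)
Abar = U.T @ lam_bar @ U                          # (4.96)-(4.98)

for alpha in range(2):
    x = xi[alpha] * U[alpha]                      # point on the correlation ellipsoid
    print(np.sqrt(x @ np.linalg.solve(Abar, x)), xi_mean)  # equal for each alpha
```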
Similarly we obtain the anisotropic counterpart of the transverse correlation function (3.8) below \(T_{c}\) for \(n>1,d>2\) \[G_{\rm T}^{\rm sp}({\bf x},t)\ =\ {\cal C}_{\rm T}[{\cal M}^{\rm sp}(t)]^{2}\Bigg{(}\frac{\bar{\xi}_{\rm T}^{\rm sp}(t)}{\big{[}{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}{\bf x}\big{]}^{1/2}}\Bigg{)}^{d-2} \tag{5.44}\] where \({\cal M}^{\rm sp}(t)={\cal M}^{\rm iso}(t)\) is invariant under the generalized shear transformation and where \(\bar{\bf A}^{\rm sp}\) is represented as a function of \(\xi_{0\rm T}^{\rm sp(\alpha)}/\xi_{0\rm T}^{\rm sp(\beta)}\) and \({\bf e}^{\rm sp(\alpha)}\). In a similar way we obtain from (3.1), with \(D_{1}^{\rm iso}\) replaced by \[\widehat{D}_{1}=\left(\xi_{0+}^{\rm iso}/\bar{\xi}_{0+}^{\rm sp}\right)^{d-2+\eta}\ D_{1}^{\rm sp}, \tag{5.45}\] the representation above, at, and below \(T_{c}\) \[G_{\pm}^{\rm sp}({\bf x},t)=\frac{D_{1}^{\rm sp}}{[{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}{\bf x}]^{(d-2+\eta)/2}}\ \Phi_{\pm}\Bigg{(}\frac{\big{[}{\bf x}\cdot\big{(}\bar{\bf A}^{\rm sp}\big{)}^{-1}{\bf x}\big{]}^{1/2}}{\bar{\xi}_{\pm}^{\rm sp}(t)}\Bigg{)}, \tag{5.46}\] where the universal scaling function \(\Phi_{\pm}\) is the same as in (3.1) for the isotropic system. The analysis in Fourier space is parallel to that in Sec. V. A. Defining the generalized shear transformation in \({\bf k}\) space [compare (4.5)] \[\widehat{\bf k}=\widehat{\bf\lambda}^{1/2}\widehat{\bf U}{\bf k}, \tag{5.47}\] and the Fourier transforms \[\hat{G}_{\pm}^{\rm sp,iso}(|\widehat{\bf k}|,t) = \int d^{d}\widehat{\bf x}\ e^{-i\widehat{\bf k}\cdot\widehat{\bf x}}G_{\pm}^{\rm sp,iso}(|\widehat{\bf x}|,t), \tag{5.48}\] \[\hat{G}_{\pm}^{\rm sp}({\bf k},t) = \int d^{d}{\bf x}\ e^{-i{\bf k}\cdot{\bf x}}G_{\pm}^{\rm sp}({\bf x},t), \tag{5.49}\] with \(\widehat{\bf k}\cdot\widehat{\bf x}={\bf k}\cdot{\bf x}\), we obtain from the invariance (5.24) \[\hat{G}_{\pm}^{\rm sp}({\bf k},t)=\big{(}\det\widehat{\bf\lambda}\big{)}^{1/2}\,\hat{G}_{\pm}^{\rm sp,iso}(|\widehat{\bf k}|,t) \tag{5.50}\] together with the relation analogous to (5.20) \[|\widehat{\bf k}|^{2}\big{(}\xi_{0\pm}^{\rm iso}\big{)}^{2}=\big{(}{\bf k}\cdot\bar{\bf A}^{\rm sp}{\bf k}\big{)}\big{(}\bar{\xi}_{0\pm}^{\rm sp}\big{)}^{2}. \tag{5.51}\] Application of (3.43) to the isotropic correlation function yields \[\hat{G}_{\pm}^{\rm sp,iso}(|\widehat{\bf k}|,t)=\frac{\Gamma_{+}^{\rm sp,iso}}{\big{(}|\widehat{\bf k}|\,\xi_{0+}^{\rm iso}\big{)}^{2-\eta}}\;\hat{\Psi}_{\pm}\Big{(}|\widehat{\bf k}|\xi_{\pm}^{\rm iso}(t)\Big{)}. \tag{5.52}\] Combining (5.50)-(5.52) with (4.94) and (5.28) leads to the anisotropic correlation function of the \(n\)-vector model in \({\bf k}\) space \[\hat{G}_{\pm}^{\rm sp}({\bf k},t)=\Gamma_{+}^{\rm sp}\;\frac{\hat{\Psi}_{\pm}\Big{(}[{\bf k}\cdot\bar{\bf A}^{\rm sp}{\bf k}]^{1/2}\bar{\xi}_{\pm}^{\rm sp}(t)\Big{)}}{\big{(}[{\bf k}\cdot\bar{\bf A}^{\rm sp}{\bf k}]^{1/2}\,\bar{\xi}_{0+}^{\rm sp}\big{)}^{2-\eta}} \tag{5.53}\] where \(\hat{\Psi}_{\pm}\) is the same function as in (3.43) and (5.16), with \[\hat{G}_{\pm}^{\rm sp}({\bf k},0)=Q_{3}\frac{\Gamma_{+}^{\rm sp}}{\left(\left[{\bf k}\cdot\,\,\bar{\bf A}^{\rm sp}{\bf k}\right]^{1/2}\bar{\xi}_{0+}^{\rm sp}\right)^{2-\eta}} \tag{5.54}\] at \(T_{c}\), with the same universal constant \(Q_{3}\) as in (5.22). The effect of the inverse shear transformations on the arguments of the correlation functions is condensed into the exact relations (5.13), (5.20), (5.40), and (5.51) which describe how the isotropic structure is transferred to the anisotropic structure. No specific properties of the \(n\)-vector model have been used, thus the derivation given above can be extended to any weakly anisotropic system beyond the \(\varphi^{4}\) theory without explicit knowledge of an anisotropy matrix \({\bf A}\) as a function of the couplings. Our results (5.42), (5.44), (5.46), and (5.53) for the anisotropic \(n\)-vector model have the same form as derived for the anisotropic \(\varphi^{4}\) model given in (5.14) and (5.21) and in Eqs. (1.6) and (5.42) of [13], with the same critical exponents \(\nu\) and \(\eta\), the same universal structure of the reduced anisotropy matrix \(\bar{\bf A}\) or \(\bar{\bf A}^{\rm sp}\), the same universal scaling functions \(\Psi_{\pm}\) (or \(\Phi_{\pm}\)) and \(\hat{\Psi}_{\pm}\), and with the same universal constants \(Q_{3}\) and \(\widetilde{Q}_{3}\) for a given \((d,n)\) universality class.
This proves the validity of multiparameter universality for the anisotropic bulk correlation function for general \(d\) and \(n\) with up to \(d(d+1)/2+1\) independent nonuniversal parameters. This is the central result of this section. Here we have not made any assumptions other than the validity of two-scale-factor universality for isotropic systems and the existence of principal correlation lengths and principal axes for weakly anisotropic systems together with (4.80). Our derivation cannot, of course, make any prediction about the dependence of the nonuniversal parameters on the couplings \(E_{i,j}\) of the \(n\)-vector model, unlike the dependence on the couplings \(K_{i,j}\) within the \(\varphi^{4}\) model [13; 14]. Among the \(d(d+1)/2+1\) nonuniversal parameters of weakly anisotropic systems there are only two independent parameters that can be determined by thermodynamic measurements, namely the amplitude \(\Gamma_{+}^{\rm sp}\) of the susceptibility and the amplitude \(\bar{\xi}_{0+}\) of the mean correlation length, which enters the amplitude of the specific heat above \(T_{c}\), as will be shown in Sec. VI. B. While two-scale-factor universality implies that isotropic correlation functions can be expressed in terms of purely thermodynamic amplitudes, this property is destroyed by weak anisotropy. The remaining \(d(d+1)/2-1\) independent nonuniversal parameters \(\xi_{0\pm}^{\rm sp(\alpha)}/\xi_{0\pm}^{\rm sp(\beta)}\) or \(\xi_{0{\rm T}}^{\rm sp(\alpha)}/\xi_{0{\rm T}}^{\rm sp(\beta)}\) and \({\bf e}^{\rm sp(\alpha)}\) are contained in the reduced anisotropy matrix \(\bar{\bf A}^{\rm sp}\). Although this matrix has a universal structure in terms of ratios of principal correlation lengths and in terms of principal angles, the latter are nonuniversal quantities describing the nonuniversal angular dependence of the critical correlations. Thus there exists a high degree of intrinsic diversity in the asymptotic critical region of weakly anisotropic systems, with up to five intrinsic parameters in three dimensions. On the experimental side, the determination of these parameters requires spatially resolved scattering measurements. On the theoretical side, it has not been widely recognized in the literature [31; 32; 33; 34; 37; 85; 36] that in most cases the directions of the principal axes depend in a generically unknown way on the anisotropic couplings [14; 46]. Thus, in practice, it is by no means simple to identify the appropriate principal axes of anisotropic two- and three-dimensional \(n\)-vector models or of real systems before the invoked anisotropic scale transformations can be performed. For example, even for the two-dimensional anisotropic Ising model the principal axes and correlation lengths are known only in a few special cases. We shall substantiate our findings in the next sections by exact results in the spherical and Gaussian universality classes, by an approximate result derived from the FRG for \(d=3,n=1\) [54; 55], and in Sec. VII by exact results for the \(d=2\) Ising universality class.

### Exact anisotropic correlation function in the large-n limit

Exact analytic results can be derived for systems belonging to the spherical universality class which includes the \(O(n)\)-symmetric \(\varphi^{4}\) model in the large-\(n\) limit, the \(n\)-vector model in the large-\(n\) limit, the spherical model, and the mean spherical model [68; 69; 70; 71; 72].
So far the exact scaling form of the anisotropic bulk correlation function of these models has not been given in the literature. As a representative of this universality class we take the large-\(n\) limit of the continuum version of the \(\varphi^{4}\) model (2.18). We consider the anisotropic bulk correlation function per component \(G_{\infty}\) for \(T\geq T_{c}\) in the large-\(n\) limit at fixed \(u_{0}n\) [79; 87]
\[G_{\infty}({\bf x},t)=\lim_{n\to\infty}\frac{1}{n}\,\langle\varphi({\bf x})\cdot\varphi({\bf 0})\rangle\tag{5.55}\]
\[=\int_{\bf k}\hat{G}_{\infty}({\bf k},t)\,e^{i{\bf k}\cdot{\bf x}}.\tag{5.56}\]
After the shear transformation (4.1)-(4.3) the isotropic \(\varphi^{4}\) Hamiltonian (4.13) and the correlation functions
\[G^{\prime}_{\infty}({\bf x}^{\prime},t)=(\det{\bf A})^{1/2}\,G_{\infty}({\bf x},t),\tag{5.57}\]
\[\hat{G}^{\prime}_{\infty}(|{\bf k}^{\prime}|,t)=\hat{G}^{\prime}_{\infty}(|{\bf\lambda}^{1/2}{\bf U}{\bf k}|,t)=\hat{G}_{\infty}({\bf k},t),\tag{5.58}\]
are obtained. They are given by [79; 87]
\[\hat{G}^{\prime}_{\infty}(|{\bf k}^{\prime}|,t)=\big\{[\chi^{\prime}_{\infty}(t)]^{-1}+({\bf k}^{\prime})^{2}\big\}^{-1},\tag{5.59}\]
\[G^{\prime}_{\infty}({\bf x}^{\prime},t)=\int_{\bf k^{\prime}}\frac{e^{i{\bf k}^{\prime}\cdot{\bf x}^{\prime}}}{[\chi^{\prime}_{\infty}(t)]^{-1}+({\bf k}^{\prime})^{2}},\tag{5.60}\]
where the inverse of the bulk susceptibility per component is determined implicitly by
\[[\chi^{\prime}_{\infty}(t)]^{-1}=r_{0}+4u_{0}^{\prime}n\int_{\bf k^{\prime}}\big\{[\chi^{\prime}_{\infty}(t)]^{-1}+({\bf k}^{\prime})^{2}\big\}^{-1}.\tag{5.61}\]
The same equation determines the square of the bulk correlation length above \(T_{c}\) because [87]
\[\chi^{\prime}_{\infty}(t)=[\xi^{\prime}_{\infty}(t)]^{2}.\tag{5.62}\]
Here \(\int_{\bf k^{\prime}}\) stands for \((2\pi)^{-d}\int d^{d}k^{\prime}\) with a transformed cutoff. This cutoff dependence becomes negligible in the asymptotic region near \(T_{c}\) where \(G^{\prime}_{\infty}({\bf x}^{\prime},t)\to G^{\prime}_{\infty}(|{\bf x}^{\prime}|,t)\) becomes isotropic. For \(t>0\) and \(2<d<4\) the asymptotic behavior is [87]
\[\chi^{\prime}_{\infty}(t)=\Gamma^{\prime}_{\infty}\,t^{-\gamma_{\infty}},\tag{5.63}\]
\[\xi^{\prime}_{\infty}(t)=\xi^{\prime}_{\infty,0}t^{-\nu_{\infty}},\tag{5.64}\]
\[\xi^{\prime}_{\infty,0}=[4u^{\prime}_{0}nA_{d}/(\varepsilon a_{0})]^{1/(d-2)},\tag{5.65}\]
\[A_{d}=\frac{\Gamma(3-d/2)}{2^{d-2}\pi^{d/2}(d-2)}\tag{5.66}\]
with \(\varepsilon=4-d\) and with the critical exponents
\[\gamma_{\infty}=2\nu_{\infty},\;\nu_{\infty}=1/(d-2),\;\eta_{\infty}=0.\tag{5.67}\]
In particular the amplitude of the susceptibility
\[\Gamma^{\prime}_{\infty}=\big(\xi^{\prime}_{\infty,0}\big)^{2}\tag{5.68}\]
is independent of the cutoff of the \(\varphi^{4}\) Hamiltonian. We note that this simple relation is not valid for all members of the spherical universality class, e.g., it is not valid for the spherical model [80]. The universal isotropic scaling form for \(t\geq 0\) in \({\bf k}^{\prime}\) space is simply
\[\hat{G}^{\prime}_{\infty}(|{\bf k}^{\prime}|,t)=({\bf k}^{\prime})^{-2}\hat{\Psi}_{\infty}(|{\bf k}^{\prime}|\xi^{\prime}_{\infty}(t)),\tag{5.69}\]
\[\hat{\Psi}_{\infty}(y)=(1+y^{-2})^{-1},\tag{5.70}\]
\[Q_{\infty,3}=\hat{\Psi}_{\infty}(\infty)=1.\tag{5.71}\]
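As a consistency check, the implicit equation (5.61) can be solved numerically. The sketch below (ours; the cutoff and coupling values are arbitrary placeholders, and we abbreviate \(u=4u^{\prime}_{0}n\)) does this for \(d=3\) with a sharp cutoff \(\Lambda\) and confirms that \(t\,\xi^{\prime}_{\infty}(t)\) approaches the amplitude \(\xi^{\prime}_{\infty,0}\) of (5.65) as \(t\to 0\), i.e., \(\nu_{\infty}=1/(d-2)=1\).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

d, Lam, a0, u = 3, 50.0, 1.0, 0.3       # placeholder cutoff and couplings, u = 4 u0' n

def loop(m):
    # (2 pi)^{-3} int_{|k'|<Lam} d^3k' / (m^2 + k'^2) = [Lam - m arctan(Lam/m)] / (2 pi^2)
    return (Lam - m * np.arctan(Lam / m)) / (2.0 * np.pi**2)

r0c = -u * Lam / (2.0 * np.pi**2)       # critical value of r0, where chi'^{-1} -> 0

def inv_xi(t):
    # solve m^2 = r0 + u * loop(m) for m = 1/xi'_infty, cf. (5.61)-(5.62)
    r0 = r0c + a0 * t
    return brentq(lambda m: m**2 - r0 - u * loop(m), 1e-12, 10.0)

A_d = gamma(3 - d / 2) / (2 ** (d - 2) * np.pi ** (d / 2) * (d - 2))   # (5.66)
xi0 = (u * A_d / ((4 - d) * a0)) ** (1.0 / (d - 2))                    # (5.65)
for t in (1e-4, 1e-5, 1e-6):
    print(t, t / inv_xi(t), xi0)        # t * xi'(t) -> xi0 for t -> 0
```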
To derive \(G^{\prime}_{\infty}(|{\bf x}^{\prime}|,t)\) in real space we use the integral representation \(w^{-1}=\int_{0}^{\infty}ds\,e^{-ws}\) of the quantity \(w=(\xi^{\prime}_{\infty})^{-2}+({\bf k}^{\prime})^{2}\). Then we obtain from (5.60)
\[G^{\prime}_{\infty}(|{\bf x}^{\prime}|,t)=\int_{0}^{\infty}ds\,e^{-s/[\xi^{\prime}_{\infty}(t)]^{2}}\int_{{\bf k}^{\prime}}e^{-s({\bf k}^{\prime})^{2}+i{\bf k}^{\prime}\cdot{\bf x}^{\prime}}.\tag{5.72}\]
Performing the Gaussian \({\bf k}^{\prime}\)-integration at infinite cutoff [6; 87] we obtain the scaling form
\[G^{\prime}_{\infty}(|{\bf x}^{\prime}|,t)=\int_{0}^{\infty}\frac{ds}{(4\pi s)^{d/2}}e^{-s/(\xi^{\prime}_{\infty})^{2}-|{\bf x}^{\prime}|^{2}/(4s)}\tag{5.73}\]
\[=\frac{1}{|{\bf x}^{\prime}|^{d-2}}\Psi_{\infty}(|{\bf x}^{\prime}|/\xi^{\prime}_{\infty})\tag{5.74}\]
where the exact universal scaling function is given by
\[\Psi_{\infty}(y)=\frac{y^{d-2}}{(4\pi)^{d/2}}\int_{0}^{\infty}ds\,s^{-d/2}e^{-s-y^{2}/(4s)},\tag{5.75}\]
\[\widetilde{Q}_{\infty,3}=\Psi_{\infty}(0)=\frac{\Gamma[(d-2)/2]}{4\pi^{d/2}}.\tag{5.76}\]
This function depends only on the scalar scaling variable \(y=|{\bf x}^{\prime}|/\xi^{\prime}_{\infty}\) because of isotropy. \(G^{\prime}_{\infty}\) has indeed the form of (3.37) with \(\eta=0\) since, according to (5.68), we may replace 1 in (5.74) by \(1=\Gamma^{\prime}_{\infty}(\xi^{\prime}_{\infty,0})^{-2}\). Only owing to the simple relation (5.68) does the isotropic correlation function in the large-\(n\) limit have a single-parameter scaling form; such a form is not universally valid for finite \(n\) and \(\eta\neq 0\). On the basis of the exact isotropic results (5.69)-(5.75) it is straightforward to determine the anisotropic correlation functions in real space and \({\bf k}\) space from (5.57) and (5.58) by substituting the general results of Secs. IV and V. Using (4.9), (4.33), (4.50) and the general relations (5.13) and (5.20) we obtain the exact anisotropic correlation function of the \(\varphi^{4}\) model in the large-\(n\) limit for \(d>2\) and \(t\geq 0\)
\[G_{\infty}({\bf x},t)=\frac{\Gamma_{\infty}\,(\bar{\xi}_{\infty,0})^{-2}}{[{\bf x}\cdot\bar{\bf A}^{-1}{\bf x}]^{(d-2)/2}}\,\Psi_{\infty}\Big(\frac{[{\bf x}\cdot\bar{\bf A}^{-1}{\bf x}]^{1/2}}{\bar{\xi}_{\infty}(t)}\Big),\tag{5.77}\]
\[\hat{G}_{\infty}({\bf k},t)=\frac{\Gamma_{\infty}\,(\bar{\xi}_{\infty,0})^{-2}}{{\bf k}\cdot\bar{\bf A}{\bf k}}\,\hat{\Psi}_{\infty}\Big([{\bf k}\cdot\bar{\bf A}{\bf k}]^{1/2}\bar{\xi}_{\infty}(t)\Big),\tag{5.78}\]
\[\Gamma_{\infty}\,(\bar{\xi}_{\infty,0})^{-2}=\Big(\prod_{\alpha=1}^{d}\lambda_{\alpha}\Big)^{-1/d}=(\det{\bf A})^{-1/d},\tag{5.79}\]
where \(\Psi_{\infty}\) and \(\hat{\Psi}_{\infty}\) are the same scaling functions as in the isotropic case.
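The quadrature in (5.75) is easy to evaluate. The following minimal sketch (ours) computes \(\Psi_{\infty}(y)\) for \(d=3\), where the integral (5.73) reduces to the Ornstein-Zernike closed form \(\Psi_{\infty}(y)=e^{-y}/(4\pi)\), and prints the limit (5.76); the resulting values can be inserted directly into the anisotropic forms (5.77) and (5.78).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def Psi_inf(y, d):
    """Universal large-n bulk scaling function, Eq. (5.75)."""
    val, _ = quad(lambda s: s ** (-d / 2) * np.exp(-s - y**2 / (4.0 * s)), 0.0, np.inf)
    return y ** (d - 2) / (4.0 * np.pi) ** (d / 2) * val

d = 3
for y in (0.1, 0.5, 1.0, 2.0):
    print(y, Psi_inf(y, d), np.exp(-y) / (4.0 * np.pi))   # closed form for d = 3

# y -> 0 limit (5.76): Psi_inf(0) = Gamma[(d-2)/2] / (4 pi^{d/2}) = 1/(4 pi) for d = 3
print(gamma((d - 2) / 2.0) / (4.0 * np.pi ** (d / 2)))
```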
Here \(\Gamma_{\infty}=\Gamma^{\prime}_{\infty}=(\xi^{\prime}_{\infty,0})^{2}\) and \(\bar{\xi}_{\infty,0}\) are the amplitudes of the susceptibility per component and mean correlation length of the anisotropic system above \(T_{c}\),
\[\chi_{\infty}(t)=\Gamma_{\infty}\,t^{-\gamma_{\infty}},\tag{5.80}\]
\[\bar{\xi}_{\infty}(t)=\bar{\xi}_{\infty,0}\,t^{-\nu_{\infty}},\tag{5.81}\]
\[\bar{\xi}_{\infty,0}=\big[\prod_{\alpha=1}^{d}\xi^{(\alpha)}_{\infty,0}\big]^{1/d}\tag{5.82}\]
where the amplitudes \(\xi^{(\alpha)}_{\infty,0}\) of the principal correlation lengths
\[\xi^{(\alpha)}_{\infty,0}=\lambda_{\alpha}^{1/2}\,\xi^{\prime}_{\infty,0}\tag{5.83}\]
are determined by the eigenvalues \(\lambda_{\alpha}\) of the anisotropy matrix \({\bf A}\) and by the amplitude \(\xi^{\prime}_{\infty,0}\) of the isotropic correlation length above \(T_{c}\) of the Hamiltonian (4.13). Below \(T_{c}\) the algebraic large-distance behavior of the transverse correlation function of the isotropic \(\varphi^{4}\) Hamiltonian (4.13) in the large-\(n\) limit reads for \(d>2\),
\[G^{\prime}_{\infty,\rm T}(|{\bf x}^{\prime}|,t)={\cal C}_{\rm T}\big[{\cal M}^{\prime}_{\infty}(t)\big]^{2}\big[\xi^{\prime}_{\infty,\rm T}(t)/|{\bf x}^{\prime}|\big]^{d-2},\tag{5.84}\]
\[\big[{\cal M}^{\prime}_{\infty}(t)\big]^{2}=\frac{r^{\prime}_{\infty,0{\rm c}}-r_{0}}{4u^{\prime}_{0}n}=(B^{\prime}_{\infty})^{2}|t|^{2\beta_{\infty}},\tag{5.85}\]
\[\beta_{\infty}=1/2,\tag{5.86}\]
\[(B^{\prime}_{\infty})^{2}=a_{0}/(4u^{\prime}_{0}n),\tag{5.87}\]
\[r^{\prime}_{\infty,0{\rm c}}=-4u^{\prime}_{0}n\int_{{\bf k}^{\prime}}{\bf k}^{\prime-2},\tag{5.88}\]
according to (3.8) and to Eq. (28) of [87]. The transverse correlation length \(\xi^{\prime}_{\infty,\rm T}(t)=\xi^{\prime}_{\infty,\rm T,0}|t|^{-\nu_{\infty}}\) below \(T_{c}\) is universally related to the correlation length \(\xi^{\prime}_{\infty}(t)\) above \(T_{c}\),
\[\xi^{\prime}_{\infty,\rm T,0}/\xi^{\prime}_{\infty,0}=X_{\infty\Gamma}={\rm universal},\tag{5.89}\]
with a known universal constant \(X_{\infty\Gamma}\) [17]. By means of the inverse special shear transformation the anisotropic transverse correlation function \(G_{\infty,\rm T}({\bf x},t)\) of the model in the large-\(n\) limit is obtained as
\[G_{\infty,{\rm T}}({\bf x},t)=(\det{\bf A})^{-1/2}\,G^{\prime}_{\infty,{\rm T}}(|{\bf x}^{\prime}|,t)\tag{5.90}\]
\[={\cal C}_{\rm T}\big[{\cal M}_{\infty}(t)\big]^{2}\Big(\frac{\bar{\xi}_{\infty,{\rm T}}(t)}{[{\bf x}\cdot\bar{\bf A}^{-1}{\bf x}]^{1/2}}\Big)^{d-2},\tag{5.91}\]
\[\big[{\cal M}_{\infty}(t)\big]^{2}=\frac{r_{\infty,0{\rm c}}-r_{0}}{4u_{0}n}=(B_{\infty})^{2}|t|,\tag{5.92}\]
\[(B_{\infty})^{2}=a_{0}/(4u_{0}n),\tag{5.93}\]
\[r_{\infty,0{\rm c}}=-4u_{0}n\int_{\bf k}{\bf k}^{-2}=r^{\prime}_{\infty,0{\rm c}},\tag{5.94}\]
\[\bar{\xi}_{\infty,{\rm T}}(t)=\Big[\prod_{\alpha=1}^{d}\xi^{(\alpha)}_{\infty,{\rm T}}(t)\Big]^{1/d},\tag{5.95}\]
\[\xi^{(\alpha)}_{\infty,{\rm T}}(t)=\xi^{(\alpha)}_{\infty,{\rm T},0}|t|^{-\nu_{\infty}},\tag{5.96}\]
\[\xi^{(\alpha)}_{\infty,{\rm T},0}=\lambda_{\alpha}^{1/2}\,\xi^{\prime}_{\infty,{\rm T},0},\tag{5.97}\]
where \(\bar{\bf A}\) must be expressed in terms of \(\xi^{(\alpha)}_{\infty,{\rm T},0}\), compare Eqs. (110)-(112) of [13] for finite \(n>1\). Our proof of multiparameter universality in Sec. IV. C.
implies that the correlation functions of the anisotropic \(n\)-vector model (2.22) in the large-\(n\) limit have the same structure as those derived for the anisotropic \(\varphi^{4}\) model, with the same universal scaling functions \(\Psi_{\infty}\) and \(\hat{\Psi}_{\infty}\) and with the reduced anisotropy matrix \(\bar{\bf A}^{\rm sp}\) and two different nonuniversal parameters \(\Gamma^{\rm sp}_{\infty}\) and \(\xi^{\rm sp}_{\infty,0}\) that depend on the couplings \(E_{i,j}\). The same statement applies to other models of the spherical universality class.

### Exact anisotropic Gaussian correlation function

The following is based on the continuum version of the anisotropic Gaussian model (4.57) and the isotropic Gaussian Hamiltonian obtained after the special shear transformation. The structure of the anisotropic and isotropic Gaussian correlation functions \(G^{\rm G}({\bf x},t),\ \hat{G}^{\rm G}({\bf k},t)\) and \(G^{\prime\rm G}(|{\bf x}^{\prime}|,t),\ \hat{G}^{\prime\rm G}(|{\bf k}^{\prime}|,t)\), respectively, is closely related to that in the large-\(n\) limit, with essentially the same scaling functions but with Gaussian critical exponents
\[\nu^{\rm G}=1/2,\ \gamma^{\rm G}=1,\ \eta^{\rm G}=0,\tag{5.98}\]
and different nonuniversal parameters \(\Gamma^{\rm G}_{+}\) and \(\bar{\xi}^{\rm G}_{0+}\). These correlation functions are defined for \(d\geq 2\) by
\[G^{\rm G}({\bf x},t)=n\,\langle\varphi({\bf x})\varphi({\bf 0})\rangle=\int_{\bf k}\hat{G}^{\rm G}({\bf k},t)\,e^{i{\bf k}\cdot{\bf x}},\tag{5.99}\]
\[G^{\prime\rm G}(|{\bf x}^{\prime}|,t)=\int_{\bf k^{\prime}}\hat{G}^{\prime\rm G}(|{\bf k}^{\prime}|,t)\,e^{i{\bf k}^{\prime}\cdot{\bf x}^{\prime}},\tag{5.100}\]
\[\hat{G}^{\prime\rm G}(|{\bf k}^{\prime}|,t)=n\,[r_{0}+({\bf k}^{\prime})^{2}]^{-1},\tag{5.101}\]
where
\[r_{0}=a_{0}t=[\xi^{\prime\rm G}_{+}(t)]^{-2},\ \ t\geq 0,\tag{5.102}\]
with the isotropic Gaussian correlation length [6]
\[\xi^{\prime\rm G}_{+}(t)=r_{0}^{-1/2}=\xi^{\prime\rm G}_{0+}t^{-1/2},\ \ \xi^{\prime\rm G}_{0+}=a_{0}^{-1/2}.\tag{5.103}\]
The evaluation of the integral (5.100) in the asymptotic region is analogous to that of (5.60) and leads to
\[G^{\prime\rm G}(|{\bf x}^{\prime}|,t)=\frac{1}{|{\bf x}^{\prime}|^{d-2}}\Psi^{\rm G}(|{\bf x}^{\prime}|/\xi^{\prime\rm G}_{+}(t)),\tag{5.104}\]
\[\Psi^{\rm G}(y)\equiv n\,\Psi_{\infty}(y),\tag{5.105}\]
\[\widetilde{Q}^{\rm G}_{3}=\Psi^{\rm G}(0)=n\,\frac{\Gamma[(d-2)/2]}{4\pi^{d/2}},\tag{5.106}\]
where the Gaussian scaling function (5.105) divided by \(n\) is identical with the scaling function (5.75) in the large-\(n\) limit. There exists agreement with the structure of (3.37) because of the identity \(1=\Gamma^{\prime\rm G}_{+}(\xi^{\prime\rm G}_{0+})^{-2}\) where \(\Gamma^{\prime\rm G}_{+}\) is the amplitude of the isotropic Gaussian susceptibility
\[\chi^{\prime\rm G}(t)=\Gamma^{\prime\rm G}_{+}t^{-1}=r_{0}^{-1}=[\xi^{\prime\rm G}_{+}(t)]^{2},\tag{5.107}\]
\[\Gamma^{\prime\rm G}_{+}=(\xi^{\prime\rm G}_{0+})^{2}.\tag{5.108}\]
The universal isotropic scaling form for \(t\geq 0\) in \({\bf k}^{\prime}\) space is analogous to (5.69),
\[\hat{G}^{\prime\rm G}(|{\bf k}^{\prime}|,t)=({\bf k}^{\prime})^{-2}\hat{\Psi}^{\rm G}(|{\bf k}^{\prime}|\xi^{\prime\rm G}_{+}(t)),\tag{5.109}\]
\[\hat{\Psi}^{\rm G}(y)=(1+y^{-2})^{-1},\tag{5.110}\]
\[Q^{\rm G}_{3}=\hat{\Psi}^{\rm G}(\infty)=1.\tag{5.111}\]
Correspondingly we obtain the exact anisotropic correlation function of the Gaussian model for \(d\geq 2\) and \(t\geq 0\)
\[G^{\rm G}({\bf x},t)=\frac{\Gamma^{\rm G}_{+}(\bar{\xi}^{\rm G}_{0+})^{-2}}{[{\bf x}\cdot\bar{\bf A}^{-1}{\bf x}]^{(d-2)/2}}\,\Psi^{\rm G}\Big(\frac{[{\bf x}\cdot\bar{\bf A}^{-1}{\bf x}]^{1/2}}{\bar{\xi}^{\rm G}_{+}(t)}\Big),\tag{5.112}\]
\[\hat{G}^{\rm G}({\bf k},t)=\frac{\Gamma^{\rm G}_{+}(\bar{\xi}^{\rm G}_{0+})^{-2}}{[{\bf k}\cdot\bar{\bf A}{\bf k}]}\,\hat{\Psi}^{\rm G}\Big([{\bf k}\cdot\bar{\bf A}{\bf k}]^{1/2}\bar{\xi}^{\rm G}_{+}(t)\Big),\tag{5.113}\]
\[\Gamma^{\rm G}_{+}(\bar{\xi}^{\rm G}_{0+})^{-2}=\Big(\prod_{\alpha=1}^{d}\lambda_{\alpha}\Big)^{-1/d}=(\det{\bf A})^{-1/d},\tag{5.114}\]
where \(\bar{\bf A}\) must be expressed in terms of \(\xi^{\rm G(\alpha)}_{0+}\). Here \(\Gamma^{\rm G}_{+}=\Gamma^{\prime\rm G}_{+}\) and \(\bar{\xi}^{\rm G}_{0+}\) are the amplitudes of the susceptibility per component and mean correlation length of the anisotropic system above \(T_{c}\),
\[\chi^{\rm G}_{+}(t)=\Gamma^{\rm G}_{+}\,t^{-1},\tag{5.115}\]
\[\bar{\xi}^{\rm G}_{+}(t)=\bar{\xi}^{\rm G}_{0+}t^{-1/2},\tag{5.116}\]
\[\bar{\xi}^{\rm G}_{0+}=\big[\prod_{\alpha=1}^{d}\xi^{\rm G(\alpha)}_{0+}\big]^{1/d}\tag{5.117}\]
where the amplitudes \(\xi^{\rm G(\alpha)}_{0+}\) of the principal correlation lengths \(\xi^{\rm G(\alpha)}_{0+}=\lambda_{\alpha}^{1/2}\,\xi^{\prime\rm G}_{0+}\) are determined by the eigenvalues \(\lambda_{\alpha}\) of the anisotropy matrix \({\bf A}\) and the amplitude \(\xi^{\prime\rm G}_{0+}=a_{0}^{-1/2}\), (5.103), of the isotropic correlation length above \(T_{c}\) of the isotropic Gaussian Hamiltonian. As in the large-\(n\) limit, the single-parameter isotropic scaling forms (5.104) and (5.109) are due to the special relation (5.108). Owing to two-scale-factor universality all isotropic systems of the Gaussian universality class have an asymptotic correlation function that has the same structure as that of (5.104) and (5.109), with two nonuniversal amplitudes \(\Gamma^{\rm G,iso}_{+}\) and \(\xi^{\rm G,iso}_{0+}\) that are independent of each other. This statement applies to the Gaussian version of the \(n\)-vector model (2.22) (with spin variables \(-\infty\leq S_{i}^{(\mu)}\leq\infty\)). Thus, according to the proof of multiparameter universality in Sec. IV. C., the same structure of the anisotropic Gaussian correlation function as in (5.112) and (5.21) is predicted for the Gaussian version of the anisotropic \(n\)-vector model where the reduced anisotropy matrix \(\bar{\bf A}^{\rm sp}\) comes into play as well as two nonuniversal parameters \(\Gamma_{+}^{\rm sp}\) and \(\bar{\xi}_{0+}^{\rm sp}\) which need to be determined as functions of \(E_{i,j}\).

### Application to three dimensions: Determination of \(\hat{\Psi}_{\pm}\) from the FRG

Application of the preceding analysis to three dimensions is of interest in view of the various anisotropic three-dimensional systems in condensed matter physics.
Thereby one encounters, in general, three problems: (i) the determination of the three-dimensional universal isotropic scaling functions \(\Psi_{\pm},\hat{\Psi}_{\pm}\) or, equivalently, \(\hat{D}_{\pm}\), of the \(O(n)\)-symmetric universality classes, (ii) the determination of up to seven independent nonuniversal parameters \(\Gamma_{+},\xi_{0+}^{(1)},\xi_{0+}^{(2)},\xi_{0+}^{(3)},\Omega_{1},\Omega_{2},\Omega_{3}\) where the principal angles \(\Omega_{\alpha}\) describe the orientation of the three principal directions, and (iii) the construction of the reduced anisotropy matrix
\[\bar{\bf A}=\bar{\bf A}\big(\{\xi_{0\pm}^{(\alpha)},{\bf e}^{(\alpha)}\}\big)=\bar{\bf A}\Big(\frac{\xi_{0\pm}^{(1)}}{\bar{\xi}_{0\pm}},\frac{\xi_{0\pm}^{(2)}}{\bar{\xi}_{0\pm}},\frac{\xi_{0\pm}^{(3)}}{\bar{\xi}_{0\pm}},\Omega_{1},\Omega_{2},\Omega_{3}\Big)\tag{5.118}\]
where \({\bf e}^{(\alpha)}\) are the three principal unit vectors and where \(\bar{\xi}_{0\pm}\) is the amplitude of the mean correlation length. Several examples of \(\bar{\bf A}\) for three-dimensional anisotropic \(\varphi^{4}\) models have been presented [1; 6; 13; 16], in particular with general principal angles \(\Omega\) and \(\Omega+\pi/2\) describing planar anisotropies in a three-dimensional environment [15]. This matrix appears not only in the bulk correlation function but also in the finite-size properties of the excess free energy and the Casimir force of weakly anisotropic systems [1; 6; 13; 15]. A determination of the nonuniversal parameters (ii) in terms of the couplings is a nontrivial problem for three-dimensional anisotropic \(n\)-vector models and real systems that we shall not further discuss in this paper. Here we confine ourselves to a discussion of problem (i) in the context of the isotropic order-parameter correlation function \(\hat{G}_{\pm}^{\rm FRG}(k,t)\) derived in the framework of the FRG [54; 55]. The calculation was performed within the isotropic \(\varphi^{4}\) theory with a finite cutoff \(\Lambda_{0}\) and a four-point coupling \(u_{0}\) in \(d<4\) dimensions for \(n=1\). On the basis of truncated FRG flow equations the following result was obtained above (\(+\)) and below (\(-\)) \(T_{c}\)
\[\hat{G}_{\pm}^{\rm FRG}(k,t)=k_{c}^{-2}g^{\pm}(k\xi_{\pm}^{\rm FRG},k/k_{c}),\tag{5.119}\]
\[g^{\pm}(x_{\pm},y)=[y^{2}+\sigma^{\pm}(x_{\pm},y)]^{-1}\tag{5.120}\]
with \(x_{\pm}=k\xi_{\pm}^{\rm FRG}\), \(y=k/k_{c}\), \(k=|{\bf k}|\) where \({\bf k}\) is the wave vector, \(\xi_{\pm}^{\rm FRG}\) is the correlation length, and \(k_{c}\) is a finite nonuniversal wavenumber that depends on the cutoff and the four-point coupling. For the definition of the functions \(g^{\pm}(x_{\pm},y)\) and \(\sigma^{\pm}(x_{\pm},y)\) we refer to [54; 55]. This result is supposed to be applicable to the region \(k\ll k_{c}\) and to the crossover to a nonasymptotic critical region \(k_{c}\ll k\ll\Lambda_{0}\) as well as in the limit \(k\to 0\) at fixed \(\xi_{\pm}^{\rm FRG}<\infty\). The implications of the exact sum rule (3.45) were not discussed. It was claimed [54], without justification, that the scaling function \(\sigma^{\pm}(x_{\pm},y)\) is universal. Below we shall refute this claim by showing that \(g^{\pm}(x_{\pm},y)\) is not universal. We first discuss (5.119) in the standard asymptotic scaling region \(k_{c}\xi_{\pm}^{\rm FRG}\gg 1\) and \(k/k_{c}\ll 1\) at fixed \(k\xi_{\pm}^{\rm FRG}\), including the case \(\xi_{\pm}^{\rm FRG}=\infty\) and the limit \(k\to 0\) at \(\xi_{\pm}^{\rm FRG}<\infty\).
We shall show that \(\hat{G}_{\pm}^{\rm FRG}(k,t)\) can be written in the two-parameter scaling forms (3.43), (3.47), and (3.49) and that the universal scaling function \(\hat{\Psi}_{\pm}\) is not determined by \(g^{\pm}\) or by \(\sigma^{\pm}\) but by a certain ratio of \(g^{\pm}\) and a ratio of its overall amplitudes. Comparison of (5.119) with (3.43) in the limit \(k_{c}\to\infty\) at fixed \(k\xi_{\pm}^{\rm FRG}\) yields the universal ratio
\[\lim_{k_{c}\to\infty}\frac{\hat{G}_{\pm}^{\rm FRG}(k,t)}{\hat{G}_{+}^{\rm FRG}(k,0)}=\lim_{k_{c}\to\infty}\frac{g^{\pm}(k\xi_{\pm}^{\rm FRG},k/k_{c})}{g^{+}(\infty,k/k_{c})}\tag{5.121}\]
\[=\frac{\hat{\Psi}_{\pm}(k\xi_{\pm}^{\rm FRG})}{\hat{\Psi}_{+}(\infty)}.\tag{5.122}\]
Next we determine the universal constant \(Q_{3}\). We first consider the case \(T=T_{c}\), \(\xi_{\pm}^{\rm FRG}=\infty\),
\[\hat{G}_{\pm}^{\rm FRG}(k,0)=k_{c}^{-2}g^{\pm}(\infty,k/k_{c}).\tag{5.123}\]
Since \(\hat{G}_{\pm}^{\rm FRG}(k,0)\) must behave as \(\simeq k^{-2+\eta}\) for small \(k\), we obtain for \(k/k_{c}\ll 1\)
\[g^{\pm}(\infty,k/k_{c})=A^{\rm FRG}\,(k_{c}/k)^{2-\eta},\tag{5.124}\]
\[\hat{G}_{\pm}^{\rm FRG}(k,0)=A^{\rm FRG}\frac{k_{c}^{-\eta}}{k^{2-\eta}}.\tag{5.125}\]
The \(d\)-dependent constant \(A^{\rm FRG}>0\) is denoted by \(A_{D}^{-1}\) in Eq. (58) of [55], with \(A_{3}\approx 1.075\) in three dimensions, i.e.,
\[A^{\rm FRG}\approx 0.930,\;\;d=3.\tag{5.126}\]
Furthermore, compatibility of (5.119) with the free-energy density of the \(\varphi^{4}\) model requires that (5.119) satisfies the exact sum rule (3.45) which yields the susceptibility \(\chi_{\pm}^{\rm FRG}\)
\[\hat{G}_{\pm}^{\rm FRG}(0,t)=\chi_{\pm}^{\rm FRG}\tag{5.127}\]
\[=k_{c}^{-2}\lim_{k\to 0}g^{\pm}(k\xi_{\pm}^{\rm FRG},k/k_{c})\tag{5.128}\]
\[=k_{c}^{-2}f_{\chi}^{\pm}(k_{c}\xi_{\pm}^{\rm FRG}).\tag{5.129}\]
As discussed in Sec. II. A, Eqs. (5.127)-(5.129) are exactly valid in the entire range where (5.119) is valid. For \(k_{c}\xi_{\pm}^{\rm FRG}\gg 1\), \(\chi_{\pm}^{\rm FRG}\) must behave as
\[\chi_{\pm}^{\rm FRG}=\Gamma_{\pm}^{\rm FRG}t^{-\gamma},\tag{5.130}\]
with \(\gamma=\nu(2-\eta)\) and \(\xi^{\rm FRG}_{\pm}=\xi^{\rm FRG}_{0\pm}t^{-\nu}\). This implies
\[f^{\pm}_{\chi}(k_{c}\xi^{\rm FRG}_{\pm})=C^{\rm FRG}_{\pm}(k_{c}\xi^{\rm FRG}_{\pm})^{2-\eta},\tag{5.131}\]
\[\Gamma^{\rm FRG}_{\pm}=C^{\rm FRG}_{\pm}k_{c}^{-\eta}(\xi^{\rm FRG}_{0\pm})^{2-\eta},\tag{5.132}\]
with the \(d\)-dependent constant \(C^{\rm FRG}_{\pm}\). The factor \(k_{c}^{-\eta}\) in (5.132) and (5.125) can be eliminated. This yields at \(T_{c}\)
\[\hat{G}^{\rm FRG}_{+}(k,0)=\frac{A^{\rm FRG}}{C^{\rm FRG}_{+}}\,\frac{\Gamma^{\rm FRG}_{+}}{(k\xi^{\rm FRG}_{0+})^{2-\eta}}.\tag{5.133}\]
From a comparison with the exact asymptotic isotropic scaling form at \(T_{c}\), (3.46) and (3.47), we obtain the identification of the universal quantity \(Q_{3}\) in terms of the ratio
\[Q_{3}=\frac{A^{\rm FRG}}{C^{\rm FRG}_{+}}=\hat{\Psi}_{+}(\infty)=\hat{\Psi}_{-}(\infty)={\rm universal}.\tag{5.134}\]
Together with the determination of the ratio (5.122) this completes our determination of the isotropic two-parameter scaling form (3.43),
\[\hat{G}^{\rm FRG}_{\pm}(k,t)=\frac{\Gamma^{\rm FRG}_{+}}{\left(k\,\xi^{\rm FRG}_{0+}\right)^{2-\eta}}\,\hat{\Psi}_{\pm}\big(k\xi^{\rm FRG}_{\pm}\big)\tag{5.135}\]
with the universal scaling function \(\hat{\Psi}_{\pm}\) for \(d<4\) in the regime \(k/k_{c}\ll 1\), \(k_{c}\xi^{\rm FRG}_{\pm}\gg 1\), \(\xi^{\rm FRG}_{0+}>0\),
\[\hat{\Psi}_{\pm}(k\xi^{\rm FRG}_{\pm})=\frac{A^{\rm FRG}}{C^{\rm FRG}_{+}}\lim_{k_{c}\to\infty}\frac{g^{\pm}(k\xi^{\rm FRG}_{\pm},k/k_{c})}{g^{+}(\infty,k/k_{c})}.\tag{5.136}\]
We note that in (5.135) \(\hat{G}^{\rm FRG}_{\pm}(k,t)\) is uniquely divided into nonuniversal and universal parts. The corresponding two-parameter Fisher-Aharony scaling form for \(t\neq 0,k\geq 0\),
\[\hat{G}^{\rm FRG}_{\pm}(k,t)=\Gamma^{\rm FRG}_{\pm}|t|^{-\gamma}\,\hat{D}_{\pm}\big(k\xi^{\rm FRG}_{\pm}(t)\big),\tag{5.137}\]
follows from (3.49)-(3.52). The structure of (5.135) and (5.137) has been confirmed by exact analytic results for isotropic Ising and \(\varphi^{4}\) models on two-dimensional lattices [14; 88]. We see that the nonuniversal model parameter \(k_{c}\) of the original formulation (5.119) has been completely eliminated in (5.135) and (5.137) in favor of the observable thermodynamic amplitude \(\Gamma^{\rm FRG}_{\pm}\) of the susceptibility, as expected on general grounds. But there still exist two independent nonuniversal thermodynamic parameters \(\Gamma^{\rm FRG}_{+}\) and \(\xi^{\rm FRG}_{0+}\) in the asymptotic scaling forms (5.135) and (5.137) including \(T=T_{c}\). This refutes the existence of a "one-parameter scaling picture" and "one-parameter scaling hypothesis" [54] for systems with \(\eta\neq 0\) for which the principle of two-scale-factor universality with two independent thermodynamic parameters is well established. After the identification of \(\hat{\Psi}_{\pm}\) in terms of the ratios \(A^{\rm FRG}/C^{\rm FRG}_{+}\) and \(g^{\pm}(x_{\pm},y)/g^{+}(\infty,y)\) from [54; 55] we substitute (5.136) into (5.21) which yields the bulk correlation function \(\hat{G}_{\pm}\) of the anisotropic \(\varphi^{4}\) model in the asymptotic critical region in three dimensions for \(n=1\). Due to multiparameter universality this also yields the prediction of the correlation function of all other weakly anisotropic systems in the \((d=3,n=1)\) universality class provided that the nonuniversal parameters (ii) specified above are known. The value of \(C^{\rm FRG}_{+}\) was not calculated in [54; 55]. It would be worthwhile to determine this value in order to obtain an estimate for \(Q_{3}\) which then can be compared with the known estimates \(Q_{3}\approx 0.922\) (\(\varepsilon\) expansion) or \(0.90\pm 0.01\) (numerical studies) for \(d=3,n=1\) [17]. (Note that these values are defined in combination with the "true" (exponential) correlation length \(\xi_{0+}\) which differs from the second-moment correlation length [18; 77; 81].)
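Conversely, the quoted estimates already indicate the size of \(C^{\rm FRG}_{+}\) that (5.134) would require. The following back-of-the-envelope snippet (ours, not from [54; 55], and only indicative because the quoted \(Q_{3}\) values refer to the exponential correlation length) inverts (5.134) using \(A^{\rm FRG}\approx 0.930\) from (5.126).

```python
A_frg = 0.930                       # Eq. (5.126), d = 3
for Q3 in (0.922, 0.90):            # epsilon-expansion / numerical estimates [17]
    print(f"Q3 = {Q3}: implied C_+^FRG = A^FRG / Q3 = {A_frg / Q3:.3f}")
```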
So far our analysis confirms the universality only of the _ratios_ of the amplitudes \(A^{\rm FRG}\) and \(C^{\rm FRG}_{+}\) and of the functions \(g^{\pm}(x_{\pm},y)\) and \(g^{+}(\infty,y)\) appearing in (5.121), (5.134), and (5.136) in the asymptotic scaling regime. Thus there is no compelling reason for the assertion [54; 55] that the functions \(\sigma^{\pm}(x_{\pm},y)\) and \(g^{\pm}(x_{\pm},y)\) themselves are universal. No argument in support of this assertion was given in [54; 55]. This claim would imply the existence of new universal amplitudes \(A^{\rm FRG}\) or \(C^{\rm FRG}_{+}\) that do not exist in the established theory of bulk critical phenomena in the asymptotic region of isotropic systems [17; 18; 82; 83; 84]. In the following we show that the function \(g^{\pm}(x_{\pm},y)\) is not universal. From (5.125), (5.133), and (5.134) we obtain the relation between amplitudes
\[A^{\rm FRG}k_{c}^{-\eta}=Q_{3}\,\frac{\Gamma^{\rm FRG}_{+}}{(\xi^{\rm FRG}_{0+})^{2-\eta}}.\tag{5.138}\]
The right-hand side is uniquely divided into a universal part \(Q_{3}\) and a nonuniversal amplitude ratio of thermodynamic quantities. The nonuniversal parameter \(k_{c}\) on the left-hand side characterizes the nonasymptotic range \(k>k_{c}\) of wave numbers of the correlation function of the \(\varphi^{4}\) theory and is determined by non-thermodynamic quantities, i.e., the cutoff \(\Lambda_{0}\) and the four-point coupling \(u_{0}\) of the \(\varphi^{4}\) Hamiltonian [54; 55],
\[k_{c}=\Lambda_{0}\exp(-l_{c})\tag{5.139}\]
where \(l_{c}(u_{0}\Lambda_{0}^{d-4})\) is a dimensionless flow parameter. Thus \(k_{c}^{-\eta}\) is different from the ratio \(\Gamma^{\rm FRG}_{+}/(\xi^{\rm FRG}_{0+})^{2-\eta}\) and is not universally related to this ratio. We conclude that the dimensionless amplitude \(A^{\rm FRG}\) must be different from \(Q_{3}\) and must contain a nonuniversal part, i.e., \(A^{\rm FRG}\) is a nonuniversal quantity. Since \(A^{\rm FRG}\) governs the small-\(k\) behavior of \(g^{\pm}(\infty,k/k_{c})\) according to (5.124) we conclude that the function \(g^{\pm}(\infty,k/k_{c})\) is nonuniversal in the asymptotic region at \(T_{c}\) for small \(k/k_{c}\). By continuity and analyticity requirements this nonuniversal dependence for small \(k/k_{c}\) cannot turn smoothly into a universal dependence for larger \(k/k_{c}\) in the nonasymptotic regime \(k>k_{c}\). Furthermore, because of the relation
\[C^{\rm FRG}_{+}=A^{\rm FRG}/Q_{3}\tag{5.140}\]
\[=k_{c}^{\eta}\,\Gamma^{\rm FRG}_{+}/(\xi^{\rm FRG}_{0+})^{2-\eta}\tag{5.141}\]
Together with the nonuniversality of \(g^{\pm}(\infty,k/k_{c})\) this means that the function \(g^{\pm}(x_{\pm},y)\) is not universal along the two vertical and horizontal axes \((\xi_{\pm}^{\rm FRG})^{-1}=0\) and \(k=0\) of the \(k-(\xi_{\pm}^{\rm FRG})^{-1}\) plane. By continuity and analyticity requirements this nonuniversal dependence cannot turn smoothly into a universal dependence in the plane away from these axes. This implies the nonuniversality of \(g^{\pm}(x_{\pm},y)\) in the region between these axes including both the asymptotic and nonasymptotic regions, thus the curves for \(\Delta\sigma^{\pm}\) shown in Fig. 5 of [54] and in Fig. 3 of [55] are nonuniversal. We briefly comment on the consequences of the structure of (5.119) for \(t\neq 0\). We rewrite (5.127) and (5.129) in the form \[\chi_{\pm}^{\rm FRG} = (\xi_{\pm}^{\rm FRG})^{2}{\cal F}_{\pm}(k_{c}\xi_{\pm}^{\rm FRG}) \tag{5.144}\] with \[{\cal F}_{\pm}(k_{c}\xi_{\pm}^{\rm FRG}) = (k_{c}\xi_{\pm}^{\rm FRG})^{-2}\;f_{\chi}^{\pm}(k_{c}\xi_{\pm}^{ \rm FRG}) \tag{5.145}\] where \(f_{\chi}^{\pm}\) is the nonuniversal function defined in (5.129). This constitutes an implicit equation that determines the parameter \(k_{c}\) as a nonuniversal function of \(\chi_{\pm}^{\rm FRG}\) and \(\xi_{\pm}^{\rm FRG}\) in the entire range where (5.119) is valid. Thus \(k_{c}\) can be exactly eliminated in favor of \(\chi_{\pm}^{\rm FRG}\) and \(\xi_{\pm}^{\rm FRG}\), and (5.119) can be expressed completely in terms of these two nonuniversal temperature-dependent thermodynamic quantities, without any explicit cutoff dependence even outside the standard asymptotic critical region. We consider this nonuniversal structure to be due to the approximations made in [54; 55] within the \(\varphi^{4}\) model which are not expected to remain universally valid for all systems in the Ising universality class. Nevertheless, after the identification of the universal part of (5.119) through (5.135) and (5.136), this result is a prediction that is of substantial interest for a comparison with other systems of the \(d=3\) Ising universality class in the asymptotic critical region. ## VI Multiparameter universality of critical bulk amplitude relations The universal critical point bulk amplitude relations play an important role in the traditional theory of critical phenomena [17] where, however, no attention has been paid to amplitude relations in the subclass of weakly anisotropic systems within a universality class. The failure of two-scale-factor universality in this subclass was pointed out in [1]. Subsequently several bulk amplitude relations have been shown to be valid within the weakly anisotropic bulk \(\varphi^{4}\) theory in \(2<d<4\) dimensions [4; 6; 13; 14] with the same universal constants as for the isotropic case, such as \(Q_{c},Q_{1},R_{\xi}^{+},Q_{2},\widetilde{Q}_{3},P_{2},P_{3},W_{1},X_{-}(0)\) but where, in general, up to \(d(d+1)/2+1\) independent nonuniversal parameters are involved rather than only two nonuniversal parameters. These properties were called multiparameter universality [6] for bulk amplitude relations. It was hypothesized [13] that these properties are valid not only for \(\varphi^{4}\) models but also for all weakly anisotropic bulk systems other than \(\varphi^{4}\) models. So far no proof exists for this hypothesis. Our generalized shear transformation introduced in Sec. IV enables us to present such a proof. 
As a representative of a weakly anisotropic system other than the \(\varphi^{4}\) model we take the \(n\)-vector model.

### Proof of multiparameter universality

Application of the generalized shear transformation (4.89) to the volume \(V\) of the anisotropic system yields the transformed volume of the isotropic system
\[\widehat{V}\equiv V^{\rm iso}=(\det\widehat{\bf\lambda})^{-1/2}\,V=\big(\xi_{0\pm}^{\rm iso}/\bar{\xi}_{0\pm}^{\rm sp}\big)^{d}\,V\tag{6.1}\]
where we have used (4.94). This implies the invariance of the volume ratios
\[\frac{V^{\rm iso}}{V_{{\rm cor},+}^{\rm iso}}=\frac{V}{V_{{\rm cor},+}},\;\;\frac{V^{\rm iso}}{V_{{\rm cor},-}^{\rm iso}}=\frac{V}{V_{{\rm cor},-}}\tag{6.2}\]
where \(V_{{\rm cor},\pm}^{\rm iso}=\big(\xi_{0\pm}^{\rm iso}\big)^{d}\) and \(V_{{\rm cor},\pm}=\prod_{\alpha=1}^{d}\xi_{0\pm}^{\rm sp(\alpha)}=\big(\bar{\xi}_{0\pm}^{\rm sp}\big)^{d}\) are the isotropic (spherical) and anisotropic (ellipsoidal) correlation volumes above and below \(T_{c}\), respectively. This is analogous to the invariance (4.46) in the \(\varphi^{4}\) theory. We consider the singular part \({\cal F}_{s}^{\rm sp}\) of the total free energy (2.26) of the anisotropic \(n\)-vector model in the volume \(V\). The generalized shear transformation (4.89) generates the singular part \({\cal F}_{s}^{\rm iso}\) of the transformed isotropic system. We have argued in Sec. II. B that, since this transformation involves only a smooth change of the positions \({\bf x}\to\widehat{\bf x}\) of the lattice points, it does not change the singular part; thus we have the invariance
\[{\cal F}_{s}^{\rm iso}={\cal F}_{s}^{\rm sp}.\tag{6.3}\]
This is parallel to (4.23) of the special shear transformation. From (6.3) and (6.1) we obtain the relation between the singular parts of the bulk free-energy densities of the isotropic and anisotropic \(n\)-vector model
\[f^{\rm sp}_{b,s,\pm}=\lim_{V\to\infty}\frac{{\cal F}^{\rm sp}_{s}}{V}=\left(\frac{\xi^{\rm iso}_{0\pm}}{\bar{\xi}^{\rm sp}_{0\pm}}\right)^{d}\lim_{V^{\rm iso}\to\infty}\frac{{\cal F}^{\rm iso}_{s}}{V^{\rm iso}}=\left(\frac{\xi^{\rm iso}_{0\pm}}{\bar{\xi}^{\rm sp}_{0\pm}}\right)^{d}f^{\rm iso}_{b,s,\pm}.\tag{6.4}\]
Together with the isotropic relation (3.12) this yields the singular part of the bulk free-energy density of the anisotropic system
\[f^{\rm sp}_{b,s,\pm}(t)=\left\{\begin{array}{cl}A_{\pm}|t|^{d\nu}&{\rm for}\;2<d<4,\\ \frac{1}{2}A_{\pm}|t|^{2}\ln|t|&{\rm for}\;d=2,\end{array}\right.\tag{6.5}\]
where the amplitudes \(A_{\pm}\) of the anisotropic system are given by
\[A_{\pm}=\left(\frac{\xi^{\rm iso}_{0\pm}}{\bar{\xi}^{\rm sp}_{0\pm}}\right)^{d}A^{\rm iso}_{\pm},\;\;\;d\geq 2,\tag{6.6}\]
\[\frac{f^{\rm sp}_{b,s,+}(t)}{f^{\rm sp}_{b,s,-}(t)}=\frac{A_{+}}{A_{-}}=\frac{A^{\rm iso}_{+}}{A^{\rm iso}_{-}}={\rm universal},\;\;d\geq 2.\tag{6.7}\]
Substituting the isotropic relation (3.14) we arrive at the universal relation for the anisotropic amplitude \(A_{+}\) above \(T_{c}\) of the \(n\)-vector model
\[\big(\bar{\xi}^{\rm sp}_{0+}\big)^{d}A_{+}=\big(\xi^{\rm iso}_{0+}\big)^{d}A^{\rm iso}_{+}=Q_{1},\;d\geq 2,\tag{6.8}\]
where in the anisotropic case the same universal constant \(Q_{1}\) appears as in the isotropic case.
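The bookkeeping in (6.1) and (6.2) can be made concrete with a small sketch (ours). We assume, in analogy with (5.83), that the diagonal elements of \(\widehat{\bf\lambda}\) are \((\xi_{0\pm}^{\rm sp(\alpha)}/\xi_{0\pm}^{\rm iso})^{2}\); the principal correlation lengths and the volume below are placeholder numbers.

```python
import numpy as np

xi_iso = 1.3                                   # isotropic correlation-length amplitude
xi_sp = np.array([0.7, 1.9, 1.1])              # principal correlation lengths (d = 3)
lam_hat = (xi_sp / xi_iso) ** 2                # assumed analog of (5.83)
xibar_sp = xi_sp.prod() ** (1.0 / 3.0)         # mean correlation length, cf. (5.82)

V = 123.0                                      # arbitrary volume of the anisotropic system
V_iso = np.prod(lam_hat) ** (-0.5) * V         # Eq. (6.1)
assert np.isclose(V_iso, (xi_iso / xibar_sp) ** 3 * V)    # (4.94)
assert np.isclose(V_iso / xi_iso**3, V / xibar_sp**3)     # invariance (6.2) above T_c
print("volume ratios invariant")
```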
Thus we obtain the singular part of the free-energy density of the anisotropic \(n\)-vector model for \(t>0\)
\[f^{\rm sp}_{b,s,+}(t)=\left\{\begin{array}{cl}Q_{1}\big(\bar{\xi}^{\rm sp}_{0+}\big)^{-d}t^{d\nu},&2<d<4,\\ \frac{1}{2}Q_{1}\big(\bar{\xi}^{\rm sp}_{0+}\big)^{-2}\,t^{2}\ln t,&d=2,\end{array}\right.\tag{6.9}\]
and obtain \(f^{\rm sp}_{b,s,-}(t)\) for \(t<0\) from (6.7) as
\[f^{\rm sp}_{b,s,-}(t)=\left\{\begin{array}{cl}\frac{A_{-}}{A_{+}}Q_{1}\big(\bar{\xi}^{\rm sp}_{0+}\big)^{-d}|t|^{d\nu},&2<d<4,\\ \frac{1}{2}Q_{1}\big(\bar{\xi}^{\rm sp}_{0+}\big)^{-2}\,t^{2}\ln|t|,&d=2,\end{array}\right.\tag{6.10}\]
where we have used (3.18) for \(d=2\). We see that for \(d=2\) dimensions \(f^{\rm sp}_{b,s,\pm}\) depends only on \(|t|\) rather than \(t\). Our final step is to reformulate our result for the free-energy density as a scaling form for the singular bulk part of the free energy of the anisotropic \(n\)-vector model
\[{\cal F}^{\rm sp}_{b,s,\pm}(t,V)=Vf^{\rm sp}_{b,s,\pm}(t).\tag{6.11}\]
This is achieved by means of the observation that the scaling variable \(\widetilde{x}\), (3.20), introduced for the isotropic system can be expressed in two different ways as
\[\widetilde{x}=t[V^{\rm iso}/(\xi^{\rm iso}_{0+})^{d}]^{1/(d\nu)}\tag{6.12}\]
\[=t[V/(\bar{\xi}^{\rm sp}_{0+})^{d}]^{1/(d\nu)}\tag{6.13}\]
where in (6.13) we have used the invariance of the volume ratios above \(T_{c}\) in (6.2) under the shear transformation (4.89). A corresponding observation was already made for the finite-size scaling variable \(\hat{x}\) defined in Eq. (6.12) of [13] in the context of renormalized perturbation theory for the anisotropic \(\varphi^{4}\) model in a finite geometry in \(2<d<4\) dimensions. From (6.7)-(6.13) and (3.21)-(3.23) we derive the scaling form for the singular bulk part of the free energy of the anisotropic \(n\)-vector model in \(2<d<4\) dimensions
\[{\cal F}^{\rm sp}_{b,s,+}(t,V)=Q_{1}\,\widetilde{x}^{d\nu}={\cal F}^{\rm iso}_{b,s,+}(t,V^{\rm iso}),\;t>0,\tag{6.14}\]
\[{\cal F}^{\rm sp}_{b,s,-}(t,V)=\frac{A^{\rm iso}_{-}}{A^{\rm iso}_{+}}Q_{1}\,|\widetilde{x}|^{d\nu}={\cal F}^{\rm iso}_{b,s,-}(t,V^{\rm iso}),\;t<0,\tag{6.15}\]
and the corresponding expression for the \((d=2,n=1)\) universality class
\[{\cal F}^{\rm sp}_{b,s,\pm}(t,V)=\frac{1}{2}Q_{1}\,|\widetilde{x}|^{2}\ln|t|={\cal F}^{\rm iso}_{b,s,\pm}(t,V^{\rm iso}),\tag{6.16}\]
with a nonscaling logarithmic factor \(\ln|t|\). The nonuniversality due to weak anisotropy enters only the scaling variable \(\widetilde{x}\). For \(d=2\), \({\cal F}^{\rm sp}_{b,s,\pm}\) is a function of \(|t|\). Eqs. (6.14)-(6.16) explicitly reflect the invariance of the singular bulk parts under the shear transformation of the \(n\)-vector model and constitute the anisotropic extension of the singular bulk parts of the free energy of isotropic systems presented in (3.21)-(3.23). Multiparameter universality manifests itself in the fact that the corresponding result for the anisotropic \(\varphi^{4}\) model is obtained from (6.12)-(6.16) simply by the substitutions \(\bar{\xi}^{\rm sp}_{0+}\to\bar{\xi}_{0+},V^{\rm iso}\to V^{\prime},\xi^{\rm iso}_{0+}\to\xi^{\prime}_{0+}\) in \(\widetilde{x}\), (6.13), which corresponds to the scaling variable (4.47) of the \(\varphi^{4}\) theory, with the same universal constants \(Q_{1}\) and \(A^{\rm iso}_{-}/A^{\rm iso}_{+}\), where \(\bar{\xi}_{0+}\) is defined in (4.34). This was shown already previously in Eq. (3.33) of [6] and Eq.
(5.16) of [13]. Since \(\bar{\xi}^{\rm sp}_{0+}\) and \(\bar{\xi}_{0+}\) depend on \(d\) independent principal correlation lengths \(\xi^{\rm sp(\alpha)}_{0+}\) and \(\xi^{(\alpha)}_{0+}\), respectively, our results for \(f^{\rm sp}_{b,s,\pm}\) and \({\cal F}^{\rm sp}_{b,s,\pm}\) violate two-scale-factor universality. In a similar way one can prove the validity of multiparameter universality for the result of the singular bulk part of the free energy of the anisotropic Gaussian model for \(t>0\)
\[{\cal F}^{\rm G}_{b,s,+}(t,V)=\left\{\begin{array}{cl}Q^{\rm G}_{1}\,(\widetilde{x}^{\rm G})^{d/2}&{\rm for}\;d>2,\\ \frac{1}{2}Q^{\rm G}_{1}\,\widetilde{x}^{\rm G}\ln t&{\rm for}\;d=2,\end{array}\right.\tag{6.17}\]
with the Gaussian scaling variable
\[\widetilde{x}^{\rm G}=t[V^{\rm iso}/(\xi^{\rm G,iso}_{0+})^{d}]^{2/d}=t[V/(\bar{\xi}^{\rm G}_{0+})^{d}]^{2/d},\tag{6.18}\]
with the Gaussian mean correlation length (5.117), and with the same universal constant \(Q^{\rm G}_{1}\), (3.26) and (3.27), as in the isotropic case. Eqs. (6.17) and (6.18) are the anisotropic extension of (3.30) and (3.31) for the isotropic Gaussian model. There exist further universal bulk amplitude relations of isotropic systems that involve the correlation length and are affected by anisotropy. As an example we consider the relation (3.7). The amplitude of the order parameter \({\cal M}^{\rm sp}=B^{\rm sp}|t|^{\beta}\) of the anisotropic system is left invariant under the generalized shear transformation (4.89),
\[B^{\rm sp}=B^{\rm iso},\tag{6.19}\]
in contrast to (4.18) and (4.41) for the special shear transformation. Together with the amplitude relation \(\widehat{\Gamma}_{+}\equiv\Gamma^{\rm iso}_{+}=\big(\xi^{\rm iso}_{0+}/\bar{\xi}^{\rm sp}_{0+}\big)^{d}\,\Gamma^{\rm sp}_{+}\), (5.28), this yields
\[(B^{\rm iso})^{2}(\Gamma^{\rm iso}_{+})^{-1}(\xi^{\rm iso}_{0+})^{d}=\big(B^{\rm sp}\big)^{2}\big(\Gamma^{\rm sp}_{+}\big)^{-1}\big(\bar{\xi}^{\rm sp}_{0+}\big)^{d}=Q_{c},\tag{6.20}\]
with the same universal constant \(Q_{c}\) for both isotropic and anisotropic systems. On the basis of the special shear transformation (4.1)-(4.3) the same relation with the same constant \(Q_{c}\) was recently proven within the anisotropic \(\varphi^{4}\) model in \(d=2\) dimensions [14] and was verified to be valid also in the exactly solvable anisotropic Ising model [38; 39] where \(Q_{c}\) was identified as given in Eq. (76) of [14]. From (5.65), (5.68), and (5.87) we confirm that (6.20) is also valid in the large-\(n\) limit, with the universal constant
\[Q_{c}=A_{d}/(4-d),\quad n=\infty,\ 2<d<4,\tag{6.21}\]
with \(A_{d}\) given by (5.66). In deriving the exact critical bulk amplitude relations (6.8), (6.9), and (6.20) for the \(n\)-vector model we have not made any assumptions other than the validity of two-scale-factor universality for isotropic systems and the existence of principal axes and correlation lengths for weakly anisotropic systems. No specific properties of the \(n\)-vector model were needed in the derivation. Thus these results are proven to be valid for arbitrary weakly anisotropic systems. In a similar way, the validity of all the other bulk amplitude relations considered in [6; 13] for general \(n\) and \(d\) within the anisotropic \(\varphi^{4}\) model, including those at finite external field (such as Eq. (3.35) of [6]), can be proven for all weakly anisotropic systems.
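The large-\(n\) check of (6.20) and (6.21) can also be carried out numerically. The sketch below (ours; the coupling values are placeholders, with \(u=4u^{\prime}_{0}n\)) combines the isotropic large-\(n\) amplitudes (5.65), (5.68), and (5.87) and shows that the combination \(B^{2}\Gamma_{+}^{-1}\xi_{0+}^{d}\) is independent of the couplings and equals \(A_{d}/(4-d)\).

```python
import numpy as np
from scipy.special import gamma

def Qc_large_n(d, a0, u):
    """B^2 Gamma^{-1} xi0^d from the isotropic large-n amplitudes (5.65), (5.68), (5.87)."""
    A_d = gamma(3 - d / 2) / (2 ** (d - 2) * np.pi ** (d / 2) * (d - 2))  # (5.66)
    xi0 = (u * A_d / ((4 - d) * a0)) ** (1.0 / (d - 2))                   # (5.65)
    Gam = xi0**2                                                          # (5.68)
    B2 = a0 / u                                                           # (5.87), u = 4 u0' n
    return B2 * xi0**d / Gam                                              # (6.20)

d = 3.0
for a0, u in [(1.0, 0.3), (2.5, 1.7)]:
    print(Qc_large_n(d, a0, u))             # independent of the couplings
A_d = gamma(3 - d / 2) / (2 ** (d - 2) * np.pi ** (d / 2) * (d - 2))
print(A_d / (4 - d))                        # Eq. (6.21): Q_c = 1/(4 pi) for d = 3
```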
This feature is called multiparameter universality since these relations contain the same universal constants that appear already in the isotropic case, but in the anisotropic case up to \(d(d+1)/2+1\) independent nonuniversal parameters (rather than only two independent parameters) are involved whose definition depends on the nonuniversal orientation of the principal axes and on the nonuniversal amplitudes of the principal correlation lengths.

### Bulk specific heat of anisotropic systems

The universal bulk amplitude relations for the free energy derived above are relevant for the analysis of the singular part
\[C^{\rm sp}_{b,s\pm}(t)=-\partial^{2}f^{\rm sp}_{b,s\pm}(t)/\partial t^{2}\tag{6.22}\]
of the bulk specific heat per unit volume (divided by \(k_{B}\)) of anisotropic systems above and below \(T_{c}\). Above \(T_{c}\) it can be expressed in terms of the mean correlation length \(\bar{\xi}^{\rm sp}_{0+}\) defined in (4.92). We obtain from (6.4)-(6.9)
\[C^{\rm sp}_{b,s+}(t)=\frac{\big(R^{+}_{\xi}\big)^{d}}{\alpha\big(\bar{\xi}^{\rm sp}_{0+}\big)^{d}}\,t^{-\alpha},\;\;\;d>2,\tag{6.23}\]
\[C^{\rm sp}_{b,s+}(t)=-\,\frac{Q_{1}}{\big(\bar{\xi}^{\rm sp}_{0+}\big)^{2}}\,\ln t,\;\;d=2,\tag{6.24}\]
above \(T_{c}\) and
\[C^{\rm sp}_{b,s-}(t)=\frac{A_{-}}{A_{+}}\,C^{\rm sp}_{b,s+}(t),\;\;d\geq 2,\tag{6.25}\]
below \(T_{c}\). Eqs. (6.23) and (6.24) are the anisotropic extensions of (3.57) and (3.58). Eqs. (6.23)-(6.25) demonstrate that thermodynamic measurements of the amplitude of the specific heat of anisotropic systems can determine the mean correlation length \(\bar{\xi}^{\rm sp}_{0+}\) of real anisotropic systems. Together with thermodynamic measurements of the amplitude \(\Gamma_{+}\) of the susceptibility of anisotropic systems one obtains the two amplitudes determining the overall amplitude of the bulk correlation function (5.42). Together with the universal ratio (4.93) this also determines \(\bar{\xi}^{\rm sp}_{\pm}(t)\) in the argument of \(\Psi_{\pm}\) in (5.42). These predictions are valid for any weakly anisotropic bulk system, e.g., for superconductors and magnetic materials. Relations equivalent to (6.23) have been employed previously [89] in the analysis of experimental data of superconductors whose critical behavior belongs to the \((d=3,n=2)\) \(XY\) universality class. In the earlier work [89] the anisotropic fluctuations were treated within a Gaussian approximation for the case of a diagonal anisotropy matrix (effective mass tensor). Here we have provided a general and exact foundation for the bulk critical specific heat in weakly anisotropic systems.

## VII Correlation function of the two-dimensional Ising universality class

The proof of multiparameter universality presented in the preceding sections has a significant impact on the anisotropic bulk correlation function of the \((d=2,n=1)\) universality class [14] with the exact critical exponents
\[\nu=1,\;\;\eta=1/4.\tag{7.1}\]
This includes both anisotropic Ising and anisotropic scalar \(\varphi^{4}\) models.

### Isotropic scaling form and exact universal bulk scaling function \(\Psi_{\pm}(y_{\pm})\)

There exists a large variety of interactions and lattice structures of \((d=2,n=1)\) systems that have an isotropic bulk correlation function in the scaling region near \(T_{c}\).
For example, for the two-dimensional \(\varphi^{4}\) lattice model (2.1) with short-range pair interactions the condition of isotropy is
\[{\bf A}^{\rm iso}=\left(\begin{array}{cc}a&c\\ c&b\end{array}\right)=c_{0}^{\rm iso}\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right)\tag{7.2}\]
with \(c_{0}^{\rm iso}>0\) which, for given lattice structure, is a condition for the couplings \(K_{i,j}\) according to (2.3). The same condition holds for the two-dimensional Gaussian lattice model (4.57). A corresponding general condition of isotropy for the couplings \(E_{i,j}\) of two-dimensional Ising models (with the Hamiltonian \(H^{\rm Is}\) given in (7.10) below) is unknown. If the principal correlation lengths \(\xi_{0+}^{\rm Is(\alpha)}\) of Ising models are known as a function of the couplings \(E_{i,j}\), the condition of isotropy reads
\[\xi_{0+}^{\rm Is(1)}=\xi_{0+}^{\rm Is(2)}.\tag{7.3}\]
There exists an unlimited number of lattice structures and couplings satisfying these conditions of isotropy. There even exist two-dimensional systems with short-range multi-spin interactions within the Ising universality class that have isotropic correlation functions. Owing to the principle of two-scale-factor universality all of these isotropic systems have the same structure of the bulk order-parameter correlation function above and below \(T_{c}\) [14]
\[G^{\rm iso}({\bf x},t)=\frac{\Gamma_{+}^{\rm iso}(\xi_{0+}^{\rm iso})^{-7/4}}{|{\bf x}|^{1/4}}\,\Psi_{\pm}\Big(\frac{|{\bf x}|}{\xi_{\pm}^{\rm iso}(t)}\Big),\tag{7.4}\]
\[\xi_{\pm}^{\rm iso}(t)=\xi_{0\pm}^{\rm iso}\,|t|^{-1},\tag{7.5}\]
with the universal scaling function \(\Psi_{\pm}\). To determine this function \(\Psi_{\pm}\) it suffices to consider the simplest nontrivial example of this universality class. This is not the isotropic two-dimensional \(\varphi^{4}\) model but the exactly solvable isotropic Ising model with equal nearest-neighbor (NN) couplings \(E>0\) on a square lattice with the Hamiltonian
\[H^{\rm Is,iso}=-E\sum_{j,k}[\sigma_{j,k}\sigma_{j,k+1}+\sigma_{j,k}\sigma_{j+1,k}],\;\;\sigma_{j,k}=\pm 1,\tag{7.6}\]
and with the lattice spacing \(\tilde{a}=1\). The bulk correlation function of this model is isotropic in the scaling limit (but not in the range \(|{\bf x}|/\xi_{\pm}^{\rm iso}\gg 1\) [6; 78; 79; 80]). It was calculated exactly in [38] where it was presented in a nonuniversal scaling form without identifying the universal part of the scaling function. Recently [14] we have written this correlation function in the universal form (7.4) and have identified the universal scaling function \(\Psi_{\pm}(y_{\pm})\) as well as the arguments \(y_{\pm}\), together with the two nonuniversal amplitudes, specialized to the square lattice with equal NN couplings,
\[\xi_{0+}^{\rm squ,iso}=2\big[\ln(1+2^{1/2})\big]^{-1},\tag{7.7}\]
\[\Gamma_{+}^{\rm squ,iso}=2^{19/8}\pi\big(\xi_{0+}^{\rm squ,iso}\big)^{7/4}p_{+},\tag{7.8}\]
where the constant \(p_{+}\) is expressed in terms of a Painlevé function of the third kind, as defined in Eqs. (30)-(36) of [14] and in the text after these equations. The exact universal amplitude ratio of the exponential correlation lengths is [38; 18]
\[\xi_{0+}^{\rm squ,iso}/\xi_{0-}^{\rm squ,iso}=X_{\xi}=2.\tag{7.9}\]
The scaling function \(\Psi_{\pm}(y_{\pm})\) applies to all systems in the subclass of isotropic systems in the \((d=2,n=1)\) universality class, in particular, to Ising models with isotropic bulk correlation functions on two-dimensional lattices other than the square lattice.
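For reference, the nonuniversal amplitudes (7.7) and (7.9) evaluate to simple numbers (the amplitude (7.8) additionally involves the Painlevé constant \(p_{+}\), which we do not reproduce here); a two-line check (ours):

```python
import math

xi0_plus = 2.0 / math.log(1.0 + math.sqrt(2.0))   # Eq. (7.7): ~ 2.269 lattice spacings
xi0_minus = xi0_plus / 2.0                        # Eq. (7.9): X_xi = 2
print(f"xi0+ = {xi0_plus:.6f}, xi0- = {xi0_minus:.6f}")
```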
So far the exact results for such Ising models on "isotropic lattices" [43] (e.g., the honeycomb lattice, see Fig. 1 in [22]) have not been discussed in the light of the predictions of two-scale-factor universality with only two (rather than four [43]) nonuniversal parameters, e.g., two independent nonuniversal amplitudes corresponding to (7.7) and (7.8) which are different for different lattice structures.

### Exact anisotropic bulk scaling form

We apply our general analysis of Secs. IV and V to systems in the \((d=2,n=1)\) universality class. Examples of such systems are Ising models on two-dimensional Bravais lattices with short-range pair interactions described by the Hamiltonian
\[H^{\rm Is}=-\sum_{i,j}E_{i,j}\,\sigma_{i}\sigma_{j},\;\;\sigma_{i}=\pm 1.\tag{7.10}\]
In the following all equations are given for these examples, as indicated by the superscript "Is" for the nonuniversal quantities, but these equations apply to all weakly anisotropic systems of the \((d=2,n=1)\) universality class. We assume the existence of two principal axes and two principal correlation lengths in a range of couplings \(E_{i,j}\) where the model (7.10) displays weakly anisotropic critical behavior. The orientation of the principal axes in the direction of the angles \(\Omega^{\rm Is}\) and \(\Omega^{\rm Is}+\pi/2\) is described by the principal unit vectors
\[{\bf e}^{(1){\rm Is}}={\bf e}(\Omega^{\rm Is})=\left(\begin{array}{c}\cos\Omega^{\rm Is}\\ \sin\Omega^{\rm Is}\end{array}\right),\tag{7.11}\]
\[{\bf e}^{(2){\rm Is}}={\bf e}(\Omega^{\rm Is}+\pi/2)=\left(\begin{array}{c}-\sin\Omega^{\rm Is}\\ \cos\Omega^{\rm Is}\end{array}\right).\tag{7.12}\]
Correspondingly the rotation matrix of the generalized shear transformation for a clockwise rotation is
\[{\bf U}^{\rm Is}={\bf U}(\Omega^{\rm Is})=\left(\begin{array}{cc}\cos\Omega^{\rm Is}&\sin\Omega^{\rm Is}\\ -\sin\Omega^{\rm Is}&\cos\Omega^{\rm Is}\end{array}\right).\tag{7.13}\]
A counterclockwise rotation provided by the matrix
\[{\bf U}^{\rm Is}_{cc}={\bf U}_{cc}(\Omega^{\rm Is})=\left(\begin{array}{cc}\sin\Omega^{\rm Is}&-\cos\Omega^{\rm Is}\\ \cos\Omega^{\rm Is}&\sin\Omega^{\rm Is}\end{array}\right)\tag{7.14}\]
would lead to equivalent results. The two principal correlation lengths
\[\xi_{\pm}^{(\alpha){\rm Is}}(t)=\xi_{0\pm}^{(\alpha){\rm Is}}|t|^{-1},\;\alpha=1,2,\tag{7.15}\]
have the nonuniversal ratio
\[q^{\rm Is}=\xi_{0+}^{(1){\rm Is}}/\xi_{0+}^{(2){\rm Is}}=\xi_{0-}^{(1){\rm Is}}/\xi_{0-}^{(2){\rm Is}}.\tag{7.16}\]
The mean correlation length is
\[\bar{\xi}_{\pm}^{\rm Is}(t)=\bar{\xi}_{0\pm}^{\rm Is}|t|^{-1},\tag{7.17}\]
\[\bar{\xi}_{0\pm}^{\rm Is}=\big[\xi_{0\pm}^{(1){\rm Is}}\xi_{0\pm}^{(2){\rm Is}}\big]^{1/2},\tag{7.18}\]
which can be used to express the principal correlation lengths as
\[\xi_{0\pm}^{(1){\rm Is}}=\bar{\xi}_{0\pm}^{\rm Is}(q^{\rm Is})^{1/2},\;\;\xi_{0\pm}^{(2){\rm Is}}=\bar{\xi}_{0\pm}^{\rm Is}(q^{\rm Is})^{-1/2}.\tag{7.19}\]
The reduced rescaling matrix of the generalized shear transformation reads
\[\bar{\bf\lambda}^{\rm Is}=\left(\begin{array}{cc}\bar{\lambda}_{1}^{\rm Is}&0\\ 0&\bar{\lambda}_{2}^{\rm Is}\end{array}\right)=\left(\begin{array}{cc}q^{\rm Is}&0\\ 0&(q^{\rm Is})^{-1}\end{array}\right)\tag{7.20}\]
with the diagonal elements
\[\bar{\lambda}_{1}^{\rm Is}=[\xi_{0\pm}^{(1){\rm Is}}/\bar{\xi}_{0\pm}^{\rm Is}]^{2}=q^{\rm Is},\;\;\;\bar{\lambda}_{2}^{\rm Is}=[\xi_{0\pm}^{(2){\rm Is}}/\bar{\xi}_{0\pm}^{\rm Is}]^{2}=(q^{\rm Is})^{-1}.\tag{7.21}\]
The exact scaling form of the anisotropic correlation function of the model (7.10) is obtained from the general result (5.42) as
\[G^{\rm Is}({\bf x},t)=\frac{\Gamma_{+}^{\rm Is}\,(\bar{\xi}_{0+}^{\rm Is})^{-7/4}}{\big[{\bf x}\cdot\big(\bar{\bf A}^{\rm Is}\big)^{-1}{\bf x}\big]^{1/8}}\,\Psi_{\pm}\Big(\frac{\big[{\bf x}\cdot\big(\bar{\bf A}^{\rm Is}\big)^{-1}{\bf x}\big]^{1/2}}{\bar{\xi}_{\pm}^{\rm Is}(t)}\Big),\tag{7.22}\]
with the same universal scaling function \(\Psi_{\pm}\) as in (7.4). According to (4.96)-(4.98) the reduced anisotropy matrix is given by
\[\bar{\bf A}^{\rm Is}(q^{\rm Is},\Omega^{\rm Is})=({\bf U}^{\rm Is})^{-1}\bar{\bf\lambda}^{\rm Is}{\bf U}^{\rm Is}\tag{7.23}\]
\[=\bar{\bf A}(q^{\rm Is},\Omega^{\rm Is})\tag{7.24}\]
where \(\bar{\bf A}(q,\Omega)\) has a universal structure given by
\[\bar{\bf A}(q,\Omega)=\left(\begin{array}{cc}q\,c_{\Omega}^{2}+q^{-1}s_{\Omega}^{2}&(q-q^{-1})\,c_{\Omega}s_{\Omega}\\ (q-q^{-1})\,c_{\Omega}s_{\Omega}&q\,s_{\Omega}^{2}+q^{-1}c_{\Omega}^{2}\end{array}\right)\tag{7.25}\]
with the abbreviations \(c_{\Omega}\equiv\cos\Omega,\ s_{\Omega}\equiv\sin\Omega\). The result (7.22)-(7.25) has the same form as derived recently [14] by means of the special shear transformation (4.1)-(4.3) for the two-dimensional anisotropic scalar \(\varphi^{4}\) model (2.1). The universality of \(\Psi_{\pm}\) and of the structure of \(\bar{\bf A}(q,\Omega)\) confirms explicitly the validity of multiparameter universality for the bulk correlation functions of all weakly anisotropic systems in the \((d=2,n=1)\) universality class. We add the following comments. Two-scale-factor universality is violated as \(G^{\rm Is}({\bf x},t)\), (7.22), depends on the four independent nonuniversal parameters \(\Gamma_{+}^{\rm Is},\bar{\xi}_{0+}^{\rm Is},q^{\rm Is},\Omega^{\rm Is}\). The angle \(\Omega^{\rm Is}\) of the principal axes of Ising models depends in an unknown way on the microscopic interactions \(E_{i,j}\) and needs to be determined for each special Ising model under consideration. This is in contrast to the known dependence [14] of the corresponding angle \(\Omega\) of the anisotropic \(d=2\) \(\varphi^{4}\) model (2.1) on the couplings \(K_{i,j}\) through the matrix elements of the anisotropy matrix \({\bf A}\), (2.3). A corresponding anisotropy matrix for the Ising model is unknown. Likewise a general condition for weak anisotropy for the Ising model (7.10) analogous to (2.4) for the \(\varphi^{4}\) model is unknown. The significance of our general result (7.22)-(7.25) is that the universal validity of the structure of the correlation function (7.22) and of the reduced anisotropy matrix \(\bar{\bf A}^{\rm Is}(q^{\rm Is},\Omega^{\rm Is})\) no longer rests upon exact calculations within special models on special lattices (such as square, triangular, or honeycomb lattices) or upon the hypothesis of multiparameter universality but is a proven fact that applies to all two-dimensional weakly anisotropic systems of the \((d=2,n=1)\) universality class.
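The two-dimensional reduced anisotropy matrix is simple enough to tabulate directly. The following minimal sketch (ours; the values of \(q\), \(\Omega\), \({\bf x}\), and \(\bar{\xi}\) are placeholders) implements (7.25), verifies (7.23) and \(\det\bar{\bf A}=1\), and evaluates the scaling argument of (7.22).

```python
import numpy as np

def A_bar(q, Omega):
    """Reduced anisotropy matrix (7.25) for the (d=2, n=1) class."""
    c, s = np.cos(Omega), np.sin(Omega)
    return np.array([[q * c**2 + s**2 / q, (q - 1 / q) * c * s],
                     [(q - 1 / q) * c * s, q * s**2 + c**2 / q]])

q, Omega = 1.8, 0.4                                      # placeholder q^Is, Omega^Is
U = np.array([[np.cos(Omega), np.sin(Omega)],
              [-np.sin(Omega), np.cos(Omega)]])          # clockwise rotation (7.13)
lam_bar = np.diag([q, 1.0 / q])                          # reduced rescaling (7.20)

assert np.allclose(np.linalg.inv(U) @ lam_bar @ U, A_bar(q, Omega))   # (7.23)
assert np.isclose(np.linalg.det(A_bar(q, Omega)), 1.0)                # det = 1

# scaling argument of (7.22) for a given point x and mean correlation length
x, xibar = np.array([1.0, 0.5]), 2.0
print(np.sqrt(x @ np.linalg.inv(A_bar(q, Omega)) @ x) / xibar)
```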
This implies that the calculation of the correlation function of any two-dimensional weakly anisotropic system of the Ising universality class no longer requires a new calculation of a scaling function and of an anisotropy matrix but can be restricted to the much simpler task of determining four nonuniversal parameters, namely the thermodynamic amplitude \(\Gamma_{+}^{\rm Is}\) of the susceptibility above \(T_{\rm c}\), the two principal correlation lengths, and the angle \(\Omega^{\rm Is}\) of the principal axes. This conclusion constitutes a fundamental simplification in the analytic theory of two-dimensional anisotropic systems as well as in the analysis of numerical or experimental data of anisotropic correlation functions. We substantiate our findings with the correlation function of the anisotropic triangular-lattice Ising model [38; 39; 73] for which the exact universal and nonuniversal properties have been identified in most cases [14] from the explicit exact results that were given in [38; 39] as a function of the couplings \(E_{i}\). The Hamiltonian of this model reads \[H^{\rm tr}=\sum_{j,k}[-E_{1}\sigma_{j,k}\sigma_{j,k+1}-E_{2}\sigma_{j,k}\sigma_{j+1,k}-E_{3}\sigma_{j,k}\sigma_{j+1,k+1}] \tag{7.26}\] with horizontal, vertical, and diagonal couplings \(E_{1},E_{2},E_{3}\) on a square lattice (see Fig. 2 of [14]). The condition for a ferromagnetic critical point with weak anisotropy is [90] \[E_{1}+E_{2}>0,\;E_{1}+E_{3}>0,\;E_{2}+E_{3}>0. \tag{7.27}\] In this range of the couplings the applicability of our proof of multiparameter universality is guaranteed because of the existence of the principal axes and principal correlation lengths, as shown in [14]. Specifically, (i) the exact form of \(\bar{\mathbf{A}}^{\rm tr}(q^{\rm tr},\Omega^{\rm tr})\) was identified in [14] in terms of \(\hat{S}_{i}=\sinh 2\beta_{c}^{\rm tr}E_{i}\) for general \(E_{i}\), \(i=1,2,3\), (ii) the existence of the two principal axes was verified by the exact determination of the angle \(\Omega^{\rm tr}\) from the two extrema of the angular-dependent correlation length, (iii) the exact ratio \(q^{\rm tr}\) of the principal correlation lengths was determined in [14] in terms of \(\hat{S}_{i}\) for general \(E_{i}\), and (iv) the existence of the mean correlation length \(\bar{\xi}_{\pm}^{\rm tr}(t)\) follows from its identification through Eqs. (52)-(54) of [14] in terms of the exact "scaled variable" \(t^{\rm Vai}\) of Vaidya [39] (denoted by \(t\) in Eq. (10) of [39]). The determination of \(\bar{\xi}_{\pm}^{\rm tr}(t)\) of the Ising model in the full range of (7.27) can be obtained by expanding the scaled variable in Eq. (10) of [39] around \(T_{c}^{\rm tr}\) to leading order in \(|t|=|T-T_{c}^{\rm tr}|/T_{c}^{\rm tr}\), as noted in [14]. More explicitly, the scaled variable of Vaidya is identified according to Eq. (53) of [14] for small \(0<t\ll 1\) as \[t^{\rm Vai}=\frac{R^{\rm tr}(t)}{\bar{\xi}_{+}^{\rm tr}(t)}\longrightarrow\frac{R^{\rm tr}(0)}{\bar{\xi}_{0+}^{\rm tr}}\,t \tag{7.28}\] where the distance variable \(R^{\rm tr}(t)\) in Eq. (11) of [39] has a finite limit \(R^{\rm tr}(0)=\lim_{t\to 0}R^{\rm tr}(t)\). We have verified that the mean correlation length \(\bar{\xi}_{0+}^{\rm tr}\) is determined uniquely as a function of \(E_{i}\) from (7.28) and Eq. (10) of [39] for general \(E_{i}\).
This proves the existence of the two principal correlation lengths \(\xi_{0+}^{(\alpha){\rm tr}}\) according to (7.19) via the combination of \((q^{\rm tr})^{1/2}\), \((q^{\rm tr})^{-1/2}\), and \(\bar{\xi}_{0+}^{\rm tr}\) for general \(E_{i}\). The amplitudes below \(T_{c}\) follow from \(\bar{\xi}_{0+}^{\rm tr}/\bar{\xi}_{0-}^{\rm tr}=2\). The expressions of \(\bar{\xi}_{0+}^{\rm tr}\) and of \(\xi_{0+}^{(\alpha){\rm tr}}\) in terms of \(\hat{S}_{i}\) are derived in [88]. In retrospect, the verification of multiparameter universality of the structure of the correlation function of the special Ising model (7.26) achieved in [14] was to be expected in view of the general proof of the present paper. ## VIII Angular-dependent correlation vector In [14] the notion of an angular-dependent correlation length was introduced. In the following we further develop this notion by introducing the _vector of the angular-dependent correlation length_. Since such vectors exist in all \(d\)-dimensional weakly anisotropic systems including \(\varphi^{4}\), Gaussian, and fixed-length spin models, we suppress the superscripts "Is", "sp", and "G" in the following. Here we confine ourselves to two dimensions. It is of interest to introduce a correlation length as a measure of the spatial range of the critical correlations in a certain spatial direction with an angle \(\theta\) described by a unit vector \[{\bf e}(\theta)=\left(\begin{array}{c}\cos\theta\\ \sin\theta\end{array}\right). \tag{8.1}\] This can be done by introducing polar coordinates \({\bf x}=r\;{\bf e}(\theta)\) for the argument \({\bf x}\) of the anisotropic bulk correlation function \(G({\bf x},t)\). As expected on physical grounds such a correlation length should not depend on the absolute value of \(\theta\) but on the angle \(\theta-\Omega\) relative to the direction \(\Omega\) of the principal correlation lengths. This is indeed the case. By rewriting the argument of the scaling function \(\Psi_{\pm}\) in (7.22) as \[\frac{[{\bf x}\cdot\bar{\bf A}(q,\Omega)^{-1}{\bf x}]^{1/2}}{\bar{\xi}_{\pm}(t)}=\frac{r}{\xi_{\pm}(t,\theta-\Omega,q)} \tag{8.2}\] we obtain from (7.25) the angular-dependent correlation length \[\xi_{\pm}(t,\theta-\Omega,q)=\xi_{0\pm}(\theta-\Omega,q)|t|^{-1}, \tag{8.3}\] \[\xi_{0\pm}(\theta-\Omega,q)=\frac{\bar{\xi}_{0\pm}}{f(\theta-\Omega,q)}, \tag{8.4}\] \[f(\theta-\Omega,q)=[q\sin^{2}(\theta-\Omega)+q^{-1}\cos^{2}(\theta-\Omega)]^{1/2}. \tag{8.5}\] Similarly the prefactor in (7.22) can be rewritten in this form. This yields the alternative representation of the correlation function in polar coordinates \[G({\bf x},t)=\] \[\frac{\Gamma_{+}\big{(}\bar{\xi}_{0+}\big{)}^{-7/4}}{[rf(\theta-\Omega,q)]^{1/4}}\;\Psi_{\pm}\Big{(}\frac{rf(\theta-\Omega,q)}{\bar{\xi}_{\pm}(t)}\Big{)}=\] \[\frac{\Gamma_{+}}{\big{(}\bar{\xi}_{0+}\big{)}^{2}}\Bigg{(}\frac{\xi_{0\pm}(\theta-\Omega,q)}{r}\Bigg{)}^{1/4}\;\Psi_{\pm}\Big{(}\frac{r}{\xi_{\pm}(t,\theta-\Omega,q)}\Big{)}. \tag{8.6}\] The exact angular dependence described by the function \(f(\theta-\Omega,q)\) was first found within the \(\varphi^{4}\) model for general \(K_{i,j}\) and the triangular-lattice Ising model (7.26) in terms of \(\hat{S}_{i}\) for general \(E_{i}\)[14]. Because of the universality of the structure of the reduced anisotropy matrix \(\bar{\bf A}\), the function \(f(\theta-\Omega,q)\) describes a universal structure of the \((\theta-\Omega)\)-dependence for all weakly anisotropic two-dimensional systems including \(\varphi^{4}\), Gaussian, and Ising models.
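The rewriting (8.2) can be checked directly. A minimal sketch (Python; illustrative parameter values) confirms that \([{\bf x}\cdot\bar{\bf A}(q,\Omega)^{-1}{\bf x}]^{1/2}=r\,f(\theta-\Omega,q)\) for \({\bf x}=r\,{\bf e}(\theta)\).

```python
import numpy as np

def f(psi, q):
    # Universal angular function of Eq. (8.5), psi = theta - Omega.
    return np.sqrt(q*np.sin(psi)**2 + np.cos(psi)**2/q)

q, omega, r, theta = 2.0, 0.3, 1.7, 1.1        # illustrative values
c, s = np.cos(omega), np.sin(omega)
A_bar = np.array([[q*c**2 + s**2/q, (q - 1/q)*c*s],
                  [(q - 1/q)*c*s, q*s**2 + c**2/q]])   # Eq. (7.25)
x = r*np.array([np.cos(theta), np.sin(theta)])         # x = r e(theta)

lhs = np.sqrt(x @ np.linalg.inv(A_bar) @ x)            # argument of Psi in (7.22)
rhs = r*f(theta - omega, q)                            # right-hand side of (8.2)
assert np.isclose(lhs, rhs)
```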
The usefulness of this correlation length is its property of having two extrema with respect to \(\theta\) at the angles \[\theta^{(1)}=\Omega,\;\theta^{(2)}=\Omega+\pi/2 \tag{8.7}\] which determine the two principal directions, i. e., \[\xi_{0\pm}(0,q) = \bar{\xi}_{0\pm}q^{1/2}=\xi_{0\pm}^{(1)}, \tag{8.8}\] \[\xi_{0\pm}(\pi/2,q) = \bar{\xi}_{0\pm}q^{-1/2}=\xi_{0\pm}^{(2)}. \tag{8.9}\] Here we extend the definition of the amplitude \(\xi_{0\pm}(\theta-\Omega,q)\) to a definition of an angular-dependent correlation vector for two-dimensional weakly anisotropic systems \[\mathbf{\xi}_{\pm}(t,\theta,\Omega,q) = \mathbf{\xi}_{0\pm}(\theta,\Omega,q)|t|^{-\nu}, \tag{8.10}\] \[\mathbf{\xi}_{0\pm}(\theta,\Omega,q) = \xi_{0\pm}(\theta-\Omega,q)\;{\bf e}(\theta) \tag{8.11}\] that is oriented in the direction of the angle \(\theta\). This is a generalization of the principal correlation vectors (4.60) and (4.81) which are obtained from (8.4)-(8.11) for \(\theta=\theta^{(1)}\) and \(\theta=\theta^{(2)}\). It can be verified that the generalized shear transformation of Sec. IV. C with \(\widehat{\bf U}(\Omega)={\bf U}(\Omega)\) defined in (7.13) indeed transforms the angular-dependent correlation vector (8.11) to a vector with a rescaled isotropic length \(\xi_{0\pm}^{\rm iso}\), \[\widehat{\mathbf{\lambda}}^{-1/2}\;\;\widehat{\bf U}(\Omega)\;\mathbf{\xi}_{0\pm}( \theta,\Omega,q)=\xi_{0\pm}^{\rm iso}\;{\bf e}(\theta-\Omega,q) \tag{8.12}\] with a unit vector \({\bf e}(\theta-\Omega,q)\) whose orientation depends on the relative angle \(\theta-\Omega\), \[{\bf e}(\theta-\Omega,q)=\frac{1}{\big{[}1+q^{2}\tan^{2}(\theta-\Omega)\big{]}^ {1/2}}\left(\begin{array}{c}1\\ q\tan(\theta-\Omega)\end{array}\right), \tag{8.13}\] \[|{\bf e}(\theta-\Omega,q)|=1. \tag{8.14}\] This agrees with the transformation of the principal correlation vectors discussed in Sec. IV. An analogous result is obtained within the \(\varphi^{4}\) and Gaussian models if the special shear transformation (4.1) is applied to \(\mathbf{\xi}_{0\pm}(\theta,\Omega,q)\) where \(\xi^{\rm iso}_{0\pm}\) is replaced by \(\xi^{\prime}_{0\pm}\) (or \(\xi^{\prime G}_{0+}\), respectively), i.e., \[\mathbf{\lambda}^{-1/2}\ \ {\bf U}(\Omega)\ \mathbf{\xi}_{0\pm}(\theta,\Omega,q)= \xi^{\prime}_{0\pm}\ {\bf e}(\theta-\Omega,q) \tag{8.15}\] with the same unit vector \({\bf e}(\theta-\Omega,q)\). The same angular-dependent representation can be employed in the shear transformation applied to any lattice point \({\bf x}\) of the anisotropic system. For the generalized shear transformation (4.89) this yields the transformed vector \[\widehat{\bf x}= \widehat{\mathbf{\lambda}}^{-1/2}\ \ \widehat{\bf U}(\Omega)\ {\bf x}=\frac{\xi^{\rm iso}_{0\pm}}{\xi_{0\pm}(\theta-\Omega,q)}\ |{\bf x}|\ {\bf e}(\theta-\Omega,q) \tag{8.16}\] \[= \frac{\xi^{\rm iso}_{0\pm}}{\xi_{0\pm}}\ |{\bf x}|\ f(\theta-\Omega,q)\ {\bf e}(\theta,q,\Omega) \tag{8.17}\] where the factor \(\xi^{\rm iso}_{0\pm}/\xi_{0\pm}(\theta-\Omega,q)\) describes the amount of rescaling in the direction of \(\theta\). 
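As a consistency check of (8.12)-(8.14), the sketch below (Python; illustrative values, and assuming the diagonal elements \(\widehat{\lambda}_{\alpha}=[\xi^{(\alpha)}_{0\pm}/\xi^{\rm iso}_{0\pm}]^{2}\) of the generalized shear transformation) applies \(\widehat{\mathbf{\lambda}}^{-1/2}\,\widehat{\bf U}(\Omega)\) to the correlation vector (8.11) and verifies that the image has length \(\xi^{\rm iso}_{0\pm}\) and points along the unit vector (8.13).

```python
import numpy as np

q, omega, xi_bar, xi_iso = 2.0, 0.3, 1.0, 0.5   # illustrative values
theta = 1.1
psi = theta - omega

U = np.array([[np.cos(omega), np.sin(omega)],
              [-np.sin(omega), np.cos(omega)]])        # U(Omega), Eq. (7.13)
xi1, xi2 = xi_bar*np.sqrt(q), xi_bar/np.sqrt(q)        # principal amplitudes, Eq. (7.19)
lam_inv_sqrt = np.diag([xi_iso/xi1, xi_iso/xi2])       # hat-lambda^{-1/2} (assumed form)

f = np.sqrt(q*np.sin(psi)**2 + np.cos(psi)**2/q)       # Eq. (8.5)
xi_vec = (xi_bar/f)*np.array([np.cos(theta), np.sin(theta)])   # Eq. (8.11)

image = lam_inv_sqrt @ U @ xi_vec                      # left-hand side of Eq. (8.12)
e_rel = np.array([1.0, q*np.tan(psi)])/np.sqrt(1.0 + (q*np.tan(psi))**2)  # Eq. (8.13)
assert np.isclose(np.linalg.norm(image), xi_iso)       # rescaled isotropic length
assert np.allclose(image/np.linalg.norm(image), e_rel) # direction e(theta-Omega, q)
```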
For the special shear transformation (4.1) of the \(\varphi^{4}\) and Gaussian models the corresponding representation of the vector \({\bf x}^{\prime}\) reads \[{\bf x}^{\prime}= \mathbf{\lambda}^{-1/2}\ \ {\bf U}(\Omega)\ {\bf x}=\frac{\xi^{\prime}_{0\pm}}{\xi_{0\pm}( \theta-\Omega,q)}\ |{\bf x}|\ {\bf e}(\theta-\Omega,q) \tag{8.18}\] \[= \frac{\xi^{\prime}_{0\pm}}{\xi_{0\pm}}\ |{\bf x}|\ f(\theta-\Omega,q)\ {\bf e}(\theta-\Omega,q) \tag{8.19}\] where the rescaling factor is \(\xi^{\prime}_{0\pm}/\xi_{0\pm}(\theta-\Omega,q)\) (or \(\xi^{\prime G}_{0\pm}/\xi^{G}_{0\pm}(\theta-\Omega,q)\) for the Gaussian model). Although the function \(f(\theta-\Omega,q)\) has a universal form describing the angular dependence of the correlation length of all weakly anisotropic two-dimensional systems one should keep in mind that it contains a substantial source of an intrinsic diversity through the dependence on the nonuniversal angle \(\Omega\) and the ratio \(q\). Both \(\Omega\) and \(q\) depend on all microscopic details. Within the \(\varphi^{4}\) theory \(q\) and \(\Omega\) are well defined via the eigenvalues and the orientation of the eigenvectors of the anisotropy matrix \({\bf A}\)[14] but they are unknown for, e. g., an unlimited number of weakly anisotropic Ising models on various lattices with short range interactions. We are not aware of an analytic approach to determining the principal axes of such systems. This serious lack of knowledge is a nontrivial source of nonuniversality in the physics of weakly anisotropic systems. It causes a directional nonuniversality of two- and three-dimensional correlation functions [14] that has not been anticipated in the traditional theory where it was believed that weak anisotropy enters only the amplitudes of power laws [23]. As long as the principal axes and correlation lengths are unknown for an anisotropic system no appropriate rotation matrix can be defined, and not only the required _amount_ of rescaling is unknown but also the _directions_ are unknown along which the rescaling should be performed. Thus, in general, the effects of weak anisotropy cannot be simply "transformed away" by a "trivial rescaling". This problem has not been adequately addressed in the earlier literature on weakly anisotropic bulk and confined systems [34; 37; 38; 85; 23]. In particular in confined systems these anisotropy effects may become unexpectedly complex [15]. The notion of an angular-dependent representation of correlation vectors and lattice points is applicable to both bulk and confined systems with arbitrary boundary conditions. This is of relevance to the application of the shear transformation to the boundaries of finite systems as will be further discussed in Sec. IX. A. ## IX Critical Casimir forces in anisotropic systems ### Proof of multiparameter universality of the critical Casimir amplitude in two dimensions Recently exact results have been derived for the critical free energy and the ensuing critical Casimir amplitude of the anisotropic \(\varphi^{4}\) model at \(T_{c}\) on a finite rectangle with periodic boundary conditions [15]. Surprisingly complex finite-size effects were found near the instability where weak anisotropy breaks down. 
These exact results were based on conformal field theory [67] and the principle of two-scale-factor universality [17; 22] for finite isotropic systems combined with the special shear transformation (4.1)-(4.3) of the anisotropic \(\varphi^{4}\) model on a rectangle to an isotropic \(\varphi^{4}\) model on a parallelogram. Corresponding predictions were presented for the finite anisotropic triangular-lattice Ising model [14; 39] on the basis of the assumption that multiparameter universality [13] is valid for this model but no proof was given. A quantitative test was performed [16] by high-precision Monte Carlo simulations for a special Ising model on a square with a diagonal anisotropic coupling and remarkable agreement was found with the predicted critical amplitude \({\cal F}^{\rm Is}_{c}\) of the free energy at \(T_{c}\). It was noted [15] that the assumption of multiparameter universality for the anisotropic triangular-lattice Ising model is equivalent to an "effective shear transformation" between the isotropic Ising model on a parallelogram and the anisotropic Ising model on a rectangle but this effective shear transformation was not specified. Here we shall identify this effective shear transformation together with a proof of the validity of multiparameter universality for the critical free energy \({\cal F}^{\rm Is}_{c}\) of the anisotropic Ising model. Our proof is based on (a) the generalized shear transformations (4.89) and (8.16) applied to the boundaries of the Ising model, (b) the invariance of the critical free energy under this shear transformation as discussed in Sec. II. B, (c) the inversion of this shear transformation. The strategy is to perform the generalized shear transformation from the anisotropic rectangle to an isotropic parallelogram, to take the exact critical free energy on the parallelogram from isotropic CFT [67], and to determine the critical free energy on the anisotropic rectangle by means of inverting the shear transformation. This can be done without recourse to the \(\varphi^{4}\) model and no assumptions are needed other than the existence of weakly anisotropic behavior, i.e., the existence of principal axes with the angle \(\Omega^{\rm Is}\) and the ratio \(q^{\rm Is}\) of the principal correlation lengths. We recall that both \(\Omega^{\rm Is}\) and \(q^{\rm Is}\) are known exactly for the triangular-lattice Ising model (7.26) [14]. We assume a finite \(L_{\parallel}\times L_{\perp}\) rectangle spanned by the vectors \({\bf L}_{\parallel}=L_{\parallel}\,(1,0)\) and \({\bf L}_{\perp}=L_{\perp}\,(0,1)\) in the horizontal and vertical directions with the aspect ratio \[\rho=L_{\perp}/L_{\parallel}. \tag{9.1}\] We apply our generalized shear transformation (4.89) directly to the anisotropic Ising model on the rectangle in order to obtain an isotropic Ising model on a parallelogram. The transformation applied to the boundaries of the rectangle reads \[{\bf L}_{p\parallel} = (\mathbf{\lambda}^{\rm Is})^{-1/2}{\bf U}(\Omega^{\rm Is}){\bf L}_{\parallel}, \tag{9.2}\] \[{\bf L}_{p\perp} = (\mathbf{\lambda}^{\rm Is})^{-1/2}{\bf U}(\Omega^{\rm Is}){\bf L}_{\perp}, \tag{9.3}\] where \({\bf U}(\Omega^{\rm Is})\) is given by (7.13) and \[\mathbf{\lambda}^{\rm Is}=\left(\begin{array}{cc}\lambda_{1}^{\rm Is}&0\\ 0&\lambda_{2}^{\rm Is}\end{array}\right) \tag{9.4}\] with the diagonal elements \[\lambda_{1}^{\rm Is}=\big{[}\xi_{0\pm}^{(1){\rm Is}}/\xi_{0\pm}^{\rm iso}\big{]}^{2},\quad\lambda_{2}^{\rm Is}=\big{[}\xi_{0\pm}^{(2){\rm Is}}/\xi_{0\pm}^{\rm iso}\big{]}^{2} \tag{9.5}\] (compare (4.88)).
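A short numerical sketch of (9.1)-(9.5) (Python; the edge lengths, \(q^{\rm Is}\), \(\Omega^{\rm Is}\), and correlation amplitudes are illustrative placeholders) transforms the rectangle edges into the parallelogram vectors and previews the transformed aspect ratio and angle discussed below.

```python
import numpy as np

Lpar, Lperp = 1.0, 0.8                   # rectangle edges, aspect ratio rho = 0.8
q, omega = 2.0, 0.3                      # illustrative q^Is, Omega^Is
xi_bar, xi_iso = 1.0, 0.5                # illustrative correlation amplitudes

U = np.array([[np.cos(omega), np.sin(omega)],
              [-np.sin(omega), np.cos(omega)]])          # U(Omega^Is), Eq. (7.13)
xi1, xi2 = xi_bar*np.sqrt(q), xi_bar/np.sqrt(q)          # principal amplitudes, Eq. (7.19)
lam_inv_sqrt = np.diag([xi_iso/xi1, xi_iso/xi2])         # (lambda^Is)^{-1/2}, Eqs. (9.4)-(9.5)

L_p_par = lam_inv_sqrt @ U @ np.array([Lpar, 0.0])       # Eq. (9.2)
L_p_perp = lam_inv_sqrt @ U @ np.array([0.0, Lperp])     # Eq. (9.3)
rho_p = np.linalg.norm(L_p_perp)/np.linalg.norm(L_p_par)           # transformed aspect ratio
cos_alpha = L_p_perp @ L_p_par/(np.linalg.norm(L_p_perp)*np.linalg.norm(L_p_par))
print(rho_p, np.degrees(np.arccos(cos_alpha)))           # parallelogram parameters
```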
This generates an isotropic Ising model with the bulk correlation-length amplitude \(\xi_{0\pm}^{\rm iso}\) on a parallelogram spanned by the vectors \({\bf L}_{p\parallel}\) and \({\bf L}_{p\perp}\). From the general transformation formulae (8.16), (8.4), (8.5), and (8.13) we obtain for \(\theta=0\) and \(\theta=\pi/2\) \[{\bf L}_{p\parallel} = L_{\parallel}\,\frac{\xi_{0\pm}^{\rm iso}/\xi_{0\pm}^{\rm Is}(-\Omega^{\rm Is},q^{\rm Is})}{\left[1+(q^{\rm Is})^{2}\tan^{2}\Omega^{\rm Is}\right]^{1/2}}\left(\begin{array}{c}1\\ -q^{\rm Is}\tan\Omega^{\rm Is}\end{array}\right), \tag{9.6}\] \[{\bf L}_{p\perp} = L_{\perp}\,\frac{\xi_{0\pm}^{\rm iso}/\xi_{0\pm}^{\rm Is}(\pi/2-\Omega^{\rm Is},q^{\rm Is})}{\left[1+(q^{\rm Is})^{2}\cot^{2}\Omega^{\rm Is}\right]^{1/2}}\left(\begin{array}{c}1\\ q^{\rm Is}\cot\Omega^{\rm Is}\end{array}\right), \tag{9.7}\] with \[|{\bf L}_{p\parallel}| = L_{\parallel}\,\xi_{0\pm}^{\rm iso}/\xi_{0\pm}^{\rm Is}(-\Omega^{\rm Is},q^{\rm Is}), \tag{9.8}\] \[|{\bf L}_{p\perp}| = L_{\perp}\,\xi_{0\pm}^{\rm iso}/\xi_{0\pm}^{\rm Is}(\pi/2-\Omega^{\rm Is},q^{\rm Is}), \tag{9.9}\] where we have used \(\tan(\pi/2-\Omega)=\cot\Omega\). We see that the ratio \(\xi_{0\pm}^{\rm iso}/\xi_{0\pm}^{\rm Is}(\theta-\Omega^{\rm Is},q^{\rm Is})\) determines the rescaling of the lengths in the directions \(\theta=0\) and \(\theta=\pi/2\). The parallelogram is characterized by the transformed aspect ratio \[\rho_{\rm p}=|{\bf L}_{p\perp}|/|{\bf L}_{p\parallel}| \tag{9.10}\] and by the angle \(\alpha\) between the vectors \({\bf L}_{p\perp}\) and \({\bf L}_{p\parallel}\) (Fig. 2 of [15]). This angle is related to \({\bf L}_{p\perp}\) and \({\bf L}_{p\parallel}\) by \[\cos\alpha=\frac{{\bf L}_{p\perp}\cdot{\bf L}_{p\parallel}}{|{\bf L}_{p\perp}||{\bf L}_{p\parallel}|}. \tag{9.11}\] Substituting (9.6) and (9.7) into (9.10) and (9.11) and using (8.4) and (8.5) we obtain the transformed aspect ratio \[\rho_{\rm p}(\rho,q^{\rm Is},\Omega^{\rm Is}) = \frac{L_{\perp}/\xi_{0+}^{\rm Is}(\pi/2-\Omega^{\rm Is},q^{\rm Is})}{L_{\parallel}/\xi_{0+}^{\rm Is}(-\Omega^{\rm Is},q^{\rm Is})} \tag{9.12}\] \[= \rho\left[\frac{\tan^{2}\Omega^{\rm Is}+(q^{\rm Is})^{2}}{1+(q^{\rm Is})^{2}\tan^{2}\Omega^{\rm Is}}\right]^{1/2} \tag{9.13}\] \[= \rho\left[\frac{\bar{\bf A}(q^{\rm Is},\Omega^{\rm Is})_{11}}{\bar{\bf A}(q^{\rm Is},\Omega^{\rm Is})_{22}}\right]^{1/2} \tag{9.14}\] and the angle \(\alpha\) determined by \[\cot\alpha(q^{\rm Is},\Omega^{\rm Is}) = \frac{(q^{\rm Is})^{-1}-q^{\rm Is}\tan\Omega^{\rm Is}\,\tan(\pi/2-\Omega^{\rm Is})}{\tan\Omega^{\rm Is}+\tan(\pi/2-\Omega^{\rm Is})} \tag{9.15}\] \[= [(q^{\rm Is})^{-1}-q^{\rm Is}]\cos\Omega^{\rm Is}\sin\Omega^{\rm Is} \tag{9.16}\] \[= -\bar{\bf A}(q^{\rm Is},\Omega^{\rm Is})_{12} \tag{9.17}\] where the matrix elements \(\bar{\bf A}(q^{\rm Is},\Omega^{\rm Is})_{\alpha\beta}\) are defined in (7.23)-(7.25). We note that the free parameter \(\xi_{0\pm}^{\rm iso}\) in (9.6) and (9.7) is canceled in all subsequent equations, as expected. No specific properties of the weakly anisotropic Ising model were needed in the derivation of (9.12)-(9.17), thus these relations have a universal structure. In Eqs. (7) and (8) of [15] analogous formulae corresponding to (9.13), (9.14) and (9.16), (9.17), but with \(q^{\rm Is},\Omega^{\rm Is}\) replaced by \(q,\Omega\), were first presented for the \(\varphi^{4}\) model. These formulae were obtained on the basis of geometric considerations as an extended version of the derivation in the context of Fig. 2 in [4] for the \(\varphi^{4}\) model (see also Eqs.
(16) and (17) of [4] for the special shear transformation from an anisotropic square to an isotropic rhombus for \(\Omega=\pi/4\)). These formulae for the \(\varphi^{4}\) model were then adopted in [15] for the Ising model by the substitution \(q\to q^{\rm Is},\Omega\to\Omega^{\rm Is}\) on the basis of the assumption that multiparameter universality is valid for the Ising model. Here we have provided an exact analytic derivation for this substitution directly within the Ising model, without recourse to the \(\varphi^{4}\) model, through the general transformation formula (8.16) derived from our generalized shear transformation (4.89). Thus this transformation between anisotropic and isotropic Ising models specifies what was called "effective shear transformation" in Fig. 1 of [15]. The critical free energy of the transformed isotropic Ising model on the parallelogram is denoted by \({\cal F}_{c}^{\rm Is,iso}\). As noted in Sec. III. D, two-scale-factor universality implies that \({\cal F}_{c}^{\rm Is,iso}\) is universal, i.e., it has a universal dependence on the geometric parameters \(\alpha\) and \(\rho_{\rm p}\) of the parallelogram. Since the critical free energy \({\cal F}_{c}^{\rm Is}\) on the anisotropic rectangle remains invariant under this pure coordinate transformation (see Sec. II. B) we obtain \[{\cal F}_{c}^{\rm Is}={\cal F}_{c}^{\rm Is,iso}(\alpha,\rho_{\rm p}). \tag{9.18}\] As pointed out in [15] the exact result for \({\cal F}_{c}^{\rm Is,iso}(\alpha,\rho_{\rm p})\) for periodic boundary conditions can be taken directly from the critical free energy \({\cal F}_{c}^{\rm CFT}(\tau)\) of conformal field theory for the isotropic Ising model on a torus [67] \[{\cal F}_{c}^{\rm Is,iso}(\alpha,\rho_{\rm p}) = {\cal F}_{c}^{\rm CFT}(\tau)=-\ln Z^{\rm CFT}(\tau), \tag{9.19}\] \[\tau(\alpha,\rho_{\rm p}) = {\rm Re}\ \tau+i\ {\rm Im}\ \tau=\rho_{\rm p}\exp(i\ \alpha) \tag{9.20}\] which is characterized by the complex torus modular parameter \(\tau(\alpha,\rho_{\rm p})\). The \(\tau\)-dependence of \({\cal F}_{c}^{\rm CFT}(\tau)\) is universal. The partition function \(Z^{\rm CFT}(\tau)\) is expressed in terms of Jacobi theta functions \(\theta_{i}(0|\tau)\equiv\theta_{i}(\tau)\) as [67] \[Z^{\rm CFT}(\tau)=\big{(}|\theta_{2}(\tau)|+|\theta_{3}(\tau)|+|\theta_{4}(\tau)|\big{)}/\big{(}2|\eta(\tau)|\big{)}, \tag{9.21}\] with \(\eta(\tau)=\Big{[}\frac{1}{2}\theta_{2}(\tau)\theta_{3}(\tau)\theta_{4}(\tau)\Big{]}^{1/3}\). The crucial step is to transfer this exact information directly from the isotropic Ising model to the anisotropic Ising model by means of inverting the generalized shear transformation, without recourse to the \(\varphi^{4}\) model. This is achieved by defining the \((q^{\rm Is},\Omega^{\rm Is})\)-dependent quantity \[\tau(\rho,q^{\rm Is},\Omega^{\rm Is})=\tau\big{(}\alpha(q^{\rm Is},\Omega^{\rm Is}),\rho_{\rm p}(\rho,q^{\rm Is},\Omega^{\rm Is})\big{)} \tag{9.22}\] with \(\alpha(q^{\rm Is},\Omega^{\rm Is})\) and \(\rho_{\rm p}(\rho,q^{\rm Is},\Omega^{\rm Is})\) given by (9.15)-(9.17) and (9.12)-(9.14) and by substituting (9.22) into the isotropic formula (9.19).
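To illustrate the chain (9.12)-(9.22), the following sketch (Python with mpmath; illustrative values of \(\rho\), \(q^{\rm Is}\), \(\Omega^{\rm Is}\)) computes \(\rho_{\rm p}\) from (9.13), \(\alpha\) from (9.16), the modular parameter \(\tau\) from (9.20) and (9.22), and the critical free energy \(-\ln Z^{\rm CFT}(\tau)\) from the theta-function expression (9.21).

```python
import mpmath as mp

def tau_parameter(rho, q, omega):
    # rho_p from Eq. (9.13), alpha from Eq. (9.16), combined as in Eqs. (9.20)/(9.22).
    t2 = mp.tan(omega)**2
    rho_p = rho*mp.sqrt((t2 + q**2)/(1 + q**2*t2))
    cot_alpha = (1/q - q)*mp.cos(omega)*mp.sin(omega)
    alpha = mp.atan2(1, cot_alpha)        # angle of the parallelogram, 0 < alpha < pi
    return rho_p*mp.exp(1j*alpha)

def critical_free_energy(tau):
    # F_c = -ln Z^CFT(tau), Eqs. (9.19) and (9.21); requires Im(tau) > 0.
    nome = mp.exp(1j*mp.pi*tau)           # theta-function nome exp(i*pi*tau)
    th2, th3, th4 = [mp.jtheta(n, 0, nome) for n in (2, 3, 4)]
    eta = (th2*th3*th4/2)**(mp.mpf(1)/3)
    Z = (abs(th2) + abs(th3) + abs(th4))/(2*abs(eta))
    return -mp.log(Z)

print(critical_free_energy(tau_parameter(rho=1.0, q=2.0, omega=0.3)))
# Isotropic check: q = 1 gives alpha = pi/2 and rho_p = rho (a square torus).
print(critical_free_energy(tau_parameter(1.0, 1.0, 0.3)))
```

The Casimir amplitude discussed below then follows from a numerical \(\rho\)-derivative of this free energy.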
Using (9.18) we then obtain the exact result for the critical free energy \({\cal F}_{c}^{\rm Is}\) and the Casimir amplitude \(X_{c}^{\rm Is}\) of the anisotropic Ising model on the rectangle as \[{\cal F}_{c}^{\rm Is}(\rho,q^{\rm Is},\Omega^{\rm Is}) = -\ln Z^{\rm CFT}\big{(}\tau(\rho,q^{\rm Is},\Omega^{\rm Is})\big{)}, \tag{9.23}\] \[X_{c}^{\rm Is}(\rho,q^{\rm Is},\Omega^{\rm Is}) = -\rho^{2}\;\partial{\cal F}_{c}^{\rm Is}(\rho,q^{\rm Is},\Omega^{\rm Is})/\partial\rho \tag{9.24}\] with two nonuniversal parameters \(q^{\rm Is},\Omega^{\rm Is}\). Going from (9.19) to (9.23) is equivalent to performing a nonuniversal inverse shear transformation from the isotropic to the anisotropic Ising model. The difference between (9.19) and (9.23) is that the former relation contains a universal function of the geometric variables \(\alpha\) and \(\rho_{\rm p}\) in agreement with two-scale factor universality whereas (9.23) contains a universal function reflecting multiparameter universality with the two nonuniversal anisotropy parameters \(q^{\rm Is}\) and \(\Omega^{\rm Is}\). They are introduced via the nonuniversal anisotropy parameters contained in the shear transformation formulae (9.2)-(9.5). The same structure was derived for the \(\varphi^{4}\) model in [15]. This proves the validity of the predictions of [15] not only for the triangular lattice Ising model but more generally for all other weakly anisotropic systems that belong to the \(d=2,n=1\) universality class. In particular this establishes the validity of the self-similar structures of the critical free energy and the Casimir amplitude discovered in [15] for all weakly anisotropic systems with periodic boundary conditions in this universality class. As noted in the context of the anisotropic _bulk_ correlation function, there exists a corresponding intrinsic diversity in _confined_ anisotropic systems as compared to confined isotropic systems: It arises from the nonuniversal parameters \(q,\Omega\) and \(q^{\rm Is},\Omega^{\rm Is}\) that do not exist in isotropic systems. Furthermore there is a basic difference between \(q(\{K_{i,j}\})\) and \(\Omega(\{K_{i,j}\})\) of the \(\varphi^{4}\) model (which are known exactly as functions of \(K_{i,j}\)[14]) and \(q^{\rm Is}(\{E_{i,j}\})\) and \(\Omega^{\rm Is}(\{E_{i,j}\})\) of the Ising model which are generically unknown for the general Ising model (7.10). We conclude that weak anisotropy destroys the universality of the critical Casimir amplitude \(X_{c}\) of the subclass of isotropic systems and makes \(X_{c}\) an unknown quantity for general anisotropic systems whose principal angles and correlation lengths are unknown. The triangular-lattice Ising model [14; 39] is a very rare example of a system other than the \(\varphi^{4}\) model for which these parameters are known exactly as a function of the microscopic couplings. It would be worthwhile to extend this knowledge to other anisotropic Ising models [41; 42; 43; 44] whose relevant anisotropy parameters are as yet unknown. ### Critical Casimir forces in anisotropic superconductors An experimental verification of critical Casimir forces has been achieved so far only in isotropic systems, most prominently in superfluid \({}^{4}\)He films [91]. It has been pointed out by Williams [92] that a measurable critical Casimir force should occur also in superconducting films. Superconductors belong to the same universality class as superfluid \({}^{4}\)He and have the same (Dirichlet) boundary conditions but are anisotropic.
Williams' scenario is the following: a superconducting film is connected to a bulk sample of the same material. He argues that below the bulk critical temperature the film-bulk system can lower its free energy by a transfer of electrons (Cooper pairs) from the film to the bulk system which is analogous to helium atoms moving from the film to the superfluid bulk reservoir. While in the helium system this leads to a thinning of the film, the corresponding effect in the superconducting film-bulk system is a transfer of negative electrical charge from the film to the bulk system. Williams [92] argues that this gives rise to an electrical potential difference which can be related to the free-energy difference (per unit area) between the film and the bulk and from which a Casimir force can be derived. He estimates the voltage difference to have a measurable magnitude. The critical Casimir force is an observable only if the ordering degrees of freedom can enter and leave the system. Therefore it has been claimed in the literature [86; 93; 94; 95; 96; 97; 98; 99] that this force can be active only in isotropic fluids and that the issue of spatial anisotropy is not relevant in the context of the critical Casimir force. We argue that, unlike the localized degrees of freedom (magnetic moments) of the order parameter of an anisotropic magnetic material, the ordering degrees of freedom (Cooper pairs) of a superconductor are not localized at lattice points but play the role of an electrical superfluid in an _anisotropic_ environment that can leave and enter the film connected to the bulk of the same material, as anticipated by Williams [92]. So far no specific objection has been raised in the literature against this specific argumentation for superconductors, and in a comment [96] on [92] the measurability of the critical Casimir force in superconductors has not been questioned. Furthermore, we point to the largely unexplored area of thermodynamic Casimir forces in liquid crystals [97] which exhibit a wide variety of spatial anisotropy and whose ordering degrees of freedom can leave and enter the system. In closing we note that experimental studies, Monte Carlo simulations, and further theoretical research are called for in view of the fact that at present no experimental or Monte Carlo data and no analytic predictions are available for the critical Casimir force in anisotropic systems with realistic boundary conditions. Also theoretical efforts based on the functional renormalization group [57] applied to anisotropic confined systems could yield important contributions to this matter. Analytic results for the critical Casimir force in anisotropic films of finite thickness have been presented previously [13] for the case of periodic boundary conditions which refute earlier results [92] where no anisotropy effect in anisotropic superconductors near \(T_{c}\) was found. An analytic renormalization-group study in three dimensions with realistic Dirichlet boundary conditions below \(T_{c}\) without adjustable parameters was performed [98] that explains the depth and position of the deep minimum of the Casimir force scaling function observed in isotropic \({}^{4}\)He films [91] in the temperature regime \(T_{c,{\rm film}}<T<T_{c}\) on a semiquantitative level. It is conceivable that this isotropic study can be extended to anisotropic film systems with Dirichlet boundary conditions which could lead to quantitative predictions of the critical Casimir force in real superconductors.
## X Summary We have presented a general theory of bulk critical phenomena in weakly anisotropic \(d\)-dimensional \(O(n)\)-symmetric systems in \(2\leq d<4\) dimensions where our only assumptions are the existence of \(d\) principal axes and correlation lengths and the validity of two-scale-factor universality for isotropic systems. Our general conclusions with regard to the validity of multiparameter universality confirm and specify the early "belief in some form of universality not only for the two-dimensional Ising model but also for a large class of two-dimensional models with short-range interactions" [38]. Our findings are supported by exact results for the \(d=2,n=1\) universality class and for the spherical and Gaussian universality classes in \(d\geq 2\) dimensions. On the other hand our theory reveals a high degree of intrinsic diversity in weakly anisotropic systems even in the asymptotic critical region. This limitation of universality was not anticipated in the traditional theory of critical phenomena and in the more recent development of the functional renormalization group. Furthermore we have applied our theory to finite-size effects at \(T_{c}\) of the critical free energy and Casimir amplitude of anisotropic systems on a rectangle with periodic boundary conditions of the \(d=2,n=1\) universality class studied recently [15]. Our main results are summarized in more detail as follows. (i) After defining the anisotropic \(O(n)\)-symmetric \(\varphi^{4}\) and \(n\)-vector models in Sec. II and summarizing several aspects of two-scale-factor universality in Sec. III we have introduced in Sec. IV a generalized shear transformation that provides exact relations between weakly anisotropic systems and isotropic systems in the same universality class. We have identified a temperature-independent universal structure of a reduced anisotropy matrix \(\overline{\bf A}\) that depends on the ratios of principal correlation lengths and on the principal unit vectors describing the principal axes. The latter quantities depend on microscopic details such as coupling constants and the lattice structure, thus \(\overline{\bf A}\) is a nonuniversal quantity that does not exist in isotropic systems. (ii) In Sec. V a proof has been presented for the validity of multiparameter universality of the bulk order-parameter correlation function in weakly anisotropic systems. It implies that the traditional notion of a universality class of critical phenomena must be revised in that it must be divided into subclasses of isotropic and weakly anisotropic systems. The latter have up to \(d(d+1)/2+1\) independent nonuniversal parameters. Only two of these parameters can be expressed in terms of thermodynamic amplitudes whereas the remaining \(d(d+1)/2-1\) parameters enter the matrix \(\overline{\bf A}\). We have also determined the exact structure of the anisotropic bulk order-parameter correlation function for \(n>1\) in the Goldstone regime below \(T_{c}\). Exact anisotropic results are presented in the large-\(n\) limit and for the Gaussian model. From the nonasymptotic nonuniversal result of the functional renormalization group [54; 55] we have identified the universal part of the isotropic correlation function of the \(n=1\) Ising universality class in three dimensions. We have refuted the claim that an extended universality [54; 55] is valid in the nonasymptotic region.
Our theory provides the opportunity of a quantitative comparison with the universal parts of numerical or experimental data in two-dimensional and three-dimensional anisotropic systems after the nonuniversal parameters have been determined. (iii) In Sec. VI a bulk scaling variable \(\widetilde{x}\), (6.12), has been introduced for general \(n\) in \(2\leq d<4\) dimensions which is invariant under the shear transformation. It permits us to represent the singular bulk part of the free energy of isotropic and anisotropic systems in a compact form. We have also shown the validity of multiparameter universality of several critical bulk amplitude relations. In particular the amplitude of the bulk specific heat of anisotropic systems is shown to be universally related to the mean correlation length. (iv) In Sec. VII our theory has been applied to two dimensions. The significance of our general results is that the universal validity of the structure of the correlation function no longer rests upon exact calculations within special models on special lattices or upon the hypothesis of multiparameter universality but is a proven fact that applies to all two-dimensional weakly anisotropic systems of the \((d=2,n=1)\) universality class. This constitutes a fundamental simplification in the analytic theory of two-dimensional anisotropic systems as well as in the analysis of numerical or experimental data. (v) In Sec. VIII the anisotropic correlation function is written in terms of polar coordinates. An angular-dependent correlation vector is introduced for all two-dimensional weakly anisotropic systems. Angular-dependent formulae of the shear transformation are derived that are applicable to any lattice vector in bulk and confined systems. (vi) Application of these formulae to the boundaries of a finite anisotropic rectangle in Sec. IX provides an analytic derivation of the aspect ratio and the angle of the transformed isotropic parallelogram. Combining this result with the exact partition function of conformal field theory of the isotropic two-dimensional Ising model on a torus at \(T_{c}\) proves the validity of the predictions of [15] with regard to multiparameter universality and self-similar structures of the critical free energy and the Casimir amplitude not only for the triangular lattice Ising model but more generally for all weakly anisotropic Ising models and other systems with periodic boundary conditions that belong to the \(d=2,n=1\) universality class. In particular this identifies the previous "effective shear transformation" [15] between anisotropic and isotropic two-dimensional Ising models. (vii) We have not made progress in developing a systematic approach to determining the principal axes of weakly anisotropic systems other than \(\varphi^{4}\) models. These axes are of fundamental physical importance in real systems. So far the angles describing the principal directions, e.g., of the \(n\)-vector model, depend in an unknown way on the microscopic anisotropic interactions which must be determined for each special anisotropic system under consideration. This lack of knowledge is not of a harmless kind and constitutes a major challenge to future research. 
Significant issues of weak anisotropy were not yet addressed in this paper and call for further research in several directions, e.g., extensions to (a) bulk and finite-size properties of other lattice systems [41; 42; 43; 44; 45] and other models [71] with both isotropic and weakly anisotropic interactions, to be analyzed in the framework of two-scale-factor universality and multiparameter universality, (b) angular-dependent representations of correlation lengths and correlation functions in three dimensions for general \(n\), (c) other geometries beyond rectangles, (d) effects of an ordering field, (e) anisotropic effects near the Kosterlitz-Thouless transition of systems in the \(d=2,n=2\) universality class, (f) finite-size effects in weakly anisotropic systems away from \(T_{c}\), (g) crossover from weak to strong anisotropy, (h) finite-size effects in anisotropic systems in the presence of non-periodic boundary conditions. We consider item (h) to be most important as it is relevant for applications to real systems such as magnetic materials and superconductors which require a description with free or Dirichlet boundary conditions. ## Appendix A Relation between \(Q_{3}\) and \(\widetilde{Q}_{3}\) In the following the relation (3.48) between the universal constants \(Q_{3}\) and \(\widetilde{Q}_{3}\) is derived [99]. The isotropic bulk correlation functions (3.41) and (3.47) at \(T_{c}\) \[G_{c}(|{\bf x}|) = \frac{D_{c}}{|{\bf x}|^{d-2+\eta}}\,,\] (A.1) \[\hat{G}_{c}(|{\bf k}|) = \frac{\hat{D}_{c}}{|{\bf k}|^{2-\eta}}\,,\] (A.2) \[\frac{D_{c}}{\hat{D}_{c}} = \frac{\widetilde{Q}_{3}}{Q_{3}}\,,\] (A.3) are related by the Fourier transformation \[G_{c}(|{\bf x}|)=\int_{\bf k}\,e^{i{\bf k}\cdot{\bf x}}\hat{G}_{c}(|{\bf k}|)\;,\] (A.4) where \(\int_{\bf k}\) stands for \((2\pi)^{-d}\int d^{d}k\) with an infinite cutoff. We decompose \({\bf k}={\bf k_{0}}+{\bf q}\) where \({\bf k_{0}}\) and \({\bf q}\) are parallel and perpendicular to \({\bf x}\), respectively. Then the ratio \(D_{c}/\hat{D}_{c}\) is determined by \[\int_{\bf k}\frac{e^{i{\bf k}\cdot{\bf x}}}{|{\bf k}|^{2-\eta}}= \int_{-\infty}^{\infty}\frac{dk_{0}}{2\pi}e^{ik_{0}|{\bf x}|}\int\frac{d^{d-1} q}{(2\pi)^{d-1}}\frac{1}{(k_{0}^{2}+q^{2})^{(2-\eta)/2}}\] \[= \frac{1}{|{\bf x}|^{d-2+\eta}}\int_{-\infty}^{\infty}\frac{dy}{2 \pi}\frac{e^{iy}}{|y|^{3-d-\eta}}\int\frac{d^{d-1}q}{(2\pi)^{d-1}}\frac{1}{(1+ q^{2})^{(2-\eta)/2}}\] \[= \frac{1}{|{\bf x}|^{d-2+\eta}}\ \frac{-2\Gamma(d-2+\eta)\,\cos[(d+ \eta)\pi/2]}{2\pi}\] \[\times \frac{2}{(4\pi)^{(d-1)/2}\Gamma[(d-1)/2]}\,\frac{\Gamma[(d-1)/2] \,\,\Gamma[(3-d-\eta)/2]}{2\Gamma(1-\eta/2)}\] \[= \frac{\sin[(3-d-\eta)\pi/2]\,\,\Gamma[(3-d-\eta)/2]\,\,\Gamma(d- 2+\eta)}{|{\bf x}|^{d-2+\eta}\,\pi(4\pi)^{(d-1)/2}\,\,\Gamma(1-\eta/2)}\] \[= \frac{1}{|{\bf x}|^{d-2+\eta}}\ \frac{\Gamma(d-2+\eta)}{(4\pi)^{(d-1)/2} \,\,\Gamma(1-\eta/2)\,\,\Gamma[(d-1+\eta)/2]}\] \[= \frac{1}{|{\bf x}|^{d-2+\eta}}\ \frac{2^{d-2+\eta}\,\Gamma[(d-2+\eta)/2]} {(4\pi)^{d/2}\,\,\Gamma[(2-\eta)/2]}\] (A.5) \[= \frac{D_{c}}{\hat{D}_{c}}\,\frac{1}{|{\bf x}|^{d-2+\eta}}\;,\] (A.6) which yields (3.48). ## Acknowledgment I thank F. Kischel and S. Wessel for useful discussions and J. H. H. Perk for calling attention to Refs. [41; 42; 43; 44].
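As a numerical postscript to Appendix A, the chain of identities in (A.5) can be spot-checked. The sketch below (Python with scipy) compares the two closed forms for \(D_{c}/\hat{D}_{c}\) from the last two steps of (A.5) and reproduces the familiar value \(1/(4\pi)\) for \(d=3\), \(\eta=0\).

```python
import numpy as np
from scipy.special import gamma

def ratio_form1(d, eta):
    # Next-to-last closed form in the chain (A.5).
    return gamma(d - 2 + eta)/((4*np.pi)**((d - 1)/2)
                               * gamma(1 - eta/2)*gamma((d - 1 + eta)/2))

def ratio_form2(d, eta):
    # Final form, i.e. D_c/D_c-hat of (A.6), which yields (3.48).
    return 2**(d - 2 + eta)*gamma((d - 2 + eta)/2)/((4*np.pi)**(d/2)
                                                    * gamma((2 - eta)/2))

for d, eta in [(3, 0.0), (3, 0.036), (2, 0.25)]:
    assert np.isclose(ratio_form1(d, eta), ratio_form2(d, eta))
print(ratio_form2(3, 0.0), 1/(4*np.pi))   # both equal 1/(4*pi) for d=3, eta=0
```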
2302.03988
On Cosmological Low Entropy After the Big Bang: Universal Expansion and Nucleosynthesis
We investigate the sensitivity of a universe's nuclear entropy after Big Bang nucleosynthesis (BBN) to variations in both the baryon-to-photon ratio and the temporal evolution of cosmological expansion. Specifically, we construct counterfactual cosmologies to quantify the degree by which these two parameters must vary from those in our Universe before we observe a substantial change in the degree of fusion, and thus nuclear entropy, during BBN. We find that, while the post-BBN nuclear entropy is indeed linked to baryogenesis and the Universe's expansion history, the requirement of leftover light elements does not place strong constraints on the properties of these two cosmological processes.
Charlie F. Sharpe, Luke A. Barnes, Geraint F. Lewis
2023-02-08T10:53:39Z
http://arxiv.org/abs/2302.03988v1
# On Cosmological Low Entropy After the Big Bang: Universal Expansion and Nucleosynthesis ###### Abstract We investigate the sensitivity of a universe's nuclear entropy after Big Bang nucleosynthesis (BBN) to variations in both the baryon-to-photon ratio and the temporal evolution of cosmological expansion. Specifically, we construct counterfactual cosmologies to quantify the degree by which these two parameters must vary from those in our Universe before we observe a substantial change in the degree of fusion, and thus nuclear entropy, during BBN. We find that, while the post-BBN nuclear entropy is indeed linked to baryogenesis and the Universe's expansion history, the requirement of leftover light elements does not place strong constraints on the properties of these two cosmological processes. **Keywords:** Baryogenesis, Big Bang nucleosynthesis, Cosmology, Scale Factor Contributing authors: [email protected]; [email protected]; [email protected]; ## 1 Introduction Many physical processes in our Universe, from cell division to star formation, are observed to occur only in one direction in time. The Second Law of Thermodynamics identifies what these processes have in common: they increase the total entropy within a closed, isolated physical system. The Second Law is thus crucial to our understanding of the arrow of time. However, the Second Law is emergent, not fundamental. What light do the fundamental laws of nature shed on the arrow of time? Perhaps surprisingly, with the exception of very rare \(T\)-violating weak-force processes, these laws show no preference for entropy-increasing over entropy-decreasing processes. Hence, something more is required to explain the Second Law.1 Footnote 1: This “something more” cannot be mere probability/statistical considerations. These make no reference to time at all. An argument from time-symmetric laws and timeless mathematical principles cannot explain a time-asymmetric universe: the argument could be time-reversed, and be equally valid. The _Past Hypothesis_ proposes that this missing piece is a statement about the initial conditions of our Universe: "the universe had some particular, simple, compact, symmetric, cosmologically sensible, very low-entropy initial macrocondition"[1]. With this initial condition, the most likely evolution of our Universe is according to a consistent arrow of time, explaining the success of the Second Law. However, the Past Hypothesis only postulates the existence of a low-entropy initial macrostate of our Universe. It does _not_ specify its form. So, we can ask: what is it about the arrangement of matter and energy at the start of the Universe that makes it a rare, low-entropy macrostate? Penrose [2, 3] argued that the low entropy of the early Universe is primarily attributed to gravity. The matter is distributed almost uniformly, and the entropy can increase substantially as matter collapses under its own gravity. We can see this by turning the question around: what would a high-entropy Big Bang look like? Answer: an expanding swarm of black holes, with no energy from gravitational collapse to give. The central role of gravity has been questioned by Rovelli [4], building on Wallace [5]. Most of the entropy-increasing processes we see around us are powered, not by gravitational collapse, but by nuclear fusion. 
While the gravitational collapse of a gas cloud initially _ignited_ the Sun, the last 4.5 billion years of sunlight was powered by the fusion of hydrogen (\({}^{1}\)H) left over from Big Bang nucleosynthesis (BBN). The Big Bang initial condition is a low-entropy macrostate because it produced a mostly-hydrogen Universe at low temperature. Turning the question around again, a high-entropy Big Bang would look like an expanding gas of iron nuclei, with no energy from nuclear fusion to give. Rovelli [4] traces the large abundance of hydrogen in the early Universe to the rapid expansion of space. The Universe, between about one second and three minutes after the Big Bang, expanded too fast for reactions to keep protons and neutrons in equilibrium with each other. This left our Universe in a metastable state, full of cold, diffuse hydrogen. This does not correspond to maximal nuclear entropy2. The Universe can increase its entropy by burning hydrogen to iron. However, the reaction rate is so slow (outside of stellar cores) that the Universe will remain in this metastable state for a very long time. The importance of rapid expansion raises the question: just how rapid?3 And what cosmological factors determine the critical rate of expansion? Barnes and Lewis [6] identify a necessary condition for BBN to produce more heavy elements: an excess of baryons over antibaryons in an early universe. Specifically, too much asymmetry increases the baryon-to-photon ratio, allowing nuclear reactions to proceed for longer. By considering the timescales of the relevant nuclear reactions, they conclude that an iron-filled universe requires baryon-antibaryon asymmetry that approaches unity. However, they only consider the standard FLRW (\(a\propto t^{1/2}\)) expansion of our Universe. Footnote 3: Rovelli [4] states that “the dominant source of the low-entropy of the past universe is only the smallness of the scale factor.” However, the normalisation of the scale factor in the RW metric is arbitrary, so this condition is not correct. As [4] states immediately after, it is the fact that the expansion is _rapid_ that determines the extent of BBN reactions. Here, we calculate the effect of altering the early expansion rate of the Universe on BBN and the production of heavy elements. We also vary the amount of baryon-antibaryon asymmetry, via the baryon-to-photon ratio \(\eta\). The structure of the paper is as follows. In Section 2, we discuss the details of baryogenesis and BBN. In Section 3, we discuss the different cosmological models that we will consider. In Section 4, for each cosmology model, we explain the relationship between post-BBN elemental abundances and variations in both the expansion rate and the baryon-to-photon ratio. We then conclude by discussing these results in Section 5. ## 2 Big Bang Nucleosynthesis Big Bang nucleosynthesis occurs in the era in which protons and neutrons are hot enough to fuse together into deuterium, and the Universe's photon background cool enough that significant amounts of deuterium fuse into heavier nuclei, rather than being photo-disintegrated. In the standard cosmological model, BBN lasted from just under 1 second (\(T\sim 2\) MeV) to \(10^{3}\) seconds (\(T\sim 0.03\) MeV) after the Big Bang. Modelling BBN requires tracking the creation and destruction of nuclear species. The large network of non-linear rate equations is virtually impossible to solve analytically. 
For this reason, numerical simulations are an attractive option when it comes to modelling BBN, such as those put forward by [7, 8, 9, 10, 11, 12, 13, 14, 15]. In the following, we use AlterBBN\({}^{4}\) [14, 15] to compute the final BBN abundances within our counterfactual cosmologies. However, further considerations were required, such as modifying the code (cf. Appendix A.1) and calculating initial abundance conditions (cf. Appendix A.2). Interestingly, the Saha equation can be used to derive analytic expressions for abundances under the condition of Nuclear Statistical Equilibrium (NSE), in which every forward reaction rate is equal to the corresponding reverse reaction rate [6, 16]. Footnote 4: [https://alterbbn.hepforge.org/](https://alterbbn.hepforge.org/) The baryon-to-photon ratio \(\eta\) is particularly important to BBN, as it fixes the relationship between temperature and baryon density; nuclear reaction rates depend on both. This ratio is determined by the baryon-antibaryon asymmetry that the early universe possesses. After baryons and antibaryons annihilated with each other at a temperature around \(2\times 10^{12}\) K [17], our Universe only consisted of photons and leftover baryons. Today, we measure \(\eta_{0}=6.1\times 10^{-10}\), indicating that for (roughly) every one billion anti-baryons, there were one billion and one baryons before annihilation [18]. The degree of baryon-antibaryon asymmetry is determined by the process of baryogenesis, which is currently not well understood. In the counterfactual cosmological models we consider in this paper, we find that fusion continues down to lower temperatures. As a result, we model down to temperatures much lower than 0.03 MeV. The initial and final temperatures we consider are \(T_{i}\sim 2.7\) MeV and \(T_{f}\sim 0.001\) MeV, respectively [16]. Because the temperatures at which fusion occurs are independent of a universe's expansion rate and baryon-to-photon ratio, we keep these values of \(T_{i}\) and \(T_{f}\) fixed, independent of the cosmological model being considered. ## 3 Modelling Modified Universes Here, we discuss our strategy for altering the expansion rate, which we achieve by altering the function \(a(t)\). We will then be in a position to answer our question: how rapid does a universe's expansion need to be for significant amounts of nuclei to be fused into heavy elements during BBN? The point here is _not_ to attempt to model the actual history of our Universe. We are exploring the physical relationship between the expansion of a universe and the low-entropy nuclear energy available post-BBN. We are not proposing alternative models of the early Universe, or exotic new forms of energy. ### The Forced Cosmology In this model, we specify the scale factor's dependence on time by fiat: \(a(t)\propto t^{\alpha}\). We label this 'the forced cosmology'. The energy densities of matter and radiation depend on the scale factor in the usual way (\(a^{-3}\) and \(a^{-4}\), respectively). The temperature of the cosmic microwave background is inversely proportional to the scale factor, \(T\propto a^{-1}\), which together with \(a\propto t^{\alpha}\) gives \[t=t_{0}\left(\frac{T_{0}}{T}\right)^{1/\alpha}\;, \tag{1}\] where we set \(T=T_{0}\) at \(t=t_{0}\), with \(T_{0}=2.725\) K the temperature of the CMB today and \(t_{0}=13.7\) Gyr the age of our Universe today. The starting time of BBN is found by substituting \(T=T_{i}\).
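A small sketch of Equation (1) (Python; \(T_i\) and \(T_f\) are the Kelvin values from [14] quoted in the next paragraph) gives the starting time and duration of BBN in the forced cosmology for a chosen \(\alpha\).

```python
GYR_IN_S = 3.156e16            # seconds per Gyr
t0 = 13.7*GYR_IN_S             # age of the Universe today [s]
T0 = 2.725                     # CMB temperature today [K]
Ti, Tf = 2.7e10, 1.0e7         # BBN start/end temperatures [K], from [14]

def t_of_T(T, alpha):
    # Equation (1): t = t0*(T0/T)^(1/alpha), from T ~ 1/a and a ~ t^alpha.
    return t0*(T0/T)**(1.0/alpha)

alpha = 0.5
ti, tf = t_of_T(Ti, alpha), t_of_T(Tf, alpha)
print(ti, tf - ti)             # duration t_f - t_i = ((Ti/Tf)^(1/alpha) - 1)*t_i
```

For \(\alpha=0.5\) this reproduces the roughly nine-hour duration quoted in Section 5.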
Having \(a\propto T^{-1}\) and \(a\propto t^{\alpha}\) gives the relationship between the finishing time of BBN, \(t_{f}\), and the starting time of BBN, \(t_{i}\), to be \(t_{f}=(T_{i}/T_{f})^{1/\alpha}t_{i}\). For the rest of this paper, we will define the duration of BBN to be the time required for the temperature to go from \(T_{i}\) to \(T_{f}\), despite element abundances mostly freezing out at earlier temperatures. Thus, using the approximate values of the BBN starting and finishing temperatures that are given in [14], namely \(T_{i}=2.7\times 10^{10}\) K (2.32 MeV) and \(T_{f}=10^{7}\) K (\(8.6\times 10^{-4}\) MeV), we are able to examine the period over which BBN occurs as a function of \(\alpha\). This is shown in Figure 1. We can see that the duration covers many orders of magnitude. Note that because we have normalised the power law to "today" (\(t_{0}\) and \(T_{0}\)), larger values of \(\alpha\) imply a _longer_ duration of BBN. This is because they reach the relevant BBN temperatures later in their evolution.

Figure 1: Left Panel: This shows how the starting and finishing time of BBN change with \(\alpha\) for the forced cosmology. From Equation 1, the variation of temperature over the BBN periods can be easily understood from the fact that an order of magnitude increase in time corresponds to temperature dropping by \(\alpha\) orders of magnitude. Right panel: This shows how the duration of BBN varies with \(\alpha\). While this can be derived from the left panel, the right panel provides easier visualisation. Here, \(\alpha\) only goes down to \(0.2\) for reasons discussed in Section 4.1. Note that because we have normalised the power law to "today" (\(t_{0}\) and \(T_{0}\)), for the range of \(\alpha\) that we consider here, larger values of \(\alpha\) imply a _longer_ duration of BBN. This is because they reach the relevant BBN temperatures later in their evolution.

### The Dominant Fluid Cosmology The dominant fluid cosmology adds to the standard model of cosmology a form of energy that dominates during the period of Big Bang nucleosynthesis, whose energy density we label \(\rho_{D}\). By requiring consistency with the Friedmann equation and fluid equation, this additional dominant fluid will control the rate of expansion. The equation of state (EoS) of the dominant fluid is given by \(w=P/(c^{2}\rho_{D})\), and for simplicity is assumed to be constant throughout BBN. Upon solving the Friedmann equation and fluid equation, one finds \(\rho_{D}\propto T^{3(1+w)}\) and \(a\propto t^{2/(3w+3)}\) (assuming \(\rho_{D}\) dominates over all other energy forms) [16]. The difference between this model and the forced cosmology lies in the determination of the initial conditions, and duration, of BBN. Hereafter, we will be working in natural units (\(c=\hbar=k_{B}=1\)) unless stated otherwise. The time independence of the EoS parameter restricts the scale factor to a power-law [16], \(a\propto t^{\alpha}\), giving the EoS parameter of the dominant fluid to be \(w=2/(3\alpha)-1\). Thus, \(\rho_{D}=AT^{2/\alpha}\) for some constant \(A\). We find that the
starting time of BBN is given by, \[t_{i}=\begin{cases}\frac{3\alpha T_{i}^{-1/\alpha}}{\sqrt{24\pi GA}}&\quad\text{if $\alpha\leq 1/2$},\\ \frac{T_{i}^{-1/\alpha}}{\sqrt{\frac{32}{3}\pi GA}}&\quad\text{if $\alpha>1/2$},\end{cases} \tag{2}\] where \[A=\begin{cases}\max\left[\frac{43\pi^{2}}{120}(T_{f})^{4-2/\alpha},\frac{2\pi^{2}m_{p}\eta}{81}(T_{f})^{3-2/\alpha}\right]&\quad\text{if $\alpha\leq\frac{1}{2}$},\\ \max\left[\frac{43\pi^{2}}{120}(T_{i})^{4-2/\alpha},\frac{2\pi^{2}m_{p}\eta}{81}(T_{f})^{3-2/\alpha}\right]&\quad\text{if $\frac{1}{2}<\alpha\leq\frac{2}{3}$},\\ \max\left[\frac{43\pi^{2}}{120}(T_{i})^{4-2/\alpha},\frac{2\pi^{2}m_{p}\eta}{81}(T_{i})^{3-2/\alpha}\right]&\quad\text{if $\alpha>\frac{2}{3}$}.\end{cases} \tag{3}\] with \(m_{p}\) being the mass of the proton. We note that the expression for \(A\) gives the minimum value required for \(\rho_{D}\) to dominate during BBN. While \(A\) could be larger, we made this assumption to remove ambiguity when determining the times and temperatures at which dominance switches between \(\rho_{m},\rho_{r}\) and \(\rho_{D}\). Figure 2 shows how the duration, \(t_{f}-t_{i}\), of BBN varies with \(\alpha\) and \(\eta\). The range of BBN durations in this cosmological model is much smaller than that in the forced cosmology. It should also be noted that the forced cosmology can be derived from the dominant fluid cosmology by forcing \(t_{0}=T_{0}\) and taking \(A\to\infty\).

Figure 2: Left Panel: This shows how the starting time of BBN changes with \(\alpha\) and \(\eta\) for the dominant fluid cosmology. Right panel: This shows how the duration of BBN varies with \(\alpha\) and \(\eta\).

## 4 Results Here, we present how the final nuclide abundances were affected by variations in \(\alpha\) and \(\eta\), calculated using our modified version of AlterBBN. The easiest way to do so is by studying the most abundant, and second most abundant, nuclides left over after BBN. We will discuss the forced cosmology and the dominant fluid cosmology separately. Altogether, this paper reports on the results from 11680 BBN simulations. ### The Forced Cosmology Figure 3 shows how the most abundant, and second most abundant, nuclides left over from BBN vary with \(\alpha\) and \(\eta\) in the forced cosmology. We have ignored \(\alpha<0.2\) as the BBN duration is \(\mathcal{O}(10^{-15}\,{\rm s})\) and hence \({}^{1}\)H will always dominate as there is not enough time for any fusion to occur. Convergence issues within the code forced us to set \(\alpha<1.2\), which we suspect was due to the large time-steps leading to invalid linear approximations within AlterBBN (see Ref. [15] for details of the linearization process). Updating the integration methods as a means of resolving these issues is beyond the scope of this paper. We also only consider \(\eta\leq 10^{-1}\) due to AlterBBN not considering degeneracy in its calculations. The complicated trend in Figure 3 stems from the intricate interplay between \(\alpha\), \(\eta\), the starting time of BBN, the duration of BBN, and the NSE configuration when NSE breaks. This last detail is most important. Let \(\mathbf{Y}_{\rm NSE}(T)\) be the NSE abundance configuration at temperature \(T\). As temperature decreases, heavier elements will take over as the dominant species. Let \(T_{k}\) be the temperature at which elemental species \(k\) first becomes dominant.
## 4 Results

Here, we present how the final nuclide abundances were affected by variations in \(\alpha\) and \(\eta\), calculated using our modified version of AlterBBN. The easiest way to do so is by studying the most abundant, and second most abundant, nuclides left over after BBN. We will discuss the forced cosmology and the dominant fluid cosmology separately. Altogether, this paper reports on the results from 11680 BBN simulations.

### The Forced Cosmology

Figure 3 shows how the most abundant, and second most abundant, nuclides left over from BBN vary with \(\alpha\) and \(\eta\) in the forced cosmology. We have ignored \(\alpha<0.2\) as the BBN duration is \(\mathcal{O}(10^{-15}\,\mathrm{s})\) and hence \({}^{1}\)H will always dominate: there is not enough time for any fusion to occur. Convergence issues within the code forced us to set \(\alpha<1.2\), which we suspect was due to the large time-steps leading to invalid linear approximations within AlterBBN (see Ref. [15] for details of the linearization process). Updating the integration methods as a means of resolving these issues is beyond the scope of this paper. We also only consider \(\eta\leq 10^{-1}\) due to AlterBBN not considering degeneracy in its calculations. The complicated trend in Figure 3 stems from the intricate interplay between \(\alpha\), \(\eta\), the starting time of BBN, the duration of BBN, and the NSE configuration when NSE breaks. This last detail is most important. Let \(\mathbf{Y}_{\rm NSE}(T)\) be the NSE abundance configuration at temperature \(T\). As temperature decreases, heavier elements will take over as the dominant species. Let \(T_{k}\) be the temperature at which elemental species \(k\) first becomes dominant. Over the temperature range of BBN, the dominating NSE element transitions from \({}^{1}\)H\(\rightarrow^{4}\)He\(\rightarrow^{16}\)O with \(T_{{}^{1}{\rm H}}>T_{{}^{4}{\rm He}}>T_{{}^{16}{\rm O}}\). It is the form of \(\mathbf{Y}_{\rm NSE}(T)|_{T=T_{\rm NSE}}\), where \(T_{\rm NSE}\) is the temperature at which NSE breaks, that is ultimately responsible for the observed trend. Note \(T_{k}\) is independent of \(\alpha\) and monotonically increases with \(\eta\)[16], while \(T_{\rm NSE}\) is independent of \(\eta\) and monotonically decreases with \(\alpha\) (c.f. Appendix A.2). When \(T_{{}^{1}{\rm H}}>T_{\rm NSE}>T_{{}^{4}{\rm He}}\), which is the case for \(\alpha\lesssim 0.54\), the neutron to proton ratio will freeze out at a value of \(X_{n}/X_{p}=\exp(-Q/T_{\rm NSE})\), where \(Q=1.29\) MeV is the neutron-proton mass difference. This creates a cap on the possible \({}^{4}\)He production. The production will get closer to this cap as \(\eta\) increases due to a larger portion of the free neutrons being fused into \({}^{4}\)He nuclei. When \(\alpha<0.5\), \(X_{n}/X_{p}\approx 1\) since \(T_{\rm NSE}\gg Q\), leading to an almost absent cap. Hence, depending on \(\eta\), we can effectively reach arbitrarily strong \({}^{4}\)He dominance. However, once \(\alpha\approx 0.54\), \(X_{n}/X_{p}\) falls below 0.5, capping the maximum \({}^{4}\)He abundance at a value below that of \({}^{1}\)H, returning us to a \({}^{1}\)H dominated universe. This is then made worse by free neutron decay. A further increase in \(\alpha\) eventually gives \(T_{{}^{4}{\rm He}}>T_{\rm NSE}>T_{{}^{16}{\rm O}}\) and we hence arrive back at \({}^{4}\)He dominance, where the lack of \({}^{16}\)O production is due to insufficient time for fusion into \({}^{16}\)O. Increasing \(\alpha\) even further leads to \(T_{{}^{16}{\rm O}}>T_{\rm NSE}\), and we get strong \({}^{16}\)O dominance. As we increase \(\eta\) for \(\alpha<0.54\), we get closer to the \({}^{4}\)He cap, thus leading to an increase in \({}^{4}\)He production. Furthermore, as \(\eta\) increases, the transition temperatures \(T_{k}\) also increase while \(T_{\rm NSE}\) remains the same. Thus, on the right, we expect a negative gradient for the \({}^{4}\)He and \({}^{16}\)O islands, which is what we observe. Additionally, \({}^{16}\)O is the heaviest element that AlterBBN considers, and hence we are unable to quantitatively conclude whether fusion in universes that are \({}^{16}\)O dominated would continue all the way up to iron. However, it is unlikely that the fusion would stop at \({}^{16}\)O.

### The Dominant Fluid Cosmology

Figure 4 shows which two nuclides are most abundant in the dominant fluid cosmology for varying values of \(\alpha\) and \(\eta\). We vary \(\alpha\) from \(10^{-1}\) to \(10^{2}\) and \(\eta\) from \(10^{-11}\) to \(10^{-1}\) and have no convergence issues. We did not consider values of \(\eta>10^{-1}\) for the same reasons as in the forced cosmology. We first note that universes with \(\alpha<0.2\) do not experience NSE before BBN starts (c.f. Appendix A.2). Hence, all we can note here is the lack of significant fusion, despite choosing an initial configuration identical to that of our Universe. We find that \(T_{{}^{1}{\rm H}}>T_{\rm NSE}>T_{{}^{4}{\rm He}}\) for the entire parameter space considered. The behaviour of the abundances for \(0.2<\alpha<0.5\) is almost identical to the forced cosmology. We again obtain almost arbitrarily strong \({}^{4}\)He dominance for \(0.2<\alpha<0.4\).
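The freeze-out cap invoked above can be made quantitative. A minimal Python sketch, which ignores the free neutron decay that subsequently lowers the ratio, and uses the standard result that if every neutron ends up in \({}^{4}\)He the helium mass fraction is \(2(n/p)/(1+n/p)\):

```python
import numpy as np

# Cap on the 4He mass fraction set by the frozen-out neutron-to-proton ratio:
# X_n/X_p = exp(-Q/T_NSE); if every neutron is fused into 4He, the helium
# mass fraction is Y_max = 2(n/p)/(1 + n/p). Neutron decay is ignored here.
Q = 1.29  # neutron-proton mass difference [MeV]

def he4_cap(T_nse):
    """Maximum 4He mass fraction for a given NSE-breaking temperature [MeV]."""
    n_over_p = np.exp(-Q / T_nse)
    return 2 * n_over_p / (1 + n_over_p)

for T in (10.0, 1.86, 0.7):  # T = 1.86 MeV gives n/p = 0.5
    print(f"T_NSE = {T:5.2f} MeV -> n/p = {np.exp(-Q/T):.3f}, Y_max = {he4_cap(T):.3f}")
```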
We also observe that \(X_{n}/X_{p}\) falls below 0.5 at \(\alpha\approx 0.48\), returning us to a \({}^{1}\)H dominated universe. Interestingly, in the dominant fluid cosmology, \(T_{\rm NSE}\) increases with \(\eta\) for \(\eta\gtrsim 2\times 10^{-5}\) (c.f. Appendix A.2). For \(\eta\gtrsim 2\times 10^{-3}\), this increase is great enough to prevent \(X_{n}/X_{p}\) from dropping below 0.5 for any value of \(\alpha\), and hence we do not return to \({}^{1}\)H dominance. For \(\alpha>0.5\), unlike in the forced cosmology, we find that \(T_{\rm NSE}\) begins to increase, asymptotically approaching \(T_{i}\). Thus, \(X_{n}/X_{p}\) increases with \(\alpha\) and we go back to \({}^{4}\)He dominance. Finally, as we take \(\alpha\) to be very large, we will eventually return to \({}^{1}\)H dominance as the duration of BBN becomes very small, explaining the positive gradient on the bottom boundary of the \({}^{4}\)He island on the right.

Figure 3: The most abundant (left panel), and second most abundant (right panel), nuclides left over from BBN in the forced cosmology as a function of \(\alpha\) and \(\eta\). The horizontal red dashed lines represent the value of \(\eta\) in our Universe while the vertical ones represent the value of \(\alpha\) required for the BBN duration to match that of our Universe. In the left panel, we have split the regions dominated by \({}^{4}\)He and \({}^{16}\)O into those where the abundances are greater than 0.95 and less than 0.95. Here, we have varied \(\alpha\) from 0.2 to 1.2, and \(\eta\) over 10 orders of magnitude, from \(10^{-11}\) to \(10^{-1}\). The white regions represent the parameters where the code was unable to complete the calculation due to the convergence issues. Note that the jagged boundaries are due to our use of a discrete set of simulation parameters.

## 5 Discussion

Our calculations assume that no relativistic species are degenerate, quantified by \(\mu_{i}\ll T\), where \(\mu_{i}\) is the chemical potential of the species \(i\) under consideration. If this criterion is not met, then we are unable to assume \(\rho_{r}\propto T^{4}\). The most important relativistic species during BBN is the electron, which [19] (p. 93) states obeys \(\mu_{e}/T\sim B\), where \(B\) is the baryon-to-entropy ratio. Furthermore, we know that \(B\sim\eta\) and hence non-degeneracy can be assumed if \(B\sim\eta\ll 1\)[6, 19]. The largest value of \(\eta\) we consider is \(10^{-1}\), validating our assumption. As we slow down the rate of expansion during BBN, while keeping \(\eta=6.1\times 10^{-10}\) constant, Rovelli suggests we should see the dominant nuclides change from light elements to heavy elements. This trend is eventually observed within the forced cosmology when we reach large \(\alpha\). However, the trend is not monotonic. The universe with the greatest heavy element production occurred when \(\alpha\approx 0.50\), where the abundances of \({}^{1}\)H and \({}^{4}\)He were 0.329 and 0.634, respectively. Almost all other values of \(\alpha\) gave universes that were \({}^{1}\)H dominated, as can be seen in Figure 3. This corresponds to a very short BBN duration of about 9 hours, as opposed to about 13 days in our Universe (we remind the reader that we have defined the duration to be the time taken for \(T\) to drop from \(T_{i}\) to \(T_{f}\), rather than when the abundances freeze out). For the dominant fluid cosmology, we did not observe the trend suggested by Rovelli.
We found that the universe with the greatest heavy element production occurred when \(\alpha\approx 0.37\), where the abundances of \({}^{1}\)H and \({}^{4}\)He were 0.169 and 0.818, respectively. This corresponds to a BBN duration about 20% shorter than in our Universe. We do not expect values of \(\alpha\) greater than those considered here, i.e. \(\alpha>10^{2}\), to provide heavy element build up, since the duration of BBN decreases while the starting time remains the same as \(\alpha\) increases. While the aforementioned universe is \({}^{4}\)He dominated, with \({}^{4}\)He providing less access to nuclear energy than \({}^{1}\)H, it still possesses low nuclear entropy and has a substantial amount of \({}^{1}\)H left over. Again, almost all other values of \(\alpha\) gave universes that were \({}^{1}\)H dominated. This agrees with results presented by [6] which show that BBN in our Universe would have to last approximately 1 billion years for all elements to burn all the way to iron. Note that \({}^{16}\)O is the heaviest element that AlterBBN considers, so we do not predict whether fusion all the way up to iron would have occurred. However, the trend towards heavier elements is clear. Approximate calculations of fusion up to iron can be found in [6].

Figure 4: The most abundant (left panel), and second most abundant (right panel), nuclides left over from BBN in the dominant fluid cosmology as a function of \(\alpha\) and \(\eta\). The horizontal red dashed lines represent the value of \(\eta\) in our Universe while the vertical ones represent the value of \(\alpha\) required for the BBN duration to match that of our Universe. In the left panel, we have split the regions dominated by \({}^{4}\)He into those where the abundance is greater than 0.95 and less than 0.95. Here, we have varied \(\alpha\) over 3 orders of magnitude, from \(10^{-1}\) to \(10^{2}\), and \(\eta\) over 10 orders of magnitude, from \(10^{-11}\) to \(10^{-1}\). Again, note that the jagged boundaries are due to our use of a discrete set of simulation parameters.

When allowing variations in \(\eta\), we see that a large portion of the parameter space of the forced cosmology has strong \({}^{4}\)He dominance (greater than 95% \({}^{4}\)He), which represents universes that are far from maximal nuclear entropy, but still closer than our own. This occurs for \(0.335<\alpha<0.475\) and \(10^{-6}<\eta<10^{-1}\). These values of \(\alpha\) give a duration of BBN between about 16 seconds and 2 hours, much shorter than in our Universe, and require a value of \(\eta\) multiple orders of magnitude larger than \(\eta_{0}\). We also see strong \({}^{4}\)He build up in the central part of the right \({}^{4}\)He island. Furthermore, as we move above this region, \({}^{4}\)He is fused to \({}^{16}\)O and hence these universes move even closer to maximal entropy. Note, however, that the BBN duration in these universes is around 31 years and they require a value of \(\eta\) many orders of magnitude greater than \(\eta_{0}\). In the dominant fluid cosmology, the maximum abundance that \({}^{16}\)O reaches is around 13%, which occurs when \(\eta\) is more than 8 orders of magnitude larger than \(\eta_{0}\) and the duration is around 50 times shorter than in our Universe. This lack of heavy element build up makes sense when considering that the BBN duration only varies between \(10^{-2}\) and \(10^{6}\) seconds in this cosmology.
We see a large section of strong \({}^{4}\)He dominance within the approximate parameter range of \(0.2\lesssim\alpha\lesssim 0.4\) and \(10^{-8}\lesssim\eta\lesssim 10^{-1}\). This corresponds to a BBN duration slightly shorter than in our Universe, between about 5 days (\(\sim 434{,}000\) seconds) and 9 days (\(\sim 782{,}000\) seconds), and requires \(\eta\) to be at least 2 orders of magnitude greater than its current value. While we acknowledge these universes have greater nuclear entropy than ours, we emphasise that this range is very narrow when considering all the permissible values of \(\eta\) and BBN duration. In conclusion, while an extremely slow cosmological expansion would have led to considerable heavy element production, the duration of BBN can be varied by many orders of magnitude without resulting in a universe that burns entirely to heavy elements. To put it another way, BBN in our Universe is many orders of magnitude more rapid than is required for low-nuclear-entropy initial conditions. Further, the relationship between the time available for BBN and the element abundances is far from monotonic. The answer to the question "how rapid?" seems to be "faster than a billion years".

## 6 Acknowledgments

The majority of the work presented here was undertaken as part of C. Sharpe's honours year at the University of Sydney, but was originally conceived by G. F. Lewis. We would like to thank A. Arbey et al. for making their AlterBBN code publicly available, as this paper would not have been possible without it. G. F. Lewis received no funding to support this research. C. Sharpe would also like to thank NSW police for ensuring a swift return of his belongings, including his laptop, after having them burgled from his house a few weeks before this paper's submission. Additionally, he would like to thank his neighbour, Gary, who spotted the burglar and, rather than simply phoning the police and staying put, decided to yell 'you better run fast mate' before chasing the man down the street, tackling him, pinning him to the ground, and _then_ calling the police, all while still in his pyjamas and a sleepy daze.

### Declarations

**Competing interests:** None

## Appendix A AlterBBN

### Modifications to the Code

In their code, Arbey et al. have an incorrect expression for the starting time which is many orders of magnitude smaller than it should be. This was pointed out by [20]. Despite this, the current publicly available version of the code does not run into issues because the abundances and temperature are forced not to change until \(t=0.136\) seconds (the starting time of BBN within our Universe) is reached. When varying \(\alpha\) and \(\eta\), we had to remove this condition and correct the starting time. Furthermore, the rates of three reactions, \({}^{2}\)H+\(p\rightarrow\gamma\)+\({}^{3}\)He, \({}^{2}\)H+\({}^{2}\)H \(\rightarrow\)\(n\)+\({}^{3}\)He and \({}^{2}\)H+\({}^{2}\)H \(\rightarrow\)\(p\)+\({}^{3}\)H, follow step functions. This produced unphysical jigsaw patterns in the time evolution of the abundances, and so we replaced these expressions with a linear interpolation of the discrete values5. Footnote 5: This was merely to make interpretation easier; the results of our calculations are unaffected. Finally, the fluid equation is used to calculate the rate of change of \(\ln(a^{3})\) with respect to temperature, which is then used to calculate \(dT/dt\). This would occasionally cause \(dT/dt\geq 0\) despite \(da/dt>0\), which is clearly unphysical.
Hence, using the relation \(a\propto T^{-1}\), we replaced their calculation of \(d\ln(a^{3})/dT\) with \(d\ln(a^{3})/dT=-3/T\), ensuring that \(dT/dt<0\) at all times.

### Setting Initial Abundance Conditions

By default, AlterBBN determines the starting abundances for neutrons, protons, and deuterium by their NSE configuration at \(T=T_{i}\) (c.f. Ref. [16] p. 88). This is only valid if \(T_{\rm NSE}<T_{i}\). NSE is broken when any reaction rate falls below \(H\). The first to succumb is always \(\Gamma_{pe\to\nu n}\), and hence by computing the temperature, \(T_{\rm NSE}\), at which \(\Gamma_{pe\to\nu n}=H\), we are able to determine when NSE is broken and thus whether we can assume an NSE initial abundance configuration. We achieve this by using the definition of the Hubble parameter, \(H=\dot{a}/a=\alpha/t\), Equations 1 and 2, and the relationship \[\Gamma_{pe\to\nu n}=\frac{7}{60}\pi\left(1+3g_{A}^{2}\right)G_{F}^{2}T^{5}. \tag{4}\] Here, \(g_{A}\) is the axial-vector coupling of the nucleon6 and \(G_{F}\) is the Fermi coupling constant7[16]. Note that this equation is only valid in the high temperature limit, \(T\gg 1.28\) MeV. However, this does not concern us as we are only interested in whether NSE is achieved at \(T=T_{i}=2.32\) MeV. Footnote 6: With a numerical value of \(1.26\). Footnote 7: With a numerical value of \(1.16\times 10^{-5}\) GeV\({}^{-2}\). The relationship between time and temperature is independent of \(\eta\) for the forced cosmology, while this is not the case for the dominant fluid cosmology. Accordingly, in Figure 5, we have shown \(T_{\rm NSE}\) against \(\alpha\) only for the forced cosmology (left panel) and \(T_{\rm NSE}\) against both \(\alpha\) and \(\eta\) for the dominant fluid cosmology (right panel). Since \(\Gamma_{pe\to\nu n}/H\propto T^{(5\alpha-1)/\alpha}\), this ratio increases with time for \(\alpha<0.2\). Additionally, \(\Gamma_{pe\to\nu n}/H\ll 1\) at \(T=T_{i}\). This indicates that these universes do not experience a period of NSE whatsoever before BBN starts. This makes it troublesome to choose an initial configuration, as knowledge of the conditions of the much earlier Universe is required, which we do not have. For the purposes of this report, and due to the lack of a better choice, we will assume the initial abundances for these universes are the same as those of BBN in our Universe. We also acknowledge that the quark-gluon plasma condensed into nucleons when \(T\sim 1\) GeV [21]. Thus, when \(T_{\rm NSE}\gtrsim 1\) GeV, we simply set \(X_{n}/X_{p}=\exp(-Q/T_{\rm NSE})\), as further considerations are beyond the scope of the paper. We see that \(T_{\rm NSE}>T_{i}\) for \(0.2<\alpha\leq 0.55\) in the forced cosmology, and \(0.2<\alpha\lesssim 0.45\) in the dominant fluid cosmology. Thus, we set the initial proton and neutron abundances to be those at the time NSE was broken. This is valid as the time between \(T_{\rm NSE}\) and \(T_{i}\) for these values of \(\alpha\) is less than 0.2 seconds, and so significant neutron decay will not occur. Deuterium still follows its NSE abundance at \(T=T_{i}\) for \(\alpha\geq 0.3\) in the forced cosmology and \(\alpha\geq 0.16\) in the dominant fluid cosmology. These were both checked numerically with AlterBBN. For \(\alpha<0.3\) in the forced cosmology, BBN is too rapid for any fusion to occur between \(T_{\rm NSE}\) and \(T_{f}\) and hence the post-BBN configuration consists purely of protons and neutrons.
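The condition \(\Gamma_{pe\to\nu n}=H\) used above can be solved numerically; a minimal Python sketch for the forced cosmology, assuming the power law normalised to today takes the form \(t(T)=t_{0}(T_{0}/T)^{1/\alpha}\) (the values of \(t_{0}\), \(T_{0}\) and \(\hbar\) are standard constants, not taken from this paper):

```python
import numpy as np
from scipy.optimize import brentq

# Solve Gamma_{pe->nu n}(T) = H(T) for T_NSE in the forced cosmology,
# where H = alpha/t and t(T) = t0*(T0/T)**(1/alpha).
HBAR = 6.582e-22   # MeV s
T0 = 2.348e-10     # CMB temperature today [MeV] (standard value)
t0 = 4.35e17       # age of the Universe today [s] (standard value)
gA = 1.26          # axial-vector coupling of the nucleon
GF = 1.166e-11     # Fermi constant [MeV^-2]

def gamma_weak(T):
    """Equation 4: p + e -> nu + n rate in MeV (T in MeV)."""
    return (7 / 60) * np.pi * (1 + 3 * gA**2) * GF**2 * T**5

def hubble(T, alpha):
    """H = alpha/t converted to MeV, with t(T) = t0*(T0/T)**(1/alpha)."""
    t = t0 * (T0 / T) ** (1 / alpha)
    return (alpha / t) * HBAR

def T_nse(alpha):
    """Temperature [MeV] at which Gamma = H (valid in the high-T limit)."""
    f = lambda lnT: np.log(gamma_weak(np.exp(lnT))) - np.log(hubble(np.exp(lnT), alpha))
    return np.exp(brentq(f, np.log(1e-3), np.log(1e3)))

print(T_nse(0.5))  # ~3.6 MeV, above T_i = 2.32 MeV, as stated in the text
```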
Recall that for \(\alpha<0.2\), we are assuming the same initial configuration as in our Universe, as discussed above, and hence no further considerations are required.

### Data Availability

The datasets generated and analysed during the research carried out in this paper are available from C.S. on reasonable request.
2308.10011
Recurrent Symbiotic Nova T Coronae Borealis Before Outburst
The results of photometric and spectral observations of T CrB obtained in a wide range of wavelengths in 2011-2023 are presented. We use the near-IR light curves to determine a new ephemeris $JD_{min} = 2455828.9 + 227.55 \times E$ for the times of light minima when the red giant is located between the observer and the hot component. The flux ratio H$\alpha$/H$\beta$ varied from $\sim 3$ to $\sim 8$ in 2020-2023, which may be due to a change in the flux ratio between the X-ray and optical ranges. It is shown that the value of H$\alpha$/H$\beta$ anticorrelates with the rate of accretion onto the hot component of the system. Based on high-speed follow-up observations obtained on June 8, 2023, we detected a variability of the HeII $\lambda 4686$ line with a characteristic time-scale of $\sim 25$ min, the amplitude of variability in the $B$-band was $\sim 0.07^m$. Simulations of the near-IR light curves accounting for the ellipsoidal effect allowed us to obtain the parameters of the binary system: the Roche lobe filling factor of the cool component $\mu=1.0$, the mass ratio $q=M_{cool}/M_{hot} \in [0.5, 0.77]$, the orbital inclination $i \in [55^\circ, 63^\circ]$. A comparison of the light curve obtained in 2005-2023 with the 1946 outburst template made it possible to predict the date of the upcoming outburst - January 2024.
N. A. Maslennikova, A. M. Tatarnikov, A. A. Tatarnikova, A. V. Dodin, V. I. Shenavrin, M. A. Burlak, S. G. Zheltoukhov, I. A. Strakhov
2023-08-19T13:29:56Z
http://arxiv.org/abs/2308.10011v2
# Recurrent symbiotic Nova T Coronae Borealis Before Outburst ###### Abstract The results of photometric and spectral observations of T CrB obtained in a wide range of wavelengths in 2011-2023 are presented. We use the near-IR light curves to determine a new ephemeris \(JD_{min}=2455828.9+227.55\times E\) for the times of light minima when the red giant is located between the observer and the hot component. The flux ratio H\(\alpha\)/H\(\beta\) varied from \(\sim 3\) to \(\sim 8\) in 2020-2023, which may be due to a change in the flux ratio between the X-ray and optical ranges. It is shown that the value of H\(\alpha\)/H\(\beta\) anticorrelates with the rate of accretion onto the hot component of the system. Based on high-speed follow-up observations obtained on June 8, 2023, we detected a variability of the He II \(\lambda 4686\) line with a characteristic time-scale of \(\sim 25\) min; the amplitude of variability in the \(B\)-band was \(\sim 0\fm07\). Simulations of the near-IR light curves accounting for the ellipsoidal effect allowed us to obtain the parameters of the binary system: the Roche lobe filling factor of the cool component \(\mu=1.0\), the mass ratio \(q=M_{cool}/M_{hot}\in[0.5,0.77]\), the orbital inclination \(i\in[55^{\circ},63^{\circ}]\). A comparison of the light curve obtained in 2005-2023 with the 1946 outburst template made it possible to predict the date of the upcoming outburst -- January 2024. binaries: symbiotic, stars: individual: T CrB, accretion discs ## I Introduction T CrB is a famous symbiotic recurrent nova. Throughout the history of observations, T CrB erupted as a nova twice, in 1866 and in 1946, becoming brighter than \(2^{\rm m}\) at maximum light. It is expected to flare up again in the near future (Schaefer, 2023). T CrB is a symbiotic binary -- a system consisting of a red giant and a white dwarf. Due to the presence of a high-luminosity star in the system (the cool component is an M4 III star), the amplitude of the outburst, about \(8^{\rm m}\), is small compared to that of classical novae. Nevertheless, the outburst itself is completely similar to the outburst of a classical nova, as it develops on the surface of a white dwarf. According to Fekel et al. (2000), the orbital period of T CrB is approximately \(P_{orb}=227.6\,{\rm d}\). At the same time, no eclipses are observed in the system (Selvelli et al., 1992). The light curves of T CrB in the optical and near-IR ranges demonstrate periodic variations that occur with a period of \(0.5P_{orb}\) and are related to the ellipsoidal shape of the cool component. The amplitude of this effect indicates that the cool component completely fills its Roche lobe (Shahbaz et al., 1997). In addition to regular long-term variability, T CrB exhibits irregular brightness variations in the short-wavelength range (Zamanov and Bruch, 1998; Zamanov et al., 2004, 2005; Minev et al., 2023; etc.) with a characteristic time-scale of tens of minutes and an amplitude up to \(\sim 0\fm5\) in the \(U\) band (by analogy with cataclysmic variables, this type of variability of symbiotic stars is called flickering). Such variability was also registered by Maslennikova et al. (2023) during spectral observations both for the continuum and emission line fluxes. It is associated with the presence of an accretion disk around the hot component of the system.
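For reference, the ephemeris quoted in the abstract can be turned into orbital phases directly; a minimal Python sketch (the example Julian Date is arbitrary and only for illustration):

```python
# Orbital phase from the ephemeris derived in this paper:
# JD_min = 2455828.9 + 227.55 * E (phase 0 = red giant in front of the
# hot component).
JD0, P = 2455828.9, 227.55  # epoch of minimum [JD], orbital period [d]

def orbital_phase(jd):
    """Phase in [0, 1) for a given Julian Date."""
    return ((jd - JD0) / P) % 1.0

print(orbital_phase(2460103.5))  # an arbitrary example date in June 2023
```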
The accretion rate estimates made by different authors indicate that it can change significantly with time. A change in the accretion rate affects the luminosity of the accretion disk, causing changes in the overall spectral energy distribution (SED) of the system which are noticeable at wavelengths shorter than 4500 A. In 2015 T CrB entered a so-called super-active phase (Ilkiewicz et al., 2016; Munari, 2023). By 2016 the Balmer line fluxes had increased by more than an order of magnitude, strong lines of He I and He II had appeared, the mean \(B\) brightness had increased by 1\({}^{\rm m}\), and the amplitude of flickering had declined. According to Munari (2023), this phase lasted until about the middle of 2023 with a maximum in April 2016. The analysis of the complete set of photometric data obtained for T CrB from 1855 till 2023 showed that the light curves before, during and after the 1866 and 1946 outbursts were very similar. This fact allowed Schaefer (2023) to create a template of the light variations and to predict a date of \(2025.5\pm 1.3\) for the next outburst of T CrB. We aim to investigate the photometric and spectral variability of T CrB at the pre-outburst phase and to revise the date of the upcoming eruption. ## II Observations Our spectroscopic observations of T CrB were carried out at the 2.5-m telescope of the Caucasian mountain observatory of the Sternberg astronomical institute of the Moscow State University (CMO SAI MSU) with the Transient Double-beam Spectrograph (TDS; Potanin et al., 2020). During the observations, a 1\({}^{\prime\prime}\) slit was used, which provides the spectral resolution \(R=1300\) in the short-wave channel (the so-called \(B\)-channel, wavelength range 3600-5770 A), and \(R=2500\) in the long-wave channel (\(R\)-channel, wavelength range 5700-7400 A). The slit in the TDS was oriented in the direction of the zenith angular distance to minimize atmospheric dispersion. Standard A0V stars were used to correct for tellurics. The observation log is shown in Table 1. In addition to obtaining individual spectra, a 2-hour set of high-speed follow-up observations was carried out on June 8, 2023, when the spectra were recorded continuously (with breaks of 20 s for reading) with exposures of 100 s in the \(B\)-channel and 30 s in the \(R\)-channel. Standard stars were selected at airmasses close to that of the object, and were observed before the start and immediately after the end of follow-up observations. When calibrating the spectra, the airmass of the standard star was reduced to the airmass of the currently processed T CrB spectrum. The obtained spectra were processed according to the algorithm described in Potanin et al. (2020). The spectra were wavelength-calibrated using the emission spectrum of a gas-discharge Ne-Kr-Pb hollow cathode lamp (HCL); corrections for vignetting and non-uniform slit illumination were calculated using a continuum lamp. To check the quality of the conversion of the observed fluxes to the absolute ones, we used the closest in time \(V\)-band photometric observation from the AAVSO database (Kloppenborg, 2023). All the spectra were reduced to the Solar system barycenter and corrected for interstellar extinction with \(E(B-V)=0.15\) (Selvelli et al., 1992). The spectra were processed using self-developed Python scripts. The \(B\)-band photometric follow-up observations of T CrB were performed on June 8, 2023 with the RC-600 telescope of the CMO SAI MSU (Berdnikov et al., 2020).
The campaign lasted for nearly 2 hours; a total of 348 images with an exposure time of 15 s were obtained. After the standard initial data reduction procedures were performed (bias and dark subtraction, flat-fielding), differential aperture photometry was applied using the MaxIm DL software package. The comparison stars were chosen among the field stars of comparable brightness. The infrared (IR) observations were carried out on the 1.25-m telescope of the Crimean astronomical station (CAS) of SAI MSU in 2011-2023 with the one-channel InSb-photometer (Shenavrin et al., 2011), whose \(JHKLM\) photometric system is close to that of Johnson (1965). The star BS 5947 (\(J=2\,\fm 09\), \(H=1\,\fm 60\), \(K=1\,\fm 30\), \(L=1\,\fm 12\), \(M=1\,\fm 35\)) was used as a comparison star. We present the \(JK\) photometry in Table 2 and the \(JHKLM\) photometry in Table 3. The brightness uncertainties are \(0\,\fm 02\) for \(JHKL\) and \(0\,\fm 05\) for \(M\). In this work we make use of the ultraviolet (UV) spectroscopy obtained by the IUE satellite in 1978-1990 and by the Swift/UVOT in 2015-2023 (Roming et al., 2005). The latter was reduced using the heasoft package (v6.31.1, NASA High Energy Astrophysics Science Archive Research Center (Heasarc), 2014). ## III Photometry analysis Tables 2 and 3 list the IR photometry for T CrB in the quiet and active state. A large number of \(J\) and \(K\) measurements allows us to state that the mean brightness in these bands did not depend on the activity state of the hot component. Then, we can assume that the mean brightness in other bands was constant, too: \(\overline{J}=5.91\pm 0.06\), \(\overline{H}=5.07\pm 0.05\), \(\overline{K}=4.74\pm 0.05\), \(\overline{L}=4.38\pm 0.04\), \(\overline{M}=4.68\pm 0.05\). We performed a frequency analysis of the IR measurements presented in Table 2 using an upgraded version of the L2 program created by Yu. Kolpakov. The code implements the fitting of a time series by a third-order polynomial, which reconstructs the long-term trend, and then the Fourier analysis up to the third harmonic component of the residuals between the observational data and the polynomial. There are two prominent peaks in the resulting power spectrum of the \(J\)-band light curve corresponding to the periods \(P=227.55\pm 0.1\) d and \(P^{\prime}\approx 0.5P\). The first one coincides with the orbital period found by Fekel et al. (2000) from the radial velocity curves for the cool component, and with the period found by Tatarnikova et al. (2013) based on the IR photometry obtained in 1987-2003. We determined a new ephemeris for the times of minimum brightness when the red giant is located between the hot component and the observer (\(\varphi=0\)): \(JD_{min}=2455828.9+227.55\times E\). The \(J\) light curve folded on the period \(P\) is shown in Fig. 1. The period found from the \(K\) light curve is equal to \(P\) within the errors. Table 2 demonstrates that the sharp maximum during the active state, which fell on April 2016 as reported by Munari (2023), occurred at phase \(\sim 0.3\). It is seen in Fig. 1 that the \(J\) brightness measured in April 2016 is higher by just \(\sim 5\%\) than the mean brightness at this phase. So, we can state that the transition of the hot component to the active state in 2015 barely affected the IR brightness of the system (this is also supported by the mean brightness constancy).

Table 1: Log of the TDS observations for T CrB (columns: date, exposure in the \(B\)-channel, exposure in the \(R\)-channel, FWHM, orbital phase).
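A period search of the kind described above can be sketched with a Lomb-Scargle periodogram, a standard substitute for the Fourier analysis used here; a minimal example with synthetic data standing in for the Table 2 measurements:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Detrend the J-band light curve with a low-order polynomial, then look for
# periodic residuals. Synthetic data below stand in for Table 2 (jd, jmag).
rng = np.random.default_rng(0)
jd = np.sort(rng.uniform(2455500, 2460000, 300))
jmag = 5.91 + 0.09 * np.cos(4 * np.pi * (jd - 2455828.9) / 227.55) \
            + rng.normal(0, 0.02, jd.size)       # ellipsoidal-like signal

trend = np.polyval(np.polyfit(jd, jmag, 3), jd)  # third-order polynomial
freq, power = LombScargle(jd, jmag - trend).autopower(
    minimum_frequency=1 / 500, maximum_frequency=1 / 50)
print("best period = %.2f d" % (1 / freq[np.argmax(power)]))  # ~113.8 d = 0.5*P_orb
```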
So, we can state that the transition of the hot component to the active state in 2015 barely affected the IR brightness of the system \begin{table} \begin{tabular}{l|c|c|c|c} \hline Date & Exposure in \(B\)-channel & Exposure in \(R\)-channel & FWHM & phase \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 1: Log of the TDS observations for T CrB (this is also supported by the mean brightness constancy). The presence of two periods, one of which is equal to the orbital one and the other is \(0.5P_{orb}\), points out the ellipsoidal effect when the total brightness varies due to the changing aspect of the tidally distorted star with respect to us, and to the variations of temperature on the visible stellar surface. Shahbaz et al. (1997) investigated this effect for the T CrB system. When analysing the IR SED, we can neglect the input from the accretion disc and nebula compared to that from the red giant (Maslennikova et al., 2023). So, we have modelled the \(J\) and \(K\) light curves of T CrB considering the orbital motion of the tidally distorted cool component (Tjemkes et al., 1986). The near-IR red giant's SED is well fitted by a black body (Pickles, 1998). According to the data in Table 2, we have \(J-K=1.17\) for the mean colour and, after a small correction for interstellar reddening applied, this corresponds to the M3 III-M4 III spectral type and the effective temperature \(T_{eff}\approx 3500\) K. Hachisu and Kato (1999) modelled the optical light curve of T CrB during the first 300 days of the 1946 outburst. They explained the noticeable secondary light maximum by the reflection of the X-ray radiation from the hot component by the red giant and accretion disc. The modelling took into account the X-ray luminosity (30-100 A) simulated for the best-fitting model of T CrB (\(M_{hot}=1.377\) M\({}_{\odot}\), X = 0.7, Z = 0.02). With a distance of 1 kpc adopted, the simulated X-ray flux was lg \(F_{SSX}=-6.5\) at the moment of X-ray light maximum, and it decreased by not more than two orders of magnitude by the middle of the secondary maximum (if the X-ray light \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline JD & \(J\), & \(K\), & JD & \(J\), & \(K\), & JD & \(J\), & \(K\), & JD & \(J\), & \(K\), \\ (-2400000) & mag & mag & (-2400000) & mag & (-2400000) & mag & mag & (-2400000) & mag & mag \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 2: \(JK\) photometry for T CrB curve is linearly extrapolated to the moment of secondary maximum). The X-ray fluxes related to the IR light curve simulated here are much smaller. According to Kennea et al. (2009), the dereddened _SWIFT/XRT_ flux in a wider range of 0.3-10 keV was just \(3.8\times 10^{-11}\) erg/cm\({}^{2}\)s that is smaller by four orders of magnitude than the maximal theoretical flux in a much narrower range. Later, during the super-active stage, the X-ray flux from T CrB dropped by another order of magnitude (see Kuin et al. (2023) and references therein) and returned to previous values in 2023. Following the arguments given above, we neglect the reflection effect. When modelling the SED of the cool component, we assumed the surface averaged temperature to be \(T_{eff}\), we took into account the limb darkening from Claret (2000) and the gravitational darkening for stars with convective envelopes according to Lucy (1967) assuming the exponent \(\beta=0.08\). 
When modelling the IR light curves, the orbital inclination \(i\) is determined quite accurately, whereas the mass ratio \(q\) may vary over a rather large interval. To additionally constrain \(q\) and \(i\) we assumed that the red giant's mass is bigger than 0.6 M\({}_{\odot}\) and the hot component's mass must not exceed the Chandrasekhar limit. Besides, as follows from UV observations, T CrB is not an eclipsing binary (Selvelli et al., 1992). Taking into account the known mass function (Fekel et al., 2000) and assuming that the cool component is filling its Roche lobe, we can significantly constrain the values of \(q\) and \(i\). This is illustrated by Fig. 2. Based on these considerations and applying the Fisher criterion, we found the system parameters with a 90 per cent probability to lie in the intervals \(q\in[0.5,0.77]\), \(i\in[55^{\circ},63^{\circ}]\). The model that best fits the observations has the following parameters: the Roche-lobe filling factor \(\mu=1.0\), the mass ratio \(q=M_{cool}/M_{hot}=0.57\), the orbital inclination \(i=58^{\circ}\) (see Fig. 1). Using the mass function \(f(m)=0.3224\) (Fekel et al., 2000) we derived 1.30 M\({}_{\odot}\) for the hot component and 0.74 M\({}_{\odot}\) for the cool companion. Contrary to the IR spectral range where the radiation from the cool component dominates, a significant brightness variation is observed in the UV domain. T CrB was observed many times in 1978 -- 1990 with UV spectrographs of the IUE space observatory. These data have barely been studied. After the star entered the super-active state in 2015, the UV spectra began to be obtained on the UVOT telescope of the Swift space observatory. Fig. 3 shows the UV light curve for T CrB reconstructed from these spectra. For this purpose, we measured the mean flux in sectors free of emission lines.

Table 3: \(JHKLM\) photometry for T CrB (columns: JD \(-2400000\), \(J\), \(H\), \(K\), \(L\), \(M\) [mag]).

We used the region centered at \(\lambda 1850\)A for the short-wave range of IUE (SWP spectra) and the region centered at \(\lambda 2050\)A for the long-wave ranges of IUE (LWR and LWP spectra) and for the UVOT spectra. Analysing the IUE spectra obtained close in time, we found that the magnitudes calculated via \(mag=-2.5\lg(flux)\) are almost equal for the short and long-wave regions. Therefore, we combined all the data to create one light curve. The combined light curve demonstrates several UV brightenings, with the most prominent one occurring during the super-active stage of 2015 -- 2023. A smaller one was observed in 1987 but it did not show up in the visible range according to the AAVSO data. Deep narrow light minima were observed in 1979 and 1989. These seem not to correlate with the relative orbital location of the binary components since they occurred at different phases (about 0.8 and 0.5, respectively). The frequency analysis of the UV brightness variations shows that there is no peak at the orbital frequency in the power spectrum. A period of \(\sim 3.3\) yr can be distinguished. Earlier a close period was identified by Ilkiewicz et al. (2016) based on optical data analysis. T CrB is one of the first symbiotic stars with detected flickering (Walker, 1957). In order to study flickering, T CrB is usually observed in the \(U\) and \(B\) bands where the cool component's input is smaller than in redder wavelength regions.
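As a check on the masses quoted above, the mass-function relation can be solved directly; a minimal Python sketch, assuming \(f(m)\) refers to the cool component's radial-velocity orbit, so that \(f(m)=(M_{hot}\sin i)^{3}/(M_{hot}+M_{cool})^{2}\):

```python
import numpy as np
from scipy.optimize import brentq

# Component masses from f(m) = (M_hot*sin i)**3 / (M_hot + M_cool)**2 = 0.3224
# for the best-fitting q = M_cool/M_hot = 0.57 and i = 58 deg.
f_m, q, i = 0.3224, 0.57, np.radians(58.0)

def residual(M_hot):
    return (M_hot * np.sin(i)) ** 3 / (M_hot * (1 + q)) ** 2 - f_m

M_hot = brentq(residual, 0.1, 5.0)
print(f"M_hot = {M_hot:.2f} M_sun, M_cool = {q * M_hot:.2f} M_sun")
# -> M_hot ~ 1.30, M_cool ~ 0.74, as quoted in the text
```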
Before the system entered the super-active state in 2015, the flickering amplitude had been \(0\fm08-0\fm16\) in \(B\) (Sokoloski et al., 2001; Zamanov et al., 2010). In the super-active state the amplitude almost did not change and was \(0\fm08-0\fm13\) (Zamanov et al., 2016; Maslennikova et al., 2023; Shore et al., 2023). When the \(B\) brightness started to decrease at the end of April 2023, the flickering amplitude rose up to \(0\fm26\) (Minev et al., 2023). Our \(B\)-band follow-up photometry carried out on June 8, 2023 simultaneously with the follow-up spectroscopy showed that the flickering amplitude had decreased significantly to a value of \(0\fm07\). ## IV Spectroscopy Analysis Fig. 4 demonstrates the sequence of spectra for T CrB obtained in 2020 -- 2023 during the current active state decline. Those of the spectra listed in Table 1 that are almost identical to the presented ones are not shown in the figure. A significant fading of the emission lines and a variation in the slope of the continuum are evident. The emission lines except H\(\alpha\) are barely seen in the last two spectra of 2023, whereas the absorption bands and lines (Ca I \(\lambda\) 4227 and those of the "blue" TiO band) have become more prominent. Thus, for example, a Ca II H absorption is well seen while it was filled by the H\(\epsilon\) emission earlier. The super-active stage of T CrB may be considered to have ended in April 2023. In Fig. 4 the spectra are arbitrarily shifted vertically for clarity, but the flux in the red part of the spectrum, where the cool component dominates, may be considered nearly constant, varying only slightly due to the ellipsoidal effect. Then we can assume that in May -- June 2023 the short-wave flux decreased by a factor of 2 compared to 2022 -- early 2023. The radiation input from the nebula and accretion disc significantly dropped, whereas the [Ne III] \(\lambda\)3869 line did not change noticeably. It should be noted that the spectra are not arranged chronologically, and, in fact, periods of emission-line weakening alternate with those of strengthening (e.g., the line fluxes increased again in July 2023 after a period of fading in May -- June 2023).

Figure 1: The observed \(J\) light curve (dots) of T CrB folded on the period \(P=227.55\) d and model light curve (solid line) of a tidally distorted red giant (the model parameters are \(\mu=1\), \(q=0.57\), \(i=58^{\circ}\), \(T_{eff}=3500\) K, see text for details). The bottom panel shows the deviations of the observed magnitudes from the model curve.

Fig. 5 shows the variation of the H\(\alpha\) profile observed from early 2020 till July 2023. The maximum line flux was \(\sim 3\times 10^{-11}\) erg/cm\({}^{2}\)/s, the minimum one -- \(3.3\times 10^{-12}\) erg/cm\({}^{2}\)/s. Between 2023 June 8 and 21 (i.e. at phases from 0.84 to, at least, 0.94) the lines began to show a double-component profile. It is worth mentioning that no double-component line structures were present at the same phases during the previous cycle: none were observed on 09.12.2022 at \(\varphi=0.99\).

Figure 3: UV light curves for T CrB (in arbitrary magnitudes) reconstructed from the UV IUE (left panel) and UVOT (right panel) spectra. The crosses indicate individual estimates, the dots correspond to the 3-point running average. See text for details.

Figure 2: The \((q,i)\) diagram for a fixed value of the mass function \(f(m)=0.3224\) (Fekel et al., 2000).
The curves demonstrate the constraints corresponding to \(M_{cool}>0.6\,M_{\odot}\), \(M_{hot}<1.44\,M_{\odot}\) and the absence of eclipses. The grey area represents the space of possible values where the models that fit the observations fall with a 90 per cent probability (according to the Fisher criterion). The cross indicates the model with the smallest deviation from the observations: \(q=0.57\), \(i=58^{\circ}\). A double-component H\(\alpha\) profile can be fitted by adding an absorption with a radial velocity of \(-72\pm 3\) km/s. At the same time, the centre of emission corresponds to a radial velocity of \(-32\pm 5\) km/s, which is close to the \(\gamma\)-velocity of the system derived by Fekel et al. (2000) based on the absorption spectrum of the cool component. The radial velocity of the additional absorption feature is close to that of the central one observed by Stanishev et al. (2004) at similar phases (note that Stanishev et al. (2004) use a spectroscopic rather than photometric ephemeris, which is shifted by a quarter of the period). The Balmer lines may originate in different regions of the symbiotic system (accretion disc, nebula, hot spot). Fig. 6 shows the variation of the H\(\alpha\) flux with phase. In the super-active state the H\(\alpha\) flux was observed to somewhat decrease near the phase \(\varphi=1\). But we should mention that the H\(\alpha\) fluxes measured on 24.01.2023 and 27.01.2023 (at phase \(\varphi\sim 0.2\)) differ by a factor of about 2. No significant dependence on phase is seen in Fig. 6, and this gives evidence that the H\(\alpha\) flux is more sensitive to the activity state of the system than to the orbital motion. Usually the SED of symbiotic stars is well fitted by a three-component model consisting of a red giant, a hot component and a nebula. By comparing the red-wavelength spectra of T CrB with averaged spectra of red giant standards taken from Pickles (1998), we classify the spectral type of the cool component as M4 III; it barely changed during our observations. The temperature of the hot component is \(10^{5}\) K according to Selvelli et al. (1992). Fig. 7 shows a dereddened spectrum of T CrB obtained during the super-active stage. In the spectrum there are the bands of TiO and the Ca I \(\lambda\) 4227 line in absorption; the following features are well distinguished: the Balmer jump, the emission lines of the Balmer series, He I (\(\lambda\) 5875.6, 6678.1), He II (\(\lambda\) 4685.7), [Ne III] (\(\lambda\) 3869), [O III] (\(\lambda\) 4363, 4959, 5007), Mg II (\(\lambda\) 2796.3, 2803.5).

Figure 4: The observed spectra for T CrB (left panel) obtained on (from top to bottom) 22.03.2020, 05.01.2022, 27.01.2023, 26.02.2023, 09.03.2023, 24.06.2023, 26.05.2023, 14.07.2023. The fluxes are multiplied by an arbitrary constant for clarity. The right panel shows the spectrum of T CrB obtained on 05.01.2022 with the most prominent lines identified.
We used the equations from Tylenda (1977) to model the SED of the accretion disc: \[F_{disk}(\lambda)=\frac{2hc^{2}}{\lambda^{5}d^{2}}\sin i\int\limits_{R_{1}}^{R _{out}}\frac{2\pi R}{exp\left(\frac{hc}{\lambda kT(R)}\right)-1}\,\mathrm{d}R,\] \[T(R)=\left[\frac{3GM_{1}\dot{M}}{8\pi\sigma R^{3}}(1-(R_{1}/R)^{0.5})\right]^{ 0.25},\] where \(R_{1}\) is the inner disc radius, \(R_{out}\) -- the outer disc radius, \(d\) -- the distance to the system, \(i\) -- the angle between the normal to the disc surface and the line of sight, \(M_{1}\) -- the mass of the hot component, \(\dot{M}\) denotes the rate of accretion onto the hot component. We adopt the inner disc radius \(R_{1}=0.004\,\mathrm{R_{\odot}}\)(Pshirkov et al. (2020)) that is equal to the radius of a \(1.3\mathrm{M\odot}\) white dwarf (see above). This is the smallest possible value of \(R_{1}\) and it will grow with increase of the magnetic field strength. The orbital inclination for T CrB is set equal to \(i=58^{\circ}\) which we have derived through modelling the ellipsoidal effect. We have considered an outer disc radius of \(1\,\mathrm{R_{\odot}}\) (Selvelli et al. (1992), Maslennikova et al. (2023)). By fitting UV spectra obtained by IUE and UVOT/Swift and optical spectra with the given model we found the accretion rate (Fig. 8) which affects the temperature of the accretion disc. It had been smaller than \(6\times 10^{-8}\,\mathrm{M_{\odot}/yr}\) before the system transitioned to the super-active state. The accretion rate was \(4\times 10^{-8}\) -- \(2\times 10^{-7}\,\mathrm{M_{\odot}/yr}\) from 2015 till the middle of April 2023. When the super-active state was over, the accretion rate dropped to \(2.5\times 10^{-8}\,\mathrm{M_{\odot}/yr}\) that approximately coincides with the values derived from IUE spectra. We could vary outer disc radius \(R_{out}\) instead of \(\dot{M}\) to explain the UV spectral changes. But to do so we would have to decrease and then increase again the size of the disc by a factor of 6-7 on a time-scale of several tens of days. Note that the inner disc radius \(R_{1}\) barely affects the SED in the UV region. The volume emission measure for the nebula in Fig. 7 is \(1\times 10^{58}\) cm\({}^{-3}\) which is a little smaller than \(EM=4\times 10^{58}\) cm\({}^{-3}\) estimated on 25.08.2020 (Maslennikova et al., 2023). The change in Balmer jump demonstrated in Fig. 4 Figure 5: The H\(\alpha\) profile in the spectra obtained on (from top to bottom) 05.01.2022, 22.03.2020, 26.02.2023, 09.03.2023, 26.05.2023, 14.07.2023, 24.06.2023 and 04.07.2023. Figure 6: The variation of H\(\alpha\) flux as a function of orbital phase based on spectra obtained before the middle of April 2023 (circles) and later (triangles). can be well explained by the decreasing of emission measure since the end of April 2023. And indeed it was \(3\times 10^{57}\) cm\({}^{-3}\) on 04.07.2023. We carried out simultaneous follow-up observations of T CrB with the TDS spectrograph on the 2.5-m telescope and the CCD photometer on RC600 at the CMO SAI MSU on 08.06.2023. The instability of seeing and telescope driving led to the fact that the registered flux experienced changes up to a factor of 2 due to the varying light loss at the slit. As the absolute flux measurements are impossible under these circumstances we searched for flickering in terms of equivalent widths (EW) of emission lines. 
In order to derive EW of a line, it is necessary to estimate the continuum level near the line but this task is difficult enough because of numerous absorption lines. As we were interested to detect only the variation in EW we did the following. First, to remove the impact from light loss at the slit we normalized the spectra to the integral flux determined as \(f_{\lambda}=F_{\lambda}/\int\limits_{\lambda_{1}}^{\lambda_{2}}F_{\lambda} \mathrm{d}\lambda\) for a large enough integration range \(\lambda_{1}-\lambda_{2}\). The exact size and limits of the range are not important but it must locate near the given line as the light loss may slightly depend on wavelength. Second, to study the variation of spectral lines we considered the difference between an individual spectrum \(f_{\lambda}\) and a median one \(\overline{f}_{\lambda}\): \(D_{\lambda}=f_{\lambda}-\overline{f}_{\lambda}\). This difference appears as noise for invariable spectral features. Whereas the variable lines stand out as meaningful positive or negative residuals of the difference. To Figure 7: The dereddened spectrum of T CrB (blue line) and the modelled SED of the system (magenta line). The optical spectrum was obtained on 09.12.2022, the UV spectrum (Swift) — on 10.12.2022. The triangles show the \(JHKLM\) magnitudes for T CrB obtained on 26.05.2019. The grey line denotes a M4 III red giant, the red line stands for the combination of a hot component with \(T_{eff}=10^{5}\) K and a nebula with \(T_{e}=10^{4}\) K, the black line represents an accretion disc when \(i=56^{\circ}\), \(\dot{M}=4\times 10^{-8}\mathrm{M_{\odot}/yr}\). obtain a quantitative measure of these residuals we calculated a normalized integral over the line profile \(\int D_{\lambda}\mathrm{d}\lambda/\int(\overline{f}_{\lambda}-f_{c})\mathrm{d}\lambda\) which represents the relative variation in the EW of the line \(\delta\mathrm{EW}=\Delta\mathrm{EW}/\overline{\mathrm{EW}}\). \(f_{c}\) is the continuum level which needs to be determined only once and its uncertainty enters only \(\overline{\mathrm{EW}}\) but not the variable component which we aim to study. The error of \(\delta\mathrm{EW}\) was estimated as a standard deviation from zero of six similar integrals with the same size of integration range taken on the left and on the right from the line. In Fig. 9 we show the values of \(\delta\mathrm{EW}\) and their errors for the H\(\alpha\), H\(\beta\), He II \(\lambda\)4686, and [Ne III] \(\lambda\)3869 lines together with the \(B\) light curve. One of our aims of follow-up observations was a search for variations in radial velocity (and therefore we used a narrow slit) but we did not detect systematic variations in the characteristics of line profiles based on differential spectra \(D_{\lambda}\). It is seen from Fig. 9 that the equivalent widths of the Balmer lines demonstrate similar variability which differs from that in other panels. The correlation coefficient between H\(\alpha\) and H\(\beta\) is larger than 0.6 with no time lag between them. This is supported by the presence of a common feature in the curves observed near the moment 0.46. The [Ne III] \(\lambda\)3869 line flux did not show significant variations during our follow-up observations. This fact is in agreement with the result which we found earlier (see Maslennikova et al., 2023) and confirms the conclusion that the line originates in a much more extended region and due to its large size the impact of fast variability of the hot component (and/or accretion disc) is smoothed. 
The relative EW of the He II \(\lambda\)4686 line behaves differently -- it displays significant variations with an amplitude much larger than observational errors and a characteristic time-scale of \(\sim 25\) min. This time coincides with the estimate which we derived earlier (see Maslennikova et al., 2023). The \(B\) brightness of the system varies with approximately the same time-scale (see the upper panel in Fig. 9). But no correlation was found between the \(B\) and He II \(\lambda\)4686 data. Figure 8: Variation of accretion rate estimated from UV spectra. ## V Discussion Fig. 10 shows the \(B\)-band light curve for T CrB composed of AAVSO data from early 2005 till the middle of July, 2023 (Kloppenborg, 2023). We also plot a template of brightness variation created by Schaefer (2023) based on \(B\)-band photometry of the 1946 outburst. It is clearly seen that the current "high" state develops with a similar characteristic time-scale but smaller amplitude than the super-active state of 1938-1946. As the accretion disc provides the main input to the \(B\) brightness of the system we can assume that the pre-outburst accretion rate is now smaller that was previously. Since the middle of March, 2023, T CrB has been exhibiting a characteristic fading episode which is also present in the outburst template. So, the observed light curve matches the mean light curve of the 1946 eruption very well. If we assume that the upcoming outburst will follow the 1946 scenario, we can expect that the eruption will occur in the beginning of 2024. The UV data obtained by the Swift satellite also indicate the presence of growing activity (see Fig. 3). The mean UV flux at \(\sim 2000\)A measured during the active state is larger by a factor of 2-2.5 than the mean flux measured by the IUE satellite in 1979-1990. If we assume that the change in UV flux is due to the variation of accretion rate, then the latter has to vary from \(<10^{-9}\)M\({}_{\odot}\)/yr for the minimum observed fluxes to \(\sim 10^{-7}\)M\({}_{\odot}\)/yr for the maximum ones. Another factor that can affect the observed UV flux is the changing matter density in the line of sight which leads to the change in extinction. According to Kuin et al. (2023), the X-ray flux behaved in the opposite way -- it decreased by a factor of 4 during the transition to the active state in 2015 and then recovered to previous levels in 2023 when T CrB was completing the super-active stage. Our spectroscopic observations carried out in 2020-2023 demonstrate that the accretion disc contribution to the total continuum flux is decreasing. The emission-line spectrum of T CrB has changed, too: the He II \(\lambda 4686\) line is barely seen, the Balmer and He I lines have weakened significantly, the [Ne III] lines are weak but still present in the spectrum. In the case of T CrB the complex structure of continuum emission does not allow making flux measurements for weak lines. Therefore, the only quantity we managed to measure with high enough accuracy for all nights, is the H\(\alpha\)/H\(\beta\) flux ratio (Fig. 11). As it follows from Fig. 11, the Balmer decrement (BD) measured in the super-active state (when the accretion rate is high) is consistent Figure 9: The \(B\)-band light curve and the evolution of equivalent widths of the H\(\alpha\), H\(\beta\), He II \(\lambda 4686\), [Ne III] \(\lambda 3869\) lines derived from the follow-up observations carried out on 08.06.2023. with the standard case B recombination values (Osterbrock, 1989). 
The maximum value of BD \(\sim 8\) was measured on 21.06.2023. On the same date a minimum of UV continuum flux was observed. So, we can point out a negative correlation of BD and UV continuum flux which in turn depends on accretion rate (see Fig. 8). The negative BD - UV-flux relation was noted for active galactic nuclei and may be due to various reasons (Shapovalova et al. (2019), Wu et al. (2023)). One of the most frequently invoked explanations is the presence of additional extinction near the region where the lines originate. In the case of T CrB the colour excess needs to be \(E(B-V)\sim 1\) (assuming a normal extinction law). But we see no evidence for excessive reddening in the SED of the red giant and accretion disc. We suggest that in the case of T CrB some other mechanism might be responsible for the negative correlation. The study by Gaskell and Ferland (1984) showed that the H\(\alpha\)/H\(\beta\) ratio is highly dependent on the shape of the X-ray to UV continuum. As the relative contribution of the X-ray continuum to photoionization is increased, the amount of free electrons with energies high enough to provide collisional excitation of the third level from the ground state grows. A corresponding increase in X-ray flux accompanied by a decrease in UV and optical flux was also detected for T CrB. We carried out 2-h simultaneous photometric and spectroscopic follow-up observations of T CrB on 08.06.2023 during the decline of the super-active state of 2015-2023. We detected the EW variability of the H\(\alpha\) and H\(\beta\) lines, and of the He II \(\lambda\)4686 line, too. In contrast to our similar observations performed on 25.08.2020 and 06.09.2020 during the super-active stage (Maslennikova et al., 2023), we did not detect a time lag between the lines and between the lines and continuum (i.e., \(B\)-band photometry). Nevertheless, the new time-scales of variations in \(B\) and He II line are equal to those obtained earlier (\(\sim 25\) min). The amplitude of \(B\) brightness variations (\(\approx 0\)\(\fm 07\)) appears similar to that obtained in 2020, too. It contradicts the report by Minev et al. (2023) that the amplitude of flickering increased and recovered to the quiet-state Figure 10: The \(B\)-band light curve of T CrB composed of AAVSO data (dots). The black line represents the averaged light curve of the 1946 eruption of T CrB from Schaefer (2023), the crosses indicate the moments of Swift-UVOT observations. value by May, 2023 when the super-active state of T CrB was over. We performed a frequency analysis on new IR photometry obtained in 2011-2023 and refined the ephemeris of T CrB. The derived period \(P=227.55\) d is orbital and it has not changed since 1958 (Kraft, 1958). A comparison of mean brightness before and during the super-active state shows that different activity states of the hot component barely affect the shape of light curves. This fact allows us to model IR light curves not taking into account the reflection effect which has to be small also because of small X-ray fluxes observed (see the Section 'Photometry analysis'). Fig. 1 demonstrates that the observed light curve can be well fitted only with ellipsoidal effect. So, contrary to Shahbaz et al. (1997), we did not need to invoke an additional cool spot on the red giant's surface to explain a deep minimum near phase \(\varphi=0.5\). The parameters of the system derived from modelling are in good agreement with the values published previously by Stanishev et al. (2004) and Tatarnikova et al. 
(2013): the Roche lobe filling factor \(\mu=1.0\), the binary inclination \(i=58^{\circ}\), the mass ratio \(q=M_{cool}/M_{hot}=0.57\) (for the model with the least sum of squared residuals). It should be noted that the Roche lobe filling factor strongly affects the depth of the minima and is determined with high accuracy, so we can state that the cool component is filling its Roche lobe.

## VI Conclusion

1. We present the results of photometric and spectroscopic observations of T CrB obtained in a wide wavelength range from 2011 till 2023. The near-IR photometry reveals the ellipsoidal effect with a large amplitude \(\Delta J=0.17\) (see Fig. 1). Based on these data, we derived an ephemeris for the times of primary light minima (when the red giant is located between the hot component and the observer): \(JD_{min}=2455828.9+227.55\times E\), which is in agreement with the results obtained by Fekel et al. (2000) from the radial velocity analysis. The modelling of the near-IR light curves with the ellipsoidal effect taken into consideration allowed us to derive the parameters of the binary system: the Roche lobe filling factor for the cool component \(\mu=1.0\), the mass ratio \(q=M_{cool}/M_{hot}\in[0.5,0.77]\), the binary orbital inclination \(i\in[55^{\circ},63^{\circ}]\). The model that best fits the observations gives a mass of the hot component of 1.30 M\({}_{\odot}\) and a mass of the cool component of 0.74 M\({}_{\odot}\) if we adopt the mass function \(f(m)=0.3224\) (Fekel et al., 2000) for the cool component.

2. Based on spectroscopic data obtained in 2020-2023, we detected a considerable change in the H\(\alpha\)/H\(\beta\) flux ratio -- from \(\sim 3\) up to \(\sim 8\). We associate this variation with the observed change in the X-ray flux (Kuin et al., 2023) and switching between various mechanisms of excitation of hydrogen atoms. We show that the H\(\alpha\)/H\(\beta\) ratio depends on the rate of accretion onto the hot component (see Fig. 8 and Fig. 11).

3. Based on the follow-up observations of 08.06.2023, we detected fast variations in the \(B\)-band brightness with an amplitude of \(\sim 0.07\) mag and a characteristic time-scale of \(\sim 25\) min.

4. We modelled the UV spectra of T CrB with a hot component with \(T_{eff}=10^{5}\) K.
This enabled us to estimate the accretion rate: \(\dot{M}=4\times 10^{-8}\) M\({}_{\odot}\)/yr (assuming that the inner radius of the accretion disc is equal to the radius of the white dwarf, 0.004 R\({}_{\odot}\), and the outer one is 1 R\({}_{\odot}\)).

5. A comparison of the AAVSO light curve for the 2005-2023 period with the template of the 1946 outburst created by Schaefer (2023) makes it possible to predict the date of the upcoming classical nova type eruption of T CrB -- the beginning of 2024.

## Acknowledgements

This study was performed by using the equipment purchased through the funds of the Development Program of the Moscow State University. The work of A.V. Dodin (initial reduction and calibration of spectra), A.M. Tatarnikov (reduction and analysis of UV and IR observations) and N.A. Maslennikova (data reduction and analysis of high-speed photometry, spectral modelling) was supported by the Russian Science Foundation (grant 23-12-00092). We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. We thank the INES archive for providing access to the IUE data. This study has made use of the Swift data provided by the Space Science Data Center (ASI). The authors thank the anonymous referees for carefully reading the paper and providing very useful comments that have contributed to improving the quality of the manuscript.

## Conflict of interest

The authors declare that there is no conflict of interest.
2307.00487
On $θ$-Hurewicz and $α$-Hurewicz Topological spaces
In this paper, we introduce $\alpha$-Hurewicz $\&$ $\theta$-Hurewicz properties in a topological space $X$ and investigate their relationship with other selective covering properties. We show that for extremally disconnected semi-regular spaces, the properties Hurewicz, semi-Hurewicz, $\alpha$-Hurewicz, $\theta$-Hurewicz, almost-Hurewicz, nearly Hurewicz and mildly Hurewicz are equivalent. We also prove that for an extremally disconnected space X, every finite power of X has the $\theta$-Hurewicz property if and only if X has the selection principle $U_{fin}(\theta$-$\Omega, \theta$-$\Omega)$. The preservation under several types of mappings of the $\alpha$-Hurewicz and $\theta$-Hurewicz properties is also discussed. Also, we show that if $X$ is a mildly Hurewicz subspace of $\omega^\omega$, then $X$ is bounded.
Gaurav Kumar, Sumit Mittal, Brij K. Tyagi
2023-07-02T06:18:33Z
http://arxiv.org/abs/2307.00487v1
# On \(\theta\)-Hurewicz and \(\alpha\)-Hurewicz topological spaces

###### Abstract.

In this paper, we introduce \(\alpha\)-Hurewicz & \(\theta\)-Hurewicz properties in a topological space \(X\) and investigate their relationship with other selective covering properties. We show that for extremally disconnected semi-regular spaces, the properties Hurewicz, semi-Hurewicz, \(\alpha\)-Hurewicz, \(\theta\)-Hurewicz, almost-Hurewicz, nearly Hurewicz and mildly Hurewicz are equivalent. We also prove that for an extremally disconnected space X, every finite power of X has the \(\theta\)-Hurewicz property if and only if X has the selection principle \(U_{fin}(\theta\text{-}\Omega,\theta\text{-}\Omega)\). The preservation under several types of mappings of the \(\alpha\)-Hurewicz and \(\theta\)-Hurewicz properties is also discussed. Also, we show that if \(X\) is a mildly Hurewicz subspace of \(\omega^{\omega}\), then \(X\) is bounded.

Key words and phrases: Selection principles; Hurewicz; \(\alpha\)-Hurewicz; \(\theta\)-Hurewicz; \(\theta\)-continuity; extremally disconnected space. 2020 Mathematics Subject Classification: Primary 54D20; 54C08; Secondary 54A10; 54D10. One author acknowledges the fellowship grant of the University Grants Commission, India; another acknowledges the fellowship grant of the Council of Scientific & Industrial Research, India.

## 1. Introduction

A cover \(\mathcal{C}\) of a space \(X\) is called a \(\gamma\)-cover (resp., \(c\)-\(\gamma\)-cover, \(\theta\)-\(\gamma\)-cover, \(\alpha\)-\(\gamma\)-cover, \(s\)-\(\gamma\)-cover) if each element of \(\mathcal{C}\) is open (resp., clopen, \(\theta\)-open, \(\alpha\)-open, semi-open) and for
each \(x\in X\), the set \(\{U\in\mathcal{C}:x\not\in U\}\) is finite. Let \(\Gamma\), \(c\)-\(\Gamma\), \(\theta\)-\(\Gamma\), \(\alpha\)-\(\Gamma\), \(s\)-\(\Gamma\) denote the collections of all \(\gamma\)-, \(c\)-\(\gamma\)-, \(\theta\)-\(\gamma\)-, \(\alpha\)-\(\gamma\)-, \(s\)-\(\gamma\)-covers of \(X\), respectively, and let \(\mathcal{O}\), \(\mathcal{CO}\), \(\theta\)-\(\mathcal{O}\), \(\alpha\)-\(\mathcal{O}\), \(s\)-\(\mathcal{O}\) denote the collections of all open, clopen, \(\theta\)-open, \(\alpha\)-open, semi-open covers of a space \(X\), respectively. Then the Hurewicz, mildly Hurewicz, \(\theta\)-Hurewicz, \(\alpha\)-Hurewicz and semi-Hurewicz properties of \(X\) are equivalent to the selection principles \(U_{fin}(\mathcal{O},\Gamma)\), \(U_{fin}(\mathcal{CO},c\text{-}\Gamma)\), \(U_{fin}(\theta\text{-}\mathcal{O},\theta\text{-}\Gamma)\), \(U_{fin}(\alpha\text{-}\mathcal{O},\alpha\text{-}\Gamma)\), \(U_{fin}(s\text{-}\mathcal{O},s\text{-}\Gamma)\), respectively. In this paper we study the \(\alpha\)-Hurewicz and \(\theta\)-Hurewicz properties in detail. Throughout the paper, a space \(X\) (or \((X,\tau)\)) means a topological space, and \(|X|\) denotes the cardinality of \(X\). For a subset \(A\) of a space \(X\), \(Int(A)\) and \(\overline{A}\) (or \(Cl(A)\)) denote the interior and closure of \(A\), respectively. Further, \(\omega\) and \(\omega_{1}\) denote the first infinite and the first uncountable cardinal, respectively.

## 2. The \(\theta\)-Hurewicz Spaces and \(\alpha\)-Hurewicz Spaces

First, recall that the families of all \(\theta\)-open (resp., \(\alpha\)-open) sets of a space \((X,\tau)\) form topologies on \(X\), denoted by \(\tau_{\theta}\) [20] (resp., \(\tau_{\alpha}\) [25]). Further, \(\tau_{\theta}\subseteq\tau\subseteq\tau_{\alpha}\). The role of \(\theta\)-open and \(\alpha\)-open sets has been investigated in many papers (see [22, 18]). Clearly, a space \((X,\tau)\) is \(\theta\)-Hurewicz (resp., \(\alpha\)-Hurewicz) if and only if the space \((X,\tau_{\theta})\) (resp., \((X,\tau_{\alpha})\)) is Hurewicz.

**Theorem 2.1**.: Every countable space \(X\) has the \(\alpha\)-Hurewicz property.

Proof.: Let \(X=\{x_{1},x_{2},\ldots,x_{n},\ldots\}\) be a countable space. Let \(\langle\mathcal{A}_{k}\rangle_{k\in\mathbb{N}}\) be a sequence of \(\alpha\)-open covers of \(X\). For each \(k\in\mathbb{N}\), consider \(\mathcal{B}_{k}=\{A_{k,1},A_{k,2},\ldots,A_{k,k}\}\), where for each \(i\in\{1,2,\ldots,k\}\), \(A_{k,i}\in\mathcal{A}_{k}\) is \(\alpha\)-open and \(x_{i}\in A_{k,i}\). Then \(\mathcal{B}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), and for each \(x\in X\), \(x\in\bigcup\mathcal{B}_{k}\) for all but finitely many \(k\). Similarly, we can prove that every countable space has the \(\theta\)-Hurewicz property.

**Example 2.2**.: 1. Every \(\alpha\)-compact space is \(\alpha\)-Hurewicz, but the converse is not true. The real line \(\mathbb{R}\) with the cocountable topology is \(\alpha\)-Hurewicz, being semi-Hurewicz [15], but it is not \(\alpha\)-compact, since every \(\alpha\)-compact space is compact. 2. Every \(\theta\)-compact space is \(\theta\)-Hurewicz, but the converse is not true. Let \(X\) be a countably infinite discrete space. Then the space \(X\) has the \(\theta\)-Hurewicz property, but the \(\theta\)-open cover \(\{\{x\}:x\in X\}\) has no finite subcover. 3. The real line \(\mathbb{R}\) is a Hurewicz space but it is not semi-Hurewicz [15]. 4. The Sorgenfrey line \(S\) does not have the \(\alpha\)-Hurewicz property because it does not have the Hurewicz property.
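Since all of the Hurewicz-type properties above are instances of one scheme, it is convenient to record the general Scheepers selection principle explicitly; the following is the standard formulation, stated here for the reader's convenience. For families \(\mathcal{A}\) and \(\mathcal{B}\) of covers of a space \(X\):

\[U_{fin}(\mathcal{A},\mathcal{B}):\quad\text{for each sequence }(\mathcal{U}_{k}:k\in\mathbb{N})\text{ of members of }\mathcal{A},\text{ there exist finite sets }\mathcal{V}_{k}\subseteq\mathcal{U}_{k}\text{ such that }\{\cup\mathcal{V}_{k}:k\in\mathbb{N}\}\in\mathcal{B}.\]

In particular, when \(\mathcal{B}\) is one of the families \(\Gamma\), \(c\)-\(\Gamma\), \(\theta\)-\(\Gamma\), \(\alpha\)-\(\Gamma\), \(s\)-\(\Gamma\), the conclusion says precisely that each \(x\in X\) belongs to \(\bigcup\mathcal{V}_{k}\) for all but finitely many \(k\), which is the formulation used in the proofs below. We also recall the standard convention that a \(\theta\)-\(\omega\)-cover is a \(\theta\)-open cover in which every finite subset of \(X\) is contained in some member (with \(X\) itself not a member), and \(\theta\)-\(\Omega\) denotes the family of all \(\theta\)-\(\omega\)-covers.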
**Example 2.3**.: Let \(A\) be a finite subset of an uncountable set \(X.\) Then \(\tau=\{\phi,A,X\}\) is a topology on \(X.\) Clearly, the space \((X,\tau)\) is Hurewicz. Moreover, the sets of the form \(A\cup\{p\},\) for \(p\in X\setminus A,\) are \(\alpha\)-open in \((X,\tau).\) For each \(k\in\mathbb{N},\) put \(\mathcal{A}_{k}=\{A\cup\{p\}:p\in X\setminus A\}.\) Then the sequence \((\mathcal{A}_{k}:k\in\mathbb{N})\) witnesses that \((X,\tau)\) is not an \(\alpha\)-Hurewicz space, because the cover \(\mathcal{A}_{k}\) does not have a countable subcover.

**Example 2.4**.: Let \(p\) be a fixed point of an uncountable set \(X.\) Then \(\tau_{p}=\{O\subseteq X:p\in O\}\) together with the empty set is an uncountable particular point topology on \(X.\) It is shown in [29] that the space \(X\) is not Lindelof, so \(X\) cannot be Hurewicz, since every Hurewicz space is Lindelof. Note that \(X\) is the only closed set containing \(p.\) Then \(Cl(A)=X\) for each \(A\neq\emptyset,\) \(A\in\tau_{p}.\) Hence \(\phi\) and \(X\) are the only \(\theta\)-open sets. Therefore \(X\) is a \(\theta\)-Hurewicz space.

A space \(X\) is said to be nearly Hurewicz [2] (resp., almost Hurewicz [28]) if for each sequence \((\mathcal{A}_{k}:k\in\mathbb{N})\) of open covers of \(X\), there exists a sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\), where for each \(k\in\mathbb{N},\) \(\mathcal{B}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), such that for each \(x\in X,\) \(x\in\cup\{Int(Cl(B)):B\in\mathcal{B}_{k}\}\) (resp., \(x\in\cup\{Cl(B):B\in\mathcal{B}_{k}\}\)) for all but finitely many \(k.\) Evidently, the definitions yield the following implications: Hurewicz \(\Rightarrow\) nearly Hurewicz \(\Rightarrow\) almost Hurewicz. The following theorem describes a relation between almost Hurewicz and \(\theta\)-Hurewicz spaces.

**Theorem 2.5**.: Every almost Hurewicz space is \(\theta\)-Hurewicz.

Proof.: Let \(X\) be an almost Hurewicz space and \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a sequence of \(\theta\)-open covers of \(X.\) Then for each \(k\in\mathbb{N}\) and each \(x\in X\) there is an open set \(B_{x,k}\) such that \(x\in B_{x,k}\subset Cl(B_{x,k})\subset A_{k}\) for some \(A_{k}\in\mathcal{A}_{k}.\) For each \(k,\) put \(\mathcal{B}_{k}=\{B_{x,k}:x\in X\}.\) Then each \(\mathcal{B}_{k}\) is an open cover of \(X.\) Since \(X\) is almost Hurewicz, for each \(k\in\mathbb{N}\) there is a finite subset \(\mathcal{B}_{k}^{\prime}\) of \(\mathcal{B}_{k}\) such that for each \(x\in X,\) \(x\in\cup\{Cl(B^{\prime}):B^{\prime}\in\mathcal{B}_{k}^{\prime}\}\) for all but finitely many \(k.\) For each \(B^{\prime}\in\mathcal{B}_{k}^{\prime}\) there is an \(A_{k,B^{\prime}}\in\mathcal{A}_{k}\) such that \(Cl(B^{\prime})\subset A_{k,B^{\prime}}.\) Let \(\mathcal{A}_{k}^{\prime}=\{A_{k,B^{\prime}}\in\mathcal{A}_{k}:B^{\prime}\in\mathcal{B}_{k}^{\prime}\}.\) Then the sequence \((\mathcal{A}_{k}^{\prime}:k\in\mathbb{N})\) witnesses that \(X\) is \(\theta\)-Hurewicz.

Next we determine a class of spaces in which the above variants of the Hurewicz property are equivalent. Recall that a space \(X\) is called extremally disconnected if the closure of every open set is open.

**Theorem 2.6**.: For an extremally disconnected semi-regular space \(X,\) the following statements are equivalent: 1. \(X\) is semi-Hurewicz; 2. \(X\) is \(\alpha\)-Hurewicz; 3. \(X\) is Hurewicz; 4. \(X\) is nearly Hurewicz; 5. \(X\) is almost Hurewicz; 6. \(X\) is \(\theta\)-Hurewicz; 7. \(X\) is mildly Hurewicz.
Proof.: The implications \((1)\Rightarrow(2)\Rightarrow(3)\Rightarrow(4)\Rightarrow(5)\Rightarrow(6)\Rightarrow(7)\) have already been established. For \((7)\Rightarrow(1)\), let \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a sequence of semi-open covers of \(X\). Then by Lemma 1.1, for each \(x\in X\) we have a \(B_{k,x}\in SO(X)\) such that \(x\in B_{k,x}\subset sCl(B_{k,x})\subset A\) for some \(A\in\mathcal{A}_{k}\). For \(k\in\mathbb{N}\), put \(\mathcal{B}_{k}=\{B_{k,x}:x\in X\}\). Then \((\mathcal{B}_{k}:k\in\mathbb{N})\) is a sequence of semi-open covers of \(X\). As \(X\) is extremally disconnected, by [14, Proposition 4.1] we have \(B\subset Int(Cl(B))\) for each \(B\in SO(X)\), and \(Cl(Int(Cl(B)))\) is clopen in \(X\). Put \(\mathcal{C}_{k}=\{Cl(Int(Cl(B))):B\in\mathcal{B}_{k}\}\). Then \((\mathcal{C}_{k}:k\in\mathbb{N})\) is a sequence of clopen covers of \(X\). As \(X\) is mildly Hurewicz, there exists a sequence \((\mathcal{C}^{\prime}_{k}:k\in\mathbb{N})\), where for each \(k\), \(\mathcal{C}^{\prime}_{k}\) is a finite subset of \(\mathcal{C}_{k}\), such that for each \(x\in X\), \(x\in\cup\mathcal{C}^{\prime}_{k}\) for all but finitely many \(k\). Observe that for each subset \(A\) of \(X\), \(Int(Cl(A))\subset sCl(A)\), and from the extremal disconnectedness of \(X\), \(sCl(A)=Cl(A)\) for each \(A\in SO(X)\). From the above construction, for each \(C^{\prime}\in\mathcal{C}^{\prime}_{k}\) we have an \(A_{C^{\prime}}\in\mathcal{A}_{k}\) such that \(C^{\prime}\subset A_{C^{\prime}}\). Then for \(k\in\mathbb{N}\), let \(\mathcal{A}^{\prime}_{k}=\{A_{C^{\prime}}:C^{\prime}\in\mathcal{C}^{\prime}_{k}\}\). Hence the sequence \((\mathcal{A}^{\prime}_{k}:k\in\mathbb{N})\) witnesses that \(X\) is semi-Hurewicz.

In the following examples, we show that extremal disconnectedness and semi-regularity are necessary conditions in Theorem 2.6.

**Example 2.7**.: Consider the real line \(\mathbb{R}\) with the usual topology. Then \(\mathbb{R}\) is a semi-regular mildly Hurewicz space, but it is not an extremally disconnected space. On the other hand, \(\mathbb{R}\) is not semi-Hurewicz [15].

**Example 2.8**.: Let \(X\) be an uncountable cofinite space, that is, an uncountable set \(X\) with the cofinite topology. Then \(X\) is an extremally disconnected mildly Hurewicz space. But \(X\) does not have the semi-Hurewicz property, since the semi-open cover \(\{X\setminus\{x\}:x\in X\}\) has no countable subcover.

In an extremally disconnected space, zero-dimensionality and semi-regularity are equivalent ([26], Theorem 6.4). We have the following corollary:

**Corollary 2.9**.: For an extremally disconnected, zero-dimensional space \(X,\) the following statements are equivalent: 1. \(X\) is semi-Hurewicz; 2. \(X\) is \(\alpha\)-Hurewicz; 3. \(X\) is Hurewicz; 4. \(X\) is nearly Hurewicz; 5. \(X\) is almost Hurewicz; 6. \(X\) is \(\theta\)-Hurewicz; 7. \(X\) is mildly Hurewicz.

A space \(X\) is called \(S\)-paracompact [1] if every open cover of \(X\) has a locally finite semi-open refinement. An \(S\)-paracompact Hausdorff space \(X\) is semi-regular [1]. Hence the properties mentioned in Theorem 2.6 are also equivalent for extremally disconnected \(S\)-paracompact Hausdorff spaces. It is known that the Stone-\(\breve{C}\)ech compactification of a discrete space is an extremally disconnected compact Hausdorff space.
Thus the class of Stone-\(\breve{C}\)ech compactifications of discrete spaces is contained in the class of extremally disconnected \(S\)-paracompact Hausdorff spaces, which in turn is a subclass of the class of extremally disconnected semi-regular spaces.

**Theorem 2.10**.: For a space \(X,\) the following statements are equivalent: 1. \(X\) has the \(\theta\)-Hurewicz property; 2. \(X\) satisfies \(U_{fin}(\theta\)-\(\Omega,\theta\)-\(\mathcal{O})\).

Proof.: \(1\Rightarrow 2.\) This follows from the facts that each \(\theta\)-\(\omega\)-cover of \(X\) is a \(\theta\)-open cover of \(X\) and each \(\theta\)-\(\gamma\)-cover of \(X\) is a \(\theta\)-open cover of \(X\). \(2\Rightarrow 1.\) Let \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a sequence of \(\theta\)-open covers of \(X.\) Let \(\mathbb{N}=Y_{1}\cup Y_{2}\cup\ldots\cup Y_{m}\cup\ldots\) be a partition of \(\mathbb{N}\) into countably many pairwise disjoint infinite subsets. For each \(k\), let \(\mathcal{B}_{k}\) contain all sets of the form \(A_{k_{1}}\cup A_{k_{2}}\cup\ldots\cup A_{k_{n}}\), \(k_{1}\leq\ldots\leq k_{n}\), \(k_{i}\in Y_{k}\), \(A_{k_{i}}\in\mathcal{A}_{k}\), \(i\leq n\), \(n\in\mathbb{N}.\) Then for each \(k\), \(\mathcal{B}_{k}\) is a \(\theta\)-\(\omega\)-cover of \(X.\) Applying \(U_{fin}(\theta\)-\(\Omega,\theta\)-\(\mathcal{O})\) to the sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\), there is a sequence \((\mathcal{C}_{k}:k\in\mathbb{N})\), where for each \(k\), \(\mathcal{C}_{k}\) is a finite subset of \(\mathcal{B}_{k}\), such that for each \(x\in X\), \(x\in\cup\mathcal{C}_{k}\) for all but finitely many \(k.\) Assume that \(\mathcal{C}_{k}=\{C_{k}^{1},\ldots,C_{k}^{m_{k}}\}\); then by the above construction, \(C_{k}^{i}=A_{k}^{k_{i_{1}}}\cup\ldots\cup A_{k}^{k_{i_{n}}}\) for each \(C_{k}^{i}\in\mathcal{C}_{k}\). Thus for each \(k\) we have a finite subset \(\mathcal{A}_{k}^{\prime}\) of \(\mathcal{A}_{k}\) such that \(\cup\mathcal{C}_{k}\subseteq\cup\mathcal{A}_{k}^{\prime}.\) Hence \(X\) has the \(\theta\)-Hurewicz property.

Along similar lines, one can prove that a space \(X\) has the \(\alpha\)-Hurewicz property if and only if \(X\) satisfies the selection principle \(U_{fin}(\alpha\)-\(\Omega,\alpha\)-\(\mathcal{O})\).

**Theorem 2.11**.: If each finite power of a space \(X\) is \(\theta\)-Hurewicz, then \(X\) satisfies \(U_{fin}(\theta\)-\(\Omega,\theta\)-\(\Omega)\).

Proof.: Let \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a sequence of \(\theta\)-\(\omega\)-covers of \(X\). For each \(l\in\mathbb{N}\), we put \(\mathcal{B}_{k}=\{A^{l}:A\in\mathcal{A}_{k}\}\). For each \(l\in\mathbb{N}\), applying the \(\theta\)-Hurewicz property to the sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\) of \(\theta\)-open covers of \(X^{l}\), for each \(k\in\mathbb{N}\) we obtain finite subfamilies \(\mathcal{C}_{k}\) of \(\mathcal{B}_{k}\) such that for each \(x\in X^{l}\), \(x\in\cup\mathcal{C}_{k}\) for all but finitely many \(k\). For \(k\in\mathbb{N}\), let \(\mathcal{A}^{\prime}_{k}=\{A\in\mathcal{A}_{k}:A^{l}\in\mathcal{C}_{k}\}\). Then the sequence \((\mathcal{A}^{\prime}_{k}:k\in\mathbb{N})\) witnesses that \(X\) satisfies \(U_{fin}(\theta\)-\(\Omega,\theta\)-\(\Omega)\).

In a similar way, one can prove that if each finite power of a space \(X\) is \(\alpha\)-Hurewicz, then \(X\) satisfies \(U_{fin}(\alpha\)-\(\Omega,\alpha\)-\(\Omega)\).

**Lemma 2.12**.: [11] Let \(X\) be an extremally disconnected space. Then for each \(\theta\)-\(\omega\)-cover \(\mathcal{A}\) of \(X^{k}\), \(k\in\mathbb{N}\), there exists a \(\theta\)-\(\omega\)-cover \(\mathcal{B}\) of \(X\) such that the \(\theta\)-open cover \(\{B^{k}:B\in\mathcal{B}\}\) of \(X^{k}\) refines \(\mathcal{A}\).
**Theorem 2.13**.: Let \(X\) be an extremally disconnected space. If \(X\) has the property \(U_{fin}(\theta\)-\(\Omega,\theta\)-\(\Omega)\), then for each \(n\in\mathbb{N}\), \(X^{n}\) also has this property.

Proof.: Let \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a sequence of \(\theta\)-\(\omega\)-covers of \(X^{n}\). Then by Lemma 2.12, for each \(k\) there exists a \(\theta\)-\(\omega\)-cover \(\mathcal{B}_{k}\) of \(X\) such that \(\{B^{n}:B\in\mathcal{B}_{k}\}\) refines \(\mathcal{A}_{k}\). Applying the property \(U_{fin}(\theta\)-\(\Omega,\theta\)-\(\Omega)\) of \(X\) to the sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\), for each \(k\in\mathbb{N}\) there exists a finite subset \(\mathcal{C}_{k}\) of \(\mathcal{B}_{k}\) such that \(\{\cup\mathcal{C}_{k}:k\in\mathbb{N}\}\) forms a \(\theta\)-\(\omega\)-cover of \(X\). Since \(\{B^{n}:B\in\mathcal{B}_{k}\}\) refines \(\mathcal{A}_{k}\), for each \(C\in\mathcal{C}_{k}\) we have \(A_{C}\in\mathcal{A}_{k}\) such that \(C^{n}\subset A_{C}\). For \(k\in\mathbb{N}\), let \(\mathcal{A}^{\prime}_{k}=\{A_{C}\in\mathcal{A}_{k}:C\in\mathcal{C}_{k}\}\). Thus the sequence \((\mathcal{A}^{\prime}_{k}:k\in\mathbb{N})\) witnesses that \(X^{n}\) has the property \(U_{fin}(\theta\)-\(\Omega,\theta\)-\(\Omega)\).

Thus from Theorem 2.10, Theorem 2.11 and Theorem 2.13, we obtain the following corollary.

**Corollary 2.14**.: Let \(X\) be an extremally disconnected space. Then every finite power of \(X\) is \(\theta\)-Hurewicz if and only if \(X\) satisfies \(U_{fin}(\theta\)-\(\Omega,\theta\)-\(\Omega)\).

## 3. Preservation in subspaces and mappings

In this section, we analyse the properties of \(\alpha\)-Hurewicz and \(\theta\)-Hurewicz spaces. We investigate the behaviour of these properties under subspaces and various types of mappings. In the following example we show that \(\alpha\)-Hurewicz is not a hereditary property.

**Example 3.1**.: Let \(x_{0}\) be a fixed point of an uncountable set \(X.\) Then the family \(\tau=\{A\subset X:x_{0}\notin A\}\cup\{A\subset X:X\setminus A\text{ is finite}\}\) of subsets of \(X\) forms a topology on \(X.\) It is easy to prove that the space \((X,\tau)\) is \(\alpha\)-Hurewicz. Consider the subspace \(Y=X\setminus\{x_{0}\}\) of \((X,\tau).\) Then each one-point set \(\{x\},\) \(x\in Y,\) is \(\alpha\)-open in \(Y.\) Hence the \(\alpha\)-open cover \(\mathcal{A}=\{\{x\}:x\in Y\}\) of \(Y\) has no countable subcover, and the subspace \(Y\) of the space \((X,\tau)\) is not \(\alpha\)-Hurewicz. Note that \(Y\) is also an open (\(\alpha\)-open) subset of \((X,\tau).\) Hence an open (\(\alpha\)-open) subspace of an \(\alpha\)-Hurewicz space need not be \(\alpha\)-Hurewicz.

**Remark:** Let \(X\) be the space considered in Example 3.1. It is also easy to prove that \(X\) is a \(\theta\)-Hurewicz space. On the other hand, the open subspace \(Y=X\setminus\{x_{0}\}\) of \(X\) is not \(\theta\)-Hurewicz. This means that the \(\theta\)-Hurewicz property is also not hereditary. However, the \(\alpha\)-Hurewicz & \(\theta\)-Hurewicz properties are preserved under clopen subspaces, as shown below in Proposition 3.2.

**Proposition 3.2**.: A clopen subspace of an \(\alpha\)-Hurewicz (\(\theta\)-Hurewicz) space is \(\alpha\)-Hurewicz (\(\theta\)-Hurewicz).
Proof.: Let \(Y\) be a clopen subspace of an \(\alpha\)-Hurewicz space \(X.\) Let \((\mathcal{A}_{k}:{k\in\mathbb{N}})\) be a sequence of \(\alpha\)-open covers of \(Y.\) Then \((\mathcal{B}_{k}:{k\in\mathbb{N}})\) is a sequence of \(\alpha\)-open covers of \(X\), where \(\mathcal{B}_{k}=\mathcal{A}_{k}\cup\{X\setminus Y\}\) for each \(k.\) Since \(X\) is \(\alpha\)-Hurewicz, there is a sequence \((\mathcal{B}^{\prime}_{k}:{k\in\mathbb{N}})\), where \(\mathcal{B}^{\prime}_{k}\) is a finite subset of \(\mathcal{B}_{k}\), such that for each \(x\in X\), \(x\in\bigcup\mathcal{B}^{\prime}_{k}\) for all but finitely many \(k.\) We observe that for each \(y\in Y\), \(y\in\bigcup(\mathcal{B}^{\prime}_{k}\setminus\{X\setminus Y\})\) for all but finitely many \(k.\) This means that \(Y\) is an \(\alpha\)-Hurewicz space. Similarly, we can prove the statement for \(\theta\)-Hurewicz spaces.

**Proposition 3.3**.: Let \(Y\) be a subspace of a space \(X.\) If \(Y\) is \(\theta\)-Hurewicz, then for each sequence \((\mathcal{A}_{k}:{k\in\mathbb{N}})\) of covers of \(Y\) by \(\theta\)-open sets of \(X\), there is a sequence \((\mathcal{B}_{k}:{k\in\mathbb{N}})\), where for each \(k\), \(\mathcal{B}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), such that for each \(y\in Y\), \(y\in\cup\mathcal{B}_{k}\) for all except finitely many \(k\).

Proof.: Let \(Y\) be a \(\theta\)-Hurewicz subspace of a space \(X,\) and let \((\mathcal{A}_{k}:{k\in\mathbb{N}})\) be a sequence of covers of \(Y\) by \(\theta\)-open sets of \(X.\) Put \(\mathcal{B}_{k}=\{Y\cap A:A\in\mathcal{A}_{k}\}.\) Then \((\mathcal{B}_{k}:{k\in\mathbb{N}})\) is a sequence of \(\theta\)-open covers of \(Y\); since \(Y\) is \(\theta\)-Hurewicz, for each \(k\) there exists a finite subset \(\mathcal{C}_{k}\) of \(\mathcal{B}_{k}\) such that for each \(y\in Y\), \(y\in\cup\mathcal{C}_{k}\) for all but finitely many \(k.\) Let \(\mathcal{A}^{\prime}_{k}=\{A\in\mathcal{A}_{k}:A\cap Y\in\mathcal{C}_{k}\}.\) Then the sequence \((\mathcal{A}^{\prime}_{k}:{k\in\mathbb{N}})\) witnesses our requirement.

In the following example we show that the converse of the above theorem does not hold.

**Example 3.4**.: Let \(U=\{u_{\alpha}:\alpha<\omega_{1}\}\), \(V=\{v_{i}:i\in\omega\}\) and \(W=\{\langle u_{\alpha},v_{i}\rangle:\alpha<\omega_{1},i\in\omega\}.\) Let \(X=W\cup U\cup\{x^{\prime}\},\) \(x^{\prime}\not\in W\cup U.\) Topologize \(X\) as follows: for \(u_{\alpha}\in U,\) \(\alpha<\omega_{1},\) the basic neighbourhoods take the form \(A_{u_{\alpha}}(i)=\{u_{\alpha}\}\cup\{\langle u_{\alpha},v_{j}\rangle:j\geq i,i\in\omega\};\) the basic neighbourhoods of \(x^{\prime}\) take the form \(A_{x^{\prime}}(\alpha)=\{x^{\prime}\}\cup\bigcup\{\langle u_{\beta},v_{i}\rangle:\beta>\alpha,i\in\omega\},\) \(\alpha<\omega_{1};\) and each point of \(W\) is isolated. Consider the subspace \(Y=\{u_{\alpha}:\alpha<\omega_{1}\}\cup\{x^{\prime}\}\) of the space \(X.\) Observe that each singleton set \(\{y\},\) \(y\in Y,\) is \(\theta\)-open in \(Y.\) Thus the family \(\{\{y\}:y\in Y\}\) is an uncountable \(\theta\)-open cover of \(Y\) which has no countable subcover. Hence \(Y\) is not \(\theta\)-Hurewicz.
Next, we show that for each sequence \((\mathcal{A}_{k}:k\in\mathbb{N})\) of covers of \(Y\) by \(\theta\)-open sets of \(X\), there exists a sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\), where for each \(k\), \(\mathcal{B}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), such that for each \(y\in Y\), \(y\in\cup\mathcal{B}_{k}\) for all but finitely many \(k.\) Let \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a family of \(\theta\)-open sets of \(X\) such that for each \(k\in\mathbb{N}\), \(Y\subseteq\cup\mathcal{A}_{k}\). Then for each \(k\in\mathbb{N}\), there is an open set \(B_{k}\) and \(A_{k}\in\mathcal{A}_{k}\) such that \(x^{\prime}\in B_{k}\subset\overline{B_{k}}\subset A_{k}\). From the construction of the topology on \(X\), for each \(k\) there exists a \(\beta_{k}<\omega_{1}\) such that \(A_{x^{\prime}}(\beta_{k})\subseteq B_{k}\) and \(\overline{A_{x^{\prime}}(\beta_{k})}\subseteq\overline{B_{k}}.\) Thus for each \(k\), \(\{u_{\alpha}:\alpha>\beta_{k}\}\cup\{x^{\prime}\}\subseteq\overline{B_{k}}\subset A_{k}\), and \(Y^{\prime}_{k}=\{u_{\alpha}:\alpha\leq\beta_{k}\}\) is countable. Thus \(\left(\bigcup_{k\in\mathbb{N}}Y^{\prime}_{k}\right)\cap Y\) is countable. As in the proof of Theorem 2.1, we can find a finite subset \(\mathcal{A}^{\prime}_{k}\) of \(\mathcal{A}_{k}\) such that for each \(y\in\left(\bigcup_{k\in\mathbb{N}}Y^{\prime}_{k}\right)\cap Y\), \(y\in\cup\mathcal{A}^{\prime}_{k}\) for all but finitely many \(k.\) For each \(k\in\mathbb{N}\), let \(\mathcal{A}^{\prime\prime}_{k}=\mathcal{A}^{\prime}_{k}\cup\{A_{k}\}.\) Then \(\mathcal{A}^{\prime\prime}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), and for each \(y\in Y\), \(y\in\cup\mathcal{A}^{\prime\prime}_{k}\) for all but finitely many \(k.\)

A mapping \(f:X\to Y\) from a space \(X\) to a space \(Y\) is said to be: 1. \(\alpha\)-continuous [24] (\(\alpha\)-irresolute [23]) if the preimage of each open (\(\alpha\)-open) set of \(Y\) is \(\alpha\)-open in \(X;\) 2. \(\alpha\)-open (strongly \(\alpha\)-open) if the image of each \(\alpha\)-open set of \(X\) is \(\alpha\)-open (open) in \(Y;\) 3. \(\theta\)-continuous ([9], [10]) (resp., strongly \(\theta\)-continuous [21]) if for each \(x\in X\) and each open set \(B\) of \(Y\) containing \(f(x)\) there exists an open set \(A\) of \(X\) containing \(x\) such that \(f(Cl(A))\subset Cl(B)\) (resp., \(f(Cl(A))\subset B).\)

**Theorem 3.5**.: An \(\alpha\)-continuous image of an \(\alpha\)-Hurewicz space is Hurewicz.

Proof.: Let \(X\) be an \(\alpha\)-Hurewicz space and \(f:X\to Y\) be an \(\alpha\)-continuous map from \(X\) onto a space \(Y.\) Let \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a sequence of open covers of \(Y\). Since \(f\) is \(\alpha\)-continuous, for each \(k\in\mathbb{N}\), \(\{f^{-1}(A_{k}):A_{k}\in\mathcal{A}_{k}\}\) is an \(\alpha\)-open cover of \(X.\) Since \(X\) is \(\alpha\)-Hurewicz, there is a sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\), where for each \(k\), \(\mathcal{B}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), such that for each \(x\in X\), \(x\in\bigcup\{f^{-1}(B):B\in\mathcal{B}_{k}\}\) for all but finitely many \(k.\) Hence for each \(y\in Y\), choosing \(x\in X\) with \(f(x)=y\), we have \(y\in\bigcup\mathcal{B}_{k}\) for all but finitely many \(k\); thus \(Y\) is Hurewicz. Similarly, we can prove the following theorem.

**Theorem 3.6**.: An \(\alpha\)-irresolute image of an \(\alpha\)-Hurewicz space is \(\alpha\)-Hurewicz.
Since each continuous map is \(\alpha\)-continuous, we have the following corollary:

**Corollary 3.7**.: A continuous image of an \(\alpha\)-Hurewicz space is Hurewicz.

**Theorem 3.8**.: A strongly \(\theta\)-continuous image of a \(\theta\)-Hurewicz space \(X\) is Hurewicz.

Proof.: Let \(X\) be a \(\theta\)-Hurewicz space and \(f:X\to Y\) be a strongly \(\theta\)-continuous map from \(X\) onto a space \(Y.\) Consider a sequence \((\mathcal{A}_{k}:k\in\mathbb{N})\) of open covers of \(Y\). For each \(k\) and each \(x\in X\), \(f(x)\in A_{k}\) for some \(A_{k}\in\mathcal{A}_{k}\). By the strong \(\theta\)-continuity of \(f\), there is an open set \(B_{x,k}\) containing \(x\) such that \(f(Cl(B_{x,k}))\subset A_{k}\). This means that \(f^{-1}(A_{k})\) is \(\theta\)-open. Then for each \(k\in\mathbb{N}\), \(\{f^{-1}(A):A\in\mathcal{A}_{k}\}\) is a \(\theta\)-open cover of \(X.\) As \(X\) is \(\theta\)-Hurewicz, there is a sequence \((\mathcal{C}_{k}:k\in\mathbb{N})\), where for each \(k\), \(\mathcal{C}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), such that for each \(x\in X\), \(x\in\bigcup\{f^{-1}(C):C\in\mathcal{C}_{k}\}\) for all but finitely many \(k\). Hence for each \(y\in Y\), choosing \(x\in X\) with \(f(x)=y\), we have \(y\in\bigcup\mathcal{C}_{k}\) for all but finitely many \(k\). Hence \(Y\) is Hurewicz.

**Theorem 3.9**.: A \(\theta\)-continuous image of a \(\theta\)-Hurewicz space \(X\) is \(\theta\)-Hurewicz.

Proof.: Let \(f:X\to Y\) be a \(\theta\)-continuous map from a \(\theta\)-Hurewicz space \(X\) onto a space \(Y,\) and let \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a sequence of \(\theta\)-open covers of \(Y\). Then for each \(k\in\mathbb{N}\), \(\{f^{-1}(A):A\in\mathcal{A}_{k}\}\) is a \(\theta\)-open cover of \(X\), because \(f\) is \(\theta\)-continuous. Since \(X\) is \(\theta\)-Hurewicz, there is a sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\), where for each \(k\), \(\mathcal{B}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), such that for each \(x\in X\), \(x\in\bigcup_{B\in\mathcal{B}_{k}}f^{-1}(B)\) for all but finitely many \(k\). Hence for each \(y\in Y\), choosing \(x\in X\) with \(f(x)=y\), we have \(y\in\bigcup\mathcal{B}_{k}\) for all but finitely many \(k\). Hence \(Y\) is \(\theta\)-Hurewicz.

Since continuity implies \(\theta\)-continuity, we have the following corollary:

**Corollary 3.10**.: A continuous image of a \(\theta\)-Hurewicz space is \(\theta\)-Hurewicz.

**Theorem 3.11**.: For a space \((X,\tau)\), the following statements are equivalent: (1) \((X,\tau)\) is \(\alpha\)-Hurewicz; (2) \((X,\tau)\) admits a strongly \(\alpha\)-open bijection onto a Hurewicz space \((Y,\tau^{\prime})\).

Proof.: (1) \(\Rightarrow\) (2): Let \((X,\tau)\) be an \(\alpha\)-Hurewicz space; then \((X,\tau_{\alpha})\) is Hurewicz, and the identity map \(I_{X}:(X,\tau)\rightarrow(X,\tau_{\alpha})\) is a strongly \(\alpha\)-open bijection. (2) \(\Rightarrow\) (1): Assume that \(f:(X,\tau)\rightarrow(Y,\tau^{\prime})\) is a strongly \(\alpha\)-open bijection from a space \((X,\tau)\) onto a Hurewicz space \((Y,\tau^{\prime})\). Let \((\mathcal{A}_{k}:k\in\mathbb{N})\) be a sequence of \(\alpha\)-open covers of \((X,\tau)\). Then for each \(k\in\mathbb{N}\), \(\{f(A_{k}):A_{k}\in\mathcal{A}_{k}\}\) is an open cover of \(Y\).
Since \(Y\) is a Hurewicz space, there exists a sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\), where for each \(k\), \(\mathcal{B}_{k}\) is a finite subset of \(\mathcal{A}_{k}\), such that for each \(y\in Y\), \(y\in\bigcup\{f(B):B\in\mathcal{B}_{k}\}\) for all but finitely many \(k\). Hence for each \(x\in X\), \(x\in\bigcup\mathcal{B}_{k}\) for all but finitely many \(k\).

## 4. Characterizations of Variants of Hurewicz spaces

Let \(\omega^{\omega}\) be the set of all functions \(f:\omega\rightarrow\omega\), equipped with the product topology. Define a relation \(\leq^{*}\) on \(\omega^{\omega}\) as follows: \(f\leq^{*}g\) if \(f(n)\leq g(n)\) for all but finitely many \(n\). The relation \(\leq^{*}\) on \(\omega^{\omega}\) is reflexive and transitive. Let \(H\) be a subset of \(\omega^{\omega}\). We say \(H\) is bounded if \(H\) has an upper bound with respect to \(\leq^{*}\); otherwise \(H\) is unbounded. We say \(H\) is dominating if it is cofinal in \((\omega^{\omega},\leq^{*})\). Let \(\mathfrak{b}\) be the smallest cardinality of an unbounded subset of \(\omega^{\omega}\) with respect to \(\leq^{*}\), and let \(\mathfrak{d}\) be the smallest cardinality of a dominating subset of \(\omega^{\omega}\). The cardinal \(\mathfrak{b}\) is known as the bounding number. It is not difficult to prove that \(\omega_{1}\leq\mathfrak{b}\leq\mathfrak{d}\leq\mathfrak{c}\), and it is known that \(\omega_{1}<\mathfrak{b}=\mathfrak{c}\), \(\omega_{1}<\mathfrak{d}=\mathfrak{c}\) and \(\omega_{1}\leq\mathfrak{b}<\mathfrak{d}=\mathfrak{c}\) are all consistent with the axioms of ZFC (see [6] for more details).

**Theorem 4.1**.: Let \(X\subset\omega^{\omega}\). If \(X\) is a mildly Hurewicz space, then \(X\) is bounded.

Proof.: Suppose that \(X\) is an unbounded subset of \(\omega^{\omega}\). For \(f_{x}\in X\) and \(n\in\omega\), let \(A_{n}^{f_{x}}=\{h\in X:h(i)\in\{f_{x}(1),f_{x}(2),\ldots,f_{x}(n)\},1\leq i\leq n\}\). Then \(A_{n}^{f_{x}}\) is a basic open set of \(X\) containing \(f_{x}\). Moreover, if \(g\not\in A_{n}^{f_{x}}\), then there exists an \(i\in\omega\) such that \(1\leq i\leq n\) and \(g(i)\not\in\{f_{x}(1),f_{x}(2),\ldots,f_{x}(n)\}\). Then, for this \(i\), we have an open set \(B_{n}^{g}=\{h\in X:h(i)\in\omega\setminus\{f_{x}(1),f_{x}(2),\ldots,f_{x}(n)\}\}\) of \(X\) containing \(g\) such that \(B_{n}^{g}\cap A_{n}^{f_{x}}=\emptyset\). This implies that \(g\not\in Cl_{X}(A_{n}^{f_{x}})\). Hence \(Cl_{X}(A_{n}^{f_{x}})\subseteq A_{n}^{f_{x}}\), which implies that \(A_{n}^{f_{x}}\) is closed. For each \(n\in\omega\), put \(\mathcal{A}_{n}=\{A_{n}^{f_{x}}:f_{x}\in X\}\). Then \(\mathcal{A}_{n}\) is a clopen cover of \(X\), and \((\mathcal{A}_{n}:n\in\omega)\) is a sequence of clopen covers of \(X\). For \(n\in\omega\), let \(\mathcal{B}_{n}\) be any finite subset of \(\mathcal{A}_{n}\), and for \(A_{n}^{f_{x}}\in\mathcal{B}_{n}\) let \(n_{f_{x}}=\max\{f_{x}(1),f_{x}(2),\ldots,f_{x}(n)\}\). Define a function \(f:\omega\rightarrow\omega\) as follows:

\[f(n)=max\{n_{f_{x}}:A_{n}^{f_{x}}\in\mathcal{B}_{n}\}+1.\]

By the assumption of unboundedness of \(X\), there exists \(f^{\prime}\in X\) such that \(f^{\prime}\not\leq^{*}f\), that is, \(f(n)<f^{\prime}(n)\) for infinitely many \(n\). Hence for infinitely many \(n\), \(f^{\prime}\not\in A_{n}^{f_{x}}\) for every \(A_{n}^{f_{x}}\in\mathcal{B}_{n}\). Thus \(f^{\prime}\not\in\bigcup\mathcal{B}_{n}\) for infinitely many \(n\). This means that \(X\) does not have the mildly Hurewicz property, which completes the proof.

**Corollary 4.2**.: A dominating subset \(D\) of \(\omega^{\omega}\) is not mildly Hurewicz.
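For intuition, here is the classical diagonalization behind the inequality \(\omega_{1}\leq\mathfrak{b}\) mentioned above: every countable subset of \(\omega^{\omega}\) is bounded. We include it only as a reminder. Given \(H=\{h_{k}:k\in\omega\}\subseteq\omega^{\omega}\), define

\[f(n)=\max_{k\leq n}h_{k}(n)+1.\]

Then for each fixed \(k\) we have \(h_{k}(n)<f(n)\) for all \(n\geq k\), so \(h_{k}\leq^{*}f\) for every \(k\); thus \(f\) is a \(\leq^{*}\)-upper bound for \(H\).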
By Theorem 4.1, the following corollaries follow directly:

**Corollary 4.3**.: If \(X\) is a \(\theta\)-Hurewicz subspace of \(\omega^{\omega}\), then \(X\) is bounded.

**Corollary 4.4**.: If \(X\) is a nearly Hurewicz subspace of \(\omega^{\omega}\), then \(X\) is bounded.

**Corollary 4.5**.: If \(X\) is an almost Hurewicz subspace of \(\omega^{\omega}\), then \(X\) is bounded.

**Theorem 4.6**.: Let \(X\) be a \(\theta\)-Hurewicz space. Then every \(\theta\)-continuous image of \(X\) in \(\omega^{\omega}\) is bounded.

Proof.: Let \(F:X\rightarrow\omega^{\omega}\) be a \(\theta\)-continuous map from a \(\theta\)-Hurewicz space \(X\) to \(\omega^{\omega}\). Then \(F(X)\) is a \(\theta\)-Hurewicz space, and hence \(F(X)\) is bounded.

**Corollary 4.7**.: Every continuous image of a \(\theta\)-Hurewicz space \(X\) in \(\omega^{\omega}\) is bounded.

**Corollary 4.8**.: Every continuous image of a Hurewicz space \(X\) in \(\omega^{\omega}\) is bounded.

**Corollary 4.9**.: Every continuous image of a nearly Hurewicz space \(X\) in \(\omega^{\omega}\) is bounded.

**Corollary 4.10**.: Every continuous image of an almost Hurewicz space \(X\) in \(\omega^{\omega}\) is bounded.

**Theorem 4.11**.: Let \(X\) be a \(\theta\)-Lindelof space. If the cardinality of \(X\) is less than \(\mathfrak{b}\), then \(X\) is \(\theta\)-Hurewicz.

Proof.: Let \(X\) be a \(\theta\)-Lindelof space with \(|X|<\mathfrak{b}\), and suppose that \(X\) is not a \(\theta\)-Hurewicz space. Then there exists a sequence \((\mathcal{A}_{n}:n\in\omega)\) of \(\theta\)-open covers of \(X\) such that for each \(n\) and each finite subset \(\mathcal{B}_{n}\) of \(\mathcal{A}_{n}\), there exists an \(x\in X\) such that \(x\not\in\cup\mathcal{B}_{n}\) for infinitely many \(n\). Since \(X\) is \(\theta\)-Lindelof, we may assume that for each \(n\), \(\mathcal{A}_{n}=\{A_{n}^{j}:j\in\omega\}\). For each \(x\in X\), define \(f_{x}:\omega\rightarrow\omega\) by \(f_{x}(n)=\min\{j:x\in A_{n}^{j}\}\). Let \(D=\{f_{x}:x\in X\}\). We claim that \(D\) is an unbounded set. Indeed, if \(D\) were bounded, there would exist an \(f\in\omega^{\omega}\) such that \(f_{x}\leq^{*}f\) for all \(f_{x}\in D\). For \(n\in\omega\), put \(\mathcal{B}_{n}=\{A_{n}^{j}:j\leq f(n)\}\). Then for each \(x\in X\), \(x\in\cup\mathcal{B}_{n}\) for all but finitely many \(n\), which contradicts the fact that there is an \(x\in X\) with \(x\not\in\cup\mathcal{B}_{n}\) for infinitely many \(n\). Thus \(D\) is unbounded, and hence \(\mathfrak{b}\leq|D|\). But \(X\) has cardinality less than \(\mathfrak{b}\) and maps surjectively onto \(D\), a contradiction. Hence \(X\) must be a \(\theta\)-Hurewicz space.

**Theorem 4.12**.: Let \(X\) be a mildly Lindelof space. If the cardinality of \(X\) is less than \(\mathfrak{b}\), then \(X\) is mildly Hurewicz.

Proof.: The proof is along the lines of the proof of Theorem 4.11.

**Corollary 4.13**.: Let \(X\) be a subset of the real line \(\mathbb{R}\). If \(X\) is not \(\theta\)-Hurewicz, then \(|X|\geq\mathfrak{b}\).

**Corollary 4.14**.: Let \(X\) be a subset of the real line \(\mathbb{R}\). If \(X\) is not mildly Hurewicz, then \(|X|\geq\mathfrak{b}\).

**Remark**: In [30], Veličko defined the \(\theta\)-closure operator, denoted by \(\mathrm{Cl}_{\theta}(A)\). For \(A\subset X\), \(\mathrm{Cl}_{\theta}(A)=\{x\in X:\) for each neighbourhood \(U\) of \(x\), \(Cl(U)\cap A\neq\phi\}\), and \(\mathrm{Cl}(A)\subseteq\mathrm{Cl}_{\theta}(A)\). Many papers have been published on the \(\theta\)-closure operator (see [4, 7, 18, 8]).
Using the \(\theta\)-closure operator, it is interesting to investigate the following class of spaces. A space \(X\) is called \(\theta\)-almost Hurewicz if for each sequence \((\mathcal{A}_{k}:k\in\mathbb{N})\) of open covers of \(X\) there exists a sequence \((\mathcal{B}_{k}:k\in\mathbb{N})\), where \(\mathcal{B}_{k}\) is a finite subset of \(\mathcal{A}_{k}\) for each \(k\), such that for each \(x\in X\), \(x\in\bigcup\{Cl_{\theta}(Cl(B)):B\in\mathcal{B}_{k}\}\) for all but finitely many \(k\). Observe that every almost Hurewicz space is \(\theta\)-almost Hurewicz, since \(Cl(B)\subseteq Cl_{\theta}(Cl(B))\) for every \(B\).

**Conflicts of interests**: The authors have no relevant financial or non-financial interests to disclose.
2303.12574
On a Bohr set analogue of Chowla's conjecture
Let $\lambda$ denote the Liouville function. We show that the logarithmic mean of $\lambda(\lfloor \alpha_1n\rfloor)\lambda(\lfloor \alpha_2n\rfloor)$ is $0$ whenever $\alpha_1,\alpha_2$ are positive reals with $\alpha_1/\alpha_2$ irrational. We also show that for $k\geq 3$ the logarithmic mean of $\lambda(\lfloor \alpha_1n\rfloor)\cdots \lambda(\lfloor \alpha_kn\rfloor)$ has some nontrivial amount of cancellation, under certain rational independence assumptions on the real numbers $\alpha_i$. Our results for the Liouville function generalise to produce independence statements for general bounded real-valued multiplicative functions evaluated at Beatty sequences. These results answer the two-point case of a conjecture of Frantzikinakis (and provide some progress on the higher order cases), generalising a recent result of Crn\v{c}evi\'c--Hern\'andez--Rizk--Sereesuchart--Tao. As an ingredient in our proofs, we establish bounds for the logarithmic correlations of the Liouville function along Bohr sets.
Joni Teräväinen, Aled Walker
2023-03-22T13:59:02Z
http://arxiv.org/abs/2303.12574v1
# On a Bohr set analogue of Chowla's conjecture

###### Abstract.

Let \(\lambda\) denote the Liouville function. We show that the logarithmic mean of \(\lambda(\lfloor\alpha_{1}n\rfloor)\lambda(\lfloor\alpha_{2}n\rfloor)\) is \(0\) whenever \(\alpha_{1},\alpha_{2}\) are positive reals with \(\alpha_{1}/\alpha_{2}\) irrational. We also show that for \(k\geqslant 3\) the logarithmic mean of \(\lambda(\lfloor\alpha_{1}n\rfloor)\cdots\lambda(\lfloor\alpha_{k}n\rfloor)\) has some nontrivial amount of cancellation, under certain rational independence assumptions on the real numbers \(\alpha_{i}\). Our results for the Liouville function generalise to produce independence statements for general bounded real-valued multiplicative functions evaluated at Beatty sequences. These results answer the two-point case of a conjecture of Frantzikinakis (and provide some progress on the higher order cases), generalising a recent result of Crncevic-Hernandez-Rizk-Sereesuchart-Tao. As an ingredient in our proofs, we establish bounds for the logarithmic correlations of the Liouville function along Bohr sets.

## 1. Introduction

Let \(\lambda:\mathbb{N}\to\{-1,+1\}\) denote the Liouville function: that is, the completely multiplicative function with \(\lambda(p)=-1\) for all primes \(p\). In this note, we consider correlations of the Liouville function (as well as arbitrary multiplicative functions) along Beatty sequences \(\lfloor\alpha n\rfloor\). For correlations of 'length \(1\)' (i.e. single averages of \(\lambda\) over Beatty sequences), it follows from a classical exponential sum estimate of Davenport\({}^{1}\) [4] that for all \(\alpha>0\)

\[\lim_{X\to\infty}\frac{1}{X}\sum_{n\leqslant X}\lambda(\lfloor\alpha n\rfloor)=0.\]

Footnote 1: Indeed, by Davenport's result, \(\sum_{n\leqslant X}\lambda(n)e(\beta n)=o(X)\) for all \(\beta\). If \(\alpha\) is rational, the claim follows easily from this. If \(\alpha\) is irrational, by considering the sums \(\sum_{n\leqslant X}(1\pm\lambda(n))e(k\alpha n)\) and applying Weyl's criterion, the sequence \(\{\alpha n:\lambda(n)=v\}\) is uniformly distributed modulo \(1\) for \(v\in\{-1,+1\}\). But now if \(\alpha>1\) then \(\sum_{n\leqslant X}\lambda(\lfloor\alpha n\rfloor)=\sum_{m\leqslant\alpha X,m/\alpha\in[1-1/\alpha,1)\ (\text{mod }1)}\lambda(m)\), and by the uniform distribution property mentioned above this is \(o(X)\). The case \(\alpha\in(0,1)\) follows along similar lines.

The following far-reaching extension was posed as an open problem by Frantzikinakis\({}^{2}\). Footnote 2: Special case of [6, Problem 2], see remark following this problem. Also stated by Frantzikinakis in a talk at Additive Combinatorics Webinar, July 2020.

**Conjecture 1.1**.: Let \(k\geqslant 1\) be an integer, and let \(\alpha_{1},\ldots,\alpha_{k}>0\) be such that \(1,\alpha_{1},\ldots,\alpha_{k}\) are linearly independent over \(\mathbb{Q}\). Then, for any multiplicative functions \(f_{1},\ldots,f_{k}:\mathbb{N}\to[-1,1]\), we have

\[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}f_{i}(\lfloor\alpha_{i}n\rfloor)=\prod_{i=1}^{k}\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{i}(n). \tag{1.1}\]

In particular, we have

\[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\lambda(\lfloor\alpha_{1}n\rfloor)\cdots\lambda(\lfloor\alpha_{k}n\rfloor)=0. \tag{1.2}\]

Here and throughout, \(\mathbb{E}_{n\leqslant X}^{\log}f(n)\) denotes the logarithmic average \(\frac{1}{\log X}\sum_{n\leqslant X}\frac{f(n)}{n}\).
We use \(\mathbb{E}_{n\leqslant X}f(n)\) to denote the natural average \(\frac{1}{X}\sum_{n\leqslant X}f(n)\). **Remarks.** * The limits on the right-hand side of (1.1) always exist, since by Wirsing's theorem [14, Theorem 4.6 in Section III.4] any bounded, real-valued multiplicative function has a mean value. * The claim (1.2) should hold more generally when \(\alpha_{i}/\alpha_{j}\) is irrational for all \(i\neq j\), but (1.1) does not hold under this weaker assumption (for a counterexample, take \(k=2\), \(f_{1}(n)=f_{2}(n)=1_{(n,2)=1}\) and \(\alpha_{1}=\sqrt{2},\alpha_{2}=\sqrt{2}+2\)). For \(k=2\), Conjecture 1.1 was recently proved in [2, Theorem B] by Crncevic-Hernandez-Rizk-Sereesuchart-Tao, under the additional assumption that \(\alpha_{1}=1\). Conjecture 1.1 for \(k=2\) was also posed in a more general setting of "bounded multiplicative approximately invariant sequences" as [2, Conjecture 5.1], but we will only consider multiplicative functions in this note. One may also consult [2, Conjecture 5.2] to see the Liouville case of Conjecture 1.1 in print when \(\alpha_{1}=1\). Our first main theorem settles Conjecture 1.1 when \(k=2\), for arbitrary \(\alpha_{1},\alpha_{2}\). More generally, the following result applies to two-point correlations of bounded multiplicative functions along inhomogeneous Beatty sequences \(\lfloor\alpha n+\beta\rfloor\). In the case of the Liouville function, it gives a complete characterisation of when such correlations converge to \(0\). **Theorem 1.2** (Two-point correlations along Beatty sequences).: _Let \(\alpha_{1},\alpha_{2}>0\) and \(\beta_{1},\beta_{2}\in\mathbb{R}\). Let \(f_{1},f_{2}:\mathbb{N}\to[-1,1]\) be multiplicative functions._ 1. _Suppose that_ \(1,\alpha_{1},\alpha_{2}\) _are linearly independent over_ \(\mathbb{Q}\)_. Then_\({}^{3}\) Footnote 3: Here and in what follows, we extend multiplicative functions defined on \(\mathbb{N}\) arbitrarily to \(\mathbb{Z}\). \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{1}(\lfloor\alpha_{1}n+\beta_{1}\rfloor)f_{2}(\lfloor\alpha_{2}n+\beta_{2}\rfloor)=\lim_{X\to\infty}\left(\mathbb{E}_{n\leqslant X}^{\log}f_{1}(n)\right)\cdot\lim_{X\to\infty}\left(\mathbb{E}_{n\leqslant X}^{\log}f_{2}(n)\right).\] 2. _Suppose that_ \(\alpha_{1}/\alpha_{2}\) _is irrational. Then we have_ \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\lambda(\lfloor\alpha_{1}n+\beta_{1}\rfloor)\lambda(\lfloor\alpha_{2}n+\beta_{2}\rfloor)=0.\] 3. _Suppose that_ \(r:=\alpha_{1}/\alpha_{2}\) _is rational. Then_ \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\lambda(\lfloor\alpha_{1}n+\beta_{1}\rfloor)\lambda(\lfloor\alpha_{2}n+\beta_{2}\rfloor)\] _exists, and is_ \(0\) _if and only if for all large enough_ \(m\in\mathbb{N}\) _we have_ \[\lfloor\alpha_{1}m+\beta_{1}\rfloor\neq r\lfloor\alpha_{2}m+\beta_{2}\rfloor.\] **Remarks.** * Note that Theorem 1.2 contains the statement that the logarithmic mean of \(\lambda(\lfloor\alpha_{1}n+\beta_{1}\rfloor)\lambda(\lfloor\alpha_{2}n+\beta_{2}\rfloor)\) always exists. There are certain trivial examples when the mean value is non-zero (e.g. \(\alpha_{1}=\alpha_{2}=1\), \(\beta_{1}=\beta_{2}=0\)), and some less trivial examples, e.g. \(\alpha_{1}=\sqrt{2}\), \(\alpha_{2}=2\sqrt{2}\), \(\beta_{1}=0\), \(\beta_{2}=1/4\). * The case of Theorem 1.2(2) where \(\beta_{i}/\alpha_{i}\) are integers follows as a special case from a result of Frantzikinakis [6].
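To illustrate the "less trivial" example in the remark above (\(\alpha_{1}=\sqrt{2}\), \(\alpha_{2}=2\sqrt{2}\), \(\beta_{1}=0\), \(\beta_{2}=1/4\)): whenever \(\{\sqrt{2}n\}<3/8\) one has \(\lfloor 2\sqrt{2}n+1/4\rfloor=2\lfloor\sqrt{2}n\rfloor\), so such \(n\) contribute \(\lambda(m)\lambda(2m)=-1\), while the remaining \(n\) should contribute \(o(1)\); heuristically the logarithmic mean is then near \(-3/8\). The following sketch checks this numerically. It is purely illustrative and plays no role in any proof: the cutoff \(N=10^{6}\) is an arbitrary choice, floating-point floors are adequate at this scale, and convergence is slow.

```python
import math

N = 10**6
M = int(2 * math.sqrt(2) * N) + 2  # need lambda(m) for m up to ~2*sqrt(2)*N

# Linear sieve for the Liouville function: lam[m] = (-1)^{Omega(m)}.
lam = [1] * (M + 1)
is_comp = [False] * (M + 1)
primes = []
for m in range(2, M + 1):
    if not is_comp[m]:
        primes.append(m)
        lam[m] = -1
    for p in primes:
        if m * p > M:
            break
        is_comp[m * p] = True
        lam[m * p] = -lam[m]  # Omega(m*p) = Omega(m) + 1
        if m % p == 0:
            break

a1, a2, b2 = math.sqrt(2), 2 * math.sqrt(2), 0.25
s = sum(lam[int(a1 * n)] * lam[int(a2 * n + b2)] / n for n in range(1, N + 1))
print(s / math.log(N))  # heuristically close to -3/8 = -0.375
```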
A tool for proving Theorem 1.2 is an analogue of the two-point logarithmic Elliott conjecture (proved by Tao in [13]) where the summation variable is restricted to lie in a Bohr set. For ease of future reference we give the definition of these sets here. **Definition 1.3**.: Let \(d\geqslant 1\), \(\gamma\in\mathbb{R}^{d}\), and let \(U\subset\mathbb{R}^{d}/\mathbb{Z}^{d}\) be measurable. Then we call \[B_{d}(\gamma,U):=\{x\in\mathbb{Z}:\ \gamma x\in U\bmod\mathbb{Z}^{d}\}\] an _inhomogeneous Bohr set_. Viewing \([0,1)^{d}\) as a fundamental domain for \(\mathbb{R}^{d}/\mathbb{Z}^{d}\), we denote \[\mathcal{B}_{d,\mathrm{convex}}:=\{B_{d}(\gamma,U):\ \gamma\in\mathbb{R}^{d},\, U\subset[0,1)^{d},\,U\ \mathrm{convex}\}.\] Write \(\mathcal{B}_{\mathrm{convex}}\) for \(\bigcup_{d\geqslant 1}B_{d,\mathrm{convex}}\), and for \(B\in\mathcal{B}_{\mathrm{convex}}\) \[\delta_{B}:=\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}1_{B}(n)=\lim_{X\to \infty}\mathbb{E}_{n\leqslant X}^{\log}1_{B}(n).\] It is a standard result (and follows from Lemma 3.1 below, for example) that the natural average \(\delta_{B}\) is well-defined for all \(B\in\mathcal{B}_{\mathrm{convex}}\). The equality of logarithmic and natural averages follows from partial summation. For stating the next theorem, we also need the notion of pretentious multiplicative functions, introduced in [8]. **Definition 1.4**.: Let \(f:\mathbb{N}\to[-1,1]\) be multiplicative. We say that \(f\) is _pretentious_ if for some Dirichlet character \(\chi\) we have \[\sum_{p}\frac{1-\mathrm{Re}(f(p)\overline{\chi}(p))}{p}<\infty.\] Otherwise, we say that \(f\) is _non-pretentious_. The Liouville function is clearly non-pretentious by the prime number theorem in arithmetic progressions. **Theorem 1.5** (Logarithmic two-point Elliott over Bohr sets).: _Let \(f_{1},f_{2}:\mathbb{N}\to[-1,1]\) be multiplicative functions with \(f_{1}\) non-pretentious. Let \(B\in\mathcal{B}_{\mathrm{convex}}\). Then, for any \(a_{1},a_{2}\in\mathbb{N}\) and \(h_{1},h_{2}\in\mathbb{Z}\) satisfying \(a_{1}h_{2}\neq a_{2}h_{1}\), we have_ \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{1}(a_{1}n+h_{1})f_{2}(a_{ 2}n+h_{2})1_{B}(n)=0.\] We note that the case where \(a_{1}=a_{2}=1\) and \(B=B_{d}(\gamma,U)\) with \(d=1\), and \(U\) an interval essentially follows from [2]. Indeed our methods are broadly similar to those from the excellent paper [2] (though we were working independently from those authors). A few additional technical results are needed to prove Theorem 1.5, to handle the rational dependencies that can arise when \(d\geqslant 2\). When \(k\geqslant 3\), we have the following "99% version" of Conjecture 1.1. **Theorem 1.6** (99% result for \(k\)-point correlations).: _Let \(k\geqslant 3\) be an integer, and let \((\alpha_{1},\alpha_{2},\ldots,\alpha_{k}):=\alpha\in\mathbb{R}_{>0}^{k} \setminus\mathbb{Q}^{k}\)._ 1. _Suppose that_ \(1,\alpha_{1},\ldots,\alpha_{k}\) _are linearly independent over_ \(\mathbb{Q}\)_. Then there is some_ \(\eta>0\) _(depending on the_ \(\alpha_{i}\)_'s) such that for any multiplicative functions_ \(f_{1},\ldots f_{k}:\mathbb{N}\to[-1,1]\) _we have_ (1.3) \[\limsup_{X\to\infty}\left|\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}f_{i }(\lfloor\alpha_{i}n\rfloor)-\prod_{i=1}^{k}\mathbb{E}_{n\leqslant X}^{\log} f_{i}(n)\right|\leqslant 1-\eta.\] 2. _Suppose that_ \(\mathcal{V}\) _is a nonempty maximal linearly independent set of vectors_ \(v\in\mathbb{Z}^{k}\) _for which_ \(v\cdot\alpha\in\mathbb{Z}\) _for all_ \(v\in\mathcal{V}\)_. 
Suppose also that there exists a vector_ \((w_{1},\ldots,w_{k}):=w\in\mathbb{R}_{>0}^{k}\) _such that:_ * \(v\cdot w=0\) _for all_ \(v\in\mathcal{V}\)_;_ * \(w_{1}\) _is the unique maximal coefficient of_ \(w\)_._ _Then there is some_ \(\eta>0\) _(depending on the_ \(\alpha_{i}\)_'s) such that for any multiplicative non-pretentious function_ \(f_{1}:\mathbb{N}\to[-1,1]\) _and completely multiplicative functions_ \(f_{2},\ldots,f_{k}:\mathbb{N}\to[-1,1]\) _we have_ (_1.3_)_. In particular, we have_ \[\limsup_{X\to\infty}\left|\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k} \lambda(\lfloor\alpha_{i}n\rfloor)\right|\leqslant 1-\eta.\] We stress that in Theorem 1.6(2) the first condition is indeed \(v\cdot w=0\) as an element of \(\mathbb{R}\), and is not a shorthand for \(v\cdot w\in\mathbb{Z}\) (as is sometimes the convention). Theorem 1.6(1) deals with the case when \(1,\alpha_{1},\ldots,\alpha_{k}\) are linearly independent over \(\mathbb{Q}\). At the opposite extreme, when the \(\alpha_{i}\)'s are as rationally dependent as possible, we can also show some cancellation. **Corollary 1.7**.: _Let \(k\geqslant 3\), and let \(\alpha_{1},\ldots,\alpha_{k}>0\) be distinct with \(\max(\alpha_{1},\ldots,\alpha_{k})=\alpha_{1}\). Suppose that there is some irrational \(\beta\) such that \(\alpha_{i}/\beta\in\mathbb{Q}\) for all \(i\). Then there is some \(\eta>0\) (depending on the \(\alpha_{j}\)'s) such that, for any multiplicative functions \(f_{1},\ldots f_{k}:\mathbb{N}\to[-1,1]\) with \(f_{1}\) non-pretentious and \(f_{2},\ldots,f_{k}\) completely multiplicative, we have (1.3)._ Proof.: Write \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\), and for \(i\) in the range \(1\leqslant i\leqslant k\) let \(\alpha_{i}=q_{i}\beta\) (for some \(q_{i}\in\mathbb{Q}_{>0}\)). The \(q_{i}\) are distinct. Now apply Theorem 1.6(2), taking \(w=(q_{1},\ldots,q_{k})\). This is an admissible choice, since \(v\cdot\alpha\in\mathbb{Z}\) for \(v\in\mathcal{V}\) implies \(v\cdot(q_{1},\ldots,q_{k})=0\). For example, when \(k=4\) we have results for tuples \((\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\) such as * \((\sqrt{2},\sqrt{3},\sqrt{5},\sqrt{7})\) (rationally independent); * \((\sqrt{2},\sqrt{2}+\sqrt{3},\sqrt{2}+2\sqrt{3},\sqrt{2}+3\sqrt{3})\) (take \(\mathcal{V}=\{(1,-2,1,0),(0,1,-2,1)\}\) and \(w=(1,2,3,4)\), say); and * \((\sqrt{2},2\sqrt{2},3\sqrt{2},4\sqrt{2})\) (take \(w=(1,2,3,4)\) again). But our methods cannot handle the tuple \((\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})=(\sqrt{2},\sqrt{2}+1,\sqrt{3}, \sqrt{3}+1)\), at least not without the injection of some further ideas. Theorem 1.6 is proved by a rather simple argument. After handling the case of pretentious \(f_{i}\) by almost periodicity of such functions, we restrict \(n\) to a suitably chosen Bohr set and then replace \(n\) by a multiple \(rn\) that reduces the \(k\)-point correlation to a \(2\)-point correlation. From this, Theorem 1.2 can be applied. The main challenge is establishing that the Bohr set is non-empty, and this leads to the various conditions in Theorem 1.6(2). The requirement that the functions \(f_{2},\ldots,f_{k}\) are completely multiplicative (rather than merely multiplicative) can be relaxed to the assumption that \(f_{2},\ldots,f_{k}\) are completely multiplicative at a single common prime. However we have not been able to prove Theorem 1.6(2) for functions that are only assumed to be multiplicative. We also prove the following extension of the "99% Elliott conjecture" due to the first author [15]. 
**Theorem 1.8** (99% Elliott over Bohr sets).: _Let \(k\geqslant 3\), and let \(a_{1},\ldots,a_{k}\in\mathbb{N}\) and \(h_{1},\ldots,h_{k}\in\mathbb{Z}\) with \(a_{i}h_{j}-a_{j}h_{i}\neq 0\) for all \(i\neq j\). Let \(B\in\mathcal{B}_{\mathrm{convex}}\). Then there is some \(\eta>0\) for which the following holds. For any multiplicative functions \(f_{1},f_{2},\ldots,f_{k}:\mathbb{N}\to[-1,1]\) _with \(f_{1}\) non-pretentious,_ \[\limsup_{X\to\infty}\left|\mathbb{E}_{n\leqslant X}^{\log}1_{B}(n)\prod_{i=1}^{k} f_{i}(a_{i}n+h_{i})\right|\leqslant\delta_{B}(1-\eta).\] This result is not needed in the proof of Theorem 1.6, however. ### Acknowledgements The majority of the work for this note was done in the first half of 2021, partly when both authors were Junior Fellows at the Number Theory programme at Institut Mittag-Leffler (working remotely). JT was supported by a Titchmarsh Fellowship, Academy of Finland grant no. 340098, a von Neumann Fellowship (NSF grant DMS-1926686), and funding from the European Union's Horizon Europe research and innovation programme under Marie Sklodowska-Curie grant agreement No 101058904. AW was supported by a Junior Research Fellowship at Trinity College Cambridge. We thank Nikos Frantzikinakis for helpful comments. ## 2 Notation and some preliminaries As usual, we denote \(e(\theta):=e^{2\pi i\theta}\). We use standard Landau and Vinogradov asymptotic notation \(O(\cdot),o(\cdot),\ll,\gg\). To clarify a couple of points, a function denoted by \(o_{c}(1)\) will tend to zero as \(X\to\infty\) with the parameter \(c\) fixed. A function denoted by \(o_{P\to\infty}(1)\) is a function that tends to zero as \(P\to\infty\) (with all other parameters fixed). We say that a sequence \((a(n))_{n\in\mathbb{N}}\) taking values in a \(d\)-dimensional torus \(T\) is _equidistributed_ if \[\lim_{X\to\infty}\frac{1}{X}\sum_{n\leqslant X}F(a(n))=\int_{T}F\,\mathrm{d}\mu, \tag{2.1}\] for all continuous functions \(F:T\to\mathbb{C}\), where \(\mu\) is the Haar measure on \(T\). We say that \((a(n))_{n\in\mathbb{N}}\) is _totally equidistributed_ if \((a(qn+b))_{n\in\mathbb{N}}\) is equidistributed for all \(q,b\in\mathbb{N}\). It is well known (see [12, Proposition 1.1.2]) that (2.1) is equivalent to the same statement holding for all \(F\) of the form \(1_{U}\), where \(U\subset T\) is an open set whose boundary has measure zero. We shall frequently use (sometimes without further mention) the Kronecker-Weyl theorem, which states that for \(\alpha\in\mathbb{R}^{d}/\mathbb{Z}^{d}\) the sequence \((\alpha n)_{n\in\mathbb{N}}\) equidistributes in the torus \(\mathbb{R}^{d}/\mathbb{Z}^{d}\) if and only if \(k\cdot\alpha\not\in\mathbb{Z}\) for all \(k\in\mathbb{Z}^{d}\setminus\{0\}\). We endow \(\mathbb{R}^{d}/\mathbb{Z}^{d}\) with the usual metric \(\|x-y\|_{\mathbb{R}^{d}/\mathbb{Z}^{d}}=\min_{z\in\mathbb{Z}^{d}}|x-y-z|\). A function \(F:\mathbb{R}^{d}/\mathbb{Z}^{d}\longrightarrow\mathbb{C}\) is Lipschitz, with Lipschitz constant \(c\in\mathbb{R}_{\geqslant 0}\), if \(c=\sup_{\begin{subarray}{c}x,y\in\mathbb{R}^{d}/\mathbb{Z}^{d}\\ x\neq y\end{subarray}}\frac{|F(x)-F(y)|}{\|x-y\|_{\mathbb{R}^{d}/\mathbb{Z}^{d }}}\). ## 3 Decomposition of Bohr sets The goal of this section is to prove Lemma 3.2, a result on Fourier approximations of Bohr sets in \(\mathcal{B}_{\mathrm{convex}}\). Such a result is surely standard, but we could not find exactly the statement we needed in an easily citable form. We begin with a lemma to deal with possible rational dependencies between the coordinates of the phase. 
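Before doing so, we give a minimal numerical illustration (not needed for any of the arguments) of Definition 1.3 and the density \(\delta_{B}\) in the absence of rational dependencies: the phase \((\sqrt{2},\sqrt{3})\) and the window \(U=[0,1/3)\times[0,1/2)\) are arbitrary choices, and since \(1,\sqrt{2},\sqrt{3}\) are linearly independent over \(\mathbb{Q}\), the Kronecker-Weyl theorem predicts \(\delta_{B}=\mathrm{vol}(U)=1/6\).

```python
# Estimate the natural density of the Bohr set B_2((sqrt2, sqrt3), U)
# with U = [0,1/3) x [0,1/2); illustration only.
import math

gamma = (math.sqrt(2), math.sqrt(3))
X = 10**6
count = sum(1 for n in range(1, X + 1)
            if (gamma[0] * n) % 1 < 1 / 3 and (gamma[1] * n) % 1 < 1 / 2)
print(count / X)  # should be close to 1/6 = 0.1666...
```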
**Lemma 3.1** (Removing rational dependencies).: _Let \(d\geqslant 1\), and let \(B_{d}(\gamma,U)\) be an inhomogeneous Bohr set with \(\gamma\notin\mathbb{Q}^{d}\). Then there is an integer \(d^{\prime}\) in the range \(1\leqslant d^{\prime}\leqslant d\), a vector \((\rho_{1},\ldots,\rho_{d^{\prime}})^{T}=\rho\in\mathbb{R}^{d^{\prime}}\) for which \(1,\rho_{1},\ldots\rho_{d^{\prime}}\) are linearly independent over \(\mathbb{Q}\), an integer \(q\geqslant 1\), and measurable sets \(U^{\prime}(1),\ldots,U^{\prime}(q)\subset[0,1)^{d^{\prime}}\) for which_ \[1_{B_{d}(\gamma,U)}(n)=1_{B_{d^{\prime}}(\rho,U^{\prime}(n\pmod{q}))}(n).\] _Furthermore there is a constant \(C(\gamma)\) such that, if \(U\subset[0,1)^{d}\) is convex, each set \(U^{\prime}(a)\) is a disjoint union of at most \(C(\gamma)\) convex sets. Finally,_ \[\frac{1}{q}\sum_{a\leqslant q}\operatorname{vol}(U^{\prime}(a))=\delta_{B_{d} (\gamma,U)}.\] Proof.: By the abelian Ratner's theorem of [12, Proposition 1.1.5] we may write \(\gamma=\gamma^{\prime}+\gamma^{\prime\prime}\) where \(\gamma^{\prime\prime}\in\mathbb{Q}^{d}\) and \(\gamma^{\prime}n\bmod\mathbb{Z}^{d}\) totally equidistributes in some subtorus \(T\leqslant\mathbb{R}^{d}/\mathbb{Z}^{d}\). Let \(d^{\prime}:=\dim T\), noting that \(d^{\prime}\geqslant 1\) (since \(\gamma\notin\mathbb{Q}^{d}\) by assumption). Let \(q\in\mathbb{N}\) be minimal such that \(q\gamma^{\prime\prime}\in\mathbb{Z}^{d}\). Define \(U_{1}(n)\) to be the representative of \((U-n\gamma^{\prime\prime})\cap T\bmod\mathbb{Z}^{d}\) in the fundamental domain \([0,1)^{d}\). Observe also that \(n\gamma\in U\bmod\mathbb{Z}^{d}\) if and only if \(n\gamma^{\prime}\in(U-n\gamma^{\prime\prime})\cap T\bmod\mathbb{Z}^{d}\). Since \(U_{1}(n)\) depends only on \(n\bmod q\), \[1_{B_{d}(\gamma,U)}(n)=1_{B_{d}(\gamma^{\prime},U_{1}(n\,(\bmod q)))}(n).\] There is a linear transformation \(M\in SL_{d}(\mathbb{Z})\) (which has a well-defined action on \(\mathbb{R}^{d}/\mathbb{Z}^{d}\)) such that \(M(T)=(\mathbb{R}^{d^{\prime}}/\mathbb{Z}^{d^{\prime}})\times\{0\}^{d-d^{ \prime}}\). Let \(U^{\prime}(n):=M(U_{1}(n))\bmod\mathbb{Z}^{d}\) (with the \(d-d^{\prime}\) trailing zeros removed and viewed as a subset of \([0,1)^{d^{\prime}}\)). Let \(\rho=M(\gamma^{\prime})\), and again remove the final \(d-d^{\prime}\) coordinates (which are all integers) to view \(\rho\in\mathbb{R}^{d^{\prime}}\). Since \(\rho n\bmod\mathbb{Z}^{d^{\prime}}\) totally equidistributes in \(\mathbb{R}^{d^{\prime}}/\mathbb{Z}^{d^{\prime}}\) by construction, we conclude from the Kronecker-Weyl theorem that \(1,\rho_{1},\ldots\rho_{d^{\prime}}\) are linearly independent over \(\mathbb{Q}\). As \(B_{d}(\gamma^{\prime},U_{1}(n\,(\bmod q)))=B_{d^{\prime}}(\rho,U^{\prime}(n \,(\bmod q)))\), the first part of the lemma follows. For the second part of the lemma, note that \(T\subset[0,1)^{d}\) is a disjoint union of finitely many convex sets (each a translation of a fixed linear subspace intersected with \([0,1)^{d}\)). Therefore, if \(U\subset[0,1)^{d}\) is convex, \(U_{1}(n)\) is a disjoint union of finitely many convex sets. Hence \(M(U_{1}(n))\subset\mathbb{R}^{d^{\prime}}\times\mathbb{Z}^{d-d^{\prime}}\) is also a union of disjoint convex sets, say \(M(U_{1}(n))=\bigcup_{k\leqslant K}S_{k}\). 
Reducing modulo \(\mathbb{Z}^{d^{\prime}}\) to give \(U^{\prime}(n)\subset[0,1)^{d^{\prime}}\) may split each convex set \(S_{k}\) into a union of possibly \(2^{d^{\prime}}\) convex sets, but this larger collection still remains disjoint, as the points in \(M(U_{1}(n))\) are distinct modulo \(\mathbb{Z}^{d}\). We now formulate the following result for approximating Bohr sets by trigonometric polynomials. **Lemma 3.2** (Approximation of Bohr sets by trigonometric polynomials and periodic part).: _Let \(d\geqslant 1\) and \(\alpha\in\mathbb{R}^{d}\) be fixed. Let \(B=B_{d}(\alpha,U)\in\mathcal{B}_{\operatorname{convex}}\). Then there exists an integer \(q\geqslant 1\) (depending only on \(\alpha\)) and for every \(\varepsilon>0\) a decomposition of functions_ \[1_{B}(n)=T_{\varepsilon}(n)+\sum_{a\leqslant q}t_{a}1_{n\equiv a\bmod q}+ \mathcal{E}_{\varepsilon}(n)\] _such that the following hold._ 1. _For some constant_ \(K_{\varepsilon}\ll_{\varepsilon}1\)_, some sequence of real numbers_ \((\gamma_{k,\varepsilon})_{k\geqslant 1}\)_, and some complex numbers_ \(c_{\varepsilon}(k)\) _with_ \(|c_{\varepsilon}(k)|\ll_{\varepsilon}1\) _we have_ \[T_{\varepsilon}(x)=\sum_{1\leqslant k\leqslant K_{\varepsilon}}c_{\varepsilon} (k)e(\gamma_{k,\varepsilon}x)\] _for all_ \(x\in\mathbb{R}\)_. Furthermore, if_ \(\alpha\notin\mathbb{Q}^{d}\) _then_ \(\gamma_{k,\varepsilon}\notin\mathbb{Q}\) _for all_ \(k\)_._ 2. _We have_ \(t_{a}\geqslant 0\) _for all_ \(a\) _and_ \(\frac{1}{q}\sum_{a\leqslant q}t_{a}=\delta_{B}+O(\varepsilon)\)_._ 3. \(\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}|\mathcal{E}_{\varepsilon}(n)|\leqslant\varepsilon\)_._ **Corollary 3.3** (Approximation of Bohr sets by trigonometric polynomials).: _Let \(B=B_{d}(\alpha,U)\in\mathcal{B}_{\mathrm{convex}}\). Then for every \(\varepsilon>0\) there exists a decomposition_ \[1_{B}(n)=T_{\varepsilon}(n)+\mathcal{E}_{\varepsilon}(n)\] _with \(T_{\varepsilon}\) and \(\mathcal{E}_{\varepsilon}\) having the same properties as in the conclusion of Lemma 3.2, save for the fact that some of the phases \(\gamma_{k,\varepsilon}\) may be rational._ Proof of Corollary 3.3.: Expand \(\sum_{a\leqslant q}t_{a}1_{n\equiv a\,\mathrm{mod}\ q}=\frac{1}{q}\sum_{a,r \leqslant q}t_{a}e\Big{(}\frac{-ra}{q}\Big{)}e\Big{(}\frac{rn}{q}\Big{)}\) and amalgamate with the original trigonometric polynomial \(T_{\varepsilon}\). The coefficients \(c_{\varepsilon}(k)\) remain suitably bounded, since \(\big{|}\frac{1}{q}\sum_{a\leqslant q}t_{a}e(\frac{-ra}{q})\big{|}\leqslant \frac{1}{q}\sum_{a\leqslant q}|t_{a}|\leqslant 1+O(\varepsilon)=O_{\varepsilon}(1)\). Proof of Lemma 3.2.: If \(\alpha\in\mathbb{Q}^{d}\) then \(1_{B}(n)\) is periodic so may be written exactly as \(\sum_{a\leqslant q}t_{a}1_{n\equiv a\,\mathrm{mod}\ q}\) (for some \(q\)), with no error. Each \(t_{a}\geqslant 0\), and \(\frac{1}{q}\sum_{a\leqslant q}t_{a}=\delta_{B}\) exactly. 
If \(\alpha\notin\mathbb{Q}^{d}\), we use Lemma 3.1 to construct \(d^{\prime}\), \(q\), \(\rho\in\mathbb{R}^{d^{\prime}}\), and sets \(U^{\prime}(1),\dots,U^{\prime}(q)\subset[0,1)^{d^{\prime}}\); expanding the condition \(n\equiv a\,(\mathrm{mod}\ q)\) in additive characters, we get \[1_{B}(n)=1_{B_{d^{\prime}}(\rho,U^{\prime}(n\,(\mathrm{mod}\ q)))}(n)=\frac{1 }{q}\sum_{a,r=1}^{q}e\Big{(}-\frac{ra}{q}\Big{)}1_{B_{d^{\prime}}(\rho,U^{ \prime}(a))}(n)e\Big{(}\frac{r}{q}n\Big{)}.\] From the second part of Lemma 3.1, write \(U^{\prime}(a)\) as union \(\bigcup_{l\leqslant L}S_{a,l}\) of disjoint convex sets \(S_{a,l}\subset[0,1)^{d^{\prime}}\). By further subdivision as necessary, we may assume that each \(S_{a,l}\) is contained in a Cartesian box of side-length \(\frac{1}{10}\). Note that \(L\) depends only on \(\alpha\). By [10, Corollary A.3], we can write \[1_{S_{a,l}}=F_{\varepsilon,S_{a,l}}+O(G_{\varepsilon,S_{a,l}}), \tag{3.1}\] where \(F_{\varepsilon,S_{a,l}},G_{\varepsilon,S_{a,l}}:\mathbb{R}^{d^{\prime}} \longrightarrow[0,1]\) are non-negative Lipschitz functions with Lipschitz constants \(O(\varepsilon^{-1})\), where both functions are supported within Cartesian boxes of side-length \(\frac{1}{5}\), and where \(\int_{\mathbb{R}^{d^{\prime}}}G_{\varepsilon,S_{a,l}}(x)\,\mathrm{d}x=O(\varepsilon)\). Because of their restricted support, we may consider \(F_{\varepsilon,S_{a,l}},G_{\varepsilon,S_{a,l}}\) as Lipschitz functions on \(\mathbb{R}^{d^{\prime}}\big{/}\mathbb{Z}^{d^{\prime}}\) with Lipschitz constant \(O(\varepsilon^{-1})\), and furthermore where \(\int_{\mathbb{R}^{d^{\prime}}/\mathbb{Z}^{d^{\prime}}}G_{\varepsilon,S_{a,l}} (x)\,\mathrm{d}x=O(\varepsilon)\). From [9, Lemma A.9], we obtain (for all \(K\) sufficiently large) \[1_{B_{d^{\prime}}(\rho,U^{\prime}(a))}(n) =\sum_{l\leqslant L}(F_{\varepsilon,S_{a,l}}(\rho n)+O(G_{ \varepsilon,S_{a,l}}(\rho n)))\] \[=\sum_{l\leqslant L}\Big{(}\sum_{\begin{subarray}{c}k\in\mathbb{Z }^{d^{\prime}}\\ \|k\|_{\infty}\leqslant K\end{subarray}}c_{K,\varepsilon,a,l}(k)e(nk\cdot \rho)+O\Big{(}\frac{\log K}{\varepsilon K}\Big{)}+O(G_{\varepsilon,S_{a,l}}( \rho n))\Big{)} \tag{3.2}\] for some complex coefficients \(c_{K,\varepsilon,a,l}(k)\) with \(|c_{K,\varepsilon,a,l}(k)|\ll_{\varepsilon}1\). Choose \(K=K_{\varepsilon}\) sufficiently large so that \((\log K)\varepsilon^{-1}K^{-1}\leqslant\varepsilon\). Note that (as the sequence \(\rho n\) equidistributes in \(\mathbb{R}^{d^{\prime}}/\mathbb{Z}^{d^{\prime}}\)) we have \(\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}G_{\varepsilon,S_{a,l}}(\rho n)=\int_{ \mathbb{R}^{d^{\prime}}/\mathbb{Z}^{d^{\prime}}}G_{\varepsilon,S_{a,l}}(x)\, \mathrm{d}x=O(\varepsilon)\). Therefore, inserting the sums over \(a\), \(r\) into (3.2) and separating out the \(k=0\) term, we get \[1_{B}(n)=\sum_{\begin{subarray}{c}k\in\mathbb{Z}^{d^{\prime}}\\ \|k\|_{\infty}\leqslant K\\ k\neq 0\end{subarray}}\sum_{r\leqslant q}e(n(k\cdot\rho+\frac{r}{q}))\Big{(} \frac{1}{q}\sum_{l\leqslant L}\sum_{a\leqslant q}e\Big{(}-\frac{ra}{q}\Big{)}c _{K,\varepsilon,a,l}(k)\Big{)} \tag{3.3}\] \[+\sum_{a\leqslant q}1_{n\equiv a\bmod q}\sum_{l\leqslant L}c_{K, \varepsilon,a,l}(0)+\mathcal{E}_{\varepsilon}(n)\] where \(\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}|\mathcal{E}_{\varepsilon}(n)|= O(\varepsilon)\). When \(k\in\mathbb{Z}^{d^{\prime}}\setminus\{0\}\), Lemma 3.1 ensures that \(k\cdot\rho+\frac{r}{q}\notin\mathbb{Q}\). 
Therefore, replacing \(\varepsilon\) by \(\varepsilon/b_{\alpha}\) for a suitable constant \(b_{\alpha}\), the first term satisfies the conditions to be \(T_{\varepsilon}(n)\) and \(\mathcal{E}_{\varepsilon}\) is a suitable error. It remains to prove part (ii) of the lemma. By summing (3.3) over \(n\leqslant X\) (and using the fact that \(\sum_{n\leqslant X}e(n(k\cdot\rho+\frac{r}{q}))=O(1)\) uniformly in \(X\)), we obtain \[\mathbb{E}_{n\leqslant X}1_{B}(n)=\frac{1}{q}\sum_{a\leqslant q}\sum_{l \leqslant L}c_{K,\varepsilon,a,l}(0)+O(\varepsilon)\] for large enough \(X\). From the construction of the \(c_{K,\varepsilon,a,l}(k)\) in [9, Lemma A.9], we also derive \[c_{K,\varepsilon,a,l}(0)=\int_{\mathbb{R}^{d^{\prime}}/\mathbb{Z}^{d^{\prime} }}F_{\varepsilon,S_{a,l}}(x)\,\mathrm{d}x\geqslant 0.\] Setting \(t_{a}=\sum_{l\leqslant L}c_{K,\varepsilon,a,l}(0)\), part (ii) of the lemma follows. ## 4 Lemmas on correlations ### Correlations twisted by additive characters In this section, we prove a correlation estimate for multiplicative functions twisted by linear phases (Lemma 4.2) that is important in the proof of our main theorems. We also resolve the pretentious case of the proofs of our main theorems in Lemma 4.3. We begin by summarising some known correlation estimates of Tao [13], the first author [15], and Frantzikinakis-Host [7]. **Lemma 4.1**.: _Let \(k\geqslant 1\), and let \(a_{1},\dots,a_{k}>0\) and \(h_{1},\dots,h_{k}\in\mathbb{N}\) be integers with \(a_{i}h_{j}-a_{j}h_{i}\neq 0\) for all \(i\neq j\). Let \(f_{1},\dots,f_{k}:\mathbb{N}\to[-1,1]\) be multiplicative functions._ 1. _Suppose that_ \(f_{1}\) _is non-pretentious. Then we have_ \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{1}(a_{1}n+h_{1})f_{2}(a_{ 2}n+h_{2})=0.\] 2. _Suppose that_ \(f_{1}\) _is non-pretentious. Then for some_ \(\eta>0\)_, depending only on the values_ \(a_{i},h_{i}\)_, we have_ \[\limsup_{X\to\infty}|\mathbb{E}_{n\leqslant X}^{\log}\prod_{j=1}^{k}f_{j}(a_{ j}n+h_{j})|\leqslant 1-\eta.\] 3. _For any irrational_ \(\gamma\in\mathbb{R}\) _we have_ \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}e(\gamma n)\prod_{j=1}^{k}f_{j}( n+h_{j})=0.\] Proof.: Part (1) follows from Tao's resolution of the two-point logarithmic Elliott conjecture [13, Theorem 1.3], after noting that the non-pretentiousness assumption on \(f_{1}\) there (which involves archimedean characters \(n^{it}\)) can be weakened in the case of real-valued functions \(f_{1}\) using [11, Lemma C.1]. Part (3) is the "irrational logarithmic Elliott conjecture" of Frantzikinakis-Host [7, Corollary 1.4]. It remains to prove part (2). If we assume that \(f_{1}\) takes values in \(\{-1,+1\}\), then part (2) follows immediately from the "99% Elliott conjecture" of the first author [15, Theorem 2.6] (using partial summation to pass to the logarithmic average). To deal with the general case when \(f_{1}\) takes values in \([-1,+1]\), we use an argument of Tao [13, Proposition 2.1]. (Alternatively, one could adapt the methods from [15]. Indeed, [15, Proposition 5.4] as stated is for multiplicative functions taking values which are \(q^{th}\) roots of unity for some fixed \(q\). It is easy to adapt the proof to the case of multiplicative functions taking values in the convex hull of the \(q^{th}\) roots of unity, which when \(q=2\) gives the full interval \([-1,+1]\).) Write \(f_{1}=f_{1}^{\prime}f_{1}^{\prime\prime}\), where \(f_{1}^{\prime}(n)=|f_{1}(n)|\) and \(f_{1}^{\prime\prime}(n)=\operatorname{sgn}(f_{1}(n))\). Let \(A\) be a sufficiently large quantity (depending on the \(a_{i}\), \(h_{i}\), and the value of \(\eta\) that can be established in part (2) when \(|f_{1}(n)|=1\) for all \(n\)). We may assume that \[\sum_{p}\frac{1-f_{1}^{\prime}(p)}{p}<A.\] Indeed, if not then using the standard elementary bound \[\mathbb{E}_{n\leqslant X}^{\log}f^{\prime}(n)\ll\exp(-\sum_{p\leqslant X} \frac{1-f^{\prime}(p)}{p}),\] which holds for any non-negative multiplicative function, we conclude that \[\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{1}^{\prime}(n)=o_{A\to \infty}(1).\] Using non-negativity again we derive \[\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{1}^{\prime}(a_{1}n+h_{ 1})=o_{A\to\infty}(1),\] and so by the triangle inequality we may conclude that \[\limsup_{X\to\infty}|\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}f_{i}(a_{ i}n+h_{i})|\leqslant 1-\eta\] as required. Now, for later purposes we let \(S\) be the set of \(\{-1,+1\}\)-valued multiplicative functions \(g\) for which \[\sum_{p}\frac{1-g(p)}{p}<A^{2}.\] We also construct a random multiplicative function \(\mathbf{f_{1}^{\prime}}\) taking values in \(\{-1,+1\}\) by taking \(\mathbf{f_{1}^{\prime}}(p^{j})\) to be independent \(\{-1,+1\}\)-valued random variables with mean \(\mathbb{E}\mathbf{f_{1}^{\prime}}(p^{j})=f_{1}^{\prime}(p^{j})\). (There is a slight overloading of the symbol \(\mathbb{E}\) in what follows, but we hope that it will be clear that \(\mathbb{E}_{n\leqslant X}^{\log}\) refers to logarithmic averaging and \(\mathbb{E}\) refers to expectation of a random variable.) By Fubini's theorem we have \[\mathbb{E}\sum_{p}\frac{1-\mathbf{f}_{\mathbf{1}}^{\prime}(p)}{p}<A,\] so by Markov's inequality we have \(\mathbf{f}_{\mathbf{1}}^{\prime}\in S\) with probability at least \(1-O(A^{-1})\). Supposing that \(\mathbf{f}_{\mathbf{1}}^{\prime}\in S\), set \(\mathbf{f}_{\mathbf{1}}:=\mathbf{f}_{\mathbf{1}}^{\prime}f_{1}^{\prime\prime}\). Thus \(\mathbf{f}_{\mathbf{1}}\) is a random multiplicative function taking values in \(\{-1,+1\}\) such that \(\mathbb{E}\mathbf{f}_{\mathbf{1}}(n)=f_{1}(n)\) for all \(n\). By the triangle inequality we have \[|\mathbf{f}_{\mathbf{1}}(p)-f_{1}(p)| =|f_{1}^{\prime\prime}(p)(\mathbf{f}_{\mathbf{1}}^{\prime}(p)-f_{ 1}^{\prime}(p))|\] \[=|f_{1}^{\prime\prime}(p)((1-f_{1}^{\prime}(p))-(1-\mathbf{f}_{ \mathbf{1}}^{\prime}(p)))|\] \[\leqslant(1-f_{1}^{\prime}(p))+(1-\mathbf{f}_{\mathbf{1}}^{ \prime}(p)).\] In particular \[\sum_{p}\frac{\mathbf{f}_{\mathbf{1}}(p)\overline{\chi}(p)}{p}=\sum_{p}\frac{f_{1 }(p)\overline{\chi}(p)}{p}+O_{A}(1).\] Taking real parts, since \(f_{1}\) is non-pretentious we conclude that \(\mathbf{f}_{\mathbf{1}}\) is non-pretentious. Since \(\mathbf{f}_{\mathbf{1}}\) takes values in \(\{-1,+1\}\), by [15, Theorem 2.6] we get \[\limsup_{X\to\infty}|\mathbb{E}_{n\leqslant X}^{\log}\mathbf{f}_{\mathbf{1}} (a_{1}n+h_{1})\prod_{i=2}^{k}f_{i}(a_{i}n+h_{i})|\leqslant 1-\eta \tag{4.1}\] for some absolute constant \(\eta>0\) (depending on \(a_{i},h_{i}\) but not on any of the multiplicative functions). 
Therefore, by (4.1) and the reverse Fatou's lemma, for some \(v\in\{-1,+1\}\) we have \[\limsup_{X\to\infty}|\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^ {k}f_{i}(a_{i}n+h_{i})|= \limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}v\prod_{i=1}^ {k}f_{i}(a_{i}n+h_{i})\] \[= \limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}v\mathbb{E} \mathbf{f}_{\mathbf{1}}(a_{1}n+h_{1})\prod_{i=2}^{k}f_{i}(a_{i}n+h_{i})\] \[= \limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}v\mathbb{E} (1_{S}(\mathbf{f}_{\mathbf{1}})+1_{S^{c}}(\mathbf{f}_{\mathbf{1}}))\mathbf{f}_ {\mathbf{1}}(a_{1}n+h_{1})\prod_{i=2}^{k}f_{i}(a_{i}n+h_{i})\] \[\leqslant \mathbb{E}1_{S}(\mathbf{f}_{\mathbf{1}})\limsup_{X\to\infty} \mathbb{E}_{n\leqslant X}^{\log}v\mathbf{f}_{\mathbf{1}}(a_{1}n+h_{1})\prod_{ i=2}^{k}f_{i}(a_{i}n+h_{i})+\mathbb{E}1_{S^{c}}(\mathbf{f}_{\mathbf{1}})\] \[\leqslant 1-\eta+O(A^{-1})\] \[\leqslant 1-\frac{\eta}{2}\] if \(A\) is large enough. Thus, replacing \(\eta\) by \(\eta/2\) we see that part (2) holds for general non-pretentious multiplicative functions \(f_{1}:\mathbb{N}\to[-1,1]\). As we will soon see, Theorems 1.5 and 1.8 follow quickly from Lemma 3.2 and the following estimate (which is based heavily on Lemma 4.1). **Lemma 4.2**.: _Let \(k\geqslant 1\), and let \(a_{1},\ldots,a_{k}>0\) and \(h_{1},\ldots,h_{k}\in\mathbb{N}\) be integers with \(a_{i}h_{j}-a_{j}h_{i}\neq 0\) for all \(i\neq j\). Let \(f_{1},\ldots,f_{k}:\mathbb{N}\to[-1,1]\) be multiplicative functions._ 1. _Suppose that_ \(f_{1}\) _is non-pretentious. Then for all_ \(\gamma\in\mathbb{R}\) _we have_ \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{1}(a_{1}n+h_{1})f_{2}(a_{2}n +h_{2})e(\gamma n)=0.\] 2. _Suppose that_ \(f_{1}\) _is non-pretentious. If_ \(\gamma\in\mathbb{Q}\)_, there is some_ \(\eta>0\) _(depending only on_ \(\gamma\)_, the_ \(a_{i}\) _and the_ \(h_{i}\)_) such that_ \[\limsup_{X\to\infty}|\mathbb{E}_{n\leqslant X}^{\log}e(\gamma n)\prod_{i=1}^{k }f_{i}(a_{i}n+h_{i})|\leqslant 1-\eta.\] 3. _If_ \(\gamma\notin\mathbb{Q}\)_, then_ (4.2) \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}e(\gamma n)\prod_{i=1}^{k}f_{ i}(a_{i}n+h_{i})=0.\] Proof.: _Case 1: \(\gamma\) rational._ Write \(\gamma=a/b\) with \(a\in\mathbb{Z}\) and \(b\in\mathbb{N}\). Then by expanding \(e(\gamma n)\) as a linear combination of indicators of arithmetic progressions modulo \(b\), for part (1) it suffices to show that for each \(1\leqslant r\leqslant b\) we have \[\mathbb{E}_{n\leqslant X}^{\log}f_{1}(a_{1}n+h_{1})f_{2}(a_{2}n+h_{2})1_{n \equiv r\bmod b}=o(1).\] Making a change of variables, this reduces to \[\mathbb{E}_{m\leqslant X/b}^{\log}f_{1}(a_{1}(bm+r)+h_{1})f_{2}(a_{2}(bm+r)+h _{2})=o(1).\] But this follows from Lemma 4.1(1). For part (2) when \(\gamma\in\mathbb{Q}\), proceeding analogously we seek some \(\eta>0\) for which \[\limsup_{X\to\infty}|\mathbb{E}_{m\leqslant X/b}^{\log}\prod_{i=1}^{k}f_{i}(a _{i}bm+a_{i}r+h_{i})|\leqslant 1-\eta\] for each \(1\leqslant r\leqslant b\). This follows directly from Lemma 4.1(2). _Case 2: \(\gamma\) irrational._ In this case, the same argument works for parts (1) and (3), so we write out the argument for general \(k\). We first reduce to the case where \(f_{1},\ldots,f_{k}\) are completely multiplicative. 
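This reduction goes through a Dirichlet convolution decomposition \(f=\widetilde{f}*g\), defined in the next paragraph. The following sketch (an illustration only; the test function \(f=\mu^{2}\) is our own choice, not taken from the argument) verifies the identity \(f(n)=\sum_{d\mid n}g(d)\widetilde{f}(n/d)\) numerically.

```python
# Verify f = ftilde * g for f = mu^2, where ftilde is completely
# multiplicative with ftilde(p) = f(p) and g(p^e) = f(p^e) - f(p) f(p^{e-1}).
def factor(n):
    out, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            out[d] = out.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

def F(p, e):     # f on prime powers: f = mu^2, so f(p) = 1, f(p^e) = 0 for e >= 2
    return 1 if e <= 1 else 0

def G(p, e):     # g on prime powers; note g(p) = 0
    return F(p, e) - F(p, 1) * F(p, e - 1)

def mult(val, n):  # extend a prime-power definition multiplicatively
    out = 1
    for p, e in factor(n).items():
        out *= val(p, e)
    return out

ftilde = lambda n: mult(lambda p, e: F(p, 1) ** e, n)  # completely multiplicative part

for n in range(1, 500):
    assert mult(F, n) == sum(mult(G, d) * ftilde(n // d)
                             for d in range(1, n + 1) if n % d == 0)
print("decomposition verified for all n < 500")
```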
For each \(1\leqslant i\leqslant k\), write \(f_{i}=\widetilde{f}_{i}*g_{i}\), where \(\widetilde{f}_{i}\) is the completely multiplicative function given on the primes by \(\widetilde{f}_{i}(p)=f_{i}(p)\), and \(g_{i}\) is the multiplicative function given on prime powers \(p^{\ell}\) (\(\ell\geqslant 1\)) by \(g_{i}(p^{\ell})=f_{i}(p^{\ell})-f_{i}(p)f_{i}(p^{\ell-1})\). Note that \(|g_{i}(p^{\ell})|\leqslant 2\) for all \(p,\ell\), and \(g_{i}(p)=0\). Writing \(f_{i}(n)=\sum_{d|n}g_{i}(d)\widetilde{f}_{i}(n/d)\) and applying the triangle inequality, (4.2) reduces to showing that \[\sum_{d_{1},\ldots,d_{k}\geqslant 1}|g_{1}(d_{1})|\cdots|g_{k}(d_{k})|\left| \mathbb{E}_{n\leqslant X}^{\log}e(\gamma n)\prod_{i=1}^{k}\widetilde{f}_{i} \left(\frac{a_{i}n+h_{i}}{d_{i}}\right)1_{d_{i}|a_{i}n+h_{i}}\right|=o(1).\] If the system of \(k\) congruences \(a_{i}x+h_{i}\equiv 0\pmod{d_{i}}\) with \(1\leqslant i\leqslant k\) has a solution, then there is a unique solution of the form \(x\equiv c\pmod{D}\), where \(D\) is the least common multiple of \(d_{1},\ldots,d_{k}\). Making the change of variables \(n=Dm+c\) in the inner expectation above, for any \(w\geqslant 1\) the contribution from the terms with \(d_{1}>w\) is \[\ll\sum_{\begin{subarray}{c}d_{1},\ldots,d_{k}\geqslant 1\\ d_{1}>w\end{subarray}}\frac{|g_{1}(d_{1})|\cdots|g_{k}(d_{k})|}{D} \ll w^{-1/3}\sum_{d_{1},\ldots,d_{k}\geqslant 1}\frac{|g_{1}(d_{1})| \cdots|g_{k}(d_{k})|}{D^{2/3}}\] \[\ll w^{-1/3}\prod_{p}\Big{(}1+\sum_{\begin{subarray}{c}(i_{1}, \ldots,i_{k})\in\mathbb{Z}_{\geqslant 0}^{k}\\ \max i_{j}\geqslant 1\end{subarray}}\frac{|g_{1}(p^{i_{1}})|\cdots|g_{k}(p^{ i_{k}})|}{(p^{\max i_{j}})^{2/3}}\Big{)}\] \[\ll w^{-1/3}.\] Similarly, the contribution of terms with \(d_{j}>w\) for some \(j\) is \(\ll w^{-1/3}\). Letting \(w\to\infty\), we see that it suffices to show that for any fixed \(d_{1},\ldots,d_{k}\geqslant 1\) we have \[\mathbb{E}_{n\leqslant X}^{\log}e(\gamma n)\prod_{i=1}^{k}\widetilde{f}_{i} \left(\frac{a_{i}n+h_{i}}{d_{i}}\right)1_{d_{i}|a_{i}n+h_{i}}=o(1). \tag{4.3}\] Substituting \(n=Dm+c\) in (4.3), we reduce to proving \[\mathbb{E}_{m\leqslant X/D}^{\log}e(\gamma Dm)\prod_{i=1}^{k}\widetilde{f}_{i }\left(\frac{a_{i}(Dm+c)+h_{i}}{d_{i}}\right)=o(1).\] The linear polynomials \(a_{i}^{\prime}x+h_{i}^{\prime}:=\frac{a_{i}(Dx+c)+h_{i}}{d_{i}}\) have integer coefficients by assumption, and we have \(a_{i}^{\prime}h_{j}^{\prime}\neq a_{j}^{\prime}h_{i}^{\prime}\) whenever \(i\neq j\). Hence, the claim (4.2) would follow from the case of completely multiplicative functions. Thus, we assume that each \(f_{i}\) is completely multiplicative and that \((a_{i},h_{i})=1\) for all \(i\leqslant k\), since otherwise we can pull out the common factors by complete multiplicativity. We may further assume that \(f_{i}(a_{i})=1\) for all \(i\leqslant k\), since the values of \(f_{i}\) at the primes dividing \(a_{i}\) do not influence (4.2). Let \(A=\prod_{i\leqslant k}a_{i}\), \(h_{i}^{\prime}=h_{i}\prod_{j\neq i}a_{j}\). 
Then, writing \(\gamma^{\prime}=\gamma/A\), by complete multiplicativity and the fact that \(f_{i}(a_{i})=1\) for all \(i\leqslant k\), it suffices to show that \[\mathbb{E}_{n\leqslant X}^{\log}e(\gamma^{\prime}An)\prod_{i=1}^{k}f_{i}(An+ h_{i}^{\prime})=o(1).\] Making the change of variables \(m=An\), and expanding \[1_{m\equiv 0\bmod A}=\frac{1}{A}\sum_{j=1}^{A}e(jm/A),\] we reduce matters to showing that \[\mathbb{E}_{m\leqslant AX}^{\log}e((\gamma^{\prime}+j/A)m)\prod_{i=1}^{k}f_{i} (m+h_{i}^{\prime})=o(1)\] for all integers \(1\leqslant j\leqslant A\). But as \(\gamma^{\prime}+j/A\) is irrational, this follows from Lemma 4.1(3). ### The pretentious case We now prove that Theorems 1.2(1) and 1.6(1) hold in the case of pretentious functions. **Lemma 4.3**.: _Let \(k\geqslant 1\) and let \(f_{1},\ldots,f_{k}:\mathbb{N}\to[-1,1]\) be pretentious multiplicative functions. Let \(\alpha_{1},\ldots,\alpha_{k}>0\) and \(\beta_{1},\ldots,\beta_{k}\in\mathbb{R}\) be such that \(1,\alpha_{1},\ldots,\alpha_{k}\) are linearly independent over \(\mathbb{Q}\). Then we have_ \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}f_{i}(\lfloor \alpha_{i}n+\beta_{i}\rfloor)=\prod_{i=1}^{k}\lim_{X\to\infty}\mathbb{E}_{n \leqslant X}^{\log}f_{i}(n). \tag{4.4}\] Proof.: From [3, Theorem 6] it follows that \(f_{i}\) is almost periodic in the following sense: for any \(\varepsilon>0\) there exists a decomposition \[f_{i}(n)=T_{\varepsilon,i}(n)+\mathcal{E}_{\varepsilon,i}(n),\] where \(T_{\varepsilon,i}(x)=\sum_{1\leqslant\ell\leqslant L_{\varepsilon,i}}c_{ \varepsilon,i}(\ell)e(\gamma_{\ell,\varepsilon,i}x)\) for some \(L_{\varepsilon,i}\), some real numbers \(c_{\varepsilon,i}(\ell)\) and some rational numbers \(\gamma_{\ell,\varepsilon,i}\), and \(\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}|\mathcal{E}_{\varepsilon,i}(n)|\leqslant\varepsilon\). Therefore, it suffices to prove for any rational numbers \(\gamma_{i}\) that \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}e(\gamma_{i} \lfloor\alpha_{i}n+\beta_{i}\rfloor)=\prod_{i=1}^{k}\lim_{X\to\infty}\mathbb{E }_{n\leqslant X}^{\log}e(\gamma_{i}n).\] Let \(\gamma_{i}=a_{i}/d_{i}\) with \(a_{i}\) and \(d_{i}\geqslant 1\) integers. By writing \(e(\gamma_{i}m)\) as a linear combination of the indicators \(1_{m\equiv c\pmod{d_{i}}}\), it suffices to show for any integers \(c_{i},d_{i}\geqslant 1\) that \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}1_{\lfloor \alpha_{i}n+\beta_{i}\rfloor\equiv c_{i}\pmod{d_{i}}}=\prod_{i=1}^{k}\lim_{X \to\infty}\mathbb{E}_{n\leqslant X}^{\log}1_{n\equiv c_{i}\pmod{d_{i}}}= \frac{1}{d_{1}\cdots d_{k}}.\] Observe that \(\lfloor\alpha n+\beta\rfloor\equiv c\pmod{d}\) for \(0\leqslant c<d\) is equivalent to \(\{\frac{\alpha}{d}n+\frac{\beta}{d}\}\in[\frac{c}{d},\frac{c+1}{d})\). Hence, it suffices to show that \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}1_{\{\frac{ \alpha_{i}}{d_{i}}n+\frac{\beta_{i}}{d_{i}}\}\in[\frac{c_{i}}{d_{i}},\frac{c_{ i}+1}{d_{i}})}=\frac{1}{d_{1}\cdots d_{k}}.\] But this follows from the Kronecker-Weyl theorem since the numbers \(1,\alpha_{1}/d_{1},\ldots,\alpha_{k}/d_{k}\) are linearly independent over \(\mathbb{Q}\). ## 5 Proofs of Theorem 1.5 and Theorem 1.8 Understanding the correlations of non-pretentious multiplicative functions restricted to Bohr sets is straightforward, given the previous lemmas. Proof of Theorem 1.5.: Let \(B\in\mathcal{B}_{\mathrm{convex}}\) and \(\varepsilon>0\). 
Let \(f_{1},f_{2}:\mathbb{N}\to[-1,1]\) be multiplicative with \(f_{1}\) non-pretentious. For any \(\gamma\in\mathbb{R}\) we have \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{1}(a_{1}n+h_{1})f_{2}(a_{ 2}n+h_{2})e(\gamma n)=0\] by Lemma 4.2. Therefore, from Corollary 3.3 and the triangle inequality, if \(X\) is large enough depending on \(\varepsilon\), \[\mathbb{E}_{n\leqslant X}^{\log}f_{1}(a_{1}n+h_{1})f_{2}(a_{2}n+h_{2})1_{B}(n )=O(\varepsilon).\] Since \(\varepsilon\) was arbitrary, Theorem 1.5 follows. Proof of Theorem 1.8.: Let \(B\in\mathcal{B}_{\mathrm{convex}}\) and \(\varepsilon>0\). Let \(f_{1},\ldots,f_{k}:\mathbb{N}\to[-1,1]\) be multiplicative with \(f_{1}\) non-pretentious. By Lemma 3.2 we write \[1_{B}(n)=\sum_{l\leqslant L_{\varepsilon}}c_{\varepsilon}(l)e(\gamma_{l, \varepsilon}n)+\sum_{a\leqslant q}t_{a}1_{n\equiv a\,\mathrm{mod}\;q}+ \mathcal{E}_{\varepsilon}(n),\] where \(\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}|\mathcal{E}_{\varepsilon}(n)| \leqslant\varepsilon\), \(\gamma_{l,\varepsilon}\notin\mathbb{Q}\) for all \(l\), \(|c_{\varepsilon}(l)|\ll_{\varepsilon}1\), \(t_{a}\geqslant 0\) for all \(a\), and \(\frac{1}{q}\sum_{a\leqslant q}t_{a}=\delta_{B}+O(\varepsilon)\). Parametrising the progression \(n\equiv a\,\mathrm{mod}\;q\), and using partial summation to pass from \(\mathbb{E}_{n\leqslant X}|\mathcal{E}_{\varepsilon}(n)|\) to \(\mathbb{E}_{n\leqslant X}^{\mathrm{log}}|\mathcal{E}_{\varepsilon}(n)|\), we have \[|\mathbb{E}_{n\leqslant X}^{\mathrm{log}}1_{B}(n)\prod_{i=1}^{k }f_{i}(a_{i}n+h_{i})|\] \[\leqslant\sum_{l\leqslant L_{\varepsilon}}|c_{\varepsilon}(l)| |\mathbb{E}_{n\leqslant X}^{\mathrm{log}}e(\gamma_{l,\varepsilon}n)\prod_{i=1 }^{k}f_{i}(a_{i}n+h_{i})|+\sum_{a\leqslant q}\frac{t_{a}}{q}|\mathbb{E}_{m \leqslant\frac{X}{q}}^{\mathrm{log}}\prod_{i=1}^{k}f_{i}(a_{i}(qm+a)+h_{i})| +O(\varepsilon).\] By combining the different parts of Lemma 4.2, using critically the fact that \(\gamma_{l,\varepsilon}\notin\mathbb{Q}\), there is some \(\eta>0\) (fixed, independently of \(X\) and \(\varepsilon\)) for which the above is \[\leqslant o_{\varepsilon}(1)+\delta_{B}(1-2\eta)+O(\varepsilon).\] Picking \(\varepsilon\) small enough and \(X\) large enough, we obtain an upper bound of \(\delta_{B}(1-\eta)\) as required. ## 6 Proof of Theorem 1.2(1)-(2) By Lemma 4.3, we have Theorem 1.2(1) in the case where \(f_{1},f_{2}\) are pretentious. We shall show that if \(f_{2}\) is non-pretentious, then Theorem 1.2(1) holds under the weaker assumption that \(\alpha_{1}/\alpha_{2}\) is irrational. By the fact that \(f_{2}\) is non-pretentious and real-valued, we have \[\sum_{p}\frac{1-\mathrm{Re}(f_{2}(p)\overline{\chi}(p)p^{-it})}{p}=\infty\] for any real number \(t\) and Dirichlet character \(\chi\) (see [11, Lemma C.1]). Hence, we have \(\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\mathrm{log}}f_{2}(n)=0\) by Halasz's theorem ([14, Theorem 4.5 in Section III.4]). Now it suffices to show that \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\mathrm{log}}f_{1}(\lfloor\alpha_ {1}n+\beta_{1}\rfloor)f_{2}(\lfloor\alpha_{2}n+\beta_{2}\rfloor)=0.\] Once we have shown this, Theorem 1.2(2) also follows. We first reduce the correlation in Theorem 1.2(1) to simpler correlations of the form \[\mathbb{E}_{n\leqslant X}^{\mathrm{log}}f_{1}(n)f_{2}(\lfloor\alpha n+\beta \rfloor)1_{B}(n),\] where \(B\in\mathcal{B}_{\mathrm{convex}}\) is a Bohr set. To this end, we begin with the following lemma. 
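Before stating it, the following sketch (with arbitrary test values of the \(\alpha_{i},\beta_{i}\); it plays no role in the proofs) illustrates numerically the phenomenon the lemma formalises: \(\lfloor\alpha_{2}n+\beta_{2}\rfloor\) coincides with \(\lfloor\gamma\lfloor\alpha_{1}n+\beta_{1}\rfloor+i\rfloor\) for one of finitely many integer shifts \(i\), where \(\gamma=\alpha_{2}/\alpha_{1}\).

```python
# Observe that only a bounded set of integer shifts i occurs; illustration only.
import math

alpha1, beta1 = math.sqrt(2), 0.3
alpha2, beta2 = math.sqrt(3), -0.7
gamma = alpha2 / alpha1

shifts = set()
for n in range(1, 10**5):
    m = math.floor(alpha1 * n + beta1)
    # the shift realising floor(alpha2*n + beta2) = floor(gamma*m + i)
    i = math.floor(alpha2 * n + beta2) - math.floor(gamma * m)
    assert math.floor(gamma * m + i) == math.floor(alpha2 * n + beta2)
    shifts.add(i)
print(sorted(shifts))  # a small finite set of integers
```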
**Lemma 6.1**.: _Fix \(\alpha_{1},\alpha_{2}>0\) and \(\beta_{1},\beta_{2}\in\mathbb{R}\), and suppose that \(\alpha_{1}/\alpha_{2}\) is irrational. Then, there exist \(M\in\mathbb{N}\) and linear polynomials \(L_{1},\ldots,L_{M}:\mathbb{R}\to\mathbb{R}\) of the form \(L_{i}(x)=(\alpha_{2}/\alpha_{1})x+n_{i}\) with \(n_{i}\in\mathbb{Z}\) and a partition \(A_{1}\sqcup A_{2}\sqcup\cdots\sqcup A_{M}\) of \(\mathbb{N}\) such that_ 1. _For any_ \(1\leqslant i\leqslant M\)_, we have_ \[\lfloor\alpha_{2}n+\beta_{2}\rfloor=\lfloor L_{i}(\lfloor\alpha_{1}n+\beta_{1 }\rfloor)\rfloor\quad\mathrm{whenever}\quad n\in A_{i}.\] 2. _For any_ \(1\leqslant i\leqslant M\) _and_ \(\varepsilon>0\)_, there exist_ \(J_{\varepsilon}\geqslant 1\)_, Bohr sets_ \(B_{i,j,\varepsilon}\in\mathcal{B}_{2,\mathrm{convex}}\) _for_ \(j\leqslant J_{\varepsilon}\)_, and a decomposition_ \[1_{A_{i}}(n)=\sum_{j\leqslant J_{\varepsilon}}1_{B_{i,j,\varepsilon}}(n)+ \mathcal{E}_{i,\varepsilon}(n),\] _where_ \[\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}|\mathcal{E}_{i,\varepsilon}(n )|\leqslant\varepsilon.\] Proof.: Let \(\gamma=\alpha_{2}/\alpha_{1}\). Write \[\alpha_{2}n+\beta_{2}=\gamma\lfloor\alpha_{1}n+\beta_{1}\rfloor+r_{n}, \tag{6.1}\] where \[r_{n}=\beta_{2}-\gamma\beta_{1}+\gamma\{\alpha_{1}n+\beta_{1}\}. \tag{6.2}\] We have \(|r_{n}|\leqslant R\) for all \(n\) for some \(R\ll_{\alpha_{i},\beta_{i}}1\). Therefore, for each \(n\) there exists an integer \(i\in[-R,R]\) such that \[\lfloor\alpha_{2}n+\beta_{2}\rfloor=\lfloor\gamma\lfloor\alpha_{1}n+\beta_{1} \rfloor+r_{n}\rfloor=\lfloor\gamma\lfloor\alpha_{1}n+\beta_{1}\rfloor+i\rfloor.\] Now let \(L_{i}(x):=\gamma x+i\). Consider the sets \[A_{i}:\,=\{n:\,\lfloor\alpha_{2}n+\beta_{2}\rfloor=\lfloor L_{i}(\lfloor\alpha_ {1}n+\beta_{1}\rfloor)\rfloor\}.\] The sets \(A_{i}\) form a partition of \(\mathbb{N}\), and note that by (6.1), (6.2) we have \[A_{i} =\{n:\,\,\lfloor\alpha_{2}n+\beta_{2}\rfloor=\lfloor(\alpha_{2}n+ \beta_{2})+i+\gamma\beta_{1}-\gamma\{\alpha_{1}n+\beta_{1}\}\rfloor\}\] \[=\{n:\,\,-\{\alpha_{2}n+\beta_{2}\}\leqslant i+\gamma\beta_{1}- \gamma\{\alpha_{1}n+\beta_{1}\}<1-\{\alpha_{2}n+\beta_{2}\}\},\] where we used the fact that \(\lfloor x+y\rfloor=\lfloor x\rfloor\) if and only if \(-\{x\}\leqslant y<1-\{x\}\). Now, let \(\varepsilon>0\) and let \(K\geqslant 1\) be large in terms of \(\varepsilon\). For brevity, write \(u_{i}=i+\gamma\beta_{1}\). Then we can write \[1_{A_{i}}(n) =\sum_{0\leqslant k\leqslant K-1}1_{\alpha_{2}n+\beta_{2}\in[k/ K,(k+1)/K)\,\mathrm{mod}\,\,1}1_{u_{i}-\gamma\{\alpha_{1}n+\beta_{1}\}\in(-k/ K,1-k/K)}\] \[+O\left(1_{u_{i}-\gamma\{\alpha_{1}n+\beta_{1}\}-\alpha_{2}n- \beta_{2}\in[-1/K,1/K]\,\mathrm{mod}\,\,1}\right).\] Each term inside the \(k\) sum can be written as the sum of indicator functions of elements of \(\mathcal{B}_{2,\mathrm{convex}}\). Moreover, since \(\gamma\) is irrational, by the Kronecker-Weyl theorem we have \[\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}1_{u_{i}-\gamma\{\alpha_{1}n+ \beta_{1}\}-\alpha_{2}n-\beta_{2}\in[-1/K,1/K]\mod 1}=o_{K\to\infty}(1). 
\tag{6.3}\] Indeed, expressing \(\{\alpha_{1}n+\beta_{1}\}=\alpha_{1}n+\beta_{1}-\lfloor\alpha_{1}n+\beta_{1}\rfloor\), it is enough to show that for any interval \(I\) modulo \(1\) with length \(O(1/K)\), \[\limsup_{X\to\infty}\mathbb{E}_{n\leqslant X}1_{\gamma\lfloor\alpha_{1}n+ \beta_{1}\rfloor\in I\,\mathrm{mod}\,\,1}=o_{K\to\infty}(1).\] But since \(\alpha_{1}>0\) the sequence \((\lfloor\alpha_{1}n+\beta_{1}\rfloor)_{n\leqslant X}\) contains integers at most \(\alpha_{1}X+\beta_{1}\) and at least \(\lfloor\beta_{1}\rfloor\), and the multiplicity of the sequence is at most \(\lfloor\alpha_{1}^{-1}\rfloor+1\). Therefore \[\mathbb{E}_{n\leqslant X}1_{\gamma\lfloor\alpha_{1}n+\beta_{1}\rfloor\in I\, \mathrm{mod}\,\,1}\ll\mathbb{E}_{n\leqslant\alpha_{1}X}1_{\gamma n\in I\, \mathrm{mod}\,\,1}+O(1/X)\ll\frac{1}{K}\] by Kronecker-Weyl (for large enough \(X\)). Thus (6.3) holds and the claim follows. Applying Lemma 6.1, we can write \[\mathbb{E}_{n\leqslant X}^{\log}f_{1}(\lfloor\alpha_{1}n+\beta_{1} \rfloor)f_{2}(\lfloor\alpha_{2}n+\beta_{2}\rfloor)\] \[=\sum_{i\leqslant M}\sum_{j\leqslant J}\mathbb{E}_{n\leqslant X}^ {\log}f_{1}(\lfloor\alpha_{1}n+\beta_{1}\rfloor)f_{2}(\lfloor L_{i}(\lfloor \alpha_{1}n+\beta_{1}\rfloor)\rfloor)1_{B_{i,j,J}}(n)+o_{X,J\to\infty}(1)\] for some Bohr sets \(B_{i,j,J}\in\mathcal{B}_{2,\mathrm{convex}}\) and some linear polynomials \(L_{i}:\mathbb{R}\to\mathbb{R}\) having leading coefficient \(\alpha_{2}/\alpha_{1}\). Hence, it suffices to show that \[\mathbb{E}_{n\leqslant X}^{\log}f_{1}(\lfloor\alpha_{1}n+\beta_{1} \rfloor)f_{2}(\lfloor L(\lfloor\alpha_{1}n+\beta_{1}\rfloor)\rfloor)1_{B}(n)= o(1) \tag{6.4}\] for any \(B\in\mathcal{B}_{\mathrm{convex}}\) and any polynomial \(L(x)=\theta x+j\) with \(j\in\mathbb{Z}\), where \(\theta=\alpha_{2}/\alpha_{1}\). For any \(B\in\mathcal{B}_{\mathrm{convex}}\) and \(\gamma\in\mathbb{R}\), introduce a multiplicity counting function \[N_{B,\alpha,\beta,\gamma}(m):=\sum_{n\in B:\ m=\lfloor\alpha n+\beta\rfloor}e( \gamma n).\] Then, making a change of variables, we can rewrite the left-hand side of (6.4) as \[\mathbb{E}_{m\leqslant\alpha_{1}X}^{\log}f_{1}(m)f_{2}(\lfloor L(m)\rfloor)N _{B,\alpha_{1},\beta_{1},0}(m)+o(1).\] We then need the following lemma on the structure of \(N_{B,\alpha,\beta,\gamma}(m)\) (which is a version of Corollary 3.3 for \(N_{B,\alpha,\beta,\gamma}(m)\)). **Lemma 6.2**.: _Fix \(B\in\mathcal{B}_{\mathrm{convex}}\), \(\alpha>0\) and \(\beta,\gamma\in\mathbb{R}\). Then, for any \(\varepsilon>0\), there exists some \(K_{\varepsilon}\geqslant 1\), some sequence of real numbers \((\gamma_{k,\varepsilon})_{k\geqslant 1}\) and some complex numbers \(c_{\varepsilon}(k)\) with \(|c_{\varepsilon}(k)|\ll_{\varepsilon}1\) such that for all \(m\in\mathbb{Z}\)_ \[N_{B,\alpha,\beta,\gamma}(m)=\sum_{1\leqslant k\leqslant K_{\varepsilon}}c_{ \varepsilon}(k)e(\gamma_{k,\varepsilon}m)+\mathcal{E}_{\varepsilon}(m)\] _and \(\limsup_{X\to\infty}\mathbb{E}_{m\leqslant X}|\mathcal{E}_{\varepsilon}(m)|\leqslant\varepsilon\)._ Proof.: Note that there exists an integer \(N\geqslant 0\) such that \[\left|\left[\frac{m-\beta}{\alpha},\frac{m+1-\beta}{\alpha}\right)\cap \mathbb{Z}\right|\in\{N,N+1\}\] for all \(m\in\mathbb{Z}\). Let \(A_{1}\) be the set of \(m\) such that \(|[(m-\beta)/\alpha,(m+1-\beta)/\alpha)\cap\mathbb{Z}|=N\), and let \(A_{2}\) be the complement of this set. 
We can write \[N_{B,\alpha,\beta,\gamma}(m)=\sum_{(m-\beta)/\alpha\leqslant n<(m+1-\beta)/ \alpha}1_{B}(n)e(\gamma n),\] and this equals \[\sum_{0\leqslant j\leqslant N-1}1_{A_{1}}(m)1_{B}(\lceil(m-\beta)/ \alpha\rceil+j)e(\gamma(\lceil(m-\beta)/\alpha\rceil+j))\] \[+\sum_{0\leqslant j\leqslant N}1_{A_{2}}(m)1_{B}(\lceil(m-\beta)/ \alpha\rceil+j)e(\gamma(\lceil(m-\beta)/\alpha\rceil+j)).\] The claim will follow if we can show that the four functions \(m\mapsto 1_{A_{1}}(m)\), \(m\mapsto 1_{A_{2}}(m)\), \(m\mapsto 1_{B}(\lceil(m-\beta)/\alpha\rceil+j)\) and \(m\mapsto e(\gamma(\lceil(m-\beta)/\alpha\rceil+j))\) can each be approximated by trigonometric polynomials of length \(O_{\varepsilon}(1)\) with bounded coefficients (up to an error term which is \(O(\varepsilon)\) in the normalised \(L^{1}\) norm on the interval \([1,X]\cap\mathbb{Z}\)). First note that the sets \(A_{i}\) are both disjoint unions of elements of \(\mathcal{B}_{1,\mathrm{convex}}\) (in fact, they are unions of sets of the form \(\{m:\ \left\{\frac{m-\beta}{\alpha}\right\}\in I_{i}\}\) for some intervals \(I_{i}\)). Corollary 3.3 then means that \(1_{A_{i}}\) can be suitably approximated. Next observe that by applying Corollary 3.3 to \(B\) one reduces the task of approximating the term \(m\mapsto 1_{B}(\lceil(m-\beta)/\alpha\rceil+j)\) to approximating terms of the form \(m\mapsto e(\gamma(\lceil(m-\beta)/\alpha\rceil+j))\) (for arbitrary \(\gamma\)). To achieve this, we write \[e(\gamma(\lceil(m-\beta)/\alpha\rceil+j))=e(\gamma j)e(\gamma\frac{m-\beta}{ \alpha})e(\gamma\Big{\{}\frac{m-\beta}{\alpha}\Big{\}}),\] which reduces matters to decomposing \(e(\gamma\Big{\{}\frac{m-\beta}{\alpha}\Big{\}})\). Then observe that for a suitably large integer \(L\geq\varepsilon^{-1}\), for any \(\gamma,\gamma_{1},\gamma_{2}\in\mathbb{R}\) we have \[e(\gamma\{\gamma_{1}m+\gamma_{2}\})=\sum_{0\leqslant\ell<L}e\left(\gamma\frac {\ell}{L}\right)1_{\{\gamma_{1}m+\gamma_{2}\}\in[\ell/L,(\ell+1)/L)}+O(\varepsilon)\] Thus, up to an acceptable error, we can write \(e(\gamma\Big{\{}\frac{m-\beta}{\alpha}\Big{\}})\) as a bounded \(\mathbb{C}\)-linear combination of indicator functions of Bohr sets in \(\mathcal{B}_{\mathrm{convex}}\). Applying Corollary 3.3 to each of these Bohr sets, the result follows. Applying Lemma 6.2 to (6.4), and writing out \(L(m)=\theta m+j\), we reduce matters to proving that \[\sup_{\gamma}\lim_{X\to\infty}\Big{|}\mathbb{E}_{m\leqslant\alpha_{1}X}^{ \mathrm{log}}f_{1}(m)f_{2}(\lfloor\theta m+j\rfloor)e(\gamma m)\Big{|}=0 \tag{6.5}\] We are now in a position to apply the orthogonality criterion of Katai-Bourgain-Sarnak-Ziegler [1] for multiplicative functions. **Lemma 6.3** (Orthogonality criterion).: _Let \(a:\mathbb{N}\to\mathbb{C}\) be a bounded sequence of complex numbers. Suppose that, for any \(\varepsilon>0\), there exists \(P\geq 1\) such that for any primes \(P\leqslant p<q\), we have_ \[\limsup_{X\to\infty}\Big{|}\mathbb{E}_{n\leqslant X}^{\mathrm{log}}a(pn) \overline{a(qn)}\Big{|}\leqslant\varepsilon. \tag{6.6}\] _Then, for any \(1\)-bounded multiplicative function \(f:\mathbb{N}\to\mathbb{C}\), we have_ \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\mathrm{log}}f(n)a(n)=0. \tag{6.7}\] Proof.: This can be deduced from [2, Lemma 2.16]. For the sake of completeness, we give a proof. Suppose that \(\varepsilon>0\) is small, \(X\) is large enough in terms of \(\varepsilon\), and \(|\mathbb{E}_{n\leqslant X}^{\mathrm{log}}f(n)a(n)|\geqslant\varepsilon\). 
Let \(Q\) be large enough in terms of \(\varepsilon\) and \(P\). By Elliott's inequality [5, Lemma 4.7], we have \[\mathbb{E}_{n\leqslant X}^{\mathrm{log}}f(n)a(n)=\frac{1}{\log\log Q}\sum_{p \leqslant Q}\frac{1}{p}\mathbb{E}_{n\leqslant X}^{\mathrm{log}}f(pn)a(pn)+o_{ Q\to\infty}(1).\] Since \(Q\) is large enough in terms of \(\varepsilon\), the error term here is at most \(\varepsilon/10\) in absolute value. By the multiplicativity of \(f\), we have \(f(pn)=f(p)f(n)+O(1_{p|n})\), so we conclude that \[\Big{|}\frac{1}{\log\log Q}\sum_{p\leqslant Q}\frac{f(p)}{p}\mathbb{E}_{n \leqslant X}^{\mathrm{log}}f(n)a(pn)\Big{|}\geqslant\frac{4}{5}\varepsilon,\] say. Let \(J=\lceil 10\varepsilon^{-2}\rceil\). Then, by the pigeonhole principle and the assumption that \(Q\) is large, there exist distinct primes \(P\leqslant p_{1},\ldots,p_{J}\leqslant Q\) such that \[\Big{|}\mathbb{E}_{n\leqslant X}^{\log}f(n)a(p_{j}n)\Big{|}\geqslant\frac{ \varepsilon}{2}\] for all \(1\leqslant j\leqslant J\). Hence, there exist some unimodular complex numbers \(c_{j}\) such that \[\sum_{j\leqslant J}c_{j}\mathbb{E}_{n\leqslant X}^{\log}f(n)a(p_{j}n)\geqslant \frac{\varepsilon J}{2}.\] Exchanging the order of summation and then applying Cauchy-Schwarz, we deduce \[\mathbb{E}_{n\leqslant X}^{\log}\Big{|}\sum_{j\leqslant J}c_{j}a(p_{j}n) \Big{|}^{2}\geqslant\frac{(\varepsilon J)^{2}}{4}.\] Opening the square and separating the diagonal contribution, we obtain \[\sum_{\begin{subarray}{c}i,j\leqslant J\\ i\neq j\end{subarray}}c_{i}\overline{c_{j}}\mathbb{E}_{n\leqslant X}^{\log}a (p_{i}n)\overline{a(p_{j}n)}\geqslant\frac{(\varepsilon J)^{2}}{4}-J.\] But recalling our choice of \(J\), we obtain a contradiction with (6.6) (with \(\varepsilon^{2}/8\) in place of \(\varepsilon\)). By Lemma 6.3, to prove (6.5) it suffices to show that, for all fixed primes \(p,q\) with \(P\leqslant p<q\), \[\sup_{\gamma}\limsup_{X\to\infty}\Big{|}\mathbb{E}_{n\leqslant X}^{\log}f_{2 }(\lfloor p\theta n+j\rfloor)f_{2}(\lfloor q\theta n+j\rfloor)e(\gamma n) \Big{|}=o_{P\to\infty}(1). \tag{6.8}\] We continue with a lemma connecting \(\lfloor p\theta n+j\rfloor\) and \(\lfloor q\theta n+j\rfloor\) (in a similar spirit to Lemma 6.1). **Lemma 6.4**.: _For all integers \(p,q\geqslant 1\) and reals \(\theta,\beta_{p},\beta_{q}\), we have a finite partition \(\mathbb{Z}=\mathcal{B}_{1}\sqcup\mathcal{B}_{2}\sqcup\cdots\sqcup\mathcal{B} _{M}\) such that \(\mathcal{B}_{i}\in\mathcal{B}_{1,\mathrm{convex}}\) with_ \[\lfloor q\theta n+\beta_{q}\rfloor=\frac{q\lfloor p\theta n+\beta_{p}\rfloor+ r_{i}}{p}\quad\mathrm{whenever}\quad n\in\mathcal{B}_{i}\] _for some integers \(r_{i}\). 
Furthermore, the phase of each \(\mathcal{B}_{i}\) is \(\theta\)._ Proof.: We have \[p\lfloor q\theta n+\beta_{q}\rfloor-q\lfloor p\theta n+\beta_{p}\rfloor=p\beta _{q}-q\beta_{p}+q\{p\theta n+\beta_{p}\}-p\{q\theta n+\beta_{q}\}.\] For \(i,j\in\mathbb{Z}_{\geqslant 0}\) we define \[B_{i,j}=\{n\in\mathbb{Z}:\,\{p\theta n+\beta_{p}\}=p\{\theta n\}+\beta_{p}-i, \,\{q\theta n+\beta_{q}\}=q\{\theta n\}+\beta_{q}-j\}.\] The \(B_{i,j}\) form a partition of \(\mathbb{Z}\), all but finitely many of the \(B_{i,j}\) are empty, and each \(B_{i,j}\) is a union of finitely many sets \(\mathcal{B}\in\mathcal{B}_{1,\mathrm{convex}}\) with phase \(\theta\); for example, sets of the form \[\mathcal{B}=B_{1}(\theta,U_{k,l}),\qquad\mathrm{where}\;\;U_{k,l}=\Big{[}\frac {k}{p},\frac{k+1-\{\beta_{p}\}}{p}\Big{)}\cap\Big{[}\frac{l}{q},\frac{l+1-\{ \beta_{q}\}}{q}\Big{)}\] for integers \(k\in[0,p-1]\) and \(l\in[0,q-1]\). If \(n\in B_{i,j}\), from the above formulas we have \[p\lfloor q\theta n+\beta_{q}\rfloor-q\lfloor p\theta n+\beta_{p}\rfloor=pj-qi \in\mathbb{Z}.\] The claim follows. Applying Lemma 6.4, we have reduced (6.8) to showing that for all integers \(r\), all \(B\in\mathcal{B}_{\mathrm{convex}}\), and all pairs of distinct primes \(p,q\) with \(P\leqslant p<q\), we have \[\sup_{\gamma}\limsup_{X\to\infty}\left|\mathbb{E}_{n\leqslant X}^{\mathrm{log}}f _{2}(\lfloor p\theta n+j\rfloor)f_{2}(\frac{q\lfloor p\theta n+j\rfloor+r}{p} )e(\gamma n)1_{B}(n)1_{q\lfloor p\theta n+j\rfloor+r\equiv 0\;(\mathrm{mod}\;p)} \right|=o_{P\to\infty}(1). \tag{6.9}\] It is simple to control the \(r=0\) case. Indeed, note that \(r=0\) implies \[\lfloor p\theta n+j\rfloor\equiv 0\pmod{p},\] or equivalently \[\theta n\in\left[\frac{-j}{p},\frac{1-j}{p}\right)\mod 1.\] Since \(\theta\) is irrational, the Kronecker-Weyl theorem [12, Exercise 1.1.5] tells us that this happens for \((1/p+o(1))X=o_{P\to\infty}(X)\) integers \(n\leqslant X\). The contribution of such \(n\) can be bounded trivially by the triangle inequality. It remains to consider \(r\neq 0\). We prove the following general result, as we will need to refer to it several times before the end of the paper. **Lemma 6.5**.: _Let \(p,q\geqslant 1\) be coprime integers, \(\beta\in\mathbb{R}\), \(\theta>0\), \(r\) a non-zero integer, and \(B\in\mathcal{B}_{\mathrm{convex}}\). Then, for any non-pretentious multiplicative function \(f:\mathbb{N}\to[-1,1]\), we have_ \[\sup_{\gamma}\limsup_{X\to\infty}\left|\mathbb{E}_{n\leqslant X}^{\mathrm{log }}f(\lfloor p\theta n+\beta\rfloor)f(\frac{q\lfloor p\theta n+\beta\rfloor+r}{p })e(\gamma n)1_{B}(n)1_{q\lfloor p\theta n+\beta\rfloor+r\equiv 0\;\mathrm{mod}\;p} \right|=0. \tag{6.10}\] Proof.: Recalling that \[N_{B,p\theta,\beta,\gamma}(m):=\sum_{n\in B:\;m=\lfloor p\theta n+\beta \rfloor}e(\gamma n),\] we rewrite (6.10) as \[\sup_{\gamma}\limsup_{X\to\infty}\left|\mathbb{E}_{m\leqslant p \theta X}^{\mathrm{log}}f(m)f(qm+r)N_{B,p\theta,\beta,\gamma}(m)1_{qm+r\equiv 0 \pmod{p}}\right|=0. \tag{6.11}\] By Lemma 6.2, we express \(N_{B,p\theta,\beta,\gamma}(m)\) as a trigonometric polynomial up to small error. We also expand the condition \(m\equiv-r\overline{q}\;\mathrm{mod}\;p\) by the exponential sum \[\frac{1}{p}\sum_{1\leqslant a\leqslant p}e(a(m+r\overline{q})/p).\] It therefore suffices to show that \[\sup_{\gamma}\limsup_{X\to\infty}\left|\mathbb{E}_{m\leqslant p \theta X}^{\mathrm{log}}f(m)f(qm+r)e(\gamma m)\right|=0. \tag{6.12}\] But this follows from Lemma 4.2 (since \(r\in\mathbb{Z}\setminus\{0\}\)). 
Thus the lemma has been proved. Applying Lemma 6.5 to expression (6.9), Theorem 1.2(1) follows. As already remarked, the argument settled Theorem 1.2(2) as well.

## 7 Proof of Theorem 1.2(3)

Since \(\alpha_{1}/\alpha_{2}\) is rational, there are coprime positive integers \(p\) and \(q\) and a real \(\theta>0\) for which \(\alpha_{1}=p\theta\) and \(\alpha_{2}=q\theta\). By Lemma 6.4, there is an integer \(J\) and a partition \(\mathbb{Z}=B_{-J}\sqcup B_{-J+1}\sqcup\cdots\sqcup B_{J}\) such that \(B_{j}\) is a disjoint union of Bohr sets in \(\mathcal{B}_{1,\mathrm{convex}}\) with phase \(\theta\) with \[\lfloor q\theta n+\beta_{2}\rfloor=\frac{q\lfloor p\theta n+\beta_{1}\rfloor+ j}{p}\quad\text{whenever}\quad n\in B_{j}.\] We claim that if \(j\neq 0\) then \[|\mathbb{E}_{n\leqslant X}^{\log}\lambda(\lfloor\alpha_{1}n+\beta_{1}\rfloor) \lambda(\lfloor\alpha_{2}n+\beta_{2}\rfloor)1_{B_{j}}(n)|=o(1). \tag{7.1}\] Indeed, writing \(B_{j}\) as a disjoint union of elements of \(\mathcal{B}_{\mathrm{convex}}\) it is enough to show that \[|\mathbb{E}_{n\leqslant X}^{\log}\lambda(\lfloor p\theta n+\beta_{1}\rfloor) \lambda(\frac{q\lfloor p\theta n+\beta_{1}\rfloor+j}{p})1_{B}(n)1_{q\lfloor p \theta n+\beta_{1}\rfloor+j\equiv 0\pmod{p}}|=o(1)\] for any \(B\in\mathcal{B}_{\mathrm{convex}}\). But this result follows directly from Lemma 6.5. Consider now the contribution from \(B_{0}\), namely \[\mathbb{E}_{n\leqslant X}^{\log}\lambda(\lfloor p\theta n+\beta_{1}\rfloor) \lambda(\frac{q\lfloor p\theta n+\beta_{1}\rfloor}{p})1_{B_{0}}(n). \tag{7.2}\] Since \(B_{0}\) is a disjoint union of finitely many sets in \(\mathcal{B}_{1,\mathrm{convex}}\) (call these Bohr sets \(S_{1},\ldots,S_{M}\)) we have \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\lambda(\lfloor p\theta n+ \beta_{1}\rfloor)\lambda(\frac{q\lfloor p\theta n+\beta_{1}\rfloor}{p})1_{B_{ 0}}(n)=\lambda(p)\lambda(q)\sum_{i\leqslant M}\delta_{S_{i}}. \tag{7.3}\] Including the terms with \(n\in B_{j}\), for \(j\neq 0\), we have \[\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}\lambda(\lfloor\alpha_{1}n+ \beta_{1}\rfloor)\lambda(\lfloor\alpha_{2}n+\beta_{2}\rfloor)=\lambda(p) \lambda(q)\sum_{i\leqslant M}\delta_{S_{i}}.\] In particular the limit exists. Finally, observe that for any Bohr set \(S_{i}\in\mathcal{B}_{1,\mathrm{convex}}\) the density \(\delta_{S_{i}}\) is positive if and only if \(S_{i}\) is infinite. Therefore \(\sum_{i\leqslant M}\delta_{S_{i}}=0\) if and only if \(B_{0}\) is finite. This completes the proof of Theorem 1.2(3). **Remark 7.1**.: It is clear from the proof that one could prove a similar result with \(\lambda\) replaced by any non-pretentious completely multiplicative function \(f:\mathbb{N}\to[-1,1]\) such that \(f(n)\neq 0\) for all \(n\geqslant 1\). ## 8 Higher order correlations In this section we will prove Theorem 1.6. By Lemma 4.3, we already have Theorem 1.6 part (1) in the case where \(f_{1},\ldots,f_{k}\) are pretentious. Hence, we may assume in this section that \(f_{1}\) is non-pretentious. Then we have \(\lim_{X\to\infty}\mathbb{E}_{n\leqslant X}^{\log}f_{1}(n)=0\) by Halasz's theorem, so it suffices to show that \[\limsup_{X\to\infty}\Big{|}\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}f_{i }(\lfloor\alpha_{i}n\rfloor)\Big{|}\leqslant 1-\eta.\] Proof of Theorem 1.6 part (1).: For contradiction we assume that \[\left|\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}f_{i}(\lfloor\alpha_{i}n \rfloor)\right|\geqslant 1-\eta\] for some fixed \(\eta>0\) and for arbitrarily large values of \(X\). 
Therefore there exist some \(u\in\{-1,+1\}\) and \(S_{1}\subset[X]\) for which \[\mathbb{E}_{n\leqslant X}^{\log}1_{S_{1}}(n)\geqslant 1-O(\eta)\] and \[\Big{|}\prod_{i\leqslant k}f_{i}(\lfloor\alpha_{i}n\rfloor)-u\Big{|}\ll\eta\] for all \(n\in S_{1}\). Let \(r\geqslant 2\) be prime. Define \[D_{r}:=(\frac{1}{r^{2}},\frac{2}{r^{2}})\times(\frac{1}{r},\frac{1}{r}+\frac{ 1}{r^{2}})^{k-1}\subset[0,1)^{k}.\] Since \(1,\alpha_{1},\ldots,\alpha_{k}\) are linearly independent over \(\mathbb{Q}\), by the Kronecker-Weyl theorem we have that the Bohr set \(B_{r}:=B(\alpha,D_{r})\) has positive density \(\delta_{B_{r}}=r^{-2k}\). We also have that for all \(n\in B_{r}\), \[\lfloor\alpha_{1}r^{2}n\rfloor =r\lfloor\alpha_{1}rn\rfloor+1\] \[\lfloor\alpha_{i}r^{2}n\rfloor =r\lfloor\alpha_{i}rn\rfloor\qquad(i\geqslant 2)\] \[\lfloor\alpha_{i}rn\rfloor \not\equiv 0\ \text{mod}\ r\qquad(i\geqslant 2).\] Observe that \[\mathbb{E}_{n\leqslant X}^{\log}1_{r|n}1_{S_{1}}(n)\geqslant\frac{1}{r}-O( \eta)-o(1).\] Hence \[\mathbb{E}_{n\leqslant X/r}^{\log}1_{S_{1}}(rn)\geqslant 1-O(r\eta)-o(1)\] and so \[\mathbb{E}_{n\leqslant X}^{\log}1_{S_{1}}(rn)\geqslant 1-O(r\eta)-o(1).\] From this argument, letting \[S_{2}:=B_{r}\cap\{n:\,rn\in S_{1}\}\cap\{n:\,r^{2}n\in S_{1}\},\] we see \[\mathbb{E}_{n\leqslant X}^{\log}1_{S_{2}}(n)\geqslant\delta_{B_{r}}-O(r^{2} \eta)-o(1).\] Then for \(n\in S_{2}\) we have \[u+O(\eta)=\prod_{i\leqslant k}f_{i}(\lfloor\alpha_{i}rn\rfloor)=f_{1}(\lfloor \alpha_{1}rn\rfloor)\prod_{i=2}^{k}f_{i}(\lfloor\alpha_{i}rn\rfloor)\] and \[u+O(\eta)=\prod_{i\leqslant k}f_{i}(\lfloor\alpha_{i}r^{2}n\rfloor) =f_{1}(r\lfloor\alpha_{1}rn\rfloor+1)\prod_{i=2}^{k}f_{i}(r\lfloor \alpha_{i}rn\rfloor)\] \[=f_{1}(r\lfloor\alpha_{1}rn\rfloor+1)\prod_{i=2}^{k}f_{i}(r)\cdot \prod_{i=2}^{k}f_{i}(\lfloor\alpha_{i}rn\rfloor)\] by multiplicativity and the fact that \((\lfloor\alpha_{i}rn\rfloor,r)=1\) for all \(i\geqslant 2\). Note that if for some \(u\in\{-1,+1\}\) and some real numbers \(|u_{i}|\leqslant 1\) we have \(u+O(\eta)=u_{1}u_{3}\) and \(u+O(\eta)=u_{2}u_{3}\), then \(|u_{1}u_{2}-1|=O(\eta)\). Therefore, \[|\mathbb{E}_{n\leqslant X}^{\log}1_{B_{r}}(n)f_{1}(\lfloor\alpha_{1}rn \rfloor)f_{1}(r\lfloor\alpha_{1}rn\rfloor+1)|\geqslant\delta_{B_{r}}-O(r^{2} \eta)-o(1). \tag{8.1}\] However, applying Lemma 6.5 with \(\theta=r\alpha_{1}\) we have \[|\mathbb{E}_{n\leqslant X}^{\log}1_{B_{r}}(n)f_{1}(\lfloor\alpha_{1}rn \rfloor)f_{1}(r\lfloor\alpha_{1}rn\rfloor+1)|=o(1). \tag{8.2}\] Expressions (8.1) and (8.2) are in contradiction for large enough \(X\) and small enough \(\eta\). This resolves Theorem 1.6 part (1).
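Before turning to part (2), we record a numerical sanity check (an illustration only; the choices \(r=5\), \(k=3\) and the \(\alpha_{i}\) below are arbitrary) of the three floor identities used on \(B_{r}\) above.

```python
# Verify the identities floor(a1 r^2 n) = r floor(a1 r n) + 1,
# floor(ai r^2 n) = r floor(ai r n), and floor(ai r n) != 0 mod r (i >= 2)
# for n in the Bohr set B_r; illustration only.
import math

r = 5
alphas = [math.sqrt(2), math.sqrt(3), math.sqrt(5)]  # rationally independent with 1

def in_Br(n):
    if not (1 / r**2 < (alphas[0] * n) % 1 < 2 / r**2):
        return False
    return all(1 / r < (a * n) % 1 < 1 / r + 1 / r**2 for a in alphas[1:])

hits = 0
for n in range(1, 10**6):
    if in_Br(n):
        hits += 1
        assert math.floor(alphas[0] * r**2 * n) == r * math.floor(alphas[0] * r * n) + 1
        for a in alphas[1:]:
            assert math.floor(a * r**2 * n) == r * math.floor(a * r * n)
            assert math.floor(a * r * n) % r != 0
print("identities verified on", hits, "elements of B_r")
```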
Letting \(q\) be the least common multiple of the denominators of the \(\alpha_{i}^{\prime\prime}\), we have \(\alpha qn\equiv\alpha^{\prime}qn\operatorname{mod}\mathbb{Z}^{k}\) for all \(n\in\mathbb{Z}\). For contradiction we assume that \[\Big{|}\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}f_{i}(\lfloor\alpha_{i} n\rfloor)\Big{|}\geqslant 1-\eta\] for some fixed \(\eta>0\) and for arbitrarily large values of \(X\). Using the same argument as in the previous proof, this implies that \[\Big{|}\mathbb{E}_{n\leqslant X}^{\log}\prod_{i=1}^{k}f_{i}(\lfloor\alpha_{i} qn\rfloor)\Big{|}\geqslant 1-O(q\eta)-o(1).\] Therefore there exists some \(u\in\{-1,+1\}\) and \(S_{1}\subset[X]\) for which \[\mathbb{E}_{n\leqslant X}^{\log}1_{S_{1}}(n)\geqslant 1-O(q\eta)-o(1)\] and \[\Big{|}\prod_{i\leqslant k}f_{i}(\lfloor\alpha_{i}qn\rfloor)-u\Big{|}\ll q\eta\] for all \(n\in S_{1}\). Let \(r\geqslant 2\) be prime, and let \(w\in\mathbb{R}_{>0}^{k}\) be the vector from the hypotheses of the theorem. Write \(w=(w_{1},\ldots,w_{k})\) and assume without loss of generality that \(w_{1}>w_{2}\geqslant w_{i}>0\) for all \(i=3,\ldots,k\). Define \[D_{q,r}:=T^{\prime}\cap\Big{(}(\frac{1}{qr},\frac{2}{qr})\times(0,\frac{1}{qr })^{k-1}\Big{)}\mod\mathbb{Z}^{k}.\] We claim that \(D_{q,r}\neq\emptyset\). Indeed, since \(w_{1}\) is strictly larger than \(w_{2}\) we may choose \(c\in\mathbb{R}\) satisfying \[c\in(\frac{1}{qrw_{1}},\min(\frac{2}{qrw_{1}},\frac{1}{qrw_{2}})).\] Since \(cw\cdot v_{j}=0\) for all \(j\), we conclude that \(cw\) mod \(\mathbb{Z}^{k}\in T^{\prime}\). But by assumptions on the sizes of the \(w_{i}\), \[cw\in\Big{(}(\frac{1}{qr},\frac{2}{qr})\times(0,\frac{1}{qr})^{k-1}\Big{)}.\] So \(cw\bmod\mathbb{Z}^{k}\in D_{q,r}\). Thus \(D_{q,r}\) is a non-empty open subset of \(T^{\prime}\) in the subspace topology. Therefore, when \(T^{\prime}\) is endowed with the normalised Haar measure \(\mu\), we have \(\mu(D_{q,r})>0\). Since the sequence \(\alpha^{\prime}n\) is totally equidistributed in \(T^{\prime}\), we know that the Bohr set \(B_{q,r}\in\mathcal{B}_{\text{convex}}\) defined by \[B_{q,r}:=B\Big{(}\alpha^{\prime},(\frac{1}{qr},\frac{2}{qr})\times(0,\frac{1} {qr})^{k-1}\Big{)}\] is equal to \(B(\alpha^{\prime},D_{q,r})\) and has density \(\delta_{B_{q,r}}=\mu(D_{q,r})>0\). Let \[S_{2}:=B_{q,r}\cap S_{1}\cap\{n:\,rn\in S_{1}\}.\] Then, by the same argument we used to lower-bound \(\mathbb{E}_{n\leqslant X}^{\log}1_{S_{1}}(n)\), we conclude that \[\mathbb{E}_{n\leqslant X}^{\log}1_{S_{2}}(n)\geqslant\delta_{B_{q,r}}-O(rq \eta)-o(1).\] Furthermore, using the fact that \(\alpha qn\equiv\alpha^{\prime}qn\bmod\mathbb{Z}^{k}\) for all \(n\in\mathbb{Z}\), for \(n\in S_{2}\) we have \[\lfloor\alpha_{1}qrn\rfloor =r\lfloor\alpha_{1}qn\rfloor+1\] \[\lfloor\alpha_{i}qrn\rfloor =r\lfloor\alpha_{i}qn\rfloor\qquad(2\leqslant i\leqslant k).\] Then for \(n\in S_{2}\) we have \[u+O(q\eta)=\prod_{i\leqslant k}f_{i}(\lfloor\alpha_{i}qn\rfloor)=f_{1}( \lfloor\alpha_{1}qn\rfloor)\prod_{i=2}^{k}f_{i}(\lfloor\alpha_{i}qn\rfloor)\] and \[u+O(q\eta)=\prod_{i\leqslant k}f_{i}(\lfloor\alpha_{i}qrn\rfloor) =f_{1}(r\lfloor\alpha_{1}qn\rfloor+1)\prod_{i=2}^{k}f_{i}(r\lfloor \alpha_{i}qn\rfloor)\] \[=f_{1}(r\lfloor\alpha_{1}qn\rfloor+1)\prod_{i=2}^{k}f_{i}(r) \cdot\prod_{i=2}^{k}f_{i}(\lfloor\alpha_{i}qn\rfloor)\] by complete multiplicativity of \(f_{2},\ldots,f_{k}\). 
Arguing analogously to the previous proof, we conclude that \[|\mathbb{E}_{n\leqslant X}^{\log}1_{B_{q,r}}(n)f_{1}(\lfloor\alpha_{1}qn\rfloor)f_{1}(r\lfloor\alpha_{1}qn\rfloor+1)|\geqslant\delta_{B_{q,r}}-O(rq\eta)-o(1). \tag{8.3}\] However, applying Lemma 6.5 with \(\theta=q\alpha_{1}\) we have \[|\mathbb{E}_{n\leqslant X}^{\log}1_{B_{q,r}}(n)f_{1}(\lfloor\alpha_{1}qn\rfloor)f_{1}(r\lfloor\alpha_{1}qn\rfloor+1)|=o(1). \tag{8.4}\] Expressions (8.3) and (8.4) are in contradiction for large enough \(X\) and small enough \(\eta\). This resolves Theorem 1.6 part (2). **Remark 8.1**.: Only the multiplicativity of \(f_{1}\) and the complete multiplicativity of \(f_{2},\ldots,f_{k}\) at \(r\) were used in the proof of Theorem 1.6(2). Unfortunately the method only yields a saving of \(\eta\ll q^{-k-1}r^{-k-1}\) over the trivial bound, and this does not seem to be enough to remove the complete multiplicativity assumption using the device from the proof of Lemma 4.2.
2303.02767
Difference independence of the Euler gamma function
In this paper, we establish a sharp version of the difference analogue of the celebrated H\"{o}lder's theorem concerning the differential independence of the Euler gamma function $\Gamma$. More precisely, if $P$ is a polynomial of $n+1$ variables in $\mathbb{C}[X, Y_0,\dots, Y_{n-1}]$ such that \begin{equation*} P(s, \Gamma(s+a_0), \dots, \Gamma(s+a_{n-1}))\equiv 0 \end{equation*} for some $(a_0, \dots, a_{n-1})\in \mathbb{C}^{n}$ with $a_i-a_j\notin \mathbb{Z}$ for any $0\leq i<j\leq n-1$, then we have $$P\equiv 0.$$ Our result complements a classical result on the algebraic differential independence of the Euler gamma function proved by H\"{o}lder in 1886, and also a result on the algebraic difference independence of the Riemann zeta function proved by Chiang and Feng in 2006.
Qiongyan Wang, Xiao Yao
2023-03-05T20:19:41Z
http://arxiv.org/abs/2303.02767v1
# Difference independence of the Euler gamma function ###### Abstract. In this paper, we establish a sharp version of the difference analogue of the celebrated Holder's theorem concerning the differential independence of the Euler gamma function \(\Gamma\). More precisely, if \(P\) is a polynomial of \(n+1\) variables in \(\mathbb{C}[X,Y_{0},\ldots,Y_{n-1}]\) such that \[P(s,\Gamma(s+a_{0}),\ldots,\Gamma(s+a_{n-1}))\equiv 0\] for some \((a_{0},\ldots,a_{n-1})\in\mathbb{C}^{n}\) with \(a_{i}-a_{j}\notin\mathbb{Z}\) for any \(0\leq i<j\leq n-1\), then we have \[P\equiv 0.\] Our result complements a classical result on the algebraic differential independence of the Euler gamma function proved by Holder in 1886, and also a result on the algebraic difference independence of the Riemann zeta function proved by Chiang and Feng in 2006. 2010 Mathematics Subject Classification: 11M06, 39A05. Key words and phrases: algebraic difference independence; Euler gamma function; algebraic difference equations. The research was partially supported by National Key R&D Program of China (2020YFA0713300) and NSFC of China (No. 11901311). It is well known that the Riemann zeta function \(\zeta\) is associated with \(\Gamma\) by the famous Riemann functional equation \[\zeta(1-s)=2^{1-s}\pi^{-s}\cos\frac{\pi s}{2}\Gamma(s)\zeta(s). \tag{1}\] Motivated by the Riemann functional equation, it is natural to consider the algebraic differential independence property for the Riemann zeta function. The study of the algebraic differential independence of the Riemann zeta function \(\zeta\) dates back to Hilbert. In [7], he conjectured that Holder's result could be extended to the Riemann zeta function \(\zeta\). Later, this conjecture was verified by Ostrowski in [15]. Bank and Kaufman [2, 3] made the following celebrated generalizations of Holder's result. **Theorem B**.: _Let \(P\) be a polynomial in \(K[X,Y_{0},\ldots,Y_{n-1}]\), where \(K\) is the field of all meromorphic functions such that the Nevanlinna characteristic \(T(r,f)=o(r)\) as \(r\) goes to infinity for any \(f\) in \(K\). Assume that_ \[P(s,\Gamma(s),\ldots,\Gamma^{(n-1)}(s))\equiv 0,\] _then we have_ \[P\equiv 0.\] For the Nevanlinna characteristic \(T(r,f)\), we refer to Hayman's book [6] for a detailed introduction. Since \(\Gamma\) and \(\zeta\) appear very naturally in the Riemann functional equation (1), Markus in [14] posed an open problem to study the joint algebraic differential independence of \(\Gamma\) and \(\zeta\). We refer the readers to the references [10, 11, 12, 13] for the recent developments in this direction. It is interesting to study the algebraic difference independence of \(\zeta\) or \(\Gamma\). Feng and Chiang proved the following result. **Theorem C**.: _Let \(P\) be a polynomial of \(n+1\) variables in \(\mathbb{C}[X,Y_{0},\ldots,Y_{n-1}]\) and \(s_{0},\ldots,s_{n-1}\) be \(n\) distinct numbers in \(\mathbb{C}\). Assume that_ \[P(s,\zeta(s+s_{0}),\ldots,\zeta(s+s_{n-1}))\equiv 0,\] _then we have_ \[P\equiv 0.\] Chiang and Feng's result extended a result of Ostrowski in [15], where the assumption that \(s_{0},\ldots,s_{n-1}\) are \(n\) distinct real numbers is needed. Indeed, Chiang and Feng proved that Theorem C also holds under the same assumption as in Theorem B; we refer the interested readers to [4] for the details. Here, we also mention two remarkable universality results due to Voronin in the 1970s for the differential case [17] and the difference case [18]. 
We refer to [16] for a detailed introduction to the recent developments in this direction. To the best of our knowledge, the topic of the algebraic difference independence of the Euler gamma function was first addressed by Hardouin in [5] in the framework of difference Galois theory. Motivated by the multiplication theorem of the Euler gamma function \[\Gamma(ns)=n^{ns-\frac{1}{2}}(2\pi)^{\frac{1-n}{2}}\prod_{j=0}^{n-1}\Gamma(s+\frac{j}{n}), \tag{2}\] Hardouin proved the following result. **Theorem D** ([5]).: _Let \(a_{0},\ldots,a_{n-1}\) be \(n\) complex numbers in \(\mathbb{C}\), and let \(b_{0},\ldots,b_{m-1}\ (\geq 2)\) be \(m\) integers such that \(\{a_{j}\ (mod\ 1)\}_{j=0}^{n-1}\) and \(\{\sum\limits_{l=0}^{b_{j}-1}\frac{l}{b_{j}}\ (mod\ 1)\}_{j=0}^{m-1}\) are \(\mathbb{Z}\)-linearly independent. Assume that_ \[P(s,\Gamma(s+a_{0}),\ldots,\Gamma(s+a_{n-1}),\Gamma(b_{0}s),\ldots,\Gamma(b_{m-1}s))\equiv 0\] _for some polynomial \(P\), then we have_ \[P\equiv 0.\] Hardouin's proof relies on a Kolchin-type theorem in an essential way. See also [1] for a detailed discussion of Kolchin-type theorems and several powerful applications in algebraic independence problems. Our starting point is another well-known difference equation of \(\Gamma\), \[\Gamma(s+1)=s\Gamma(s). \tag{3}\] This is the obvious obstruction to studying the algebraic difference independence of the Euler gamma function \(\Gamma\): one cannot expect to obtain Theorem B for \(\Gamma\) directly. In this paper, however, we will show that the relation exhibited in (3) is the only obstruction to the algebraic difference independence of \(\Gamma\). We will use an elementary method inspired by [8, 15] to prove our main result, which avoids the advanced difference Galois theory; this may be of independent interest. We define \[\mathcal{H}:=\{(a_{0},\ldots,a_{n-1})\in\mathbb{C}^{n}:a_{i}-a_{j}\notin \mathbb{Z}\text{ for any }0\leq i<j\leq n-1\}. \tag{4}\] Now, we state our main result in the following. **Theorem 1**.: _Let \(P\) be a polynomial of \(n+1\) variables in \(\mathbb{C}[X,Y_{0},\ldots,Y_{n-1}]\). Assume that_ \[P(s,\Gamma(s+a_{0}),\ldots,\Gamma(s+a_{n-1}))\equiv 0\] _for some \((a_{0},\ldots,a_{n-1})\in\mathcal{H}\), then we have_ \[P\equiv 0.\] We remark that Theorem D can also be used to recover part of Theorem 1, under the same condition on \((a_{j})_{j=0}^{n-1}\) and with \(m=0\) in Theorem D. However, it cannot completely recover Theorem 1, since the condition in Theorem 1 is sharp. Our result complements the classical result on the algebraic differential independence of the Euler gamma function proved by Holder [8] in 1886, and also a result on the algebraic difference independence of the Riemann zeta function proved by Chiang and Feng [4] in 2006. **Corollary 1**.: _Let \(P\) be a polynomial of \(n+1\) variables in \(\mathbb{C}[X,Y_{0},\ldots,Y_{n-1}]\). Assume that_ \[P(s,\Gamma(s),\ldots,\Gamma(s+(n-1)\alpha))\equiv 0\] _for some \(\alpha\not\in\mathbb{Q}\), then we have_ \[P\equiv 0.\] **Remark 1**.: _Theorem 1 can be seen as a difference version of Holder's theorem. The identity (3) shows that the restriction to \(\mathcal{H}\) is necessary._ _We can also extend Theorem 1 to the setting of \(K[X,Y_{0},\ldots,Y_{n-1}]\), where \(K\) is the field of all meromorphic functions such that the Nevanlinna characteristic \(T(r,f)=o(r)\) as \(r\) goes to infinity for any \(f\) in \(K\). 
However, we will not address it in this paper._ By Theorem 1 and the Euclidean algorithm, it is not hard to give the following two examples. **Example 1**.: _Let \(P=P(X,Y,Z)\) be a polynomial of \(3\) variables in \(\mathbb{C}[X,Y,Z]\). Assume that_ \[P(s,\Gamma(s+a_{0}),\Gamma(s+a_{1}))\equiv 0,\] _then_ \[P\equiv 0,\] _unless \(a_{1}-a_{0}\in\mathbb{Z}\). In the latter case, if \(\Re a_{0}<\Re a_{1}\), \(P\) is divisible by the polynomial \(R(X,Y,Z)=Z-(X+a_{0})(X+a_{0}+1)\cdots(X+a_{1}-1)Y\)._ **Example 2**.: _Let \(P(X,Y,Z,W)=YW-Z^{2}-YZ\) in \(\mathbb{C}[X,Y,Z,W]\). We have_ \[P(s,\Gamma(s),\Gamma(s+1),\Gamma(s+2))\equiv 0.\] _Indeed, using (3) twice, \(\Gamma(s)\Gamma(s+2)-\Gamma(s+1)^{2}-\Gamma(s)\Gamma(s+1)=\Gamma(s)^{2}\big((s+1)s-s^{2}-s\big)=0\). Moreover, \(P\) belongs to the ideal_ \[<W-(X+1)Z,Z-XY>\] _generated by \(W-(X+1)Z\) and \(Z-XY\) in \(\mathbb{C}[X,Y,Z,W]\). Furthermore, \(P\) can be written as_ \[P(X,Y,Z,W)=Y(W-(X+1)Z)+Z(XY-Z).\] **Remark 2**.: _Indeed, inspired by Example 1 and Example 2, we can apply Theorem 1 and the Euclidean algorithm again to give a complete characterization of the following set_ \[\mathcal{I}:=\{P\in\mathbb{C}[X,Y_{0},\ldots,Y_{n-1}]:P(s,\Gamma(s+a_{0}),\ldots,\Gamma(s+a_{n-1}))\equiv 0\}\] _without any assumption on \(a_{0},\ldots,a_{n-1}\). However, we will not discuss it in this paper._ ## 2. Proof of Theorem 1 In order to prove Theorem 1, we need to introduce a lexicographic order between any two monomials \(Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}}\) and \(Y_{0}^{j_{0}}\ldots Y_{n-1}^{j_{n-1}}\) in \(\mathbb{C}[Y_{0},\ldots,Y_{n-1}]\), which plays an important role in our proof. This strategy is inspired by Ostrowski's proof of Holder's classical theorem in [15]; it also shares some of the spirit of the Kolchin-type theorems used in [5, 1]. We first introduce an order for the \(n\) symbols \(Y_{0},\ldots,Y_{n-1}\), \[Y_{0}\prec Y_{1}\prec\cdots\prec Y_{n-1}. \tag{5}\] This can be used to induce a lexicographic order between any two monomials \(Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}}\) and \(Y_{0}^{j_{0}}\ldots Y_{n-1}^{j_{n-1}}\). We still denote it by \(\prec\) to simplify the notation. We define it as follows: **case 1**: \(Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}}=Y_{0}^{j_{0}}\ldots Y_{n-1}^{j_{n-1}}\) if \(i_{k}=j_{k}\) for \(k=0,\ldots,n-1\); **case 2**: \(Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}}\prec Y_{0}^{j_{0}}\ldots Y_{n-1}^{j_{n-1}}\) if \(i_{0}<j_{0}\) or there exists \(1\leq k\leq n-1\) such that \[i_{0}=j_{0},\ldots,i_{k-1}=j_{k-1},i_{k}<j_{k};\] **case 3**: \(Y_{0}^{j_{0}}\ldots Y_{n-1}^{j_{n-1}}\prec Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}}\) can be defined similarly as in **case 2**. For any nonzero polynomial \(P=P(X,Y_{0},\ldots,Y_{n-1})\) in \(\mathbb{C}[X,Y_{0},\ldots,Y_{n-1}]\), we write it as \[P=\sum_{i=(i_{0},\ldots,i_{n-1})}\Phi_{i}(X)Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}}, \tag{6}\] where \(\Phi_{i}(X)\in\mathbb{C}[X]\) and \(\Phi_{i}(X)\neq 0\). The **highest term** of \(P\) is defined as the maximal element in \(\mathcal{T}_{P}\) with respect to the lexicographic order \(\prec\) introduced above, where \[\mathcal{T}_{P}:=\{Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}}:\Phi_{i}(X)\ Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}}\text{ appears in }(6)\}. \tag{7}\] For any monomial \(L=Y_{0}^{i_{0}}Y_{1}^{i_{1}}\ldots Y_{n-1}^{i_{n-1}}\), we define its **degree** \(\deg(L)\) by \[\deg(L):=\sum_{k=0}^{n-1}i_{k}.\] The **height** of \(P\) is defined as the degree of the highest term of \(P\). Now, we will prove Theorem 1. Proof.: Let \[\mathcal{S}:=\{P\in\mathbb{C}[X,Y_{0},\ldots,Y_{n-1}]:P(s,\Gamma(s+a_{0}),\ldots,\Gamma(s+a_{n-1}))\equiv 0\}. 
\tag{8}\] We will prove Theorem 1 by contradiction. We assume that \(\mathcal{S}\neq\{0\}\). By our assumption, there exists a nonzero polynomial \[Q=\sum_{i=(i_{0},\ldots,i_{n-1})}\Psi_{i}(X)Y_{0}^{i_{0}}\ldots Y_{n-1}^{i_{n-1}},\] which is of the lowest height in \(\mathcal{S}\backslash\{0\}\) with \(\Psi_{j}(X)Y_{0}^{j_{0}}\dots Y_{n-1}^{j_{n-1}}\) being its highest term for some \(j=(j_{0},\dots,j_{n-1})\); among all nonzero elements of \(\mathcal{S}\) of this lowest height, we further choose \(Q\) so that its highest term is minimal with respect to \(\prec\). Moreover, we also make the following assumption. **Assumption LD:** The nonzero polynomial \(\Psi_{j}(X)\) appearing in the highest term of \(Q\) is also of the lowest degree. Let \[T(X,Y_{0},\dots,Y_{n-1}):=Q(X+1,(X+a_{0})Y_{0},\dots,(X+a_{n-1})Y_{n-1}). \tag{9}\] Noting that \[Q(s,\Gamma(s+a_{0}),\dots,\Gamma(s+a_{n-1}))\equiv 0,\] we have \[T(s,\Gamma(s+a_{0}),\dots,\Gamma(s+a_{n-1}))\equiv 0\] by (3). Moreover, the highest term of \(T\) is \(\hat{\Psi}_{j}(X)Y_{0}^{j_{0}}\dots Y_{n-1}^{j_{n-1}}\), where \[\hat{\Psi}_{j}(X):=\Psi_{j}(X+1)(X+a_{0})^{j_{0}}\dots(X+a_{n-1})^{j_{n-1}}.\] It follows from the Euclidean algorithm that there exist two polynomials \(R=R(X)\) and \(U=U(X)\) in \(\mathbb{C}[X]\) such that \[\hat{\Psi}_{j}=R\Psi_{j}+U,\] where either \(U=0\) or \(0\leq\deg U<\deg\Psi_{j}\). It is easy to see that \(\deg R\geq 1\). We claim that \(U=0\). Otherwise, we know that the polynomial \[H(X,Y_{0},\dots,Y_{n-1}):=T(X,Y_{0},\dots,Y_{n-1})-R(X)Q(X,Y_{0},\dots,Y_{n-1})\] is in \(\mathcal{S}\). It follows that the highest term of \(H\) is \[U(X)Y_{0}^{j_{0}}\dots Y_{n-1}^{j_{n-1}}\] with \(0\leq\deg U<\deg\Psi_{j}\). Thus, \(H\neq 0\), which contradicts the choice of \(Q\) and **Assumption LD**. Hence \(U=0\). Since \(U=0\), we see that the highest term of \(H\) is strictly less than the highest term of \(Q\) with respect to \(\prec\) if \(H\neq 0\). This again contradicts our choice of \(Q\). Thus, we get \(H=0\). That is, \[T(X,Y_{0},\dots,Y_{n-1})=R(X)Q(X,Y_{0},\dots,Y_{n-1}). \tag{10}\] We first assume that there exists \(\beta\notin\Lambda:=\{-a_{k}:0\leq k\leq n-1\}\) such that \(R(\beta)=0\). By (9) and (10), we get \[Q(\beta+1,(\beta+a_{0})Y_{0},\dots,(\beta+a_{n-1})Y_{n-1})=0\] in \(\mathbb{C}[Y_{0},\dots,Y_{n-1}]\). Since \(\beta+a_{l}\neq 0\) for every \(l\), this implies that \[Q(\beta+1,Y_{0},\dots,Y_{n-1})=\sum_{i=(i_{0},\dots,i_{n-1})}\Psi_{i}(\beta+1)Y_{0}^{i_{0}}\dots Y_{n-1}^{i_{n-1}}=0\] in \(\mathbb{C}[Y_{0},\dots,Y_{n-1}]\). Thus, we have \[\Psi_{i}(\beta+1)=0\] for all \(i\), which implies that each \(\Psi_{i}(X)\) can be divided by \(X-\beta-1\). This contradicts our assumption that \(\Psi_{j}\) is of the lowest degree. Hence, each root of \(R\) lies in \(\Lambda\). Without loss of generality, we assume that \(R(-a_{0})=0\). Thus, we get \[Q(-a_{0}+1,0,(a_{1}-a_{0})Y_{1},\ldots,(a_{n-1}-a_{0})Y_{n-1})=0\] by (9) and (10). Recalling that \(a_{j}-a_{0}\notin\mathbb{Z}\) for any \(j\neq 0\), we have \[Q(-a_{0}+1,0,Y_{1},\ldots,Y_{n-1})=0. \tag{11}\] Taking \(X=-a_{0}+1\), \(Y_{0}=0\) in (9) and (10), we get \[Q(-a_{0}+2,0,(a_{1}-a_{0}+1)Y_{1},\ldots,(a_{n-1}-a_{0}+1)Y_{n-1})= R(-a_{0}+1)Q(-a_{0}+1,0,Y_{1},\ldots,Y_{n-1})=0\] by (11). Noting that \(a_{j}-a_{0}\notin\mathbb{Z}\) for any \(j\neq 0\) again, we obtain \[Q(-a_{0}+2,0,Y_{1},\ldots,Y_{n-1})=0\] in \(\mathbb{C}[Y_{0},\ldots,Y_{n-1}]\). By induction, we can prove that for any \(m\in\mathbb{N}\), \[Q(-a_{0}+m,0,Y_{1},\ldots,Y_{n-1})=0\] in \(\mathbb{C}[Y_{0},\ldots,Y_{n-1}]\). Since a nonzero polynomial in \(X\) has only finitely many roots, we get \[Q(X,0,Y_{1},\ldots,Y_{n-1})=0\] in \(\mathbb{C}[X,Y_{0},\ldots,Y_{n-1}]\). 
Thus, \(Q\) is divisible by the monomial \(Y_{0}\), which contradicts the assumption that \(Q\) is of the lowest height in \(\mathcal{S}\). This finishes the proof of Theorem 1.
2304.09695
Big-Little Adaptive Neural Networks on Low-Power Near-Subthreshold Processors
This paper investigates the energy savings that near-subthreshold processors can obtain in edge AI applications and proposes strategies to improve them while maintaining the accuracy of the application. The selected processors deploy adaptive voltage scaling techniques in which the frequency and voltage levels of the processor core are determined at run-time. In these systems, embedded RAM and flash memory size is typically limited to less than 1 megabyte to save power. This limited memory imposes restrictions on the complexity of the neural network models that can be mapped to these devices and the required trade-offs between accuracy and battery life. To address these issues, we propose and evaluate alternative 'big-little' neural network strategies to improve battery life while maintaining prediction accuracy. The strategies are applied to a human activity recognition application selected as a demonstrator, showing that, compared to the original network, the best configurations obtain a measured energy reduction of 80% while maintaining the original level of inference accuracy.
Zichao Shen, Neil Howard, Jose Nunez-Yanez
2023-04-19T14:36:30Z
http://arxiv.org/abs/2304.09695v1
# Big-Little Adaptive Neural Networks on Low-Power Near-Subthreshold Processors ###### Abstract This paper investigates the energy savings that near-subthreshold processors can obtain in edge AI applications and proposes strategies to improve them while maintaining the accuracy of the application. The selected processors deploy adaptive voltage scaling techniques in which the frequency and voltage levels of the processor core are determined at run-time. In these systems, embedded RAM and flash memory size is typically limited to less than 1 megabyte to save power. This limited memory imposes restrictions on the complexity of the neural network models that can be mapped to these devices and the required trade-offs between accuracy and battery life. To address these issues, we propose and evaluate alternative 'big-little' neural network strategies to improve battery life while maintaining prediction accuracy. The strategies are applied to a human activity recognition application selected as a demonstrator, showing that, compared to the original network, the best configurations obtain a measured energy reduction of 80% while maintaining the original level of inference accuracy. ## 1 Introduction Over the past few decades, the rapid development of the Internet of Things (IoT) and deep learning has increased the demand for deploying deep neural networks (DNNs) to low-power devices [1]. Due to high latency and privacy issues, cloud computing tasks are gradually being transferred to the edge in areas such as image recognition and natural language processing [2]. The limitations in memory size and computing power mean that large neural networks with millions of parameters cannot be easily deployed on edge devices such as microcontroller units (MCUs), which in many cases have less than one megabyte of flash memory capacity [1, 2]. Memory is kept low to save costs and reduce power usage since power gating memory blocks that are not in use is not a feature available in these devices. Maximizing device usage time is an important goal and, focusing on this objective, we investigate an adaptive 'big-little' neural network system which consists of a big network and multiple little networks to achieve energy-saving inference by limiting the number of big network executions without degrading accuracy. We call this organization 'big-little' since it draws inspiration from the 'big-little' technology popularized by ARM that combines complex and light processors in a single SoC. Our big network has better accuracy but a longer inference time, while the little networks have a faster inference speed. Most of the time, the big network remains in sleeping mode and it is only activated when the little network determines that it cannot handle the work at the required level of confidence. In this research, we focus on establishing and deploying the complete adaptive neural network system on the edge device. We investigate how to manage the primary and secondary networks to obtain faster, more accurate, and more energy-efficient performance, using a human activity recognition (HAR) application as a popular example of an edge application. The contribution of this research is summarized below: * We evaluate state-of-the-art near-threshold processors with adaptive voltage scaling and compare them to a standard edge processor. 
* We optimize a popular edge application targeting a human activity recognition (HAR) model based on _TensorFlow_ for MCU deployment using different vendor toolchains and compilers. * We propose novel 'big-little' strategies suitable for adaptive neural network systems achieving fast inference and energy savings. * We made our work open source at [https://github.com/DarkSZChao/Big-Little_NN_Strategies](https://github.com/DarkSZChao/Big-Little_NN_Strategies) (accessed on 9 March 2022) to further promote work in this field. This paper is organized as follows. In Section 2, we present an overview of the state-of-the-art hardware for low-power edge AI, frameworks and relevant algorithmic techniques. Then, an initial evaluation in terms of performance and energy cost in near-threshold MCUs and standard MCUs is carried out in Section 3. In Section 4, we propose and evaluate three different configurations of adaptive neural network systems with different features and performance characteristics. Section 5 describes and demonstrates the implementation steps needed to target the selected low-power MCUs. The results obtained in terms of speed, accuracy and energy are presented in Section 6. Finally, the conclusions and future work are discussed in Section 7. ## 2 Background and Related Work In this section, we present an overview of current state-of-the-art hardware with power profiles in the order of 1 watt or less for edge AI and then algorithmic techniques and frameworks optimized to target this hardware. ### Hardware for Low-Power Edge AI The high demand for AI applications at the edge has resulted in a significant increase in hardware optimized for low power levels. For example, Google has delivered a light version of the Tensor Processing Unit (TPU) called the Edge TPU, which is able to provide power-efficient inference at 2 trillion MAC operations per second per watt (2 TMAC/s/W) [3]. This state-of-the-art device is able to execute mobile-class models such as MobileNet V2 at almost 400 FPS. The Cloud TPU focuses on training complex models, while the Edge TPU is designed to perform inference in low-power systems. Targeting significantly lower power than the Edge TPU, Ambiq released the Apollo family of near-threshold processors based on the 32-bit ARM Cortex-M4F processor. These devices can reach much lower energy usage, measured at only 6 \(\upmu\)A/MHz at 3.3 V in working mode and 1 \(\upmu\)A/MHz at 3.3 V in sleep mode. The Apollo3 device present in the SparkFun board has 1 MB of flash memory and 384 KB of low-leakage RAM [4]. Similarly, Eta Compute has targeted energy-efficient endpoint AI solutions with the ECM3532 processor. This device is based on an ARM Cortex-M3 32-bit CPU and a separate CoolFlux DSP to speed up machine learning operations in an energy-efficient manner. The ECM3532 available in the AI vision board consumes less than 5 \(\upmu\)A/MHz in normal working mode and 1 \(\upmu\)A/MHz in sleep mode. According to Eta Compute, its implementation of self-timed continuous voltage and frequency scaling technology (CVFS) achieves a power profile of just 1 mW [5, 6]. A characteristic of these near-threshold devices is that voltage scaling is applied to the core but it is not applied to the device's SRAM/flash due to the limited margining possible in memory cells. Both Apollo3 and ECM3532 are based on the popular ARM architecture but, lately, the open-source instruction set architecture RISC-V has also received significant attention in this field. 
For example, GAP8 developed by GreenWaves Technologies features an 8-core compute cluster of RISC-V processors and an additional CNN accelerator [7]. The compute cluster is coupled with an additional ultra-low power MCU with 30 \(\upmu\)W state-retentive sleep power for control and communication functions. For CNN inference (90 MHz, 1.0 V), GAP8 delivers an energy efficiency of 600 GMAC/s/W and a worst-case power envelope of 75 mW [7]. Other examples of companies exploring the near-threshold regime include Minima, which has been involved in designs demonstrating achievable power savings [8]. Minima offers ultra-wide dynamic voltage and frequency scaling (DVFS) which is able to scale frequency and/or operating voltage based on the workload. This approach, combined with the dynamic margining approach from both Minima and ARM, is able to save energy by up to 15\(\times\) to 20\(\times\)[9]. The interest in adaptive voltage scaling hardware has resulted in a €100 m European project led by STMicroelectronics to develop the next generation of edge AI microcontrollers and software using low-power FD-SOI and phase change technology. This project aims to deliver the chipset and solutions for the automotive and industrial markets with a very high computing capacity of 10 TOPS per watt, which is significantly more powerful than existing microcontrollers [10]. ### Algorithmic Techniques for Low-Power Edge AI Over the years, different algorithmic approaches have appeared to optimize inference on edge devices with a focus on techniques such as quantization, pruning, heterogeneous models and early termination. The deep quantization of network weights and activations is a well-known approach to optimize network models for edge deployments [11; 12]. Examples include [13], which uses extremely low precision (e.g., 1-bit or 2-bit) weights and activations, achieving 51% top-1 accuracy and seven times the speedup in AlexNet [13]. The authors of [14] demonstrate a binarized neural network (BNN) where both weights and activations are binarized. During the forward pass, a BNN drastically reduces memory accesses and replaces most arithmetic operations with bit-wise operations. Ref. [14] has proven that, by using their binary matrix multiplication kernel, the results achieve 32 times the compression ratio and improve performance by seven times with MNIST, CIFAR-10 and SVHN data sets. However, substantial accuracy loss (up to 28.7%) has been observed by [15]. The research in [15] has addressed this drawback by deploying a full-precision norm layer before each Conv layer in XNOR-Net. XNOR-Net applies binary values to both inputs and convolutional layer weights and it is capable of reducing the computation workload by approximately 58 times, with 10% accuracy loss in ImageNet [15]. Overall, these networks can free edge devices from the heavy workload caused by computations using integer numbers, but the loss of accuracy needs to be properly managed. This accuracy loss has been reduced in CoopNet [16]. Similar to the concept of multi-precision CNN in [17], CoopNet [16] applies two convolutional models: a binary net BNN with faster inference speed and an integer net INT8 with relatively high accuracy to balance the model's efficiency and accuracy. On low-power Cortex-M MCUs with limited RAM (\(\leq\) 1 MB), Ref. [16] achieved around three times the compression ratio and 60% of the speed-up while maintaining a comparable accuracy level on the CIFAR-10, GSC and FER13 data sets. 
In contrast to CoopNet which applies the same network structures for primary and secondary networks, we apply a much simpler structure for secondary networks in which each of them is trained to identify one category in the HAR task. This optimization results in a configuration that can achieve around 80% speed-up and energy-saving with a similar accuracy level across all the evaluated MCU platforms. Based on XNOR-Net, Ref. [18] constructed a pruned-permuted-packed network that combines binarization with sparsity to push model size reduction to very low limits. On the Nucleo platforms and Raspberry Pi, 3PXNet achieves a reduction in the model size by up to 38\(\times\) and an improvement in runtime and energy of 25\(\times\) compared to already compact conventional binarized implementations with a reduction in accuracy of less than 3%. TF-Net is an alternative method that chooses ternary weights and four-bit inputs for DNN models. Ref. [19] provides this configuration to achieve the optimal balance between model accuracy, computation performance, and energy efficiency on MCUs. They also address the issue that ternary weights and four-bit inputs cannot be directly accessed due to memory being byte-addressable by unpacking these values from the bitstreams before computation. On the STM32 Nucleo-F411RE MCU with an ARM Cortex-M4, Ref. [19] achieved improvements in computation performance and energy efficiency of 1.83\(\times\) and 2.28\(\times\), respectively. Thus, 3PXNet/TF-Net can be considered orthogonal to our 'big-little' research since they could be used as alternatives to the 8-bit integer models considered in this research. A related architecture to our approach called BranchyNet with early exiting was proposed in [20]. This architecture has multiple exits to reduce layer-by-layer weight computation and I/O costs, leading to fast inference speed and energy saving. However, due to the existence of multiple branches, it suffers from a huge number of parameters, which would significantly increase the memory requirements in edge devices. The configuration of primary and secondary neural networks has been proposed for accelerating the inference process on edge devices in recent years. Ref. [17; 21] constructed 'big' and 'little' networks with the same input and output data structure. The 'big' network is triggered by their score metric generated from the 'little' network. A similar configuration has also been proposed by [22], but their 'big' and 'little' networks are trained independently. 'Big' and 'little' networks do not share the same input and output data structure. Ref. [22] proposed a heterogeneous setup deploying a 'big' network on state-of-the-art edge neural accelerators such as NCS2, with a 'little' network on near-threshold processors such as ECM3531 and Apollo3. Ref. [22] has successfully achieved 93% accuracy and low energy consumption of around 4 J on human activity classification tasks by switching this heterogeneous system between 'big' and 'little' networks. Ref. [22] considers heterogeneous hardware, whereas our approach uses the 'big-little' concept but focuses on deploying all the models on a single MCU device. In contrast to how [22] deployed 'big' and 'little' models on the NCS2 hardware accelerator and near-threshold processors separately, we deploy both neural network models on near-threshold MCU for activity classification tasks. 
A switching algorithm is set up to switch between 'big' and 'little' network models to achieve much lower energy costs while maintaining a similar accuracy level. A related work [23] has performed activity recognition tasks with excellent accuracy and performance by using both convolutional and long short-term memory (LSTM) layers. Due to the limited flash memory size of MCUs, we decided not to use the LSTM layers, which have millions of parameters as shown in [23]. The proposed adaptive system is suitable for real-world tasks such as human activity classification in which activities do not change at very high speeds. A person keeps performing one action for a period of time, typically in the order of tens of seconds [24], which means that maintaining the system at full capacity (using the primary 'big' network to perform the inference) is unnecessary. Due to the additional inference time and computation consumed by the primary network, the fewer the number of times the primary network gets invoked, the faster the inference process will be and the lower the energy requirements [16; 17; 21; 22]. ### Frameworks for Low-Power Edge AI Over the last few years, a number of frameworks have appeared to ease the deployment of neural network models on edge devices with limited resources. In [25], a framework is provided called FANN-on-MCU specifically for the fast deployment of multi-layer perceptrons (MLPs) on low-power MCUs. This framework supports not only the very popular ARM Cortex-M series MCUs, but also the RISC-V parallel ultra-low power (PULP) processors. The results in [25] show that the PULP-based 'Mr.Wolf' SoC can reach up to 7.1\(\times\) the speedup with respect to a single core implementation and 13.5\(\times\) the speedup over the ARM Cortex-M4. Moreover, by using FANN-on-MCU, a relatively big neural network with 103,800 MAC operations can be executed within 17.6 ms with an energy consumption of 183 \(\upmu\)J on a Nordic nRF52832 MCU with one ARM Cortex-M4. The same neural network applied on 'Mr.Wolf' with eight RISC-V-based RI5CY cores takes less than 1 ms and consumes around 50 \(\upmu\)J [25]. Similar to FANN-on-MCU, Ref. [26] delivers a fast-deployment framework for MCUs called neural network on microcontroller (_NNoM_) which supports more complex model topologies such as ResNet and DenseNet from Keras. A user-friendly API and high-performance backend selections have been built for embedded developers to deploy Keras models on low-power MCU devices. There are also deployment frameworks developed by commercial companies targeting low-power edge devices. For example, Google focuses on low-power edge AI with the popular _TensorFlow Lite_ framework [27]. Coupled with the model training framework _TensorFlow_, Google can provide a single solution from neural network model training to model deployment on edge devices. _STM32Cube.AI_ from STMicroelectronics [28] is also an AI deployment framework but it is only designed around the STM family devices such as STM32 Nucleo-L4R5ZI and STM32 Nucleo-F411RE. Eta Compute has created the _TENSAIFlow_ deployment framework to provide performance and efficiency optimizations for Eta-series MCU products such as ECM3531 and ECM3532 [29]. In our methodology, the lack of support for certain devices in some frameworks means that we have combined tools from different vendors. We have applied frameworks from [26; 27; 29] for model deployments on MCUs such as ECM3532 and STM32L4 (see Section 5 for details). 
## 3 Low-Power Microcontroller Evaluation Four commercially available microcontroller devices designed for energy-efficient applications from STMicroelectronics, Ambiq and Eta Compute are considered in this comparison. Table 1 shows the technical details of these four MCUs. Three of them (STM32L4R5ZI, Apollo2 Blue and SparkFun Edge (Apollo3 Blue)) are based on the Cortex-M4 microarchitecture with floating-point units (FPU) [4; 30; 31], while the ECM3532 is based on the Cortex-M3 microarchitecture with a 'CoolFlux' 16-bit DSP [5]. The 32-bit ARM Cortex-M3 and M4 are comparable microarchitectures, both having a three-stage pipeline and implementing the Thumb-2 instruction set with some differences in the number of instructions available. For example, additional 16/32-bit MAC instructions and a single-precision FPU are only available on the Cortex-M4. The STM32 Nucleo-144 development board with the STM32L4R5ZI MCU is used as a comparison point; the main difference between this STM device and the other three is the power optimization method. The core supply voltage of 1 V for the STM device is significantly higher than the core voltage for the near-threshold devices of Ambiq and Eta Compute at only around 0.5 V. Theoretically, the sub-threshold core supply voltage can be as low as 0.3 V, which should be more power-efficient. However, at 0.3 V, the transistor switching time will be longer, which leads to a higher leakage current. The leakage can exceed 50% of the total power consumption for a threshold voltage level of around 0.2 V [32]. Therefore, in practice, choosing near-threshold voltage points instead of sub-threshold voltage points has been shown to be a more energy-efficient solution [32]. In order to optimize the energy usage based on the task requirements, STM32L4 uses standard dynamic voltage and frequency scaling (DVFS) with predefined pair sets of voltage and frequency, while the devices from Ambiq and Eta Compute apply adaptive voltage scaling (AVS), which is able to determine the voltage at a given frequency to handle the tasks at run-time using a feedback loop [33]. Comparing the datasheets, the STM32L4 has the highest clock frequency, which results in an advantage in processing speed. Ambiq and Eta Compute's near-threshold devices only require about half of the core supply voltage of STM32L4. All considered processors are equipped with limited flash sizes from 0.5 MB to 1 MB and around 300 KB of SRAM. That means that the neural network model deployed must be small enough to fit within the limited memory size. Therefore, we use the _TensorFlow_ framework and _TensorFlow Lite_ converter to create a simple pre-trained CNN model designed for human activity recognition (HAR) from UCI [34] (as shown in Figure 1) to perform the initial energy evaluation of the four MCU devices. The energy board X-NUCLEO-LPM01A from STMicroelectronics is used to evaluate the performance and energy consumption, measuring the current used by the target board under a given supply voltage of 3.3 V (lower core voltages are regulated internally in the device). The power consumption of the four tested boards is shown in Figure 2. STM32L4 operates at a much higher power level, which is around six times that of the near-threshold processors. The near-threshold processors Apollo2, Apollo3 and ECM3532 offer significantly lower power, consuming less than 5 mW at the normal frequency of 48 MHz and around 10 mW in the burst mode of 96 MHz. 
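Concretely, the energy figures reported in Figures 2-4 follow from integrating the measured current at the fixed 3.3 V supply over the inference window. The short sketch below illustrates this arithmetic; the sample values are illustrative placeholders, not measured data from our setup.

```python
# Energy-per-inference from logged current samples (illustrative values only).
SUPPLY_V = 3.3  # supply voltage applied by the measurement board (volts)

def energy_per_inference(current_samples_a, sample_period_s):
    """E = V * integral of I dt, approximated here by a rectangle sum."""
    charge_c = sum(current_samples_a) * sample_period_s  # coulombs
    return SUPPLY_V * charge_c                           # joules

# Example: a 50 ms inference captured at 10 kHz with ~1.4 mA average draw
# (about 4.6 mW), in line with the sub-5 mW figures quoted above.
samples = [1.4e-3] * 500                     # 500 samples * 0.1 ms = 50 ms
print(energy_per_inference(samples, 1e-4))   # ~2.3e-4 J, i.e., 0.23 mJ
```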
The reason why SparkFun Edge (Apollo3) consumes more power than Apollo2 is that the Apollo3 core is highly integrated into the SparkFun Edge board with peripheral sensors and ports which cannot be disabled during the power evaluation. Therefore, the peripheral devices on SparkFun Edge (Apollo3) are responsible for a component of the power consumption, which leads to a higher power than Apollo2 at each frequency level. Apollo2 and ECM3532 share a similar level of power consumption at 24 and 48 MHz. Apollo2 does not support running at a frequency higher than 48 MHz; therefore, there is no value for Apollo2 at the 96 MHz frequency point. Figure 3 shows the execution time of the four tested processors for one inference of the pre-trained CNN model in Figure 1. Apollo2 is the slowest one and finishes inference using the longest amount of time at above 100 ms at 24 MHz frequency and around 50 ms at 48 MHz. The SparkFun Edge board (Apollo3) reduces the execution time by approximately 40% compared to Apollo2. It can even drop below 20 ms when operating in burst mode (96 MHz). STM32L4 is the second fastest among all devices due to its higher core supply voltage in Table 1 which enables faster transistor switching and processing speed. ECM3532 has the lowest execution times which are 28 ms at 24 MHz, 15 ms at 48 MHz and 8 ms at 96 MHz. The _TENSAIFlow_ compiler is responsible for significant optimization in the ECM3532 device. Figure 4 indicates the energy consumption values observed using the X-NUCLEO-LPM01A energy measurement board. Since the power consumption of the standard MCU STM32L4 in Figure 2 is six times higher compared to the near-threshold MCUs and there is no obvious advantage in processing speed at the same frequency, STM32L4 is the worst device in terms of energy consumption for all operating frequencies from 24 to 96 MHz. SparkFun Edge (Apollo3) is slightly higher than Apollo2 at 24 and 48MHz due to the energy consumed by the peripheral equipment on board. ECM3532 achieves the minimum energy consumption at normal frequency points (24 and 48 MHz) in the energy test because it has better results in both power and time evaluations. However, when operating in the 96 MHz burst mode, ECM3532 requires more power to obtain a higher processing speed, resulting in a slight increase in energy consumption, and the same situation can be seen for the SparkFun Edge board. Overall, compared to the STM32L4 reference point all three near-threshold MCUs have a significant advantage in power and energy consumption which is around 80% to 85% lower. Although the near-threshold MCUs are comparable with the standard MCU STM32L4 in terms of inference time, their lower core voltage supplies (Table 1) result in lower power (Figure 2) at the same frequency level. Therefore, in our model inference evaluation, the near-threshold MCU devices can achieve better results in energy consumption compared to STM32L4 at 24, 48 and 96 MHz. Thanks to the additional model optimization obtained with the _TENSAIFlow_ compiler provided by Eta Compute, ECM3532 offers a good balance between performance and energy efficiency to reach a lower execution time, enabling the lowest energy consumption for model inference from 24 to 96 MHz. In contrast, Apollo2, with a relatively slow processing speed, needs more time for model inference, which leads to higher values in energy consumption at 24 and 48 MHz. 
Due to the energy consumed by the inaccessible peripheral equipment on SparkFun Edge (Apollo3), this device consumes higher energy than Apollo2 (Figure 4). Figure 3: MCU initial evaluation in terms of time cost. Figure 2: MCU initial evaluation in terms of power consumption. ## 4 Adaptive Neural Network Methodology To create the adaptive neural network system, we employ Python version 3.6.8 and _TensorFlow_ 1.15 with its dependencies installed on a desktop PC with Intel(R) Core (TM) i7-10850H CPU 2.70 GHz, NVIDIA GeForce MX250 GPU, and 16 GB RAM. There are several framework alternatives to train the neural networks, such as PyTorch and Caffe. Due to the reasons of MCU compatibility and stability, our approach uses _TensorFlow_ 1.15 to train the primary and secondary network models. After that, we use _TensorFlow Lite_ and _NNoM_ Converter to convert the models using single-precision floating-points (FP32) to the unsigned integer 8-bit (UINT8) format which can be deployed on the MCUs. We consider human activity recognition using the UCI data set [34] as our raw data set. This application is a demonstrator which assumes that the activity will remain constant for a short period of time before being replaced by the next activity. To save energy via a reduction in execution time, we propose the adaptive neural network system which is able to disable the primary model and activate a secondary model when the activity remains unchanged. Therefore, we aim at achieving both latency and energy reductions without affecting prediction accuracy. The UCI-HAR data set uses a body accelerometer, body gyroscope, and total accelerometer with three axes to provide body information for six actions (SITTING, STANDING, LAYING, WALKING, WALKING_UPSTAIRS, and WALKING_DOWNSTAIRS) performed by a group of 30 volunteers. All the data have been sampled in fixed-width sliding windows of 128 sampling points and they have been randomly partitioned into two sets, with 70% of data samples used for training and 30% used for testing. Therefore, we have a training data shape of (7352, 128, 3, 3), and a testing data shape of (2947, 128, 3, 3). We have evaluated the accuracy as shown in Figure 5 by applying the test data from these three sensors to the secondary network. The total accelerometer sensor shows the best overall accuracy. Thus, this sensor is selected for the secondary network inference. The training and testing data sets from UCI-HAR use Figure 4: MCU initial evaluation in terms of energy consumption. 
floating-point values with a wide range, so before training the model, all the data have been rescaled to quantized integer values in the range \([-128,127]\) (see Equation (2) in Section 5). ### 'Big' + Six 'Little' Configuration 
Since it uses the data from all three sensors from the UCI-HAR data set to classify six activities, the 'big' network has three inputs, resulting in around 9000 parameters in total. Convolutional 1D layers and max-pooling layers from Keras are stacked together to form the three 'big' branches in Figure 7. Then, the outputs from these branches are converged by a concatenate layer followed by a dense layer that has six neurons for six categories. The data shape of each sensor is (7352, 128, 3), which means we have 7352 data samples with a length of 128 for each axis. The data set is labelled from 0 to 5 to represent each activity for the training and testing processes in the 'big' network. Each 'little' network only classifies two categories by using several convolutional 1D layers and max-pooling layers with 184 parameters in total. Therefore, based on the results in Figure 5, only the total accelerometer sensor which achieves the best overall accuracy is selected as the input for the 'little' network. The output of the 'little' network is a dense layer with two neurons for two categories as seen in Table 2. Due to the limited size of the UCI-HAR data set [34], we have less than 2000 data elements for each activity category. Therefore, we use all of them and convert the data labels from six categories to two for training the 'little' model. Particularly, for each 'little' model, the labels of the corresponding activity are set to 1, while the others are set to 0. Finally, we can generate the models in the Keras format. Figure 6: The processing steps (**left**) and the flow chart (**right**) for the ‘big’ + six ‘little’ configuration of the adaptive neural network system. In the left figure, dark blue and brown represent two ‘little’ network models corresponding to the input activities. In the right figure, the dotted line means only one ‘little’ network model of six is invoked at a time. Figure 7: ‘Big’ (left) and ‘little’ (right) model structures in Keras. 
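To make the 'little' branch concrete, the following is a minimal Keras sketch consistent with the output shapes and parameter counts reported in Table 2. The kernel size of 3, 'same' padding, ReLU activations, and the softmax output are our assumptions (they reproduce the 40/52/26/66 parameter counts) and are not stated explicitly in the paper.

```python
import tensorflow as tf  # TensorFlow 1.15, as used in this work

def build_little(input_len=128, channels=3):
    """Sketch of a 'little' model: Conv1D/MaxPooling stacks ending in a
    two-way dense output, matching the layer shapes in Table 2."""
    inp = tf.keras.layers.Input(shape=(input_len, channels), name='model_input')
    x = tf.keras.layers.Conv1D(4, 3, padding='same', activation='relu')(inp)  # (128, 4), 40 params
    x = tf.keras.layers.MaxPooling1D(2)(x)                                    # (64, 4)
    x = tf.keras.layers.Conv1D(4, 3, padding='same', activation='relu')(x)    # (64, 4), 52 params
    x = tf.keras.layers.MaxPooling1D(2)(x)                                    # (32, 4)
    x = tf.keras.layers.Conv1D(2, 3, padding='same', activation='relu')(x)    # (32, 2), 26 params
    x = tf.keras.layers.MaxPooling1D(2)(x)                                    # (16, 2)
    x = tf.keras.layers.Flatten()(x)                                          # 32 values
    out = tf.keras.layers.Dense(2, activation='softmax',
                                name='model_output')(x)                       # 66 params
    return tf.keras.Model(inp, out)

build_little().summary()  # Total params: 184, as in Table 2
```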
\begin{table} \begin{tabular}{l c c c c} \hline & **Model: ‘Big’** & & & **Model: ‘Little’** & \\ \hline **Layer (Type)** & **Output Shape** & **Param\#** & **Layer (Type)** & **Output Shape** & **Param\#** \\ \hline model\_input1 & [(None, 128, 3)] & 0 & model\_input & [[None, 128, 3)] & 0 \\ model\_input2 & [(None, 128, 3)] & 0 & conv1d & (None, 128, 4) & 40 \\ model\_input3 & [(None, 128, 3)] & 0 & conv1d\_1 & (None, 64, 4) & 52 \\ conv1d & (None, 128, 4) & 40 & conv1d\_2 & (None, 32, 2) & 26 \\ conv1d\_5 & (None, 128, 4) & 40 & model\_output & (None, 2) & 66 \\ conv1d\_10 & (None, 128, 4) & 40 & & & \\ conv1d\_1 & (None, 64, 8) & 104 & & & \\ conv1d\_6 & (None, 64, 8) & 104 & & & \\ conv1d\_11 & (None, 64, 8) & 104 & & & \\ conv1d\_2 & (None, 32, 16) & 400 & & & \\ conv1d\_7 & (None, 32, 16) & 400 & & & \\ conv1d\_12 & (None, 32, 16) & 400 & & & \\ conv1d\_3 & (None, 16, 32) & 1568 & & & \\ conv1d\_8 & (None, 16, 32) & 1568 & & & \\ conv1d\_13 & (None, 16, 32) & 1568 & & & \\ conv1d\_4 & (None, 8, 8) & 776 & & & \\ conv1d\_9 & (None, 8, 8) & 776 & & & \\ conv1d\_14 & (None, 8, 8) & 776 & & & \\ concatenate & (None, 96) & 0 & & & \\ model\_output & (None, 6) & 582 & & & \\ \hline \multicolumn{5}{c}{Total params: 9246} & Total params: 184} \\ \hline \end{tabular} \end{table} Table 2: ‘Big’ (**left**) and ‘little’ (**right**) model parameter details. The pooling layers are hidden. For more info, see Figure 7. The 'big' network is the same as the one introduced in the previous configuration, while the secondary 'dual' network has been reconstructed as shown in Figure 9 and Table 3. In the same way as for the 'little' network, the single input data from the total accelerometer sensor are selected for the 'dual' network. Therefore, the input data shape of the 'dual' network becomes (1, 128, 3, 2), which contains two adjacent input data samples. As there is a significant increase in the input data shape, the number of parameters increases from 184 in the 'little' network to 300 in the 'dual' network. Figure 8: The processing steps (**left**) and the flow chart (**right**) for the ‘big’ + ‘dual’ configuration of the adaptive neural network system. In the left figure, the two input data blocks represent a pair of adjacent data samples required by the ‘dual’ network. In the right figure, registers store the previous data and label for the current process in the ‘dual’ network. Figure 9: ‘Dual’ model structures in Keras. ### 'Big' + Distance Configuration Finally, we consider whether the wake-up module in the adaptive system can be replaced by a simpler algorithm instead of using neural networks such as 'little' and 'dual' networks. This configuration, which is similar to the second configuration, replaces the 'dual' network model with a distance calculator measuring the difference in the distance between two adjacent input samples. In order to pick up on an activity change, a distance calculator using Minkowski distance and Mahalanobis distance is applied to trigger the 'big' network when the difference in distance reaches a pre-set threshold value as shown in Figure 10. \[D(x,y)=\left(\sum_{i=1}^{n}|x_{i}-y_{i}|^{p}\right)^{1/p} \tag{1}\] The Euclidean distance is a typical metric that measures the real distance of two points in N-dimensions. As shown in Equation (1), Minkowski distance is a generalized format of Euclidean distance. When \(p=2\), it becomes equivalent to the Euclidean distance, while it becomes equivalent to the Manhattan distance when \(p=1\). 
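A compact sketch of Equation (1), together with the standard Mahalanobis form discussed next, is given below. The threshold value and the window contents are placeholders for illustration, not values taken from the paper.

```python
import numpy as np

def minkowski(x, y, p):
    """Equation (1); p = 1 gives the Manhattan distance and p = 2 the
    Euclidean distance, the two cases evaluated in this section."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def mahalanobis(x, mu, cov_inv):
    """Standard Mahalanobis distance of point x from a distribution with
    mean mu and precomputed inverse covariance matrix cov_inv."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Illustrative trigger: wake the 'big' network when two adjacent flattened
# windows of shape (384,) differ by more than a pre-set threshold.
THRESHOLD = 10.0                              # placeholder sensitivity
prev, curr = np.zeros(384), np.full(384, 0.05)
wake_big = minkowski(prev, curr, p=1) > THRESHOLD  # True here (distance 19.2)
```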
Moreover, the Mahalanobis distance measures the distance of a target point P and a mean point of a distribution D. This distance increases if point P moves away along each principal component axis of D. The Mahalanobis distance becomes Euclidean distance when these axes are scaled to have a unit variance [35,36]. The input data shape which is (1, 128, 3, 2) for the 'dual' network should be stretched into (1, 384, 2) where the value two means that two adjacent data samples are required by the distance calculator. The calculator then measures the Minkowski distance between these two Figure 10: The processing steps (**left**) and the flow chart (**right**) for the ‘big’ + distance configuration of the adaptive neural network system. In the left figure, the two input data blocks represent a pair of adjacent data samples required by the distance calculator. In the right figure, the registers store the previous data and label for the current process in the distance calculator. \begin{table} \begin{tabular}{c c c} \hline \hline & **Model: ‘Dual’** & \\ \hline **Layer (Type)** & **Output Shape** & **Param\#** \\ \hline model\_input & [(None, 384, 2)] & 0 \\ conv1d & (None, 384, 4) & 28 \\ conv1d\_1 & (None, 192, 4) & 52 \\ conv1d\_2 & (None, 96, 2) & 26 \\ model\_output & (None, 2) & 194 \\ \hline \hline \end{tabular} Total params: 300 \end{table} Table 3: ‘Dual’ model parameter details. The pooling layers are hidden. For more info, see Figure 9. adjacent data samples following Equation (1) for both cases of \(p=1\) and \(p=2\). Mahalanobis distance requires the covariance matrix of the data set before the calculation. To wake up the 'big' model, multiple thresholds can be selected to achieve multiple sensitivities. The 'big' model is only triggered when the distance between the previous data sample and the current one is beyond the pre-set threshold. Therefore, a lower threshold value will reach a higher inference accuracy because the 'big' network will be invoked more frequently. Conversely, a higher threshold value means that the 'big' network is invoked fewer times, leading to a shorter inference time. ## 5 Neural Network Microcontroller Deployment The neural network models in the Keras format are quantized to the UINT8 format to reduce the amount of memory needed before MCU deployment. According to Equation (2) in [37], as shown below, the real value is the input value of the training process in the range of [-128, 127], while the quantized value is the target value after the quantization, which is in the UINT8 range of [0, 255]. The mean and the standard deviation values can be calculated as 128 and 1, respectively. Finally, the model in a quantized format is obtained. \[real\_value=(quantized\_value-mean\_value)/std\_dev\_value \tag{2}\] We use the available data samples from UCI-HAR [34] instead of real-time data to perform a fair comparison across the different platforms. Thus, when the MCU runs the application, stored data and network models can be accessed correctly. Moreover, the model-switching algorithm for the adaptive system introduced in Section 4 is achieved at the C code level instead of the network model layer level. The 'big' and 'little' models are capable of being invoked independently, which means the adaptive system is more flexible and effective at finding the balance between performance and energy consumption. 
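The deployed switching logic is written in C on the MCU; the following Python pseudocode sketches only the control flow of the 'big' + six 'little' configuration. All function and variable names here are ours, chosen for illustration.

```python
def adaptive_inference(windows, big_net, little_nets):
    """Control-flow sketch of the 'big' + six 'little' scheme: the 'big'
    network labels a window only when needed; afterwards the matching
    'little' network merely answers 'same activity?' for each new window."""
    label = None
    for w in windows:
        if label is None:
            label = big_net(w)             # full six-class inference
        else:
            same = little_nets[label](w)   # cheap binary check for this label
            if not same:
                label = big_net(w)         # activity changed: wake 'big' net
        yield label
```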
Finally, before flashing the target boards, the application must be compiled to an executable binary using cross-compilation tools for GCC [38], ARM Compiler [39] and the _TENSAIFlow_ compiler from Eta Compute [29]. The model deployment process is shown in Figure 11 and 12. ### Stm3214k5z1 _STM32Cube.AI_ from STMicroelectronics [28] is a framework designed to optimize STM devices such as STM32L4. However, due to the limitation of being a proprietary environment, the switching algorithm between primary and secondary networks cannot be deployed at the C code level. On the other hand [26], it has been designed with a focus on general-purpose and flexible deployment on different MCU boards. The _NNoM_ converter is able to convert the pre-trained model in the Keras format to the C code and its neural network library can be used to deploy the model. Therefore, the _NNoM_ framework is selected for model deployment on STM32L4 instead of _STM32Cube.AI_ (see Figure 12). The STM32Cube SDK version 1.17.0 from STMicroelectronics which contains utility tools and example projects, is required to drive the STM32L4R5ZI MCU board. Keil uVision IDE from ARM is chosen to set up a coding environment to support STM32L4. The driver pack for STM32L4 is required to be installed by the pack installer of Keil. The STM32L4 CN1 port is connected with a desktop PC by using a micro-USB cable. Then, the ST-Link debugger can be selected under the target debug page and the STM32L4 device can then be connected and detected by the PC. Alternatively, if the connection is unsuccessful, STM32 ST-LINK Utility from STMicroelectronics can erase the board to avoid software conflicts. After the NN models are trained by Keras (_TensorFlow_v1.15_), they are required to be quantized by applying the _NNoM_ converter command as shown in Listing 1. Then, the header file containing model weights can be generated by using the function below. Before building the project, the weight header file, input data file and the files from the _NNoM_ library should be added by Keil Manage Project Items. Finally, the steps of building and flashing the project to the development board can be carried out. To observe the output from the debug viewer, the core clock under the trace page of the target debug setting should match the operating clock of the device. ``` generate_model(model, x_test, name='weight.h') ``` Figure 11: The map of the steps of neural network model deployment on target MCU boards. Black frames represent trained models, brown frames represent the library source codes used, while red ones represent MCU boards. Figure 12: Comparison of the software used in each deployment phase for different MCUs. ### Apollo2 Blue AmbiqSuite SDK version 2.2.0 from Ambiq supports the model deployment on Apollo2 Blue. Keil uVision IDE from ARM is used to set up a coding environment. After installing the driver pack for Apollo2, the Apollo2 board is connected to the PC by using a micro-USB cable and selecting J-Link under the target debug page of Keil as shown in Figure 11. Similar to the case of STM32L4, Apollo2 is not supported by _TensorFlow Lite_ and _TENSAIFlow_. Thus, the pre-trained models in Keras format are converted into a quantized format using the _NNoM_ converter as shown in Listing 1. Then, the model weights and data header files and the _NNoM_ library should be added into the project by Keil Manage Project Items. After building and flashing the project to the target board, the Keil debug viewer can be used to observe the model outputs. 
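For reference, the full quantization step around Listing 1 might look as follows. This is a sketch under stated assumptions: the file paths are placeholders, and we assume the `nnom.py` utility script from the NNoM repository is importable and that `x_test` holds representative UCI-HAR windows for calibrating the quantization.

```
# Minimal sketch of the NNoM conversion flow used for STM32L4 and
# Apollo2 (assumes TensorFlow 1.15 / Keras and NNoM's nnom.py script).
import numpy as np
from tensorflow import keras
from nnom import generate_model  # from the NNoM repository's scripts/

model = keras.models.load_model('Output_Models/little.h5')  # placeholder path
x_test = np.load('uci_har_test_windows.npy')                 # placeholder data
# Quantizes the weights against x_test and writes 'weight.h' for the C project.
generate_model(model, x_test, name='weight.h')
```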
### SparkFun Edge (Apollo3 Blue)

_TensorFlow_ from Google is not only capable of training neural network models, but also includes _TensorFlow Lite_ to deploy network models on edge devices such as MCUs [27]. A trained network model saved in the Keras format can be converted into the quantized format using the _TensorFlow Lite_ converter, as shown in Listings 2 and 3. The library source code in C and board SDK files are provided to support the model deployment on MCUs (see Figure 11). We use _TensorFlow Lite_ to support model deployment for the SparkFun Edge (Apollo3) MCU development board. AmbiqSuite SDK version 2.2.0 contains utility tools and drivers from Ambiq to support SparkFun Edge (Apollo3 Blue). _TensorFlow Lite_ version 1.15 is used to convert Keras models with floating-point parameters into TFLite models with UINT8 parameters. As per the corresponding command lines in Listings 2 and 3, the quantized model files are generated and ready to be deployed. The TFLite model is converted into a hexadecimal file, which can be read by the _TensorFlow Lite_ library, by using the hex dump command 'xxd'. Finally, we connect Apollo3 to the PC with a micro-USB cable and flash the binary file to the target board using the flash utility provided by the AmbiqSuite SDK.

```
tflite_convert \
--keras_model_file=./Output_Models/${MODELNAME}.h5 \
--output_file=./Output_Models/${MODELNAME}.tflite \
--inference_type=QUANTIZED_UINT8 \
--input_shapes=1,128,3:1,128,3:1,128,3 \
--input_arrays=model_input1,model_input2,model_input3 \
--output_arrays=model_output/BiasAdd \
--default_ranges_min=0 --default_ranges_max=255 \
--mean_values=128,128,128 --std_dev_values=1,1,1 \
--change_concat_input_ranges=false \
--allow_nudging_weights_to_use_fast_gemm_kernel=true \
--allow_custom_ops
```

**Listing 2** TensorFlow Lite converter command lines for ‘big’ model quantization.
```
tflite_convert \
--keras_model_file=./Output_Models/${MODELNAME}.h5 \
--output_file=./Output_Models/${MODELNAME}.tflite \
--inference_type=QUANTIZED_UINT8 \
--input_shapes=1,128,3 \
--input_arrays=model_input \
--output_arrays=model_output/BiasAdd \
--default_ranges_min=0 --default_ranges_max=255 \
--mean_values=128 --std_dev_values=1 \
--change_concat_input_ranges=false \
--allow_nudging_weights_to_use_fast_gemm_kernel=true \
--allow_custom_ops
```

**Listing 3** TensorFlow Lite converter command lines for ‘little’ model quantization.

### ECM3532

_TENSAIFlow_ from Eta Compute is a framework designed to deploy pre-trained network models for Eta products such as ECM3531 and ECM3532 [29]. It is highly optimized for Eta Compute products to achieve the best balance between performance and efficiency. Unlike _TensorFlow_, this framework is not capable of training neural network models; it only provides the model conversion and deployment after training. After the pre-trained model is converted into a quantized TFLite format by _TensorFlow Lite_, _TENSAIFlow_ converts the TFLite model to C code which can be invoked with the library source code. The ECM3532 development board is not supported by _NNoM_ or _TensorFlow Lite_. Therefore, the _TENSAIFlow_ SDK version 2.0.2, which contains the _TENSAIFlow_ converter and neural network library from Eta Compute, is required to support model deployment on ECM3532. As shown in Figure 11, the pre-trained model from Keras (_TensorFlow_v1.15_) is first quantized to the UINT8 format using the _TensorFlow Lite_ converter (Listings 2 and 3) and then converted to a format readable by the _TENSAIFlow_ library using the _TENSAIFlow_ converter (Listing 4). Then, we build the project and flash it to the target ECM3532.

```
tensaiflow_compile \
--tflite_file ./model_zoo/${MODELNAME}.tflite \
--out_path ./../../Applications/${PROJECTNAME}/src/ \
--weights_path ./../../Applications/${PROJECTNAME}/include/
```

**Listing 4** TENSAIFlow converter command lines for model conversion.

## 6 Results and Discussion

The accuracy of the different configurations and of the original model on the full HAR test data set is shown in Figure 13. We do not consider different random initialization seeds in this work; instead, we use the same trained network for the different MCUs to perform a fair comparison. We also choose the learning rate carefully, using a relatively low rate and SGDR to prevent the model from settling into a locally optimal point instead of the global one. We use holdout cross-validation to divide the whole data set: 70% for the training data set, 15% for the validation data set and 15% for the testing data set.

In Figure 13, the ‘big’-only configuration has 91.3% accuracy, but the model has a large invocation count that will result in significant latency. The ‘big’ + six ‘little’ configuration reaches a comparable level of accuracy while the number of times the ‘big’ model is invoked is reduced from 2947 to 406, reducing the inference time of the ‘big’ model by two-thirds. The ‘big’ + ‘dual’ configuration cannot reach a similar accuracy due to the low accuracy of the secondary ‘dual’ network. The ‘big’ + distance configuration achieves a relatively low testing accuracy and it invokes the ‘big’ network 669 times in 2947 data samples. In order to establish the same testing environment and provide the same test samples for all MCU boards, we choose to apply the data samples from the UCI-HAR test data set rather than a real-time signal from the board sensors. Only 60 data samples from the UCI-HAR test data set can be selected to fit on the MCU boards together with the model and switching algorithm due to memory limitations.
Therefore, we select ten data samples for each activity and compose them into a fixed sequence of activities I to VI. This means that there are five activity changes in the test data sequence. We have verified that the classification results obtained on these 60 samples are equivalent to the ones obtained with the whole data set, although there are some negligible differences between devices due to the different toolchains. The following evaluations are performed under a working frequency of 48 MHz without debugging. The four configurations of the adaptive neural network are evaluated below.

### ‘Big’ Only

As shown in Figure 13, after removing the LSTM layers used in [23], we still maintain an accuracy level of around 90% for the ‘big’ network on the activity classification task compared to the results in [22,23]. The original ‘big’ model method performs 2947 inferences on all test data samples. Due to the large topology of the ‘big’ model and the large number of inferences, the ‘big’-only configuration has the highest execution time. This can be seen in Figure 14: for all four MCUs working at the same operating frequency of 48 MHz, the latency of the ‘big’-only model is the highest among all four configurations. The power consumption values for each configuration show negligible variations for each MCU in Figure 15. Therefore, the energy consumption for each configuration is only affected by the inference time. As shown in Figure 15, the ‘big’-only configuration consumes the most energy.

Figure 13: The accuracy and the ‘big’ inference counts for four configurations (quantized TFLite format) on the test data set (2947 samples) have been evaluated on a PC.

Figure 14: The time evaluation of four adaptive configurations on MCU boards with the ‘big’ inference counts. A total of 60 data samples extracted from the UCI-HAR test data set are tested to form the evaluation.

### ‘Big’ + Six ‘Little’

The difference between the inference time of the ‘big’-only and ‘big’ + six ‘little’ configurations is shown in Figure 14. The inference latency of the ‘big’ model is around 12 times longer than that of the ‘little’ model. Therefore, the less often the ‘big’ network is invoked, the more efficient the system is. In this configuration, six ‘little’ models are applied to save time by restricting the ‘big’ inference count to around ten. In all MCU evaluations in Figure 14, the time result of the ‘big’ + six ‘little’ configuration is the lowest; it reduces the execution time by around 80% compared to the original ‘big’-only configuration and around 50% compared to the others. For all four configurations, the power is largely equivalent, as can be seen in Figure 15. Due to the significant advantage of the ‘big’ + six ‘little’ configuration in terms of execution time, this configuration achieves energy savings of around 80% compared to the original ‘big’ method on all MCUs.

### ‘Big’ + ‘Dual’

In contrast to the ‘big’ + six ‘little’ configuration, the ‘big’ + ‘dual’ configuration is not restricted by the number of categories that need to be classified. The number of ‘little’ networks in the previous configuration is determined by the number of categories, which leads to difficulties in model deployment if the number of categories is large, such as in the CIFAR-100 data set. By applying a network that focuses on detecting activity changes, the ‘big’ + ‘dual’ configuration can pick up activity changes by comparing the current activity and the previous activity.
However, two deficiencies appear in this configuration. Firstly, in the 7352 training data samples, there are only 280 cases of activity switching. We extract 280 data samples with an ‘activity change’ label and 7072 samples with an ‘activity continuation’ label to train the ‘dual’ model, resulting in an unbalanced training data set.

Figure 15: The power and energy evaluation of four adaptive configurations on MCU boards. A total of 60 data samples extracted from the UCI-HAR test data set are tested to form the evaluation.

Secondly, there is an error propagation problem which occurs when the ‘dual’ classification is incorrect in the case of an ‘activity change’. For example, in Figure 16, the ‘dual’ model has an error at the seventh data sample, where the activity switches from I to III, skipping the ‘big’ inference and misleading the adaptive system to output activity I. After that, the ‘dual’ model has no errors for the rest of the data, detecting no activity changes. The adaptive system therefore continues to propagate the output error because the seventh output was set to activity I instead of III. In the ‘big’ + six ‘little’ configuration, by comparison, the ‘big’ model is also skipped at the seventh data sample because the ‘little’ model does not pick up the change (an error). However, after the next data input, the ‘little’ model is able to recognize that the activity is not activity I anymore. The ‘big’ model is then invoked to output the correct activity label, and the system recovers to a correct state.

Although the ‘dual’ model is able to solve the large-category issue, it is not sufficiently trained due to the unbalanced training data set. Due to the poor accuracy of the ‘dual’ model, the error propagation illustrated in Figure 16 occurs, and the system fails to switch on the ‘big’ model for further inference when an activity change happens. This results in a minimal ‘big’ inference count but a relatively poor performance in terms of accuracy. Therefore, the overall accuracy of this adaptive system (around 60% for all test data on a PC) is lower compared to the other configurations, as shown in Figure 13. Furthermore, because the complexity of the ‘dual’ model is relatively high and the ‘dual’ model is activated continuously, the inference process is more complex. Additionally, the combination of previous and current data samples for the ‘dual’ input needs to be pre-processed. Therefore, despite having the fewest ‘big’ inference counts (Figure 13), the latency and energy consumption are double those of the best configuration, the ‘big’ + six ‘little’ models, as shown in Figures 14 and 15.

### ‘Big’ + Distance

The ‘big’ + distance configuration, as shown in Figure 17, reveals that the Manhattan distance and the Euclidean distance perform poorly when distinguishing activities I to III, which are WALKING, WALKING_UPSTAIRS, and WALKING_DOWNSTAIRS. The distance between data samples of the same activity exceeds the distance between those of different activities (see data 8 to 10 in Figure 17). Therefore, a clear threshold boundary cannot be set to separate the case of ‘activity change’ from unchanged activities due to these indistinguishable values.

Figure 16: The output comparison of two configurations when an error occurs at the moment of an ‘activity change’. Errors are labelled in red. The results for the primary module, secondary module, and overall adaptive system are shown below.
In the 'big' + distance configuration, a threshold point of 8000 for the Manhattan distance is selected for the evaluation in Figures 13 and 14. This threshold of 8000 triggers the 'big' model more frequently so it can be considered sensitive. As with the 'big' + 'dual' model, the 'big' + distance model also suffers from the error propagation issue which severely affects the overall accuracy. Compared to the 'big' + six 'little' configuration, this configuration achieves a relatively low accuracy level at around 76% with a higher number of 'big' invocation times as shown in Figure 13. Furthermore, this configuration has a significant latency and energy costs which doubles compared to 'big' + six 'little' models and it is similar to the 'big' + 'dual' configuration as shown in Figures 14 and 15. Overall and across all MCUs, our best adaptive network configuration, the 'big' + six 'little' configuration, achieves a high prediction accuracy level of around 90%, which is comparable to the original 'big'-only method. As discussed in Section 3's initial evaluation of the MCU, ECM3532 achieves the highest processing speed, followed by STM32L4, SparkFun Edge (Apollo3) and Apollo2 (listed fastest to slowest). With the same configuration, the execution time in Figure 14 shows that this is consistent across all four MCU boards. For the 'big' + six 'little' configuration, the 'big' inference count is reduced by around 85% compared with the original method, achieving up to 5\(\times\) the acceleration on MCUs. Since the MCU boards are in working mode when running different configurations, the power consumption of these configurations is similar to the MCU shown in Figure 15. Due to the negligible differences between network configurations in terms of power, the distribution of the energy consumption of the configurations for each MCU follows the time cost distribution in Figure 14. As shown in Figure 15, across all devices, the 'big' + six 'little' algorithm configuration achieves energy savings of around 80% compared to the original 'big'-only method, and around 50% compared to the other two configurations. Furthermore, compared to a standard MCU running the 'big' network only, the best configuration, the 'big' + six 'little' model, coupled with the best state-of-the-art near-threshold hardware, can achieve a reduction in energy of up to 98% that will translate into a 62\(\times\) increase in the operating lifetime of an application for detecting battery-powered activity. in significantly better energy and performance characteristics. The proposed algorithms can be successfully deployed on STM32L4R5ZI, Apollo2 Blue, SparkFun Edge (Apollo3 Blue) and ECM3532. The application UCI-HAR is representative of an activity recognition task that assumes that an activity will remain constant for some period of time before switching to a different activity. In order to save time and energy, we activate the secondary model with a faster inference speed to pause the primary model when the activity remains constant. The best adaptive network configuration, the 'big' + six 'little' configuration, has achieved a reduction in energy of 80% and a comparable level of prediction accuracy to the original method in the UCI-HAR test. The results prove that the proposed methods can deliver different levels of time-energy reduction and constant accuracy on all the devices we tested. 
Furthermore, coupled with near-threshold MCUs, the best configuration is able to increase battery life by up to 62\(\times\) on UCI-HAR compared to the original non-adaptive method using a standard MCU. Future work involves extending the approach to other application areas such as machine health monitoring and anomaly detection. In addition, we plan to investigate how the approach can be scaled to applications with a large number of possible output categories without an explosion in the memory requirements by using additional network hierarchies. Finally, a future research direction includes developing a framework that is able to automatically extract optimal ‘little’ configurations from a ‘big’ configuration in terms of overall accuracy and energy in order to replace manual analysis.

Author contributions: Methodology, Z.S., N.H. and J.N.-Y.; software, Z.S. and J.N.-Y.; validation, Z.S.; resources, Z.S.; data curation, Z.S.; writing--original draft preparation, Z.S.; writing--review and editing, Z.S., N.H. and J.N.-Y.; visualization, Z.S.; supervision, J.N.-Y. All authors have read and agreed to the published version of the manuscript.

Funding: This work was partially funded by the Royal Society INF/R2/192044 Machine Intelligence at the Network Edge (MINET) fellowship.

Data availability: Our work can be found at [https://github.com/DarkSZChao/Big-Little_NN_Strategies](https://github.com/DarkSZChao/Big-Little_NN_Strategies) (accessed on 9 March 2022).

The authors declare no conflicts of interest.

The following abbreviations are used in this manuscript:

\begin{tabular}{l l} MCU & Microcontroller Unit \\ IoT & Internet of Things \\ CNN & Convolutional Neural Network \\ UCI-HAR & UCI-Human Activity Recognition \\ \end{tabular}
2303.02001
Zero-shot Object Counting
Class-agnostic object counting aims to count object instances of an arbitrary class at test time. It is challenging but also enables many potential applications. Current methods require human-annotated exemplars as inputs which are often unavailable for novel categories, especially for autonomous systems. Thus, we propose zero-shot object counting (ZSC), a new setting where only the class name is available during test time. Such a counting system does not require human annotators in the loop and can operate automatically. Starting from a class name, we propose a method that can accurately identify the optimal patches which can then be used as counting exemplars. Specifically, we first construct a class prototype to select the patches that are likely to contain the objects of interest, namely class-relevant patches. Furthermore, we introduce a model that can quantitatively measure how suitable an arbitrary patch is as a counting exemplar. By applying this model to all the candidate patches, we can select the most suitable patches as exemplars for counting. Experimental results on a recent class-agnostic counting dataset, FSC-147, validate the effectiveness of our method. Code is available at https://github.com/cvlab-stonybrook/zero-shot-counting
Jingyi Xu, Hieu Le, Vu Nguyen, Viresh Ranjan, Dimitris Samaras
2023-03-03T15:14:36Z
http://arxiv.org/abs/2303.02001v2
# Zero-Shot Object Counting ###### Abstract Class-agnostic object counting aims to count object instances of an arbitrary class at test time. Current methods for this challenging problem require human-annotated exemplars as inputs, which are often unavailable for novel categories, especially for autonomous systems. Thus, we propose zero-shot object counting (ZSC), a new setting where only the class name is available during test time. Such a counting system does not require human annotators in the loop and can operate automatically. Starting from a class name, we propose a method that can accurately identify the optimal patches which can then be used as counting exemplars. Specifically, we first construct a class prototype to select the patches that are likely to contain the objects of interest, namely class-relevant patches. Furthermore, we introduce a model that can quantitatively measure how suitable an arbitrary patch is as a counting exemplar. By applying this model to all the candidate patches, we can select the most suitable patches as exemplars for counting. Experimental results on a recent class-agnostic counting dataset, FSC-147, validate the effectiveness of our method. Code is available at [https://github.com/cvlab-stonybrook/zero-shot-counting](https://github.com/cvlab-stonybrook/zero-shot-counting). ## 1 Introduction Object counting aims to infer the number of objects in an image. Most of the existing methods focus on counting objects from specialized categories such as human crowds [37], cars [29], animals [4], and cells [46]. These methods count only a single category at a time. Recently, class-agnostic counting [28, 34, 38] has been proposed to count objects of arbitrary categories. Several human-annotated bounding boxes of objects are required to specify the objects of interest (see Figure 1(a)). However, having humans in the loop is not practical for many real-world applications, such as fully automated wildlife monitoring systems or visual anomaly detection systems. A more practical setting, exemplar-free class-agnostic counting, has been proposed recently by Ranjan _et al_. [33]. They introduce RepRPN, which first identifies the objects that occur most frequently in the image, and then uses them as exemplars for object counting. Even though RepRPN does not require any annotated boxes at test time, the method simply counts objects from the class with the highest number of instances. Thus, it cannot be used for counting a specific class of interest. The method is only suitable for counting images with a single dominant object class, which limits the potential applicability. Our goal is to build an exemplar-free object counter where we can specify what to count. To this end, we introduce a new counting task in which the user only needs to provide the name of the class for counting rather than the exemplars (see Figure 1(b)). In this way, the counting model can not only operate in an automatic manner but also allow the user to define what to count by simply providing the class name. Note that the class to count during test time can be arbitrary. For cases where the test class is completely unseen to the trained model, the counter needs to adapt to the unseen class without any annotated data. Hence, we Figure 1: Our proposed task of zero-shot object counting (ZSC). Traditional few-shot counting methods require a few exemplars of the object category (a). We propose zero-shot counting where the counter only needs the class name to count the number of object instances (b).
Few-shot counting methods require human annotators at test time while zero-shot counters can be fully automatic. name this setting zero-shot object counting (ZSC), inspired by previous zero-shot learning approaches [6, 57]. To count without any annotated exemplars, our idea is to identify a few patches in the input image containing the target object that can be used as counting exemplars. Here the challenges are twofold: 1) how to localize patches that contain the object of interest based on the provided class name, and 2) how to select _good_ exemplars for counting. Ideally, good object exemplars are visually representative for most instances in the image, which can benefit the object counter. In addition, we want to avoid selecting patches that contain irrelevant objects or backgrounds, which likely lead to incorrect object counts. To this end, we propose a two-step method that first localizes the class-relevant patches which contain the objects of interest based on the given class name, and then selects among these patches the optimal exemplars for counting. We use these selected exemplars, together with a pre-trained exemplar-based counting model, to achieve exemplar-free object counting. In particular, to localize the patches containing the objects of interest, we first construct a class prototype in a pre-trained embedding space based on the given class name. To construct the class prototype, we train a conditional variational autoencoder (VAE) to generate features for an arbitrary class conditioned on its semantic embedding. The class prototype is computed by taking the average of the generated features. We then select the patches whose embeddings are the \(k\)-nearest neighbors of the class prototype as the class-relevant patches. After obtaining the class-relevant patches, we further select among them the optimal patches to be used as counting exemplars. Here we observe that the feature maps obtained using _good_ exemplars and _bad_ exemplars often exhibit distinguishable differences. An example of the feature maps obtained with different exemplars is shown in Figure 2. The feature map from a _good_ exemplar typically exhibits some repetitive patterns (e.g., the dots on the feature map) that center around the object areas while the patterns from a _bad_ exemplar are more irregular and occur randomly across the image. Based on this observation, we train a model to measure the goodness of an input patch based on its corresponding feature maps. Specifically, given an arbitrary patch and a pre-trained exemplar-based object counter, we train this model to predict the counting error of the counter when using the patch as the exemplar. Here the counting error can indicate the goodness of the exemplar. After this error predictor is trained, we use it to select those patches with the smallest predicted errors as the final exemplars for counting. Experiments on the FSC-147 dataset show that our method outperforms the previous exemplar-free counting method [33] by a large margin. We also provide analyses to show that patches selected by our method can be used in other exemplar-based counting methods to achieve exemplar-free counting. In short, our main contributions can be summarized as follows: * We introduce the task of zero-shot object counting that counts the number of instances of a specific class in the input image, given only the class name and without relying on any human-annotated exemplars. 
* We propose a simple yet effective patch selection method that can accurately localize the optimal patches across the query image as exemplars for zero-shot object counting. * We verify the effectiveness of our method on the FSC-147 dataset through extensive ablation studies and visualization results. ## 2 Related Work ### Class-specific Object Counting Class-specific object counting focuses on counting predefined categories, such as humans [1, 15, 24, 39, 40, 42, 52, 53, 55, 56], animals [4], cells [46], or cars [14, 29]. Generally, existing methods can be categorized into two groups: detection-based methods [8, 18, 14] and regression-based methods [10, 11, 27, 41, 53, 56]. Detection-based methods apply an object detector on the image and count the number of objects based on the detected boxes. Regression-based methods predict a density map for each input image, and the final result is obtained by summing up the pixel values. Both types of methods require abundant training data to learn a good model. Class-specific counters can perform well on trained categories. However, they cannot be used to count objects of arbitrary categories at test time. ### Class-agnostic Object Counting Class-agnostic object counting aims to count arbitrary categories given only a few exemplars [3, 13, 25, 28, 31, 34, 50, 51]. GMN [28] uses a shared embedding module to extract feature maps for both query images and exemplars, which are then concatenated and fed into a matching module to regress the object count. FamNet [34] adopts a similar way to do correlation matching and further applies test-time adaptation. These methods require human-annotated exemplars as inputs. Recently, Ranjan _et al._ have proposed RepRPN [33], which achieves exemplar-free counting by identifying exemplars from the most frequent objects via a Region Proposal Network (RPN)-based model. However, the class of interest cannot be explicitly specified for the RepRPN. In comparison, our proposed method can count instances of a specific class given only the class name. Figure 2: Feature maps obtained using different exemplars given a pre-trained exemplar-based counting model. The feature maps obtained using good exemplars typically exhibit some repetitive patterns while the patterns from bad exemplars are more irregular. ### Zero-shot Image Classification Zero-shot classification aims to classify unseen categories for which data is not available during training [5, 9, 12, 16, 19, 21, 35, 36]. Semantic descriptors are mostly leveraged as a bridge to enable the knowledge transfer between seen and unseen classes. Earlier zero-shot learning (ZSL) works relate the semantic descriptors with visual features in an embedding space and recognize unseen samples by searching their nearest class-level semantic descriptor in this embedding space [17, 36, 43, 54]. Recently, generative models [20, 22, 48, 49] have been widely employed to synthesize unseen class data to facilitate ZSL [30, 44, 45]. Xian _et al._ [44] use a conditional Wasserstein Generative Adversarial Network (GAN) [2] to generate unseen features which can then be used to train a discriminative classifier for ZSL. In our method, we also train a generative model conditioned on class-specific semantic embedding. Instead of using this generative model to hallucinate data, we use it to compute a prototype for each class. This class prototype is then used to select patches that contain objects of interest. ## 3 Method Figure 3 summarizes our proposed method.
Given an input query image and a class label, we first use a generative model to construct a class prototype for the given class in a pre-trained feature space. We then randomly sample a number of patches of various sizes and extract the feature embedding for each patch. The class-relevant patches are those patches whose embeddings are the nearest neighbors of the class prototype in the embedding space. We further use an error predictor to select the patches with the smallest predicted errors as the final exemplars for counting. We use the selected exemplars in an exemplar-based object counter to infer the object counts. For the rest of the paper, we denote this exemplar-based counter as the "base counting model". We will first describe how we train this base counting model and then present the details of our patch selection method. ### Training Base Counting Model We train our base counting model using abundant training images with annotations. Similar to previous works [34, 38], the base counting model uses the input image and the exemplars to obtain a density map for object counting. The model consists of a feature extractor \(F\) and a counter \(C\). Given a query image \(I\) and an exemplar \(B\) of an arbitrary class \(c\), we input \(I\) and \(B\) to the feature extractor to obtain the corresponding outputs, denoted as \(F(I)\) and \(F(B)\), respectively. Figure 3: Overview of the proposed method. We first use a generative model to obtain a class prototype for the given class (e.g. grape) in a pre-trained feature space. Then given an input query image, we randomly sample a number of patches of various sizes and extract the corresponding feature embedding for each patch. We select the patches whose embeddings are the nearest neighbors of the class prototype as class-relevant patches. Then for each of the selected class-relevant patches, we use a pre-trained exemplar-based counting model to obtain the intermediate feature maps. Our proposed error predictor then takes the feature maps as input and predicts the counting error (here we use normalized counting errors). We select the patches with the smallest predicted errors as the final exemplar patches and use them for counting.
### Zero-shot Object Counting In this section, we describe how we count objects of any unseen category given only the class name without access to any exemplar. Our strategy is to select a few patches in the image that can be used as exemplars for the base counting model. These patches are selected such that: 1) they contain the objects that we are counting and 2) they benefit the counting model, i.e., lead to small counting errors. #### 3.2.1 Selecting Class-relevant Patches To select patches that contain the objects of interest, we first generate a class prototype based on the given class name using a conditional VAE model. Then we randomly sample a number of patches across the query image and select the class-relevant patches based on the generated prototype. **Class prototype generation.** Inspired by previous zero-shot learning approaches [44, 45], we train a conditional VAE model to generate features for an arbitrary class based on the semantic embedding of the class. The semantic embedding is obtained from a pre-trained text-vision model [32] given the corresponding class name. Specifically, we train the VAE model to reconstruct features in a pre-trained ImageNet feature space. The VAE is composed of an Encoder \(E\), which maps a visual feature \(x\) to a latent code \(z\), and a decoder \(G\) which reconstructs \(x\) from \(z\). Both \(E\) and \(G\) are conditioned on the semantic embedding \(a\).The loss function for training this VAE for an input feature \(x\) can be defined as: \[\begin{split} L_{V}(x)=\text{KL}\left(q(z|x,a)||p(z|a)\right)\\ -\text{E}_{q(z|x,a)}[\text{log}\ p(x|z,a)].\end{split} \tag{4}\] The first term is the Kullback-Leibler divergence between the VAE posterior \(q(z|x,a)\) and a prior distribution \(p(z|a)\). The second term is the decoder's reconstruction error. \(q(z|x,a)\) is modeled as \(E(x,a)\) and \(p(x|z,a)\) is equal to \(G(z,a)\). The prior distribution is assumed to be \(\mathcal{N}(0,I)\) for all classes. We can use the trained VAE to generate the class prototype for an arbitrary target class for counting. Specifically, given the target class name \(y\), we first generate a set of features by inputting the respective semantic vector \(a^{y}\) and a noise vector \(z\) to the decoder \(G\): \[\mathbb{G}^{y}=\{\hat{x}|\hat{x}=G(z,y),z\sim\mathcal{N}(0,I)\}. \tag{5}\] The class prototype p\({}^{y}\) is computed by taking the mean of all the features generated by VAE: \[\text{p}^{y}=\frac{1}{|\mathbb{G}^{y}|}{\sum}_{\hat{x}\in\mathbb{G}^{y}}\hat{x} \tag{6}\] **Class-relevant patch selection.** The generated class prototype can be considered as a class center representing the distribution of features of the corresponding class in the embedding space. Using the class prototype, we can select the class-relevant patches across the query image. Specifically, we first randomly sample \(M\) patches of various sizes \(\{b_{1},b_{2},...,b_{m}\}\) across the query image and extract their corresponding ImageNet features \(\{f_{1},f_{2},...,f_{m}\}\). To select the class-relevant patches, we calculate the \(L_{2}\) distance between the class prototype and the patch embedding, namely \(d_{i}=\|f_{i}-\text{p}^{y}\|_{2}\). Then we select the patches whose embeddings are the \(k\)-nearest neighbors of the class prototype as the class-relevant patches. Since the ImageNet feature space is highly discriminative, i.e., features close to each other typically belong to the same class, the selected patches are likely to contain the objects of the target class. 
#### 3.2.2 Selecting Exemplars for Counting Given a set of class-relevant patches and a pre-trained exemplar-based object counter, we aim to select a few exemplars from these patches that are optimal for counting. To do so, we introduce an error prediction network that predicts the counting error of an arbitrary patch when the patch is used as the exemplar. The counting error is calculated from the pre-trained counting model. Specifically, to train this error predictor, given a query image \(\bar{I}\) and an arbitrary patch \(\bar{B}\) cropped from \(\bar{I}\), we first use the base counting model to get the image feature map \(F(\bar{I})\), similarity map \(\bar{S}\), and the final predicted density map \(\bar{D}\). The counting error of the base counting model can be written as: \[\epsilon=|\sum_{i,j}\bar{D}_{(i,j)}-\bar{N^{*}}|, \tag{7}\] where \(\bar{N^{*}}\) denotes the ground truth object count in image \(\bar{I}\). \(\epsilon\) can be used to measure the goodness of \(\bar{B}\) as an exemplar for \(\bar{I}\), i.e., a small \(\epsilon\) indicates that \(\bar{B}\) is a suitable exemplar for counting and vice versa. The error predictor \(R\) is trained to regress the counting error produced by the base counting model. The input of \(R\) is the channel-wise concatenation of the image feature map \(F(\bar{I})\) and the similarity map \(\bar{S}\). The training objective is the minimization of the mean squared error between the output of the predictor \(R(F(\bar{I}),\bar{S})\) and the actual counting error produced by the base counting model \(\epsilon\). After the error predictor is trained, we can use it to select the optimal patches for counting. The candidates for selection here are the class-relevant patches selected by the class prototype in the previous step. For each candidate patch, we use the trained error predictor to infer the counting error when it is being used as the exemplar. The final selected patches for counting are the patches that yield the top-\(s\) smallest counting errors. #### 3.2.3 Using the Selected Patches as Exemplars Using the error predictor, we predict the error for each candidate patch and select the patches that lead to the smallest counting errors. The selected patches can then be used as exemplars for the base counting model to get the density map and the final count. We also conduct experiments to show that these selected patches can serve as exemplars for other exemplar-based counting models to achieve exemplar-free class-agnostic counting. ## 4 Experiments ### Implementation Details **Network architecture.** For the _base counting model_, we use ResNet-50 as the backbone of the feature extractor, initialized with the weights of a pre-trained ImageNet model. The backbone outputs feature maps of \(1024\) channels. For each query image, the number of channels is reduced to \(256\) using a \(1\times 1\) convolution. For each exemplar, the feature maps are first processed with global average pooling and then linearly mapped to obtain a \(256\)-d feature vector. The counter consists of \(5\) convolutional and bilinear upsampling layers to regress a density map of the same size as the query image. For the _feature generation model_, both the encoder and the decoder are two-layer fully-connected (FC) networks with 4096 hidden units. LeakyReLU and ReLU are the non-linear activation functions in the hidden and output layers, respectively. The dimensions of the latent space and the semantic embeddings are both set to be \(512\).
For the _error predictor_, \(5\) convolutional and bilinear upsampling layers are followed by a linear layer to output the counting error. **Dataset.** We use the FSC-147 dataset [34] to train the base counting model and the error predictor. FSC-147 is the first large-scale dataset for class-agnostic counting. It includes \(6135\) images from \(147\) categories ranging from animals and kitchen utensils to vehicles. The categories in the training, validation, and test sets do not overlap. The feature generator is trained on the MS-COCO detection dataset. Note that the previous exemplar-free method [33] also uses MS-COCO to pre-train their counter. **Training details.** Both the base counting model and the error predictor are trained using the AdamW optimizer with a fixed learning rate of \(10^{-5}\). The base counting model is trained for \(300\) epochs with a batch size of \(8\). We resize the input query image to a fixed height of \(384\), and the width is adjusted accordingly to preserve the aspect ratio of the original image. Exemplars are resized to \(128\times 128\) before being input into the feature extractor. The feature generation model is trained using the Adam optimizer and the learning rate is set to be \(10^{-4}\). The semantic embeddings are extracted from CLIP [32]. To select the class-relevant patches, we randomly sample \(450\) boxes of various sizes across the input query image and select \(10\) patches whose embeddings are the \(10\)-nearest neighbors of the class prototype. The final selected patches are those that yield the top-\(3\) smallest counting errors predicted by the error predictor. ### Evaluation Metrics We use Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) to measure the performance of different object counters. In addition, we follow [31] to report the Normalized Relative Error (NAE) and Squared Relative Error (SRE). In particular, MAE = \(\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y_{i}}|\); RMSE = \(\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^{2}}\); NAE = \(\frac{1}{n}\sum_{i=1}^{n}\frac{|y_{i}-\hat{y_{i}}|}{y_{i}}\); SRE = \(\sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{(y_{i}-\hat{y_{i}})^{2}}{y_{i}}}\) where \(n\) is the number of test images, and \(y_{i}\) and \(\hat{y_{i}}\) are the ground truth and the predicted number of objects for image \(i\), respectively. ### Comparing Methods We compare our method with the previous works on class-agnostic counting. RepRPN-Counter [33] is the only previous class-agnostic counting method that does not require human-annotated exemplars as input. In order to make other exemplar-based class-agnostic methods including GMN (General Matching Network [28]), FamNet (Few-shot adaptation and matching Network [34]) and BMNet (Bilinear Matching Network [38]) work in the exemplar-free setup, we replace the human-provided exemplars with the exemplars generated by a pre-trained object detector. Specifically, we use the RPN of Faster RCNN pre-trained on the MS-COCO dataset and select the top-\(3\) proposals with the highest objectness score as the exemplars. We also include the performance of these methods using human-annotated exemplars for a complete comparison. ### Results **Quantitative results.** As shown in Table 1, our proposed method outperforms the previous exemplar-free counting method [33] by a large margin, resulting in a reduction of \(10.10\) _w.r.t._ the validation RMSE and \(14.52\) _w.r.t._ the test RMSE.
We also notice that the performance of all exemplar-based counting methods drops significantly when replacing human-annotated exemplars with RPN-generated proposals. The state-of-the-art exemplar-based method BMNet+ [38], for example, shows a \(19.90\) error increase _w.r.t._ the test MAE and a \(40.81\) increase _w.r.t._ the test RMSE. In comparison, the performance gap is much smaller when using our selected patches as exemplars, as reflected by a \(1.41\) increase _w.r.t._ the test MAE and a \(6.03\) increase _w.r.t._ the test RMSE. Notably, the NAE and the SRE on the test set are even reduced when using our selected patches compared with the human-annotated exemplars. **Qualitative analysis.** In Figure 4, we present a few input images, the image patches selected by our method, and the corresponding density maps. Our method effectively identifies the patches that are suitable for object counting. The density maps produced by our selected patches are meaningful and close to the density maps produced by human-annotated patches. The counting model with random image patches as exemplars, in comparison, fails to output meaningful density maps and infers incorrect object counts. ## 5 Analyses ### Ablation Studies Our proposed patch selection method consists of two steps: the selection of class-relevant patches via a generated class prototype and the selection of the optimal patches via an error predictor. We analyze the contribution of each step quantitatively and qualitatively. Quantitative results are in Table 2. We first evaluate the performance of our baseline, i.e. using \(3\) randomly sampled patches as exemplars without any selection step. As shown in Table 2, using the class prototype to select class-relevant patches reduces the error rate by \(7.19\) and \(6.07\) on the validation and test MAE, respectively. Applying the error predictor can improve the baseline performance by \(7.22\) on the validation MAE and \(7.57\) on the test MAE. Finally, applying the two components together further boosts performance, achieving \(26.93\) on the validation MAE and \(22.09\) on the test MAE. We provide further qualitative analysis by visualizing the selected patches. As shown in Figure 5, for each input query image, we show \(10\) class-relevant patches selected using our generated prototype, ranked by their predicted counting error (from low to high). All the \(10\) selected class-relevant patches exhibit some class-specific features. However, not all these patches are suitable to be used as counting exemplars, e.g., some patches only contain parts of the object, and some patches contain some background. By further applying our proposed error predictor, we can identify the most suitable patches with the smallest predicted counting errors. ### Generalization to Exemplar-based Methods Our proposed method can be considered as a general patch selection method that is applicable to other visual counters to achieve exemplar-free counting. To verify that, we use our selected patches as the exemplars for three other exemplar-based methods: FamNet [34], BMNet and BMNet+ [38]. \begin{table} \begin{tabular}{l|c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Exemplars} & \multicolumn{4}{c|}{Val Set} & \multicolumn{4}{c}{Test Set} \\ & & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\ \hline \multirow{2}{*}{GMN [28]} & GT & 29.66 & 89.81 & - & - & 26.52 & 124.57 & - & - \\ & RPN & 40.96 & 108.47 & - & - & 39.72 & 142.81 & - & - \\ \hline \multirow{2}{*}{FamNet+ [34]} & GT & 23.75 & 69.07 & 0.52 & 4.25 & 22.08 & 99.54 & 0.44 & 6.45 \\ & RPN & 42.85 & 121.59 & 0.75 & 6.94 & 42.70 & 146.08 & 0.74 & 7.14 \\ \hline \multirow{2}{*}{BMNet [38]} & GT & 19.06 & 67.95 & 0.26 & 4.39 & 16.71 & 103.31 & 0.26 & 3.32 \\ & RPN & 37.26 & 108.54 & 0.42 & 5.43 & 37.22 & 143.13 & 0.41 & 5.31 \\ \hline \multirow{2}{*}{BMNet+ [38]} & GT & 15.74 & 58.53 & 0.27 & 6.57 & 14.62 & 91.83 & 0.25 & 2.74 \\ & RPN & 35.15 & 106.07 & 0.41 & 5.28 & 34.52 & 132.64 & 0.39 & 5.26 \\ \hline RepRPN-Counter [33] & - & 30.40 & 98.73 & - & - & 27.45 & 129.69 & - & - \\ \hline \multirow{3}{*}{Ours (Base)} & GT & 18.55 & 61.12 & 0.30 & 3.18 & 20.68 & 109.14 & 0.36 & 7.63 \\ & RPN & 32.19 & 99.21 & 0.38 & 4.80 & 29.25 & 130.65 & 0.35 & 4.35 \\ & Patch-Selection & **26.93** & **88.63** & **0.36** & **4.26** & **22.09** & **115.17** & **0.34** & **3.74** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparisons on the FSC-147 dataset. “GT” denotes using human-annotated boxes as exemplars. “RPN” denotes using the top-3 RPN proposals with the highest objectness scores as exemplars. “Patch-Selection” denotes using our selected patches as exemplars.
### Generalization to Exemplar-based Methods

Our proposed method can be considered a general patch selection method that is applicable to other visual counters to achieve exemplar-free counting. To verify this, we use our selected patches as the exemplars for three other exemplar-based methods: FamNet [34], BMNet, and BMNet+ [38]. Figure 6 (a) shows the results on the FSC-147 validation set. The baseline uses three randomly sampled patches as the exemplars for the pre-trained exemplar-based counter. By using the generated class prototype to select class-relevant patches, the error rate is reduced by \(5.18\), \(8.59\), and \(5.60\) on FamNet, BMNet, and BMNet+, respectively. As the error predictor is additionally adopted, the error rate is further reduced by \(1.76\), \(1.00\), and \(1.08\), respectively. Similarly, Figure 6 (b) shows the results on the FSC-147 test set: our method achieves consistent performance improvements for all three methods.

\begin{table}
\begin{tabular}{l|c|c c c c|c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Exemplars} & \multicolumn{4}{c|}{Val Set} & \multicolumn{4}{c}{Test Set} \\
 & & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\
\hline
\multirow{2}{*}{GMN [28]} & GT & 29.66 & 89.81 & - & - & 26.52 & 124.57 & - & - \\
 & RPN & 40.96 & 108.47 & - & - & 39.72 & 142.81 & - & - \\
\hline
\multirow{2}{*}{FamNet+ [34]} & GT & 23.75 & 69.07 & 0.52 & 4.25 & 22.08 & 99.54 & 0.44 & 6.45 \\
 & RPN & 42.85 & 121.59 & 0.75 & 6.94 & 42.70 & 146.08 & 0.74 & 7.14 \\
\hline
\multirow{2}{*}{BMNet [38]} & GT & 19.06 & 67.95 & 0.26 & 4.39 & 16.71 & 103.31 & 0.26 & 3.32 \\
 & RPN & 37.26 & 108.54 & 0.42 & 5.43 & 37.22 & 143.13 & 0.41 & 5.31 \\
\hline
\multirow{2}{*}{BMNet+ [38]} & GT & 15.74 & 58.53 & 0.27 & 6.57 & 14.62 & 91.83 & 0.25 & 2.74 \\
 & RPN & 35.15 & 106.07 & 0.41 & 5.28 & 34.52 & 132.64 & 0.39 & 5.26 \\
\hline
RepRPN-Counter [33] & - & 30.40 & 98.73 & - & - & 27.45 & 129.69 & - & - \\
\hline
\multirow{3}{*}{Ours (Base)} & GT & 18.55 & 61.12 & 0.30 & 3.18 & 20.68 & 109.14 & 0.36 & 7.63 \\
 & RPN & 32.19 & 99.21 & 0.38 & 4.80 & 29.25 & 130.65 & 0.35 & 4.35 \\
 & Patch-Selection & **26.93** & **88.63** & **0.36** & **4.26** & **22.09** & **115.17** & **0.34** & **3.74** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Quantitative comparisons on the FSC-147 dataset. “GT” denotes using human-annotated boxes as exemplars. “RPN” denotes using the top-3 RPN proposals with the highest objectness scores as exemplars. “Patch-Selection” denotes using our selected patches as exemplars.

\begin{table}
\begin{tabular}{c|c|c c c c|c c c c}
\hline \hline
\multirow{2}{*}{Prototype} & \multirow{2}{*}{Predictor} & \multicolumn{4}{c|}{Val Set} & \multicolumn{4}{c}{Test Set} \\
 & & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\
\hline
- & - & 35.20 & 106.70 & 0.61 & 6.68 & 31.37 & 134.98 & 0.52 & 5.92 \\
✓ & - & 28.01 & 88.29 & 0.39 & 4.66 & 25.30 & **113.82** & 0.40 & 4.88 \\
- & ✓ & 27.98 & **88.62** & 0.43 & 4.59 & 23.80 & 128.36 & 0.40 & 4.43 \\
✓ & ✓ & **26.93** & 88.63 & **0.36** & **4.26** & **22.09** & 115.17 & **0.34** & **3.74** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Ablation study on each component’s contribution to the final results. We show the effectiveness of the two steps of our framework: selecting class-relevant patches via a generated class prototype and selecting optimal patches via an error predictor.

Figure 4: Qualitative results on the FSC-147 dataset. We show the counting exemplars and the corresponding density maps of ground truth boxes, randomly selected patches, and our selected patches, respectively. Predicted counting results are shown at the top-right corner. Our method accurately identifies suitable patches for counting, and the predicted density maps are close to the ground truth density maps.

Figure 5: Qualitative ablation analysis. All the \(10\) selected class-relevant patches exhibit some class-specific attributes. They are ranked by the predicted counting errors, and the final selected patches with the smallest errors are framed in green.

### Multi-class Object Counting

Our method can count instances of a specific class given the class name, which is particularly useful when there are multiple classes in the same image. In this section, we show some visualization results in this multi-class scenario. As seen in Figure 7, our method selects patches according to the given class name and counts instances of that specific class in the input image. Correspondingly, the heatmap highlights the image regions that are most relevant to the specified class. Here, the heatmaps are obtained by correlating the exemplar feature vector with the image feature map in a pre-trained ImageNet feature space. Note that, when counting, we mask out the image regions where the activation value in the heatmap is below a threshold. We also show the patches selected by another exemplar-free counting method, RepRPN [33]. The class of the patches selected by RepRPN cannot be explicitly specified; it simply selects patches from the class with the highest number of instances in the image, according to the repetition score.

## 6 Conclusion

In this paper, we proposed a new task, zero-shot object counting, which counts instances of a specific class given only the class name, without access to any exemplars. To address this, we developed a simple yet effective method that accurately localizes the optimal patches across the query image that can be used as counting exemplars. Specifically, we construct a class prototype in a pre-trained feature space and use the prototype to select patches that contain objects of interest; then we use an error predictor to select the patches with the smallest predicted errors as the final exemplars for counting. Extensive results demonstrate the effectiveness of our method. We also conduct experiments to show that our selected patches can be used with other exemplar-based counting methods to achieve exemplar-free counting.

**Acknowledgements.** This research was partially supported by NSF grants IIS-2123920 and IIS-2212046 and the NASA Biodiversity program (Award 80NSSC21K1027).

Figure 6: Using our selected patches as exemplars for other exemplar-based class-agnostic counting methods (FamNet, BMNet, and BMNet+) on the FSC-147 dataset. Blue bars are the MAEs of using three randomly sampled patches. Orange bars are the MAEs of using the class prototype to select class-relevant patches as exemplars. Green bars are the MAEs of using the class prototype and error predictor to select optimal patches as exemplars.

Figure 7: Visualization results of our method in some multi-class examples. Our method selects patches according to the given class name, and the corresponding heatmap highlights the relevant areas.
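As a concrete reading of the heatmap-and-masking step used in the multi-class experiments above, the following is a minimal sketch (our illustration, not the authors' code); the cosine-similarity correlation and the \(0.5\) threshold are assumptions:

```python
import torch
import torch.nn.functional as F

def class_heatmap_mask(image_feat, exemplar_vec, threshold=0.5):
    """Correlate an exemplar feature vector with the image feature map and
    build a binary mask of class-relevant regions (schematic).

    image_feat:   [D, H, W] feature map of the query image.
    exemplar_vec: [D]       pooled feature vector of a selected exemplar.
    """
    # Cosine-similarity heatmap between the exemplar and each spatial location.
    heatmap = F.cosine_similarity(image_feat, exemplar_vec[:, None, None], dim=0)
    # Normalize to [0, 1] and mask out low-activation regions before counting.
    heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    return heatmap >= threshold                    # [H, W] boolean mask
```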
2302.04981
AutoNMT: A Framework to Streamline the Research of Seq2Seq Models
We present AutoNMT, a framework to streamline the research of seq-to-seq models by automating the data pipeline (i.e., file management, data preprocessing, and exploratory analysis), automating experimentation in a toolkit-agnostic manner, which allows users to use either their own models or existing seq-to-seq toolkits such as Fairseq or OpenNMT, and finally, automating the report generation (plots and summaries). Furthermore, this library comes with its own seq-to-seq toolkit so that users can easily customize it for non-standard tasks.
Salvador Carrión, Francisco Casacuberta
2023-02-09T23:42:30Z
http://arxiv.org/abs/2302.04981v1
# AutoNMT: A Framework to Streamline the Research of Seq2Seq Models

###### Abstract

We present AutoNMT1, a framework to streamline the research of seq-to-seq models by automating the data pipeline (i.e., file management, data preprocessing, and exploratory analysis), automating experimentation in a toolkit-agnostic manner, which allows users to use either their own models or existing seq-to-seq toolkits such as Fairseq or OpenNMT, and finally, automating the report generation (plots and summaries). Furthermore, this library comes with its own seq-to-seq toolkit so that users can easily customize it for non-standard tasks.

Footnote 1: [https://github.com/salvacarrion/autonmt](https://github.com/salvacarrion/autonmt)

## 1 Introduction

The performance of NMT models is often greatly affected by decisions such as the normalization used, the tokenization, the size of the vocabulary, the model, etc. These decisions must be remembered at both training and inference time so that the model can perform as expected. In addition, it is very common to train and evaluate these models on multiple datasets, which makes the research of seq-to-seq models harder than necessary. Furthermore, in research it is very common to prototype ideas rapidly and write throwaway code, which in complex pipelines can lead to small errors that easily go unnoticed. On top of that, researchers often spend countless hours on time-consuming tasks that are not strictly related to their research, such as writing boilerplate code, debugging errors, and creating charts.

To address these challenges, we have built AutoNMT, a Python framework that makes the research of seq-to-seq models easier and therefore allows researchers to spend more time on their ideas. This framework aims to automate as many tasks of the typical seq-to-seq pipeline as possible, without imposing further constraints on the researcher's workflow, by:

* Automating the data pipeline (i.e., file management, data preprocessing, and exploratory analysis).
* Automating the experimentation in a toolkit-agnostic environment so that users can use their own models, vocabularies, and toolkits (e.g., Fairseq, OpenNMT, HuggingFace, etc.).
* Managing reporting, logging, and versioning.

## 2 Related Work

In recent years, machine learning has enjoyed great popularity thanks to scientific advances in the field, which in many cases have materialized into products such as Keras (Chollet et al., 2015), Tensorflow (Abadi et al., 2015), PyTorch (Paszke et al., 2019), HuggingFace (Wolf et al., 2019), and Scikit-learn (Pedregosa et al., 2011) that make the lives of many scientists easier by allowing them to research more efficiently. Under this premise, many products and libraries have appeared to streamline the workflow of engineers, researchers, and developers. For example, Fairseq (Ott et al., 2019) and OpenNMT (Klein et al., 2017) were solutions to deal with the complex training pipelines in Neural Machine Translation; HuggingFace (Wolf et al., 2019) focused on democratizing NLP; AutoML and Auto-Sklearn (Feurer et al., 2015) put their focus on automating the training and evaluation of specific machine learning models; Ray Tune (Liaw et al., 2018) targeted experiment execution and hyperparameter tuning at any scale; ONNX (Bai et al., 2019) was designed to solve model interoperability; and SentencePiece (Kudo and Richardson, 2018) delivered an efficient implementation of many text tokenizers.
Inspired by these libraries, we decided to go one step further and build a new tool to streamline the research of seq-to-seq models, building on top of these well-tested libraries: PyTorch (Paszke et al., 2019), SentencePiece (Kudo and Richardson, 2018), Sacremoses (Koehn et al., 2007), SacreBleu (Post, 2018), Fairseq (Ott et al., 2019), and OpenNMT (Klein et al., 2017), among many others.

## 3 AutoNMT Framework

The core of this library is composed of three components: the _DatasetBuilder_, the _Meta-Trainer_, and the utility to generate automatic reports.

### Dataset Builder

The _DatasetBuilder_ is the class in charge of managing data and generating new datasets. First, it keeps the datasets, vocabularies, statistics, models, graphs, and reports organized. Second, it generates new dataset variants on demand, using parameters such as:

* **Normalizations**: NFD, NKFT, Strip, StripAccents, LowerCase, Replace,...
* **Tokenizations**: Bytes, Chars, Chars+Bytes, Unigram, Unigram+Bytes, BPE, BPE+Bytes, Words, Words+Bytes, and None2.
* **Vocabulary sizes**: List of maximum vocabulary sizes (32K, 16K, 8K,...)
* **Training sizes**: Limits the number of sentences in the training set (10M, 1M, 100K,...)

Figure 1: **Data pipeline**: Workflow of the _DatasetBuilder_ component.

The workflow of this component can be seen in Figure 1. The data pipeline starts by checking if there are files to process (raw or splits); if no files are found, the component will ask the user3 whether it can create the directories where the user is expected to put the datasets so that they can be found later.

Footnote 3: The interactive flag must be set to _True_ (default)

To use this component, the user only needs to put the datasets (i.e., raw or split files) into their corresponding folders (these folders will be created interactively from the specified base path). After that, the _DatasetBuilder_ will be able to index and preprocess all datasets automatically. For example, in Figure 2 we can see the code for a _DatasetBuilder_ instance that will generate a total of 36 dataset variants (18 for Multi30K4 and another 18 for Europarl5), corresponding to different datasets, languages, sizes, tokenizations, and vocabularies; a sketch of this cross-product is shown below.

Footnote 4: Multi30K: 1 language x 2 sizes (training set limit) x (2+3) subword models (with 3 and 1 vocab sizes)

Footnote 5: Europarl: 2 languages x 1 size (training set limit) x (2+3) subword models (with 3 and 1 vocab sizes)

Furthermore, the _DatasetBuilder_ will handle part of the exploratory analysis by creating the plots, stats, and reports that are typically used to describe the datasets, such as tokens per partition, sentence length distributions, token frequency, max/min/avg sentence length, unknowns per sentence, etc. (See Figure 3).

Figure 3: **Automatic Exploratory Analysis**: AutoNMT automatically generates basic reports, statistics and plots to describe the datasets and its vocabularies.

Figure 2: **DatasetBuilder**. This code will create (1x2 + 2x1) datasets with (2x3 + 3x1) variations each to explore the effects of these settings in our models.
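To make the footnote arithmetic concrete, the cross-product behind these 36 variants can be sketched as follows. This is an illustration only: the dictionary keys and values are hypothetical and do not reproduce the library's actual configuration API (shown in Figure 2).

```python
from itertools import product

# Hypothetical configuration mirroring footnotes 4 and 5; names are illustrative.
datasets = [
    {"name": "multi30k", "languages": ["de-en"], "sizes": ["original", "100k"]},
    {"name": "europarl", "languages": ["de-en", "cs-en"], "sizes": ["original"]},
]
# (2 subword models x 3 vocab sizes) + (3 subword models x 1 setting) = 9 setups.
subword_setups = list(product(["unigram", "bpe"], [32000, 16000, 8000])) \
               + [(sw, None) for sw in ["word", "char", "bytes"]]

variants = [
    (d["name"], lang, size, sw, vocab)
    for d in datasets
    for lang, size in product(d["languages"], d["sizes"])
    for sw, vocab in subword_setups
]
print(len(variants))  # (1*2 + 2*1) dataset splits x 9 variations = 36
```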
### Meta-Trainer

The Meta-Trainer has two functions: training and evaluation (see Figure 4). First, it abstracts the seq-to-seq toolkit so that users can train their models using either their toolkit of preference (e.g., Fairseq, OpenNMT, AutoNMT) or a customized one that inherits from our toolkit. Second, it acts as a unified interface to evaluate each of the trained models against a large set of available metrics (BLEU (Papineni et al., 2002), chrF (Popovic, 2015), BERTScore (Zhang et al., 2019), COMET (Rei et al., 2020), BEER (Stanojevic and Sima'an, 2014), etc.).

The motivation for building this meta-trainer was mainly threefold:

* First, to automate the training and evaluation of the models using the datasets generated by the DatasetBuilder, without worrying about the inner workings of the data pipeline.
* Second, to automate the experimentation and, at the same time, allow users to use their preferred seq-to-seq toolkit through a minimal and unified interface.
* Third, to allow users to easily create customized models, trainings, and toolkits when the existing solutions cannot meet their needs. Besides, they can compare their implementation against other toolkits in a controlled environment to ease debugging.

#### 3.2.1 Training

Since this object acts as a wrapper that automates training in any of the supported toolkits, a user who is used to working with a toolkit such as Fairseq can simply instantiate the _FairSeqTranslator_ class and call its fit function to train a Fairseq model using the datasets generated by the _DatasetBuilder_ (see Figure 5a). However, if a user wants more control over the training, the models, and their data, they can simply replace the _FairSeqTranslator_ class with the _AutonmtTranslator_ class (see Figure 5b).

Figure 4: The **Meta-Trainer** has two functions: i) training in a toolkit-agnostic manner; and ii) evaluating the models against multiple metrics using a unified interface.

Figure 5: **Meta-Trainer**: This object abstracts the training component so that a user can train their models using our data pipeline with their preferred seq-to-seq toolkit.

In the case that a user wants to use a toolkit that is not supported, they only have to create a new object that inherits from the _BaseTranslator_ class and override the _preprocess_, _train_, and _translate_ methods, as sketched below. On the other hand, if these solutions cannot meet the user's requirements, the user can easily create or extend the existing classes to meet their needs while taking advantage of all the functionalities this library provides.
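A minimal sketch of such an extension point is shown below. Only the class and method names (_BaseTranslator_, _preprocess_, _train_, _translate_) come from the description above; the import path and signatures are assumptions for illustration.

```python
from autonmt.toolkits import BaseTranslator  # hypothetical import path

class MyToolkitTranslator(BaseTranslator):
    """Plug an unsupported seq-to-seq toolkit into the Meta-Trainer."""

    def preprocess(self, ds, **kwargs):
        # Convert the files indexed by the DatasetBuilder into the
        # tokenized/binarized format the external toolkit expects.
        ...

    def train(self, ds, **kwargs):
        # Launch the external toolkit's training loop on the preprocessed data.
        ...

    def translate(self, ds, beam_width=5, **kwargs):
        # Generate hypotheses for the evaluation splits so that the unified
        # evaluation interface can score them (BLEU, chrF, BERTScore, ...).
        ...
```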
#### 3.2.2 Evaluation

The Meta-Trainer can also be used to evaluate the trained models regardless of the toolkit used for their training. One of its main features is that, in addition to being able to evaluate the models on their own test sets, they can also be evaluated on all compatible datasets (e.g., same languages). This feature is particularly relevant for studying continual learning, domain-shift problems, model generalization capabilities, the effects of the catastrophic forgetting problem, etc. Another advantage of this component is that it allows the evaluation of models using lists of arguments with which to specify the metrics to be used (e.g., BLEU, chrF, BERTScore, COMET, BEER, etc.) or the translation settings (e.g., beam width).

### Reporting

Creating reports, summaries, and graphs is usually a very time-consuming task. Because of that, we decided to include a set of utilities that collect all available information6 about the datasets, models, configurations, trainings, and evaluations to ease their analysis and generate automatic graphs for the following use cases:

Footnote 6: Data and reports are saved as CSV and JSON so that other libraries can load them easily

* Evaluate the performance of a model using one or more metrics (See Figure 6(a))
* Evaluate the generalization capabilities of a model (See Figure 6(b))
* Evaluate multiple variables as a function of another (See Figure 7)

## 4 AutoNMT toolkit

Given the flexibility of the Meta-Trainer to support multiple toolkits, we decided to write our own toolkit (the AutoNMT toolkit) so that users can easily extend it to create new models, tasks, or non-standard trainings (e.g., specific data augmentations on the fly, custom teacher-student approaches, etc.).

Figure 6: **Automatic evaluation:** AutoNMT can evaluate each model using the test set of its own dataset (See Figure 6(a)) or all compatible test sets indexed by the _DatasetBuilder_ (See Figure 6(b)). This feature is particularly useful for studying continual learning or domain-shift problems.

As with any other toolkit, the AutoNMT toolkit simply inherits from the _BaseTranslator_ class and overrides its default methods (preprocess, train, and translate). However, the Trainer defined in this class inherits from the _LightningModule_ class, which allows us to create scalable models that can run on distributed hardware seamlessly. The main reason for using PyTorch-Lightning was to work within a well-known research framework that lets advanced users modify the code without having to learn the inner workings of our library. A second reason was to boost our Trainer with features such as data parallelization, distributed training, mixed precision, early stopping, logging, fault-tolerant training, etc.

#### 4.0.1 AutoNMT toolkit: comparison

In order to demonstrate the competitiveness of our toolkit, we compared it against reference toolkits such as Fairseq and OpenNMT. To do so, we trained multiple models with different configurations using these toolkits on the following datasets: Multi30K, Europarl (de-en), SciELO (Health), and SciELO (Biological), among others. Even though our toolkit is in active development and lacks some of the training features enabled (by default) in the reference toolkits, the performance was remarkably similar. For example, the average difference in performance across the experiments shown in Table 1 was 0.25 BLEU points. This table contains the results of the models trained using Fairseq and AutoNMT (Toolkit) on the SciELO datasets (Health and Biological) under different preprocessing configurations (two subword models, _Word_ and _Unigram+Bytes_, and two vocabulary constraints, 16000 and 8000 words). Similarly, more experiments were performed, and the differences in performance remained consistent across toolkits, datasets, and configurations. Concerning raw speed, the toolkit is fast enough to compete with these toolkits for the average researcher, given that most data parallelization modes, scaling strategies, and optimization features are available through the _PyTorch-Lightning_ module.

## 5 Use cases

### Automating experimentation

The most common use case for this library is automating the experimentation, from data preprocessing and training to evaluation and reporting.
Due to the empirical component of much of the research in the field of machine learning, repeating experiments under different configurations and with multiple datasets is standard practice to improve the robustness of our findings. As an example of this use case, after training the models, a user can simply call the _generate_report_ function to generate an automatic report similar to the one in Figure 6(a).

\begin{table}
\begin{tabular}{l l l r r r}
\hline \hline
**Train domain** & **Test domain** & **Subword model** & **Vocab. size** & **Fairseq BLEU** & **AutoNMT BLEU** \\
\hline
Health & Health & Word & 8,000 & 24.22 & 23.95 \\
Health & Health & Word & 16,000 & 25.00 & 25.36 \\
Biological & Biological & Word & 8,000 & 26.61 & 25.66 \\
Biological & Biological & Word & 16,000 & 28.31 & 27.64 \\
Health & Health & Unigram+Bytes & 8,000 & 28.41 & 29.09 \\
Health & Health & Unigram+Bytes & 16,000 & 26.68 & 26.82 \\
Biological & Biological & Unigram+Bytes & 8,000 & 32.78 & 32.00 \\
Biological & Biological & Unigram+Bytes & 16,000 & 31.12 & 30.62 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Toolkit comparison**: The performance of AutoNMT is remarkably similar to Fairseq even though AutoNMT was missing some relevant training features that were enabled by default in Fairseq.

### Studying domain-shift effects

Similar to the previous use case, if AutoNMT detects that each of the models has been evaluated on more than one dataset, it will generate a report similar to the one in Figure 6(b), which can be used to study the problem of domain adaptation.

### Finding the optimal vocabulary size

Sometimes we need to study the performance of our models as a function of one or more variables. To do so, we can make use of the _generate_multivariable_report_ function, which allows us to plot one or more variables as a function of another. For example, in Figure 7(a) we compare the performance of two models (Europarl-50K and Europarl-100K), measured in BLEU points, and the average number of tokens per sentence, as a function of the vocabulary size. Similarly, in Figure 7(b), we compare four models under similar settings.

### Studying the continual learning problem

Finally, we can also use the report generation from AutoNMT to study the continual learning problem in Machine Translation, or the performance of lifelong learning for seq-to-seq models in general. For example, in Figure 8 we have generated a report to visualize the effects of the catastrophic forgetting problem for a model trained on the SciELO (Health) dataset.

Figure 8: **Continual learning report**: AutoNMT can generate reports to visualize the performance of lifelong learning seq-to-seq models, or in this case, the effects of the catastrophic forgetting problem in Neural Machine Translation.

Figure 7: **Multivariable report**: AutoNMT can generate reports for one or more variables as a function of another. In this case, we compare the performance of two models and the average number of tokens as a function of the vocabulary size.

Another feature typically used in this scenario is filtering line pairs in the dataset by language, domain, or a tag placed at the beginning of the sentence; a minimal sketch of such filtering is shown below.
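For illustration, tag-based filtering of parallel line pairs can be as simple as the following sketch; the tag format and function name are hypothetical, not the library's API.

```python
def filter_by_tag(src_lines, tgt_lines, tag="<health>"):
    """Keep only the sentence pairs whose source line starts with `tag`,
    stripping the tag before training (hypothetical tag format)."""
    return [(s[len(tag):].lstrip(), t)
            for s, t in zip(src_lines, tgt_lines) if s.startswith(tag)]
```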
## 6 Conclusions

In this paper, we have introduced AutoNMT, a framework to streamline the research of seq-to-seq models through automation and abstraction. First, we presented the three core components of the framework: i) the _DatasetBuilder_, the component in charge of automating the data pipeline (file management and preprocessing); ii) the _Meta-Trainer_, the component in charge of automating the experimentation (training and evaluation) in a toolkit-agnostic manner; and iii) a utility to generate automatic reports. Finally, we presented our seq-to-seq toolkit, the AutoNMT Toolkit, along with a performance comparison against state-of-the-art toolkits and four use cases where this framework can typically be used.

## 7 Future Work

In future work, AutoNMT will support more features, toolkits, and tasks.

### Acknowledgment

Work supported by the Horizon 2020 - European Commission (H2020) under the SELENE project (grant agreement no 871467) and the project Deep learning for adaptive and multimodal interaction in pattern recognition (DeepPattern) (grant agreement PROMETEO/2019/121). We gratefully acknowledge the support of NVIDIA Corporation with the donation of a GPU used for part of this research.
2304.00869
GreekBART: The First Pretrained Greek Sequence-to-Sequence Model
The era of transfer learning has revolutionized the fields of Computer Vision and Natural Language Processing, bringing powerful pretrained models with exceptional performance across a variety of tasks. Specifically, Natural Language Processing tasks have been dominated by transformer-based language models. In Natural Language Inference and Natural Language Generation tasks, the BERT model and its variants, as well as the GPT model and its successors, demonstrated exemplary performance. However, the majority of these models are pretrained and assessed primarily for the English language or on a multilingual corpus. In this paper, we introduce GreekBART, the first Seq2Seq model based on BART-base architecture and pretrained on a large-scale Greek corpus. We evaluate and compare GreekBART against BART-random, Greek-BERT, and XLM-R on a variety of discriminative tasks. In addition, we examine its performance on two NLG tasks from GreekSUM, a newly introduced summarization dataset for the Greek language. The model, the code, and the new summarization dataset will be publicly available.
Iakovos Evdaimon, Hadi Abdine, Christos Xypolopoulos, Stamatis Outsios, Michalis Vazirgiannis, Giorgos Stamou
2023-04-03T10:48:51Z
http://arxiv.org/abs/2304.00869v1
# GreekBART: The First Pretrained Greek Sequence-to-Sequence Model

###### Abstract

The era of transfer learning has revolutionized the fields of Computer Vision and Natural Language Processing, bringing powerful pretrained models with exceptional performance across a variety of tasks. Specifically, Natural Language Processing tasks have been dominated by transformer-based language models. In Natural Language Inference and Natural Language Generation tasks, the BERT model and its variants, as well as the GPT model and its successors, demonstrated exemplary performance. However, the majority of these models are pretrained and assessed primarily for the English language or on a multilingual corpus. In this paper, we introduce GreekBART, the first Seq2Seq model based on the BART-base architecture and pretrained on a large-scale Greek corpus. We evaluate and compare GreekBART against BART-random, Greek-BERT, and XLM-R on a variety of discriminative tasks. In addition, we examine its performance on two NLG tasks from GreekSUM, a newly introduced summarization dataset for the Greek language. The model, the code, and the new summarization dataset will be publicly available.

## 1 Introduction and Related Work

The field of machine learning has entered a new era with the establishment of transfer learning, providing new possibilities, especially in the areas of Computer Vision (Krizhevsky et al., 2017) and Natural Language Processing. Transfer learning has become so prevalent that it is now uncommon to train a model for computer vision or natural language processing tasks from scratch, which also alleviates the issue of insufficient training data in real-world machine learning applications. Tasks are solved by reusing pretrained models that were trained on enormous amounts of data, and the resulting models have reached state-of-the-art performance.

Transformer-based (Vaswani et al., 2017) pretrained models, such as BERT (Devlin et al., 2019) and its variants, are broadly used in Natural Language Processing, as they have been shown to be effective in many tasks. BART (Lewis et al., 2020) is a denoising auto-encoder for pretraining sequence-to-sequence models. It is trained by corrupting text with an arbitrary noising function and learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture: a standard seq2seq architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT (Radford et al., 2018)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like GPT2 (Radford et al., 2019). The unsupervised pretrained BART learns a language model that can then be adapted to a particular NLP task, so large-scale labeled datasets are not required for fine-tuning. This type of model is suitable for machine translation, question answering, and especially text summarization; that does not mean BART is weak in sequence classification tasks, as, on the contrary, it is also quite effective in such tasks.

In the last few years, a lot of research has been conducted on languages other than English.
For instance, CamemBERT (Martin et al., 2020) and BARThez (Kamal Eddine et al., 2021) for French language, CAMeLBERT (Inoue et al., 2021) and AraBART (Eddine et al., 2022) for Arabic language, BART for Japanese language (Kim and Komachi, 2021), BETO (Canete et al., 2020) and NASes (Ahuir et al., 2021) for Spanish and Catalan languages, and BARTpho (Tran et al., 2021) for Vietnamese language. Recently, a variety of multilingual language models have been presented, covering multiple languages by being pretrained on a large-scale corpus of different languages, trying to learn the language model of multiple languages at once. Notably, M-BERT (Devlin et al., 2019) is a case of a multilingual pretrained language model, which consists of the multilingual version of BERT, pretrained in the top 100 languages with the largest Wikipedias. Another case of a popular multilingual model is the XLM Conneau and Lample (2019) which is a transformer-based multilingual language model pretrained on Wikipedias of 15 languages. This model was trained in two auxiliary tasks, Masked Language Modeling, and the Translation Language Modeling task. Training a cross-lingual language model can be very beneficial for low-resource languages, as all languages are processed with the same shared vocabulary. Conneau et al. (2020) introduced XLM-R, an improved version of XLM based on the RoBERTa model. The model was trained with a cross-lingual masked language modeling objective on 2.5TB data in 100 languages from Common Crawl Wenzek et al. (2020); Conneau et al. (2020), increasing the amount of training available data for low-resource languages by two orders of magnitude on average. Finally, mBART Liu et al. (2020) is the multilingual version of BART and it is pretrained on a subset of 25 languages from the same dataset as XLM-R. In mBART, we use its 250K sentencepiece Kudo and Richardson (2018) model which was trained using monolingual data for 100 languages from XLM-R, supporting languages beyond the original 25 mBART was trained on. The parameters of mBART25 are roughly 610M. Later, an extension of mBART in additional 25 languages (_e.g._ total 50 languages) was proposed, mBART50 Tang et al. (2020), increasing the number of parameters to approximately 680M. Except for mBART and mBART50, all other aforementioned multilingual models support the Greek language. mBART25 and mBART50 are not pretrained on modern Greek, but it is included in their vocabulary. Nevertheless, multilingual models cannot compete with the performance of monolingual models in most NLP tasks. In the last months, another related model to BART that is in the spotlight of the NLP research area is ChatGPT 1. ChatGPT 2 is built on top of GPT-3 architectureBrown et al. (2020), so it is a transformer-based language model that has been pretrained on massive amounts of text data and fine-tuned for conversational AI applications. Like BART, ChatGPT is capable of generating high-quality sequences of text, making it suitable for tasks such as text summarization and question answering. However, unlike BART, ChatGPT is specifically designed for conversational applications, making it well-suited for chatbots and other dialogue systems. In addition, ChatGPT's architecture is unidirectional, which means that it can generate text in a left-to-right sequence, making it more suitable for tasks such as language generation and dialogue. 
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)

Compared to widely spoken languages, Greek has fewer linguistic resources available. In particular, research on deep learning models for Greek is still underdeveloped. However, there have been some efforts to develop datasets, models, knowledge bases, and frameworks for Greek NLP. Outsios et al. (2018) presented the production of Greek word embeddings, using a large corpus of about 50GB (containing 120 million sentences) crawled from about 20 million URLs. Later, Lioudakis et al. (2020) presented an ensemble method, Continuous Bag-of-Skip-grams, for extracting word representations for Greek. Recently, Koutsikakis et al. (2020) introduced Greek-BERT, the first transformer-based language model for the Greek language, based on BERT. The model was pretrained on a dataset of 29GB, achieving state-of-the-art performance on several NLP tasks in Greek. It is worth noting that Papantoniou and Tzitzikas (2020) have provided a thorough survey of the work that has been conducted in NLP for the Greek language.

In this contribution, we address two issues: multilingual models are not sufficient to compete with monolingual ones, and few deep learning models are available for the Greek language. Thus, we propose the first pretrained Seq2Seq monolingual model for the Greek language. The model is called GreekBART, as we pretrained the BART-base architecture on a large monolingual Greek corpus. Despite the existence of Greek-BERT Koutsikakis et al. (2020), our model goes beyond its capabilities by focusing on generative tasks. GreekBART is evaluated on two different generative tasks and on four discriminative tasks. Our main contributions are:

* We introduce the first pretrained Seq2Seq model for the Greek language, based on the BART-base architecture Lewis et al. (2020) and pretrained on a large corpus of 87.6 GB. We examine the performance of our model on four discriminative tasks (_i.e._ two classification tasks, one sentiment analysis task, and one Natural Language Inference task) and on two generative tasks.
* We present the first summarization dataset in Greek, GreekSUM, introducing two generative tasks and a classification task by processing this dataset.
* We compare GreekBART against popular language models, whether pretrained on Greek or not. For the discriminative tasks, we compare our model with a BART-random model, Greek-BERT (Koutsikakis et al., 2020), and XLM-R (Conneau et al., 2020). We also inspect the differences, in terms of performance, between GreekBART (_i.e._ our model), the BART-random model, mBART25 (Liu et al., 2020), and mBART50 (Tang et al., 2020) on two novel generative tasks.
* We will publish our code and models2, providing access to everyone who wants to further extend the applications of our work or build on our contributions.

Footnote 2: [https://github.com/iakovosevdaimon/GreekBART](https://github.com/iakovosevdaimon/GreekBART)

## 2 GreekBART

Our proposed model is based on BART (Lewis et al., 2020), a denoising auto-encoder. We use the _BASE_ architecture, with 6 encoder and 6 decoder layers. The model uses 768 hidden dimensions and 12 attention heads in both the encoder and the decoder, and a normalization layer is added on top of both the encoder and the decoder (Liu et al., 2020).
The purpose of these additional layers is to stabilize the training when FP16 precision (Micikevicius et al., 2017) is applied; the use of FP16 precision speeds up the pretraining of the model. In total, our model has roughly 181M parameters. Generally, we followed a methodology similar to that of Kamal Eddine et al. 2021, in which a monolingual model in a language other than English is pretrained following the BART (Lewis et al., 2020) and mBART (Liu et al., 2020) methodologies.

### Pretraining corpus

The pretraining corpus is composed of the following corpora: (a) the Greek part of Wikipedia3; (b) the Greek part of the European Parliament Proceedings Parallel Corpus (EuroParl)4 (Koehn, 2005); (c) the Greek part of OSCAR5 (Abadji et al., 2022), a clean version of CommonCrawl6; (d) the Greek Web Corpus, crawled from about 20 million Greek-language URLs7 (Outsios et al., 2018). In particular, we use the same datasets as the Greek-BERT (Koutsikakis et al., 2020) model, also including the dataset of Outsios et al. 2018 in order to have a larger corpus that is well suited for pretraining a BART model. Moreover, by choosing these datasets we cover a wide variety of Greek language domains, including formal and informal text, news articles, encyclopedic information, and political conversations. This diverse range of text types helps to ensure that the pretraining of the BART model is robust and able to handle different styles and registers of Greek language use. Overall, the choice of datasets helps to ensure that the Greek BART model is well equipped to handle a wide range of natural language processing tasks in the Greek language.

Footnote 3: [https://dumps.wikimedia.org/elwiki/](https://dumps.wikimedia.org/elwiki/)

Footnote 4: [https://www.statmt.org/europarl/](https://www.statmt.org/europarl/)

We preprocessed each of the aforementioned corpora by removing URLs, emojis, tags, and hashtags. We also removed comments and some noisy sentences that we observed, which do not provide any additional contextual meaning. The noisy sentences differ from dataset to dataset, so we had to detect them “manually”. Furthermore, for all corpora except Wikipedia, we discarded documents containing fewer than one thousand characters; in the case of Wikipedia, we removed documents with fewer than thirty characters. Generally, we did not remove non-Greek characters, because we assumed they would not prevent GreekBART from learning the language model, as their amount is insignificant. We deduplicated each corpus, concatenated all of them into one corpus, and then deduplicated the merged dataset a final time. The deduplication process was done using the runiq package8. To generate our vocabulary, we used SentencePiece9 (Kudo and Richardson, 2018), which implements byte-pair encoding (BPE) (Sennrich et al., 2016), so no pre-tokenization was necessary. We fixed the size of the vocabulary to 50K sub-words, and the SentencePiece model was trained on a 20GB random sample of the pretraining corpus. We set the character coverage to \(99.95\%\). The total corpus size was 76.9/87.6GB before/after SentencePiece tokenization.

### Training details

We adhere to the same pretraining process as BART. Thus, GreekBART tries to reconstruct a corrupted input by minimizing the cross-entropy loss between the decoder's output and the original input. Two types of noise are applied to the input text.
First, we employ the text infilling technique, where a number of text spans are replaced by a special token, called [MASK], masking \(30\%\) of the text. A Poisson distribution with \(\lambda=3.5\) is used to determine the spans' lengths. Sentence permutation is the second perturbation method, where the sentences of the input document are shuffled randomly.

We pretrained GreekBART on Jean Zay, using a batch size of 768000 tokens per GPU, as we set the update frequency to 128. We used the Adam optimizer Kingma and Ba (2015) with \(\epsilon=10^{-6},\beta_{1}=0.9\), and \(\beta_{2}=0.999\), with a learning rate starting from \(6.10^{-4}\) and decreasing linearly as a function of the training step. We used a warm-up of \(6\%\) of the total number of training steps. In the first 12 epochs, we fixed the dropout to 0.1; for epochs 12 to 16 we decreased it to 0.05; and finally, we set it to zero for epochs 16 to 20. All experiments were carried out using the Fairseq library10 (Ott et al., 2019).

Footnote 10: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq)
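A self-contained sketch of the two perturbations is given below. It is a schematic reading of the description above, not the actual Fairseq implementation: it assumes whitespace tokenization for readability, whereas the real pipeline operates on SentencePiece tokens.

```python
import random
import numpy as np

def noise_document(sentences, mask_ratio=0.30, lam=3.5, mask_token="[MASK]"):
    """Sentence permutation followed by Poisson-length text infilling."""
    random.shuffle(sentences)                       # sentence permutation
    tokens = " ".join(sentences).split()
    n_to_mask = int(mask_ratio * len(tokens))
    masked = 0
    while masked < n_to_mask and len(tokens) > 1:
        span = min(np.random.poisson(lam), n_to_mask - masked)
        start = random.randrange(0, max(1, len(tokens) - span))
        tokens[start:start + span] = [mask_token]   # whole span -> one [MASK]
        masked += max(span, 1)  # a 0-length span still inserts a mask token
    return " ".join(tokens)
```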
## 3 GreekSUM

Transformer-based Seq2Seq models, including BART, can perform not only extractive but also abstractive summarization. This type of summarization is one of the most central and challenging evaluation tasks in NLP. However, no summarization dataset is available for the Greek language. Therefore, we created the first such dataset, well suited to the abstractive summarization task.

### Motivation

Our main goal was to create a Greek equivalent of the OrangeSum dataset11 (Kamal Eddine et al., 2021) and the XSum dataset Narayan et al. (2018). OrangeSum was produced by scraping articles, their single-sentence titles, and their brief abstracts from the “Orange Actu” website12. The title and the abstract of each article are written by the author of the article. Models that perform well on OrangeSum, as well as on XSum, require a high degree of abstractivity.

Footnote 11: [https://github.com/Tixierae/OrangeSum](https://github.com/Tixierae/OrangeSum)

Footnote 12: [https://actu.orange.fr/](https://actu.orange.fr/)

### Data collection

We followed a similar approach, scraping the “News24/7” website13. News24/7 is one of the leading news websites in Greece, part of the 24 MEDIA digital publishing group14. We collected data from web pages spanning October 2007 to June 2022, covering five major categories: politics, society, economy, culture, and world. Each article has a one-sentence title and a succinct abstract; both features were extracted, yielding two summarization tasks: GreekSUM Title and GreekSUM Abstract. The average lengths of these two novel tasks' gold summaries are 9.95 and 24.55 words, respectively (see Table 2).

Footnote 13: [https://www.news247.gr/](https://www.news247.gr/)

Footnote 14: [https://www.24media.gr/](https://www.24media.gr/)

### Post-processing

Initially, we filtered the scraped pages, removing all empty articles and articles whose titles were shorter than 2 words or whose abstracts were shorter than 5 words. Secondly, we filtered out duplicated articles (_i.e._ articles with the same body, title, or abstract), as an article can belong to more than one category and thus be crawled multiple times. Finally, we noticed that several abstracts looked more like introductions than actual summaries of the article. Therefore, we eliminated the 10% of articles with the highest proportion of novel unigrams in their abstracts, which corresponded to a threshold of 46.7% novel unigrams. For both proposed summarization tasks, we reserved 10k pairs for testing, 10k for validation, and all the remaining pairs for training. The released GreekSUM dataset can be reproduced using our code15.

Footnote 15: [https://github.com/iakovosevdaimon/GreekSUM](https://github.com/iakovosevdaimon/GreekSUM)

\begin{table}
\begin{tabular}{|l|c|c|}
\hline
**Corpus** & **Size before deduplication** & **Size after deduplication** \\
\hline
OSCAR & \(51.7\) & \(44.6\) \\
Greek Web Corpus & \(38.4\) & \(30.9\) \\
Wikipedia & \(0.9\) & \(0.9\) \\
EuroParl & \(0.5\) & \(0.5\) \\
\hline
**Total** & \(91.5\) & \(76.9\) \\
\hline
\end{tabular}
\end{table}
Table 1: Datasets which constitute the GreekBART pretraining corpus (sizes in GB, before and after cleaning and deduplication).

### Analysis

Table 2 compares GreekSUM with OrangeSum, XSum, and the well-known CNN, DailyMail, and NY Times datasets Hermann et al. (2015). We can observe that the GreekSUM and OrangeSum datasets are very similar in terms of average document and summary lengths. Also, GreekSUM has a scale similar to that of XSum. Inspecting Table 3, it is noticeable that the extractive methods (_i.e._ LEAD and EXT-ORACLE) do not perform very well on GreekSUM; thus, our dataset is less biased towards extractive models. Given the poor performance of the two extractive methods, GreekSUM appears more abstractive than the traditional summarization datasets (_i.e._ CNN, DailyMail, NY Times). However, the summaries and the titles of GreekSUM do not display as high a degree of novelty as those of OrangeSum and XSum: the GreekSUM dataset has 20.6% novel unigrams in the abstracts and 26.7% novel unigrams in the titles, compared with 30% in OrangeSum Abstract, 26.5% in OrangeSum Title, and 35.7% in XSum. Therefore, we can conclude that the summaries of GreekSUM are not as abstractive as we would like them to be.
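The novel n-gram proportion used both for the post-processing filter (Section 3.3) and for the abstractivity figures in Table 3 can be sketched as follows; this set-based version is our minimal illustration, not the authors' exact implementation:

```python
def novel_ngram_ratio(summary_tokens, doc_tokens, n=1):
    """Fraction of the summary's n-grams that never appear in the document."""
    ngrams = lambda toks: {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    summ = ngrams(summary_tokens)
    return len(summ - ngrams(doc_tokens)) / len(summ) if summ else 0.0

# Filtering rule from Section 3.3: drop articles whose abstracts exceed the
# 46.7% novel-unigram threshold (the 10% most "introduction-like" abstracts).
keep = lambda doc, abstract: novel_ngram_ratio(abstract, doc, n=1) <= 0.467
```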
## 4 Experiments

In this section, we present the results of all our experiments. We have two types of downstream tasks: discriminative tasks and summarization tasks. For the discriminative tasks, we compare GreekBART with BART-random, Greek-BERT Koutsikakis et al. (2020), and the XLM-R model Conneau et al. (2020). Except for BART-random, these models are already pretrained on the Greek language, so we evaluate the performance of our model against the current state-of-the-art monolingual model pretrained only on Greek as well as against a widely used multilingual model. We fine-tuned all the above-mentioned models on the downstream tasks. For the summarization tasks, we compare GreekBART, BART-random, and the two versions of mBART Liu et al. (2020); Tang et al. (2020). mBART25 and mBART50 are built upon the _LARGE_ architecture of BART and are pretrained on 25 and 50 languages respectively, excluding the Greek language; therefore, we performed zero-shot learning for the summarization task. On the other hand, the BART-random model uses the same architecture and vocabulary as GreekBART; however, it is trained from scratch on the downstream tasks.

\begin{table}
\begin{tabular}{|l|c|c c|c c|c c|}
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{train/val/test} & \multicolumn{2}{c|}{avg. doc length} & \multicolumn{2}{c|}{avg. summary length} & \multicolumn{2}{c|}{vocabulary size} \\
 & & words & sentences & words & sentences & docs & summaries \\
\hline
CNN & \(90.3/1.22/1.09\) & \(760.50\) & \(33.98\) & \(45.70\) & \(3.58\) & \(34\) & \(89\) \\
DailyMail & \(197/12.15/10.40\) & \(653.33\) & \(29.33\) & \(54.65\) & \(3.86\) & \(564\) & \(180\) \\
NY Times & \(590/32.73/32.73\) & \(800.04\) & \(35.55\) & \(45.54\) & \(2.44\) & \(1233\) & \(293\) \\
\hline
XSum & \(204/11.33/11.33\) & \(431.07\) & \(19.77\) & \(23.26\) & \(1.00\) & \(399\) & \(81\) \\
OrangeSum Title & \(30.6/1.5/1.5\) & \(315.31\) & \(10.87\) & \(11.42\) & \(1.00\) & \(483\) & \(43\) \\
OrangeSum Abstract & \(21.4/1.5/1.5\) & \(350\) & \(12.06\) & \(32.12\) & \(1.43\) & \(420\) & \(71\) \\
GreekSUM Title & \(146.04/10/10\) & \(355.49\) & \(14.26\) & \(9.95\) & \(1.05\) & \(663\) & \(91\) \\
GreekSUM Abstract & \(129.159/10/10\) & \(368.97\) & \(14.76\) & \(24.55\) & \(1.46\) & \(629\) & \(127\) \\
\hline
\end{tabular}
\end{table}
Table 2: Sizes (column 2) are given in thousands of documents. Document and summary lengths are in words, while vocabulary sizes are in thousands of tokens.

\begin{table}
\begin{tabular}{|l|c c c c|c c c|c c c|}
\hline
\multirow{2}{*}{Dataset} & \multicolumn{4}{c|}{\% of novel n-grams in gold summary} & \multicolumn{3}{c|}{LEAD} & \multicolumn{3}{c|}{EXT-ORACLE} \\
 & unigrams & bigrams & trigrams & 4-grams & R-1 & R-2 & R-L & R-1 & R-2 & R-L \\
\hline
CNN & \(16.75\) & \(54.33\) & \(72.42\) & \(80.37\) & \(29.15\) & \(11.13\) & \(25.95\) & \(50.38\) & \(28.55\) & \(46.58\) \\
DailyMail & \(17.03\) & \(53.78\) & \(72.14\) & \(80.28\) & \(40.68\) & \(18.36\) & \(37.25\) & \(55.12\) & \(30.55\) & \(51.24\) \\
NY Times & \(22.64\) & \(55.59\) & \(71.93\) & \(80.16\) & \(31.85\) & \(15.86\) & \(23.75\) & \(52.08\) & \(31.59\) & \(46.72\) \\
\hline
XSum & \(35.76\) & \(83.45\) & \(95.50\) & \(98.49\) & \(16.30\) & \(1.61\) & \(11.95\) & \(29.79\) & \(8.81\) & \(22.65\) \\
OrangeSum Title & \(26.54\) & \(66.70\) & \(84.18\) & \(91.12\) & \(19.84\) & \(08.11\) & \(16.13\) & \(31.62\) & \(17.06\) & \(28.26\) \\
OrangeSum Abstract & \(30.03\) & \(67.15\) & \(81.94\) & \(88.3\) & \(22.21\) & \(07.00\) & \(15.48\) & \(38.36\) & \(20.87\) & \(31.08\) \\
GreekSUM Title & \(26.7\) & \(67.9\) & \(84.5\) & \(91.4\) & \(14.68\) & \(04.46\) & \(14.37\) & \(23.36\) & \(07.39\) & \(23.12\) \\
GreekSUM Abstract & \(20.6\) & \(50.8\) & \(65.3\) & \(73.0\) & \(17.11\) & \(06.17\) & \(16.69\) & \(34.18\) & \(14.17\) & \(33.93\) \\
\hline
\end{tabular}
\end{table}
Table 3: Degree of abstractivity of GreekSUM compared with that of other datasets. It shows that GreekSUM follows XSum and OrangeSum in being more abstractive than traditional summarization datasets.

### Discriminative tasks

In addition to generative tasks, the BART model also achieves remarkable results in discriminative tasks Lewis et al. (2020). In the case of sequence classification, a classification head is added on top of the model, and the input is fed into both the encoder and the decoder. The representation of the final decoder token is used by the newly introduced multi-class linear classifier. We examine the performance of the models (_i.e._ Greek-BERT, XLM-R, BART-random, GreekBART) on four discriminative tasks. More precisely, we evaluate our model on two classification tasks, one sentiment analysis task, and a Natural Language Inference (NLI) task.
#### 4.1.1 Training details

In all experiments, we fine-tuned the models with a learning rate chosen from \(\{10^{-4},5.10^{-5},10^{-5}\}\), based on the best validation score. We repeat each experiment 3 times with different seeds and record the mean and standard deviation of the accuracy on the test set of each aforementioned task.

#### 4.1.2 NCC task (News Category Classification task)

For the first classification task, we used the novel summarization dataset (GreekSUM, see Section 3), which we scraped from the news website News24/716. We considered the five distinct subjects that an article may fall into: politics, society, economy, culture, and world. These categories serve as labels for the classification task that our model is being trained to perform. Essentially, the model is fed the content of an article and learns to predict which category (_i.e._ subject) it belongs to. We fine-tuned all examined models for 5 epochs, using a batch size of 32. For the XLM-R model we set the learning rate to \(5.10^{-5}\), while for the rest of the models the learning rate is \(10^{-4}\). The training set consists of 146,046 samples, whereas both the validation and the test set have 10,000 instances, exactly like the two summarization datasets (_i.e._ GreekSUM Abstract and GreekSUM Title).

Footnote 16: [https://www.news247.gr/](https://www.news247.gr/)

For the second classification task, we used the Greek classification dataset proposed by Lioudakis et al. (2020), which was created from articles of the Makedonia newspaper. The dataset contains 8005 articles from 18 different categories: Sports, Reportage, Economy, Politics, International, Television, Arts-Culture, Letters, Opinions, Interviews, Weather, Society, Advertisements, Biographies, Others, Articles, Police, and Zodiacs. We reserved 70% of the dataset for training and the remaining 30% for validation and test. Thus, the training set consists of 5610 samples, whereas the test set and the validation set consist of 1191 and 1204 instances, respectively. All models are fine-tuned for 20 epochs, with a batch size of 16 and a learning rate of \(5.10^{-5}\). Due to the small size of the dataset, we trained the models for more epochs and with smaller batch sizes.

#### 4.1.3 Natural Language Inference

The Cross-lingual Natural Language Inference Corpus (XNLI) Conneau et al. (2018) contains pairs of sentences. The objective of this task is to determine whether the first sentence, known as the premise, entails, contradicts, or is neutral with respect to the second sentence, referred to as the hypothesis. The XNLI corpus contains 5,000 test and 2,500 validation pairs, and 340k training pairs from the MultiNLI corpus Williams et al. (2018). The dataset has been translated from English into 14 languages, including Greek. Unfortunately, a large number of the training pairs are of extremely poor quality, as they were produced by machine translation; this condition may affect the performance of the models. We fine-tuned for 5 epochs, using a batch size of 32 and a learning rate of \(5.10^{-5}\).

#### 4.1.4 Sentiment Analysis task

We used a publicly available sentiment analysis dataset17 of Greek movie reviews. We pre-processed the dataset mainly by removing emojis and hashtags. Each instance consists of a review and a rating. To distinguish between positive and negative reviews, we established a threshold of 3 out of 5.
Ratings above this threshold were categorized as positive reviews, while those at or below 3 out of 5 were classified as negative reviews. In an effort to create a balanced dataset, we aimed to include a similar number of positive and negative reviews. For the purpose of our task, we only retained the reviews and the ratings, discarding any additional information. We split the dataset into train, validation, and test sets. The train set consists of 104,157 samples, while the validation and test sets contain 22,320 and 22,318 instances, respectively. We set the learning rate and the batch size to \(5.10^{-5}\) and 16, respectively, and fine-tuned the models for 5 epochs.

#### 4.1.5 Results

Table 4 reports the test set accuracy on the four different tasks. We compare our model with Greek-BERT Koutsikakis et al. (2020), XLM-R Conneau et al. (2020), and BART-random. For all models, the corresponding _BASE_ architecture is used. Among the models, we observe that GreekBART is the best in almost all discriminative tasks, except for the sentiment analysis task, where Greek-BERT achieved the best performance. Generally, it is common for BERT models to perform better than BART models in this kind of task. The performance of our model (_i.e._ GreekBART) verifies the finding of the BART paper Lewis et al. (2020) that models based on this architecture perform well on both generative and discriminative tasks.

### Summarization

We evaluate our model on two distinct summarization tasks, in which the model learns to predict the title and the abstract of an article based on its content. In both generative tasks, GreekBART was fine-tuned for 30 epochs with a learning rate of \(5.10^{-5}\) that was warmed up for 6% of the training steps and then decreased linearly to 0. We used the same set of hyper-parameters as those of GreekBART to train mBART25 and mBART50, while BART-random was trained for 60 epochs. To produce the summaries for the test set, we used ROUGE-L Lin (2004) to select the checkpoint associated with the best validation score. In addition, we incorporated two extractive techniques as baselines: EXT-ORACLE and LEAD Narayan et al. (2018). The LEAD technique generates a summary by extracting the first \(N\) sentences from the document, with \(N\) set to 1 in our case. EXT-ORACLE, on the other hand, selects the set of sentences from the document that maximizes a specific score, ROUGE-L in our implementation. In particular, we extracted the one sentence of the document with the highest ROUGE-L score.

In Table 5, we report the ROUGE-1, ROUGE-2, and ROUGE-L scores Lin (2004) and two different BERTScores Zhang et al. (2019), using the M-BERT Devlin et al. (2019) model and the Greek-BERT model to calculate the contextual embeddings. BERTScore is a recently proposed metric that makes use of the contextual representations of the predicted and gold sentences. BERTScore focuses on the semantic similarity between the tokens of the reference and the hypothesis, trying to capture the meaning of what has been generated and what was supposed to be generated. We report BERTScore because ROUGE mainly captures n-gram overlap, which is inadequate for the abstractive summarization setting. Some examples of the generated summaries are available in Appendix sections A and B.

#### 4.2.1 Quantitative results

In Table 5, we compare the performance of our models fine-tuned on the summarization tasks.
In Table 5, we report the ROUGE-1, ROUGE-2, and ROUGE-L scores Lin (2004) and two different BERTScores Zhang et al. (2019), using the M-BERT model Devlin et al. (2019) and the Greek-BERT model to calculate the contextual embeddings. BERTScore is a recently proposed metric that makes use of the contextual representations of the predicted and gold sentences: it measures the semantic similarity between the tokens of the hypothesis and those of the reference, rather than mere surface overlap. We report BERTScore because ROUGE mainly captures n-gram overlap, which is inadequate for the abstractive summarization setting. Some examples of the generated summaries are available in appendix sections A and B. #### 4.2.1 Quantitative results In Table 5 we compare the performance of our models fine-tuned on the summarization tasks. Although GreekBART is a BART-_BASE_ model compared against BART-_LARGE_ models, it achieves better performance than all other models on the GreekSUM Abstract task. Only mBART50 achieves a slightly higher BERTScore than GreekBART when evaluated with the M-BERT model. On the other hand, both mBART models surpass our model on the GreekSUM Title task; even there, however, the performance of GreekBART is comparable to that of the two mBART models, both in terms of ROUGE and BERTScore. Our evaluation indicates that mBART50 and GreekBART are the most promising models for the two summarization tasks. Specifically, mBART50 performs better overall on both generative tasks, being the top-performing model on the GreekSUM Title task and second-best on the GreekSUM Abstract task, according to its ROUGE and BERTScores. On the other hand, GreekBART excels on the GreekSUM Abstract task, but ranks third-best on the GreekSUM Title task. Generally, it is remarkable that both mBART models, which are not pretrained on the Greek language, achieve good performance, thanks to the size of the GreekSUM dataset, which contains more than 100k training samples. It is clear that BART-random has the poorest performance by a significant margin. Finally, it is interesting that mBART50 performs better than mBART25 in terms of both ROUGE and BERTScore, while their only difference is the number of languages on which they are pretrained. This warrants further investigation, as it is possible that some of the additional 25 languages supported by mBART50 have roots in the Greek language, potentially contributing to a better understanding by the language model. #### 4.2.2 Qualitative results As shown in Table 6, GreekBART is more abstractive than the two mBART models, as its generated summaries display a higher proportion of novel n-grams. In general, none of the models surpass the LEAD method in terms of ROUGE scores. Furthermore, the ROUGE scores of the models suggest that the machine-generated summaries tend to be extractive, as the gold summaries are also predominantly extractive in nature. This is confirmed by the proportion of novel n-grams (Table 6): few new words are introduced in the gold summaries of GreekSUM, which influences the training of the examined models and forces them to generate more extractive summaries. Moreover, Table 6 shows that the length of all generated summaries is close to the length of the ground-truth summaries. According to Table 7, the generated summaries of mBART50 contain the smallest percentage of repetitions, with GreekBART following. The rate of repeated words in mBART50 summaries is close to that of the ground-truth summaries. Finally, we notice that BART-random introduces many new words; however, they are irrelevant. 
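The novel n-gram proportions of Table 6 follow a standard definition; the sketch below shows one plausible way to compute them (the whitespace tokenization and set-based counting are our simplifying assumptions, for illustration).

```python
def ngrams(tokens, n):
    # All contiguous n-grams of a token sequence, as a set.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_ratio(document, summary, n):
    # Percentage of summary n-grams that never appear in the document.
    doc_ngrams = ngrams(document.split(), n)
    summ_ngrams = ngrams(summary.split(), n)
    if not summ_ngrams:
        return 0.0
    return 100.0 * len(summ_ngrams - doc_ngrams) / len(summ_ngrams)

# Example: percentage of novel unigrams up to 4-grams, as in Table 6.
doc, summ = "the cat sat on the mat", "a cat sat quietly"
print([round(novel_ngram_ratio(doc, summ, n), 1) for n in range(1, 5)])
```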
#### 4.2.3 Human Evaluation In order to further understand and validate the quantitative results, we conducted a human evaluation study using Best-Worst Scaling Louviere et al. (2015). We chose 11 native Greek speakers from diverse age groups, ranging from 18 to 60 years old, with varying educational backgrounds and levels. Following the method of Narayan et al. (2018), we randomly selected 14 documents from the test set of GreekSUM Abstract and for each document we generated all possible pairs of human-authored (Gold), GreekBART, BART-random, mBART25, and mBART50 summaries, resulting in a total of 140 pairs for all documents. Thus, each pair consists of two summaries generated by two different systems. Volunteers were presented with a document and a pair of summaries and had to decide which one was the best summary and which was the worst, based on accuracy (does the summary contain accurate facts?), informativeness (is important information captured?) and fluency (is the summary written in well-formed Greek?). Each summary pair was assigned randomly to three participants, and a system's score was determined by calculating the percentage of times it was selected as the _best_ summary, minus the percentage of times it was selected as the _worst_ summary. Thus, the maximum score that a model can achieve is \(100\), whereas the minimum score is \(-100\). The results of the human evaluation study are presented in Table 8. Gold reaches first place, followed by mBART50 and GreekBART. According to the evaluators, Gold is by far the most preferred summary, while the score of mBART50 is remarkably higher than that of GreekBART, verifying our assumptions based on the quantitative results. Finally, the high negative score of BART-random indicates that its summaries were considered to be worse in the majority of cases. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{NCC} & \multirow{2}{*}{Sentiment} & \multirow{2}{*}{XNLI} \\ \cline{2-3} & News247 (ours) & Makedonia Lioudakis et al. (2020) & & \\ \hline Greek-BERT & \(92.61^{\pm 0.19}\) & \(89.45^{\pm 0.84}\) & \(\textbf{86.39}^{\pm 0.06}\) & \(78.6^{\pm 0.62}\) \\ XLM-R & \(93.1^{\pm 0.51}\) & \(89.6^{\pm 0.29}\) & \(85.43^{\pm 0.05}\) & \(78.2^{\pm 0.59}\) \\ BART-random & \(91.33^{\pm 0.17}\) & \(80.17^{\pm 0.09}\) & \(80.87^{\pm 0.12}\) & \(60.1^{\pm 0.43}\) \\ GreekBART (ours) & \(\textbf{93.2}^{\pm 0.29}\) & \(\textbf{91.1}^{\pm 0.43}\) & \(85.43^{\pm 0.19}\) & \(\textbf{78.67}^{\pm 0.25}\) \\ \hline \end{tabular} \end{table} Table 4: Results on the discriminative tasks. We report the mean accuracy as well as the standard deviation. \begin{table} \begin{tabular}{|l|c c c c|c c c c|} \hline & \multicolumn{4}{c|}{GreekSUM Abstract} & \multicolumn{4}{c|}{GreekSUM Title} \\ & R-1 & R-2 & R-L & BERTScore & R-1 & R-2 & R-L & BERTScore \\ \hline LEAD & \(17.11\) & \(06.17\) & \(16.69\) & \(72.61/63.56\) & \(14.68\) & \(04.46\) & \(14.37\) & \(70/57.13\) \\ EXT-ORACLE & \(34.18\) & \(14.17\) & \(33.93\) & \(73.89/65.43\) & \(23.36\) & \(07.39\) & \(23.12\) & \(70.02/57.33\) \\ \hline BART-random & \(13.85\) & \(04.47\) & \(13.65\) & \(72.44/63.27\) & \(11.55\) & \(03.27\) & \(11.42\) & \(74.47/62.22\) \\ GreekBART (ours) & **16.5** & **06.13** & **16.21** & \(73.03/\textbf{64.46}\) & \(15.35\) & \(05.02\) & \(15.18\) & \(75.78/63.98\) \\ \hline mBART25 & \(15.07\) & \(05.8\) & \(14.82\) & \(72.75/64.08\) & \(16.09\) & \(05.58\) & \(15.93\) & **76.81/65.38** \\ mBART50 & \(15.53\) & \(06.0\) & \(15.31\) & \(\textbf{73.07}/64.43\) & **16.1** & **05.59** & **15.96** & **76.81/65.38** \\ \hline \end{tabular} \end{table} Table 5: Results on GreekSUM. In addition to ROUGE, we also provide BERTScore. The left-hand BERTScore was calculated using the M-BERT model Devlin et al. (2019), while the right-hand one uses Greek-BERT Koutsikakis et al. (2020). 
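For reference, the Best-Worst Scaling score described above reduces to a simple counting rule; a minimal sketch (with hypothetical toy judgments, for illustration only) is:

```python
from collections import defaultdict

def bws_scores(judgments):
    # judgments: ((system_a, system_b), chosen_best) tuples, one per annotation;
    # the other member of the pair is implicitly the worst.
    best, worst, seen = defaultdict(int), defaultdict(int), defaultdict(int)
    for (a, b), winner in judgments:
        loser = b if winner == a else a
        best[winner] += 1
        worst[loser] += 1
        seen[a] += 1
        seen[b] += 1
    # Score = % of appearances chosen best - % chosen worst, in [-100, 100].
    return {s: 100.0 * (best[s] - worst[s]) / seen[s] for s in seen}

# Hypothetical toy judgments:
print(bws_scores([(("Gold", "BART-random"), "Gold"),
                  (("mBART50", "BART-random"), "mBART50"),
                  (("Gold", "GreekBART"), "Gold")]))
```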
## 5 Conclusion We implemented GreekBART, the first pretrained Seq2Seq model specifically for the Greek language. We also created the first summarization dataset for the Greek language. Our model outperforms the former state-of-the-art models on 3 out of 4 discriminative tasks and is on par with BART-_LARGE_ models on summarization tasks. Moreover, we presented the capabilities of zero-shot transfer, fine-tuning multilingual BART models on Greek summarization tasks even though they were not pretrained on the Greek language, as well as training a BART model from scratch. As future work, we can consider the creation of a more abstractive summarization dataset, and the investigation of possible correlations between the Greek language and one or more of the 25 extra languages of mBART50. Finally, it would be interesting to try to boost the performance of mBART50 on summarization tasks by applying an affordable language-adaptive phase, further pretraining it on the Greek language for a reasonable number of epochs. ## Ethics Statement The collection of the GreekSUM dataset was performed using a Python crawler that respected the _robots.txt_ of [http://www.news247.gr](http://www.news247.gr). As the dataset is used only for evaluation purposes, the content follows the legal instructions listed on the webpage. For the training of GreekBART we used a cluster of 2 NVIDIA V100 GPUs for 20 days. As with the majority of language models based on the BART architecture, the energy resources required for pretraining are currently very high, an issue that needs to be tackled soon (Strubell et al., 2019). ## Limitations The proposed GreekSUM dataset that we used for the evaluation of our model is limited to news articles from a single website. Thus, the abstractive summarization capability of GreekBART is only assessed on one domain. This is due to the lack of non-English benchmarks and tasks. The same applies to the discriminative tasks, where the only ones available for Greek are either sentence classification or natural language inference, while other evaluation datasets either do not exist for the Greek language (i.e. Word Sense Disambiguation) or are not available to the public (i.e. Named Entity Recognition dataset). On the other hand, GreekBART is only compared with extractive summarization methods or with large multilingual language models on the summarization task. Since it is the first base model for this language and since a base mBART model is not publicly available, a fair in-depth comparison of GreekBART with other summarization systems could not be conducted. 
\begin{table} \begin{tabular}{|l|c c c c c|c c c c c|} \hline & \multicolumn{5}{c|}{GreekSUM Abstract} & \multicolumn{5}{c|}{GreekSUM Title} \\ & unigrams & bigrams & trigrams & 4-grams & length & unigrams & bigrams & trigrams & 4-grams & length \\ \hline Gold & \(20.6\) & \(50.8\) & \(65.3\) & \(73.0\) & \(24.55\) & \(26.7\) & \(67.9\) & \(84.5\) & \(91.4\) & \(9.55\) \\ \hline BART-random & \(9.6\) & \(43.0\) & \(64.5\) & \(76.8\) & \(20.27\) & \(21.6\) & \(69.4\) & \(89.1\) & \(95.8\) & \(9.37\) \\ GreekBART (ours) & **7.4** & **23.5** & **34.5** & **42.2** & \(23.63\) & **14.9** & **50.1** & **69.3** & **79.9** & **9.78** \\ \hline mBART25 & \(6.2\) & \(20.0\) & \(29.4\) & \(36.0\) & \(26.22\) & \(12.8\) & \(46.6\) & \(65.6\) & \(76.2\) & \(10.67\) \\ mBART50 & \(6.5\) & \(21.8\) & \(32.3\) & \(39.7\) & \(23.95\) & \(12.8\) & \(46.6\) & \(65.6\) & \(76.2\) & \(10.67\) \\ \hline \end{tabular} \end{table} Table 6: Proportion of novel n-grams in the generated summaries. The length (number of words) of the generated summaries is also given. \begin{table} \begin{tabular}{|l|c|} \hline **System** & **Score** \\ \hline Gold & \(45.24\) \\ \hline BART-random & \(-72.62\) \\ GreekBART (ours) & \(10.71\) \\ \hline mBART25 & \(-03.57\) \\ mBART50 & **20.24** \\ \hline \end{tabular} \end{table} Table 8: Results of the human evaluation study. ## Acknowledgements This research was supported by the ANR chair AML/HELAS (ANR-CHIA-0020-01). This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011013750 made by GENCI. We would like to express our sincere gratitude to all the participants who took part in this human evaluation study. Your time and effort in completing the questionnaires and participating in the study have been invaluable in helping us gather meaningful data. Your willingness to share your experiences, insights, and opinions has been instrumental in informing our research, and we appreciate the trust you have placed in us. Your contributions have helped us improve our understanding of the topic under investigation and have the potential to make a significant impact on future research and practice. We would also like to acknowledge the importance of obtaining informed consent from all participants before their involvement in the study. Your participation was entirely voluntary, and we appreciate your willingness to take part in the study. Once again, we extend our sincere thanks to all the participants for their valuable contributions to this study.
2306.09952
Convective heat transfer in the Burgers-Rayleigh-Bénard system
The dynamics of heat transfer in a model system of Rayleigh-B\'enard (RB) convection reduced to its essential, here dubbed Burgers-Rayleigh-B\'enard (BRB), is studied. The system is spatially one-dimensional, the flow field is compressible and its evolution is described by the Burgers equation forced by an active temperature field. The BRB dynamics shares some remarkable similarities with realistic RB thermal convection in higher spatial dimensions: i) it has a supercritical pitchfork instability for the onset of convection which solely depends on the Rayleigh number $(Ra)$ and not on Prandtl $(Pr)$, occurring at the critical value $Ra_c = (2\pi)^4$; ii) the convective regime is spatially organized in distinct boundary-layers and bulk regions; iii) the asymptotic high $Ra$ limit displays the Nusselt and Reynolds numbers scaling regime $Nu = \sqrt{RaPr}/4$ for $Pr\ll 1$, $Nu=\sqrt{Ra}/(4\sqrt{\pi})$ for $Pr\gg1$ and $Re = \sqrt{Ra/Pr}/\sqrt{12}$, thus making BRB the simplest wall-bounded convective system exhibiting the so called ultimate regime of convection. These scaling laws, derived analytically through a matched asymptotic analysis, are fully supported by the results of the accompanying numerical simulations. A major difference with realistic natural convection is the absence of turbulence. The BRB dynamics is stationary at any $Ra$ number above the onset of convection. This feature results from a nonlinear saturation mechanism whose existence is grasped by means of a two-mode truncated equation system and via a stability analysis of the convective regime.
Enrico Calzavarini, Silvia C. Hirata
2023-06-16T16:33:16Z
http://arxiv.org/abs/2306.09952v2
# Convective heat transfer in the Burgers-Rayleigh-Benard system ###### Abstract The dynamics of heat transfer in a model system of Rayleigh-Benard (RB) convection reduced to its essential, here dubbed Burgers-Rayleigh-Benard (BRB), is studied. The system is spatially one-dimensional, the flow field is compressible and its evolution is described by the Burgers equation forced by an active temperature field. The BRB dynamics shares some remarkable similarities with realistic RB thermal convection in higher spatial dimensions: i) it has a supercritical pitchfork instability for the onset of convection which solely depends on the Rayleigh number (\(Ra\)) and not on Prandtl (\(Pr\)), occurring at the critical value \(Ra_{c}=(2\pi)^{4}\) ii) the convective regime is spatially organized in distinct boundary-layers and bulk regions, iii) the asymptotic high \(Ra\) limit displays the Nusselt and Reynolds numbers scaling regime \(Nu=\sqrt{RaPr}/4\) for \(Pr\ll 1\), \(Nu=\sqrt{Ra}/(4\sqrt{\pi})\) for \(Pr\gg 1\) and \(Re=\sqrt{Ra/Pr}/\sqrt{12}\), thus making BRB the simplest wall-bounded convective system exhibiting the so called ultimate regime of convection. These scaling laws, derived analytically through a matched asymptotic analysis are fully supported by the results of the accompanying numerical simulations. A major difference with realistic natural convection is the absence of turbulence. The BRB dynamics is stationary at any \(Ra\) number above the onset of convection. This feature results from a nonlinear saturation mechanism whose existence is grasped by means of a two-mode truncated equation system and via a stability analysis of the convective regime. ## I Introduction Thought experiments, toy-models, low-dimensional representations are keys to the scientific thinking, and allow to get insight into the complex physics of many real systems. In fluid-dynamics research, reduced models obtained, e.g., via expansion and truncations of the original dynamical equations have been used to conceptualize and to understand, for instance, the chaotic dynamics of flows (Lorenz system [1]), or the physics of energy cascade in developed turbulence (Shell models [2; 3]). In this study we focus on the problem of thermal convection and, in the spirit of the one-dimensional (1D) toy model for granular media introduced by Du, Li and Kadanoff [4; 5], we here introduce a stripped-down mock-up of the classical Rayleigh-Benard (RB) system [6]. The RB has been extensively studied either in its spatial three-dimensional or in its two-dimensional version [7; 8]. We are not aware of any study of the system in one-dimension. This is after all quite understandable, since in 1D the incompressibility condition for the flow does not hold and one expects a rather different physical behaviour. This is indeed already the case for the 1D version of the Navier-Stokes equation, i.e., the Burgers equation [9]. It is well known that the Burgers equation does not display a turbulent behaviour, because it can be recast in term of a diffusion equation via the Hopf-Cole transformation [10; 11]. However, stochastically forced Burgers equation does produce a special kind of turbulence, dubbed Burgulence [12], that has drawn the attention of recent research [13]. We show in this study that a 1D deterministically-forced version of the RB system that we dub Burgers-Rayleigh-Benard (BRB) system can be defined. 
Interestingly, this system possesses a certain number of similarities with thermal convection in higher spatial dimensions: it has a supercritical linear instability for the onset of convection, the convective regime is spatially organized in distinct boundary-layer and bulk regions, and the asymptotic high-\(Ra\) limit displays the so-called ultimate Nusselt and Reynolds number scalings [8], although it lacks any turbulent behaviour. It also admits shock-like solutions that are peculiar to the Burgers dynamics. The article is organized as follows. We first define the BRB system, next we examine its most relevant symmetries and its global properties. In particular, we introduce the definitions of the Nusselt and Reynolds numbers, which are the two global response parameters of the system. Second, we perform a theoretical analysis of the system dynamics, focusing on the calculation of the linear instability threshold for convection, on the subsequent nonlinear saturation mechanism, and on the derivation of a steady matched asymptotic solution for the very intense convection state. Third, we push the analysis forward by means of a numerical approach. In particular we show that the system is stationary at all Rayleigh numbers; this is first revealed empirically and then verified by means of a numerically-based stability analysis. We then show that the Nusselt and Reynolds numbers asymptotically approach the ultimate state of thermal convection, both in their Rayleigh and Prandtl number dependencies. Finally, we discuss the implications of our findings and possible perspectives. ## II The Burgers-Rayleigh-Benard model system ### Equations of motion We study the spatio-temporal evolution of single-component velocity \(W(Z,\tau)\) and temperature \(T(Z,\tau)\) fields in a one-dimensional domain \(Z\in[0,H]\), described by the coupled system of differential equations: \[W_{\tau}+W\ W_{Z} = \nu\ W_{ZZ}+\beta g(T-T_{c}) \tag{1}\] \[T_{\tau}+W\ T_{Z} = \kappa\ T_{ZZ}, \tag{2}\] with Dirichlet boundary conditions \[W = 0,\ T=\frac{\Delta}{2}\quad\mbox{in $Z=0$ (bottom)}, \tag{3}\] \[W = 0,\ T=-\frac{\Delta}{2}\quad\mbox{in $Z=H$ (top)}, \tag{4}\] where \(\nu\) and \(\kappa\) denote respectively the viscosity and the thermal diffusivity, \(\beta\) the thermal expansion coefficient, \(g\) the gravitational acceleration intensity, and \(T_{c}\) the linear profile given by \(T_{c}(Z)=-(\Delta/H)Z+\Delta/2\), also called conductive because it is a solution for the temperature field when \(W=0\) in the whole domain. Furthermore, to keep the similarity with realistic RB convection, we adopt the additional constraint that the global averages of the velocity and temperature fields vanish, \[\int_{0}^{H}W\ dZ=\int_{0}^{H}T\ dZ=0\ \ \mbox{(no-zero mode condition)}. \tag{5}\] This prevents the system from acquiring a vertical mean flow and from heating up/cooling off. We will comment later on the consequences of this constraint. As we have already mentioned, the above model constitutes an oversimplified representation of the Rayleigh-Benard system. 
It can be loosely obtained from the Navier-Stokes-Boussinesq set of equations for the three-dimensional velocity \({\bf U}=(U,V,W)\) and temperature \(T\) by assuming that (i) the vertical component of the velocity, \(W\), and the temperature depend only on the vertical direction, \(Z\), (ii) the hydrodynamic pressure field is removed, and (iii) the buoyancy force is expressed as proportional to the temperature deviation from the local conductive temperature profile. With the above assumptions, the equations for \(T\) and \(W\) decouple from those for the horizontal components \(U,V\) and can be treated separately. As a consequence, the vertical velocity gradient \(W_{Z}\) becomes unconstrained and the corresponding one-dimensional velocity field is compressible. We stress that the BRB model can be regarded neither as a low-dimensional mean-field form of the Boussinesq system, nor as a model of convection in compressible gases (where the continuity equation would have a different form). However, we believe that despite its incompleteness this model is useful to gain insight into what does and does not occur in the realistic system. The equations (1-2) can be made dimensionless by means of the linear size of the domain (or height \(H\)), the free-fall velocity \(U_{f}=\sqrt{\beta gH\Delta}\) and the global temperature gap \(\Delta\) (i.e. the difference between the bottom temperature and the top one). This leads to the two control parameters of the system: the Rayleigh number \(Ra=(U_{f}H)^{2}/(\nu\kappa)\) and the Prandtl number \(Pr=\nu/\kappa\). With these choices the equations can be conveniently rewritten in terms of the velocity, \(w=W/U_{f}\), and the temperature deviation from the conductive profile, \(\theta(z,t)=(T(Z,\tau)-T_{c}(Z))/\Delta\), as: \[w_{t}+w\ w_{z} = \sqrt{\frac{Pr}{Ra}}\ w_{zz}+\theta \tag{6}\] \[\theta_{t}+w\ \theta_{z} = \frac{1}{\sqrt{PrRa}}\ \theta_{zz}+w, \tag{7}\] with \(w=\theta=0\) at \(z=0\) and \(z=1\) and \(\langle w\rangle=\langle\theta\rangle=0\), where \(\langle\ldots\rangle=\int_{0}^{1}\ldots dz\) is the spatial average (all lower-case letters denote dimensionless variables). Equation (6) is the 1D forced Burgers equation, coupled to the advection-diffusion equation (7) for a scalar field, which is in turn forced by \(w\). ### Symmetries The system (6-7) enjoys a series of symmetries which greatly affect its dynamics. We describe them in detail in this section. To begin with, we note that when \(Pr=1\), \(\theta=w\) is a permitted solution of the BRB model system. Second, the set of equations (6-7) is invariant with respect to the transformation \((z,\theta)\rightarrow(1-z,-\theta)\), which also implies \(w\rightarrow-w\) because by definition \(w=dz/dt\). This means that if the couple \(\theta(t,z),w(t,z)\) is a solution of the equations, then \(-\theta(t,1-z),-w(t,1-z)\) is a solution as well. Combining this with the condition \(\langle w\rangle=\langle\theta\rangle=0\), it entails that both \(\theta\) and \(w\) are odd functions with respect to \(z=1/2\), and so \(\theta(z=1/2)=w(z=1/2)=0\). A third symmetry is the following: \[z\rightarrow\begin{cases}z+1/2\text{ if }z\leq 1/2\\ z-1/2\text{ if }z>1/2\end{cases}\quad\text{ or }\quad z\to z+\text{ sign}\left(\frac{1}{2}-z\right)\frac{1}{2}. \tag{8}\] It corresponds to swapping the spatial interval \([0,1/2]\) with the interval \([1/2,1]\). As shown in the sketch in Fig. 1(a)-(b), this symmetry transforms what we call a "boundary-layer" type solution into a "shock" type solution (more on this later). 
In other words, due to the zero boundary conditions and to the second symmetry, the functions \(w(z)\) and \(\theta(z)\) can be seen as periodic odd functions. This means that adding a phase of half the period still yields a solution of the system; equivalently, one can say that \(z\to z+1/2\) is a symmetry of the system. A fourth remarkable symmetry of the system is the following. Let us call \(w(t,z;Ra,Pr),\theta(t,z;Ra,Pr)\) the solution of the system for given values of the parameters \(Ra,Pr\). The system is then invariant with respect to the transformation: \[w(t,z;Ra,Pr) \rightarrow w(t,2nz;\frac{Ra}{2n},Pr)/(2n) \tag{9}\] \[\theta(t,z;Ra,Pr) \rightarrow \theta(t,2nz;\frac{Ra}{2n},Pr)/(2n) \tag{10}\] where \(n\) is a positive integer. This "rescaling transformation" symmetry is illustrated in Fig. 1(c) for the case \(n=1\). ### Global response parameters: Nusselt and Reynolds To derive the expression for the global heat flux it is convenient to resort to dimensional notation. The temperature equation (2) in conservative form reads \(T_{\tau}+(J_{T})_{Z}=0\), where \[J_{T}(Z,\tau)=WT-\kappa T_{Z}-\int_{0}^{Z}TW_{Z^{\prime}}\ dZ^{\prime} \tag{11}\] is the local and instantaneous heat flux at position \(Z\) and time \(\tau\). Averaging the conservative-form equation over time (denoted by an overline) and assuming a steady state gives the expression of the mean global heat flux: \[\overline{J_{T}}(Z)=\overline{WT}-\kappa\overline{T}_{Z}-\int_{0}^{Z}\ \overline{TW_{Z^{\prime}}}\ dZ^{\prime}=const. \tag{12}\] The integral term in the above expression, which is absent in the mean heat flux of the RB system, is a consequence of the compressibility of the velocity field. Figure 1: Symmetries of the system of equations: (a) odd symmetry of \(w\) and \(\theta\) with respect to the position \(z=1/2\); (b) swap symmetry with respect to the system mid-point (8); (c) rescaling transformation (9)-(10) with \(n=1\). The mean Nusselt number is defined by nondimensionalizing the mean global heat flux with respect to the conductive heat flux (i.e. the state where \(W=0\) and \(T=T_{c}\)): \[Nu\ \equiv\ \frac{\overline{J_{T}}}{J_{T_{c}}}=const. \tag{13}\] We observe that plugging the dimensionless temperature fluctuation \(\theta\) into the above expression and evaluating it either at \(z=0\) or \(z=1\) gives the following equivalent expressions for \(Nu\): \[Nu\ =\ 1-\overline{\theta}_{z}(0)=1-\overline{\theta}_{z}(1). \tag{14}\] One can remark that this same expression for \(Nu\) is obtained in the RB flow ruled by the Boussinesq system of equations. On the contrary, if one considers the spatial average of \(Nu\) (spatial average of eq. (13)) one gets: \[Nu=1+\sqrt{PrRa}\left(\langle\overline{w\theta}\rangle-\langle\int_{0}^{z}( \overline{\theta w_{z^{\prime}}}+\overline{w})dz^{\prime}\rangle\right) \tag{15}\] which differs from the RB expression by the appearance of the integral term on the _r.h.s._, originating, as already mentioned, from the flow compressibility. The volume-averaged expression of the Nusselt number is convenient for numerical calculations, as it is less affected by discretization and numerical errors (we will use this expression in the numerical calculations presented in this article). 
Finally, we note that the Reynolds number defined as a system response parameter is here: \[Re\equiv\frac{\overline{\langle W^{2}\rangle}^{1/2}H}{\nu}=\sqrt{\frac{Ra}{Pr }}\ \overline{\langle w^{2}\rangle}^{1/2}. \tag{16}\] ## III The BRB dynamics: theoretical analysis This section presents some notable analytical results on the dynamics of the BRB model system. First, we perform the linear stability analysis to determine the transition from the conductive to the convective state. Second, we address the non-linear saturation mechanism that is responsible for the stabilization of the flow after the inception of convection. Third, by means of a standard matched asymptotic (\(ma\)) analysis, we solve the BRB system of equations in steady condition in the limit of large \(Ra\) numbers. Finally, based on the \(ma\) solution we derive the asymptotic-in-Rayleigh scaling laws for the Nusselt and Reynolds numbers. ### Onset of convection The linearization of the system (6-7) with respect to \(w\) and \(\theta\) satisfies solutions of the form \[w=\Sigma_{n=1}^{\infty}w_{n}e^{\sigma_{n}t}\sin(nkz),\quad\theta=\Sigma_{n=1} ^{\infty}\theta_{n}e^{\sigma_{n}t}\sin(nkz), \tag{17}\] where \(k=2\pi\) and \(n\) is an integer value, and with the growth rate \[\sigma_{n}=\frac{\sqrt{(Pr+1)^{2}(nk)^{4}+4Pr(Ra-(nk)^{4})}-(Pr+1)(nk)^{2}}{2 \sqrt{RaPr}}. \tag{18}\] Therefore, for \(Ra>Ra_{c}=k^{4}=(2\pi)^{4}\simeq 1558\) and at any \(Pr\) value the system becomes linearly unstable (\(\sigma_{1}>0\)). The critical Rayleigh number happens to be the same as in the three-dimensional three-periodic homogeneous Rayleigh-Benard system [14; 15] although in that case the perturbation form is different as it depends only on the horizontal coordinates. We observe that the relative amplitude of the velocity and temperature field is \(w_{1}/\theta_{1}=(\sigma_{1}+k^{2}/\sqrt{PrRa})\), this implies that \(w_{1}=\theta_{1}\) for \(Pr=1\). This prediction will be verified in Sec. IV by means of a numerical simulation starting from a tiny white noise perturbation on \(w\) and \(\theta\) fields (see also Fig.6). The described exponentially growing solution is eventually saturated by the presence of the nonlinear terms, as we discuss in the next section. ### Non-linear saturation mechanism Similarly to what occurs in a three-dimensional RB system in slightly supercritical conditions (\(Ra\gtrsim Ra_{c}\)) the exponential growth rate of the perturbation rapidly saturates into a convective steady state. This phenomenology can be promptly explained for the BRB at \(Pr=1\) by means of a two modes Galerkin expansion, which we detail in the following. We assume \[w(z,t)=\Sigma_{n=1,2}A_{n}(t)\sin{(nkz)},\quad\theta(z,t)=\Sigma_{n=1,2}B_{n}(t )\sin{(nkz)}. \tag{19}\] Upon its substitution into the equations of motion, retaining only terms in \(\sin{(kz)}\) and \(\sin{(2kz)}\), we obtain the first-order differential system for the evolution of the amplitudes of the four considered modes: \[\dot{A_{1}} = \frac{Ra_{c}^{1/4}}{2}A_{1}A_{2}-\sqrt{\frac{Ra_{c}}{Ra/Pr}}A_{1} +B_{1} \tag{20}\] \[\dot{A_{2}} = -\frac{Ra_{c}^{1/4}}{2}A_{1}^{2}-4\sqrt{\frac{Ra_{c}}{Ra/Pr}}A_{2 }+B_{2}\] (21) \[\dot{B_{1}} = \frac{Ra_{c}^{1/4}}{2}(2A_{1}B_{2}-A_{2}B_{1})-\sqrt{\frac{Ra_{c} }{RaPr}}B_{1}+A_{1}\] (22) \[\dot{B_{2}} = -\frac{Ra_{c}^{1/4}}{2}A_{1}B_{1}-4\sqrt{\frac{Ra_{c}}{RaPr}}B_{2 }+A_{2}, \tag{23}\] where we have taken into account that \(Ra_{c}=k^{4}\). 
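Although an explicit treatment of the truncated system (20)-(23) is only possible in special cases (as discussed next), it is straightforward to integrate numerically, which makes the saturation mechanism directly visible. The following is a minimal sketch (our own illustration, using scipy and an arbitrarily chosen slightly supercritical \(Ra\)), which also checks the steady amplitude \(\sqrt{\sigma_{1}/\gamma}\) derived below:

```python
import numpy as np
from scipy.integrate import solve_ivp

Ra_c = (2 * np.pi) ** 4

def rhs(t, y, Ra, Pr):
    # Amplitude equations (20)-(23) of the two-mode Galerkin truncation.
    A1, A2, B1, B2 = y
    r = Ra_c ** 0.25 / 2            # nonlinear coupling, Ra_c^{1/4}/2
    dv = np.sqrt(Ra_c / (Ra / Pr))  # viscous damping of mode 1
    dk = np.sqrt(Ra_c / (Ra * Pr))  # thermal damping of mode 1
    return [r * A1 * A2 - dv * A1 + B1,
            -r * A1 ** 2 - 4 * dv * A2 + B2,
            r * (2 * A1 * B2 - A2 * B1) - dk * B1 + A1,
            -r * A1 * B1 - 4 * dk * B2 + A2]

Ra, Pr = 2000.0, 1.0                # slightly supercritical, Ra_c ~ 1558
sol = solve_ivp(rhs, (0.0, 400.0), [1e-6, 0.0, 1e-6, 0.0],
                args=(Ra, Pr), rtol=1e-9, atol=1e-12)

sigma1 = 1.0 - np.sqrt(Ra_c / Ra)
gamma = np.sqrt(Ra_c) / (4.0 * (4.0 * np.sqrt(Ra_c / Ra) - 1.0))
print("saturated A1:", sol.y[0, -1])                  # grows, then saturates
print("predicted sqrt(sigma1/gamma):", np.sqrt(sigma1 / gamma))
```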
In general, such a system cannot be solved analytically; an explicit treatment is possible, however, for the special case \(Pr=1\). In this condition, the equations for \(A_{i}\) and \(B_{i}\) become identical, hence the differential system reduces to a two-dimensional one: \[\dot{A}_{1} = \sigma_{1}A_{1}+\frac{Ra_{c}^{1/4}}{2}A_{1}A_{2} \tag{24}\] \[\dot{A}_{2} = \sigma_{2}A_{2}-\frac{Ra_{c}^{1/4}}{2}A_{1}^{2}, \tag{25}\] with \(\sigma_{1}=1-\sqrt{Ra_{c}/Ra}>0\) and \(\sigma_{2}=1-4\sqrt{Ra_{c}/Ra}<0\) for \(Ra_{c}<Ra<16Ra_{c}\). As the characteristic time-scale of the second mode, \(\tau_{2}=|\sigma_{2}|^{-1}\), is smaller than the characteristic time of the first mode, \(\tau_{1}=|\sigma_{1}|^{-1}\), we can perform a so-called adiabatic elimination and take \(\dot{A}_{2}\approx 0\) in the vicinity of the bifurcation threshold. By doing so, we obtain the following Landau equation [16] for the evolution of the amplitude \(A_{1}\): \[\dot{A}_{1}=\sigma_{1}A_{1}-\gamma A_{1}^{3} \tag{26}\] with \(\gamma=\frac{\sqrt{Ra_{c}}}{4(4\sqrt{\frac{Ra_{c}}{Ra}}-1)}\). Eq. (26) admits three steady solutions: \[A_{1}^{ss} = 0\;\;(\mbox{conductive state}) \tag{27}\] \[A_{1}^{ss} = \pm\sqrt{\sigma_{1}/\gamma}\;\;(\mbox{two convective states}). \tag{28}\] Assuming the perturbation expansion \(A_{1}(t)=A_{1}^{ss}+\varepsilon A_{1p}(t)+O(\varepsilon^{2})\), the amplitude equation at order \(O(\varepsilon)\) reads \[\dot{A}_{1p}=A_{1p}(t)(\sigma_{1}-3\gamma(A_{1}^{ss})^{2}). \tag{29}\] A solution of the linear homogeneous equation above can be written as \(A_{1p}(t)=A_{1p}(0)\exp{(-i\omega t)}\), which leads to \[\sigma_{1}-3\gamma(A_{1}^{ss})^{2}+i\omega=0. \tag{30}\] Since all the coefficients are real and \(\omega=\omega_{R}+i\omega_{I}\), we get \(\omega_{R}=0\) (meaning that the solutions are stationary) and \(\omega_{I}=\sigma_{1}-3\gamma(A_{1}^{ss})^{2}\). Hence, the growth rates of the trivial (27) and non-trivial (28) steady states are \(\omega_{I}=\sigma_{1}\) and \(\omega_{I}=-2\sigma_{1}\) respectively. Therefore, for \(Ra<Ra_{c}\) the trivial conductive solution is stable, whereas for \(Ra>Ra_{c}\) the non-trivial convective solutions are stable. This corresponds to a supercritical pitchfork bifurcation, as illustrated in the graph of Fig. 2. The multiplicity of the above convective solutions is a consequence of the swap symmetry mentioned in the previous section, which in the case of a sinusoidal profile takes the form \(\sin(nkz)\rightarrow(-1)^{n}\sin(nkz)\) for integer \(n\). In the two-mode expansion it therefore affects only the mode \(A_{1}\). Figure 2: Steady solution of the amplitude \(A_{1}\) of the two-mode truncated system as a function of the control parameter \(Ra\), as given in eqs. (27),(28). The graph has the typical structure of a supercritical pitchfork bifurcation: solid lines denote stable branches while the dotted one denotes the unstable branch. We indicate as "conduction" the solution corresponding to \(A_{1}=0\) (and \(A_{2}=0\) too), as "shock" type the solution presenting a sharp variation in the bulk of the system, which corresponds to the case \(A_{1}>0\), and finally as "boundary layer" type the one where the variation occurs close to the boundaries (\(A_{1}<0\) in this case). The insets show the complete steady two-mode solutions for \(\theta(z)\) or \(w(z)\) of the shock and boundary layer types for the case \(Ra=2000\), marked with colored circles on the main panels. 
This produces a change from a solution with sharp variations at the boundaries, which we denote as the "boundary layer" type solution, to one with a sharp transition in the bulk of the domain, which we call the "shock" type solution. While the boundary layer type solution is analogous to the vertical flow profile observed in the RB system, the shock type solution has no counterpart in 2D or 3D convective systems. For this reason, in our analysis we will mainly focus on the former kind of solution. The temperature/velocity spatial profiles corresponding to the complete convective steady solutions of (24)-(25), i.e., the two-mode truncated series (19) with \[A_{1} = \pm\sqrt{\sigma_{1}/\gamma}=\frac{\pm 2}{Ra_{c}^{1/4}}\left(1- \sqrt{\frac{Ra_{c}}{Ra}}\right)^{\frac{1}{2}}\left(4\sqrt{\frac{Ra_{c}}{Ra}}- 1\right)^{\frac{1}{2}} \tag{31}\] \[A_{2} = -2\sigma_{1}/Ra_{c}^{1/4}=\frac{-2}{Ra_{c}^{1/4}}\left(1-\sqrt{ \frac{Ra_{c}}{Ra}}\right) \tag{32}\] are traced in Fig. 2 (insets) [17]. The above convective solutions allow us to estimate the Nusselt number near the onset as \[Nu=1-2\pi\left(A_{1}+2A_{2}\right). \tag{33}\] They also imply that \[\langle w^{2}\rangle=\langle\theta^{2}\rangle=\langle w\theta\rangle=\frac{1 }{2}(A_{1}^{2}+A_{2}^{2})=\frac{6}{\sqrt{Ra}}\left(1-\sqrt{\frac{Ra_{c}}{Ra}} \right), \tag{34}\] so the Reynolds number becomes: \[Re = \sqrt{6\left(\sqrt{Ra}-\sqrt{Ra_{c}}\right)}. \tag{35}\] These predictions will be tested in Sec. IV by means of direct numerical simulations. Finally, we wish to comment on the role of the no-zero mode condition, \(\langle w\rangle=\langle\theta\rangle=0\), introduced in our model system. The removal of this condition allows for an anticipated onset of convection at \(Ra_{c}=\pi^{4}\) (16 times smaller than in the present case). It also permits non-odd solutions characterized by a single boundary layer on one of the sides of the domain. By virtue of the system symmetries, these solutions are trivially linked to the ones described above: their period and their amplitude are doubled, but all scaling properties remain identical. ### Large-Ra asymptotic dynamics What happens to the system dynamics at large \(Ra\)? Does it remain stationary, or does it become time-dependent, and possibly chaotic and turbulent? Answering these questions on the basis of a theoretical analysis alone is challenging; we will address them with the aid of numerical simulations in Sec. IV. However, it is possible to derive a steady solution of (6)-(7) in the limit of large \(Ra\) and at any \(Pr\) by means of the standard technique of matched asymptotic expansion. In the following we detail the derivation of this remarkable solution, and we then use it to give a prediction for the Nusselt and Reynolds scalings in the asymptotically large \(Ra\) regime. The numerical simulations, described in Sec. IV, will prove that the steady \(ma\) solution describes strikingly well the real behaviour of the BRB system. #### III.3.1 Approximate stationary solution with matched asymptotic expansion The use of matched asymptotics to describe the structure of shocks in the Burgers equation at very small viscosities is a classical approach [18]. 
This technique has been applied in several studies to estimate the contribution of shocks to the anomalous dissipation of kinetic energy [19] and to the energy spectrum of solutions [20], to propose closures for statistical theories of Burgers' turbulence [21], and more recently to explain the spontaneous stochasticity of Lagrangian trajectories in the Burgers equation [22]. Here we look for a solution of Eqs. (6)-(7) under the assumption that both \(\frac{\sqrt{Pr}}{\sqrt{Ra}}\) and \(\frac{1}{\sqrt{RaPr}}\) are small parameters (or equivalently \(Ra^{-1}\ll Pr\ll Ra\)). First, we consider the solution of the system far from the boundaries, denoted as the _outer_ solution. In this region the dissipative terms are negligible, because they multiply the above-mentioned small parameters, hence the system reduces to: \[w\ w_{z} = \theta \tag{36}\] \[w\ \theta_{z} = w. \tag{37}\] From the second equation we get \(\theta_{out}=z+c_{1}\), which is then plugged into the first equation, leading to \(ww_{z}=z+c_{1}\), which admits the solution \(w_{out}=\sqrt{z^{2}+2c_{1}z+c_{2}}\). By using the condition that the solution should be an odd function in the domain (i.e. \(w(1/2)=\theta(1/2)=0\)) we determine the constants \(c_{1}=-1/2\), \(c_{2}=1/4\), and so \(w_{out}=\theta_{out}=z-1/2\). Second, we consider the solution near a boundary, denoted as the _inner_ solution (we choose here the boundary close to \(z=0\)). In this region, the application of the standard least-degeneracy principle [23] leads to the system: \[w\ w_{z} = \frac{\sqrt{Pr}}{\sqrt{Ra}}\ w_{zz} \tag{38}\] \[w\ \theta_{z} = \frac{1}{\sqrt{PrRa}}\ \theta_{zz}. \tag{39}\] Here one first solves the equation for \(w\). This leads to \(w_{in}=-a\tanh\left(a\sqrt{\frac{Ra}{Pr}}\ z/2\right)\), where we have adopted the boundary condition \(w(0)=0\). The constant \(a\) can be determined by matching the inner and outer solutions for \(w\), \(\lim_{z\rightarrow\infty}w_{in}=\lim_{z\to 0}w_{out}\): \(-a=-1/2\), so \(a=1/2\), and \[w_{in}=-\frac{1}{2}\tanh\left(\sqrt{\frac{Ra}{Pr}}\ \frac{z}{4}\right). \tag{40}\] By now substituting the inner solution \(w_{in}\) into the equation for \(\theta\) we obtain \[-\frac{1}{2}\tanh\left(\sqrt{\frac{Ra}{Pr}}\ \frac{z}{4}\right)\,\theta_{z}= \frac{1}{\sqrt{PrRa}}\ \theta_{zz}, \tag{41}\] which can be rewritten as \[-\frac{1}{2}\sqrt{PrRa}\tanh\left(\sqrt{\frac{Ra}{Pr}}\ \frac{z}{4}\right)= \frac{d}{dz}\log\theta_{z} \tag{42}\] and integrated to \[\log\left(\cosh\left(\sqrt{\frac{Ra}{Pr}}\ \frac{z}{4}\right)\right)^{-2Pr}= \log\theta_{z}+\log K, \tag{43}\] where \(\log K\) is a constant; hence, removing the logarithms: \[\left(\cosh\left(\sqrt{\frac{Ra}{Pr}}\ \frac{z}{4}\right)\right)^{-2Pr}=K \theta_{z}. \tag{44}\] This can be integrated over the interval \([0,z]\): \[\theta(z)-\theta(0)=\frac{1}{K}\int_{0}^{z}\left(\cosh\left(\sqrt{\frac{Ra}{Pr }}\ \frac{z^{\prime}}{4}\right)\right)^{-2Pr}dz^{\prime}. \tag{45}\] We now apply the boundary condition \(\theta(0)=0\), while the value of \(K\) is obtained by matching the inner and outer solutions for \(\theta\), \(\lim_{z\rightarrow\infty}\theta_{in}=\lim_{z\to 0}\theta_{out}\): \(-\frac{1}{2}=\frac{1}{K}\lim_{z\rightarrow\infty}\int_{0}^{z}\left(\cosh \left(\sqrt{\frac{Ra}{Pr}}\ \frac{z^{\prime}}{4}\right)\right)^{-2Pr}dz^{\prime}\). It follows that \[\theta_{in}(z)=-\frac{1}{2}\frac{\int_{0}^{z}\left(\cosh\left(\sqrt{\frac{Ra}{ Pr}}\ \frac{z^{\prime}}{4}\right)\right)^{-2Pr}dz^{\prime}}{\int_{0}^{\infty}\left(\cosh \left(\sqrt{\frac{Ra}{Pr}}\ \frac{z^{\prime}}{4}\right)\right)^{-2Pr}dz^{\prime}}. \tag{46}\] 
The complete perturbative solution is obtained by summing the inner and the outer solutions and subtracting their overlap, \(w_{ma}(z)=w_{in}(z)+w_{out}(z)-w_{overlap}\), where \(w_{overlap}=\lim_{z\rightarrow\infty}w_{in}=\lim_{z\to 0}w_{out}=-\frac{1}{2}\). This leads to the final expression for the matched asymptotic solution: \[w_{ma}(z) = z-\frac{1}{2}\tanh\left(\sqrt{\frac{Ra}{Pr}}\frac{z}{4}\right) \tag{47}\] \[\theta_{ma}(z) = z-\frac{1}{2}\frac{\int_{0}^{z}\left(\cosh\left(\sqrt{\frac{Ra}{ Pr}}\ \frac{z^{\prime}}{4}\right)\right)^{-2Pr}dz^{\prime}}{\int_{0}^{\infty}\left(\cosh \left(\sqrt{\frac{Ra}{Pr}}\ \frac{z^{\prime}}{4}\right)\right)^{-2Pr}dz^{\prime}} \tag{48}\] We note that if \(Pr=1\) the solutions take the simpler form \[w_{ma}(z)=\theta_{ma}(z)=z-\frac{1}{2}\tanh\left(\sqrt{Ra}\frac{z}{4}\right). \tag{49}\] The solution (49) for \(Pr=1\) coincides with the shock solution derived by Saffman [24] for the randomly forced Burgers equation in the limit of \(t\rightarrow+\infty\) and large \(Re\) (when \(\sqrt{Ra}\) is replaced by \(Re\)); see also [25] for a recent discussion. Remark that the matched asymptotic solutions (47),(48) are valid only in the interval \(z\in[0,1/2]\), but they can be applied to the interval \(z\in[1/2,1]\) with the transformation \(z\to z-1\). This comes at the price of accepting a discontinuity at the midpoint \(z=1/2\), because e.g. for the velocity \(w_{ma}(z=1/2)=\varepsilon\neq 0\). However, such a discontinuity goes as \(2\varepsilon=1-\tanh\left(\sqrt{\frac{Ra}{Pr}}\frac{1}{8}\right)\), and therefore it vanishes asymptotically with \(Ra\). We also note that the perturbative solution is not an exact solution of the original system. However, it is asymptotically correct. This can be seen by plugging it into the stationary equations and considering the \(Ra\to\infty\) limit of the residuals (pointwise in \(z\)): \[\lim_{Ra\to\infty} w_{ma}w_{ma,z}-\frac{\sqrt{Pr}}{\sqrt{Ra}}\ w_{ma,zz}-\theta_{ma}=0\] \[\lim_{Ra\to\infty} w_{ma}\theta_{ma,z}-\frac{1}{\sqrt{PrRa}}\ \theta_{ma,zz}-w_{ma}=0.\] Another observation is in order about the shape of the solution. We remark here that by choosing the inner solution to occur at \(z=0\), we have implicitly selected the boundary-layer type of solution. A different, equally admissible, choice is to place the inner solution close to \(z=1/2\); this leads to a shock type solution. One can trivially go from the former type of solution to the latter by applying the third symmetry transformation (8) discussed in Sec. II.2. We observe that these two types of solutions are characterized by the same Reynolds number, as \(Re\sim\langle w^{2}\rangle^{1/2}\), while they differ in the Nusselt number, because \(Nu=1-\theta_{z}(0)\). The latter observation implies that while boundary layer (BL) type solutions are characterized by an increasing \(Nu\) as \(Ra\to\infty\), in the shock type solution the Nusselt number vanishes (leading to a perfectly insulating system for \(Ra\to\infty\)). As we mentioned before, although these two convective states are equally probable, we will limit our considerations to the case of the BL type solution, as it offers a better analogy with the dynamics of the realistic RB system which motivates this study. ### Upper bounds and asymptotic scalings for the Nusselt and Reynolds numbers The matched asymptotic solution allows one to promptly compute asymptotic expressions for all global quantities in the system; we focus here on the two main output observables of the BRB system, the Nusselt and the Reynolds number. 
#### III.4.1 Nusselt number We evaluate \(Nu=1-\theta_{ma,z}(0)\). In the general \(Pr\) case, using (48) we get \[Nu=\frac{1}{2\int_{0}^{\infty}\left(\cosh\left(\sqrt{\frac{Ra}{Pr}}\ \frac{z^{\prime}}{4}\right)\right)^{-2Pr}dz^{\prime}}. \tag{50}\] First of all, let us note that the \(Ra\) dependence can be factorized by introducing the auxiliary variable \(\tilde{z}=\sqrt{Ra}\ z^{\prime}\) in the integral, so that: \[Nu=\frac{\sqrt{Ra}}{2\int_{0}^{\infty}\left(\cosh\left(\frac{\tilde{z}}{4 \sqrt{Pr}}\right)\right)^{-2Pr}d\tilde{z}}. \tag{51}\] Because the denominator depends only on \(Pr\), it is clear that the scaling \(Nu\sim\sqrt{Ra}\) is to be expected asymptotically in \(Ra\) for any \(Pr\). We now focus on the \(Pr\) dependence. Using the property \(e^{x}/2\leq\cosh x\leq e^{x}\) for \(x>0\), one can write \[\frac{2^{2Pr+1}}{\sqrt{Pr}}\geq\int_{0}^{\infty}\left(\cosh\left(\frac{\tilde{ z}}{4\sqrt{Pr}}\right)\right)^{-2Pr}d\tilde{z}\geq\frac{2}{\sqrt{Pr}},\] and finally, by using (51): \[\frac{\sqrt{RaPr}}{4^{Pr+1}}\leq Nu\leq\frac{\sqrt{RaPr}}{4}.\] This bounding relation is relevant in the limit of small \(Pr\), because \(4^{Pr}\to 1\), leading to the scaling law \[Nu\simeq\frac{\sqrt{RaPr}}{4}\quad(Pr\ \text{small}). \tag{52}\] In the limit of large \(Pr\), since \(\left(\cosh\left(\frac{\tilde{z}}{4\sqrt{Pr}}\right)\right)^{-2Pr}\to e^{-( \frac{\tilde{z}}{4})^{2}}\), using (51) one obtains: \[Nu=\frac{\sqrt{Ra}}{4\sqrt{\pi}}\quad(Pr\to\infty), \tag{53}\] an expression which, remarkably, does not depend on the value of \(Pr\). It is worth observing that the saturation of the \(Nu\) number at large \(Pr\) values (53) is a feature also observed in the realistic RB system (see e.g. [26], Figure 5). Finally, for the intermediate case \(Pr=1\), taking advantage of the simpler form of the matched asymptotic solution (49), one can exactly derive: \[Nu=\frac{\sqrt{Ra}}{8}\quad(Pr=1). \tag{54}\] #### III.4.2 Reynolds number We now turn our attention to the Reynolds number. Asymptotically in \(Ra\) we observe that the \(ma\) velocity solution approaches the behaviour \(w_{ma}\simeq w_{out}=z-1/2\), while the boundary layers (BL) become thinner and thinner. This points to the existence of an upper bound for the velocity variance, \(\langle w^{2}\rangle\leq\int_{0}^{1}(z-1/2)^{2}\ dz=1/12\), which implies: \[Re=\sqrt{\frac{Ra}{Pr}}\langle w^{2}\rangle^{1/2}\leq\frac{1}{\sqrt{12}}\sqrt {\frac{Ra}{Pr}}. \tag{55}\] A lower bound for \(Re\) can be obtained by considering that the asymptotic outer solution is the one contributing most to the global velocity variance, so that: \[\langle w^{2}\rangle\ \gtrsim\ \int_{\delta}^{1-\delta}w_{out}^{2}\ dz\simeq\int_{ \delta}^{1-\delta}\left(z-\frac{1}{2}\right)^{2}dz=\frac{(1-2\delta)^{3}}{12},\] where \(\delta\) is an estimate of the thickness of the kinetic boundary layer, which we define as the height \(z\) where the argument of the hyperbolic function in \(w_{in}\) equals one, i.e., \(\delta=4/\sqrt{Ra/Pr}\). This implies \[Re\gtrsim\frac{1}{\sqrt{12}}\sqrt{\frac{Ra}{Pr}}\left(1-8\sqrt{\frac{Pr}{Ra}} \right)^{3/2}. \tag{56}\] We will show that the above predictions for the Nusselt and the Reynolds numbers approach quite well the results obtained from the numerical simulations of the BRB system in the asymptotic large-\(Ra\) limit (see below, Sec. IV.3). Scaling laws of the form \(Nu\sim\sqrt{Ra\ Pr}\) and \(Re\sim\sqrt{Ra/Pr}\) identify the so-called ultimate regime of thermal convection. 
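These limits are easy to verify numerically: the \(Pr\)-dependent integral in the denominator of eq. (51) can be evaluated by quadrature. The sketch below is our own check of the limits (52)-(54), not part of the original derivation; the overflow-safe rewriting of \(\cosh\) is the only implementation detail.

```python
import numpy as np
from scipy.integrate import quad

def denom(Pr):
    # D(Pr) = int_0^inf cosh(z/(4 sqrt(Pr)))^(-2 Pr) dz, so Nu = sqrt(Ra)/(2 D).
    # Written via log(cosh x) = logaddexp(x, -x) - log 2 to avoid overflow.
    x = lambda z: z / (4.0 * np.sqrt(Pr))
    f = lambda z: np.exp(-2.0 * Pr * (np.logaddexp(x(z), -x(z)) - np.log(2.0)))
    return quad(f, 0.0, np.inf)[0]

print(denom(1.0))                          # = 4, giving Nu = sqrt(Ra)/8, eq. (54)
print(denom(1e4), 2.0 * np.sqrt(np.pi))    # -> 2 sqrt(pi): Nu -> sqrt(Ra)/(4 sqrt(pi))
for Pr in [1e-4, 1e-2]:                    # sqrt(Pr) D(Pr)/2 -> 1: Nu -> sqrt(Ra Pr)/4
    print(Pr, np.sqrt(Pr) * denom(Pr) / 2.0)
```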
Physically, the ultimate regime can be interpreted as one where the microscopic diffusive material properties, i.e. the viscosity and the thermal diffusivity, play a negligible role in setting the intensity of the heat transport and of the kinetic energy in the system. The ultimate regime has been predicted to occur in RB systems in the asymptotic high-\(Ra\) limit [27] (see also [28]). However, its verification in RB experiments and simulations is still debated (see [29] for a recent concise account). By contrast, it has been clearly observed in the so-called homogeneous Rayleigh-Benard (HRB) model system, which is a three-dimensional vertically unbounded system, either triperiodic [14] or laterally confined [30], and which can be realized only in numerical simulations. It is important to note that the HRB model has no horizontal wall boundaries; as such it lacks the corresponding kinetic and thermal boundary layers, i.e. well-identified regions where dissipation plays the dominant role with respect to inertial transport terms. Experimental realizations of the \(Nu\sim\sqrt{Ra}\) regime have been achieved only in bulk-dominated convective systems, such as the vertical-channel setup [31; 32] or systems where the wall thermal heating is replaced by volumetric radiative heating [33; 34]. More recently, [35] numerically demonstrated the occurrence of the ultimate regime of convection also in an RB system with permeable walls. This system possesses thermal and kinetic BLs but does not enforce the cancellation of the vertical velocity on the top-bottom walls. In the light of this, the verification of eqs. (54) and (55) for the BRB system in the high-\(Ra\) regime would make it the first bounded system, with kinetic and thermal boundary layers, to display the ultimate regime. As we will numerically demonstrate in the next section, the BRB clearly possesses this feature. ## IV The BRB dynamics: numerical simulation analysis In order to test the predictions presented in the previous section and to get deeper insight into the BRB dynamics, we performed direct numerical simulations (DNS) of the system of equations. This is conveniently done by means of a Fourier pseudo-spectral method. For this study we use spatial resolutions ranging from \(N=2^{13}\) to \(2^{16}\) grid points and explore the two-dimensional parameter space \(Ra\)-\(Pr\). The simulations evolve in time from an initial state where \(w\) and \(\theta\) are null except for a tiny random uniform spatially-uncorrelated perturbation. The adopted numerical methods and protocols are described in detail in Appendix A. 
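To make the scheme concrete, below is a minimal sketch (our own illustration, not the production code) of a sine-series pseudo-spectral integrator for eqs. (6)-(7): diffusion is treated exactly through an integrating factor, the remaining terms by explicit Euler, and the projection onto sine modes enforces \(w=\theta=0\) at \(z=0,1\) together with the zero-mean condition. Resolution, time step and parameters are illustrative choices, far below the quoted DNS resolutions, and no dealiasing is applied (adequate for the smooth steady states reached here, not for production runs).

```python
import numpy as np

N, Ra, Pr = 256, 5.0e3, 1.0                 # illustrative parameters, Ra ~ 3 Ra_c
dt, nsteps = 1.0e-3, 200_000
nu, kap = np.sqrt(Pr / Ra), 1.0 / np.sqrt(Pr * Ra)
z = np.arange(N) / N
ik = 2j * np.pi * np.fft.rfftfreq(N, d=1.0 / N)  # spectral derivative symbol
Ew = np.exp(nu * (ik ** 2).real * dt)            # exact diffusion (integrating factor)
Et = np.exp(kap * (ik ** 2).real * dt)

ddz = lambda f: np.fft.irfft(ik * np.fft.rfft(f))
odd = lambda f: 0.5 * (f - np.roll(f[::-1], 1))  # keep only sine modes: oddness,
                                                 # zero mean, w = theta = 0 at z = 0, 1
rng = np.random.default_rng(1)
w, th = odd(1e-6 * rng.standard_normal(N)), odd(1e-6 * rng.standard_normal(N))
for _ in range(nsteps):
    w_new = np.fft.irfft(Ew * np.fft.rfft(w + dt * (-w * ddz(w) + th)))
    th_new = np.fft.irfft(Et * np.fft.rfft(th + dt * (-w * ddz(th) + w)))
    w, th = odd(w_new), odd(th_new)

# Depending on the random seed, the run lands on the boundary-layer branch
# (Nu > 1) or on the shock branch (Nu < 1).
print("Nu =", 1.0 - ddz(th)[0])                             # eq. (14)
print("Re =", np.sqrt(Ra / Pr) * np.sqrt(np.mean(w ** 2)))  # eq. (16)
```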
### Temperature and velocity profiles We numerically find that the system displays a steady solution at any \(Ra\), up to the value \(10^{10}\) simulated in this study, and at any \(Pr\) in the range \(\left[10^{-2},10^{2}\right]\). Furthermore, when \(Pr=1\) the \(w\) and \(\theta\) profiles always coincide. As illustrated in Figs. 3 and 4, when the Rayleigh number is slightly beyond the critical threshold, \(Ra=O(10^{3})\), the two-mode solution (31),(32) perfectly approximates the numerically computed temperature and velocity profiles. On the other hand, for \(Ra=10^{5}\) the agreement with the matched asymptotic solution (49) is already excellent. Figure 3: Graph of the numerical solution for the temperature fluctuation \(\theta(z)\) at \(Pr=1\) (identical to the velocity \(w(z)\)) for several \(Ra\) values, from \(Ra=1700=1.09Ra_{c}\) to \(Ra=10^{5}=64.16Ra_{c}\). We show the two-mode truncated solution for \(Ra=1700\) as well as the matched asymptotic one for \(Ra=10^{5}\). The solution approaches the bulk behaviour, \(z-1/2\), as the \(Ra\) number increases. Figure 4: Graph of the numerical solution for the temperature fluctuation \(\theta(z)\) and velocity \(w(z)\) at \(Ra=10^{5}\) and \(Pr=5\) (a) and \(Pr=1/5\) (b), and comparison with the matched asymptotic solutions for both fields. Note that for better visibility we show the profile in the bottom half of the domain. It is also evident that as \(Ra\to+\infty\) the profiles approach the linear shape \(z-1/2\), with vanishingly thin boundary layers. Note that the temperature \(T\) is the sum of the conductive profile and the fluctuation \(\theta\); this implies that \(T\) is essentially constant in the well-mixed bulk of the system and changes sharply only in the boundary layers, in close analogy with the mean vertical temperature profile in the turbulent high-\(Ra\) RB system. Figure 4 reports the corresponding numerical results for non-unit Prandtl numbers (\(Pr=5\) and \(1/5\)) at \(Ra=10^{5}\). One can appreciate that for \(Pr>1\) the thermal boundary layer is thinner than the kinetic one, and vice-versa for \(Pr<1\). Note also that the two cases are distinct, because \(Pr\to Pr^{-1}\) is not a symmetry of the system. Again we observe a good agreement with the matched asymptotic solutions (47),(48). In the asymptotic high-\(Pr\) limit one expects \(\theta\) to become almost everywhere equal to \(z-1/2\), while in the low-\(Pr\) limit the same happens for \(w\). ### Is the convective stationary regime stable? As mentioned above, we numerically find that the convective state of the system displays a stationary solution at any \(Ra\) and \(Pr\) values. This might seem surprising, as it differs from what happens in the RB system, where successive bifurcations occur as \(Ra\) is increased, leading first to temporally periodic solutions, then chaotic ones, and finally to progressively more turbulent states. In the BRB case, although we cannot rule out the existence of sub-critical bifurcations that might lead to such unsteady states, we can prove that the convective state displayed by the system is linearly stable. This point is addressed in the current section. In section III.1 we studied the stability of the conductive state. It was shown that, beyond \(Ra_{c}=k^{4}=(2\pi)^{4}\), the system bifurcates to a convective stationary state, hereafter denoted as \(w_{s},\theta_{s}\). Depending on the value of \(Ra\) this state can be approximated in three ways: 1. by the two-mode expansion, eqs. (19) and (31),(32), valid near the critical threshold \(Ra_{c}\); 2. by the matched asymptotic solution, eqs. (47),(48), valid in the limit of large Rayleigh numbers; 3. by an interpolation of the discretized numerical solution obtained from the DNS, which is valid in the whole \(Ra\) range. We can now study the stability of these states by applying linear stability analysis. We adopt the Galerkin method of weighted residuals [36]. In short, the idea is to choose trial functions that satisfy the boundary conditions exactly and to solve the differential equations in an averaged sense, by imposing the condition that the residuals are orthogonal to the trial functions. The result is a set of homogeneous equations whose non-trivial solution leads to an eigenvalue problem. 
Here we denote by \(\tilde{\sigma}(n)\) the series of eigenvalues, which are determined in terms of the \(Ra\) and \(Pr\) parameters. Similarly to the previous case, we write \[w=\sum_{n=1}^{N}\tilde{w}_{n}\sin(nkz)e^{\tilde{\sigma}(n)t}+w_{s},\quad \theta=\sum_{n=1}^{N}\tilde{\theta}_{n}\sin(nkz)e^{\tilde{\sigma}(n)t}+\theta _{s} \tag{57}\] with \(n\in\mathbb{N}\) and \(k=2\pi\). The number of modes \(N\) is chosen so that convergence is assured for the different parameter values. Figure 5a shows the evolution of the perturbation growth rate \(\tilde{\sigma}=\tilde{\sigma}(1)\) with \(Ra(>Ra_{c})\), for the three different approximated base solutions and \(Pr=1\). As expected, the two-mode solution (case i) agrees well with the numerical convective state (case iii) for moderate values of the Rayleigh number, while the matched asymptotic one does so for large values. The evolution of the largest growth rate \(\tilde{\sigma}\) for different values of Prandtl, obtained with the numerical base state, is depicted in Figure 5b. It can be seen that the growth rate is always negative, indicating that the first convective stationary state never loses its stability, even for large values of \(Ra\), in agreement with the observations from the DNS. ### Measure of Nusselt and Reynolds number asymptotic scalings From the numerically computed \(\theta(z)\) and \(w(z)\) profiles one can estimate the corresponding Nusselt and Reynolds numbers and analyze their functional dependencies on \(Ra\) and \(Pr\). The behaviour of \(Nu\) as a function of \(Ra\), for \(Pr=1\), is shown in Fig. 6(a). We observe that for \(Ra<Ra_{c}\), \(Nu\simeq 1\) (vertical dotted line); hence only conductive heat transfer occurs, as expected from the theoretical analysis. Beyond \(Ra_{c}\) convection starts and \(Nu\) progressively increases. Close to the onset, for \(Ra\lesssim 2Ra_{c}\sim 3\times 10^{3}\), the two-mode expression (33) (solid line) approaches the numerical results well. On the other hand, the matched asymptotic prediction agrees with the data from \(Ra\sim 10^{4}\) (dashed line) up to the highest explored \(Ra\) number (\(10^{10}\)). Indeed, as is better appreciated in the compensated plot of Fig. 6(b), at the largest \(Ra\) the normalized data approach the value \(1/8\) (dashed-dotted line), meaning that \(Nu\sim\sqrt{Ra}/8\), in excellent agreement with the \(ma\) prediction (54). We now look at the heat-flux dependence on \(Pr\). This is illustrated in Figure 7(a), where \(Nu(Pr)\) is traced for various Rayleigh numbers, \(Ra=10^{6},10^{7},10^{8}\). Also in this case we see an excellent agreement with the matched asymptotic solution (50) and with the small/large-\(Pr\) asymptotic behaviours (52),(53) derived in the previous section. We clearly observe a saturation of the Nusselt number for \(Pr\gg 1\). Furthermore, Figure 7(b) shows how all the data points can be collapsed onto a single curve by means of the rescaling \(Nu/\sqrt{Ra}\). This means that, in agreement with the matched asymptotic solution, the \(Ra\) and \(Pr\) dependencies can be factorized (see eq. (51)). Similar observations can be made for the dependence of the Reynolds number on \(Ra\) at \(Pr=1\) (Fig. 8(a,b)) and for \(Re(Pr)\) (Fig. 9(a)). At high \(Ra\) the agreement with the \(ma\) predictions is satisfactory in all cases. The \(Re(Ra,Pr)\) functional relation is also well described by the expression (56), which is simpler in form than the \(ma\) expression (see again Fig. 8(b) and Fig. 9(a)). 
In Figure 9(a), we observe that the compensated Reynolds number expression \(Re/\sqrt{Ra/Pr}\), which is equivalent to the mean root-mean-squared velocity \(\sqrt{\langle w^{2}\rangle}\), decreases for large \(Pr\). This is consistent with the fact that the kinematic boundary layer becomes thicker, hence the velocity reduces in intensity. The opposite is true for the fluctuations of the temperature field \(\sqrt{\langle\theta^{2}\rangle}\), which are reported in Fig. 9(b), and for which again the \(ma\) solution offers an excellent approximation. Overall the DNS confirms the realization of the ultimate regime for the Reynolds and Nusselt numbers with respect to both the Rayleigh and Prandtl dependence. To our knowledge the occurrence of this regime was previously assessed only for the 3D homogeneous-Rayleigh-Benard system in [14] and more recently in a wider \(Ra\) and \(Pr\) range in [37]. Despite its great degree of abstraction (1D, compressible flow), the BRB system represents a second convective model system where this flow regime takes place. Moreover, the saturation of \(Nu\) at large \(Pr\) is a feature of the RB model [8] which is present here, while it was instead missing in the HRB system.

Figure 5: Results of linear stability analysis of the convective steady state \(w_{s},\theta_{s}\). Perturbation growth rate \(\tilde{\sigma}\) as a function of \(Ra\): (a) for the case \(Pr=1\), computed from three different representations of the base state: the numerical one (valid in the full range of \(Ra\)), the two-mode Galerkin truncation base state (valid at small \(Ra\)) and the matched asymptotic one (valid for \(Ra\rightarrow+\infty\)); (b) for different Prandtl numbers \(Pr\in[0.1,10^{2}]\) using the numerical base state. The dotted vertical line is traced at \(Ra=Ra_{c}\).

## V Conclusions

The BRB dynamics shares remarkable similarities with realistic thermal convection in higher spatial dimensions, i.e., the Rayleigh-Benard system under Oberbeck-Boussinesq conditions. In this work we have shown that: i) BRB has a supercritical linear instability for the onset of convection which solely depends on the Rayleigh number and not on Prandtl (same as in RB), occurring at the critical value \(Ra_{c}\approx 1558\), which is of the same order as in the RB system; ii) the convective regime is spatially organized in distinct boundary-layer and bulk regions, although shock-like solutions are equally admitted; iii) the asymptotic high-\(Ra\) limit displays the ultimate Nusselt and Reynolds number scaling regime, \(Nu=\sqrt{RaPr}/4\) for \(Pr\ll 1\), \(Nu=\sqrt{Ra}/(4\sqrt{\pi})\) for \(Pr\gg 1\) and \(Re=\sqrt{Ra/Pr}/\sqrt{12}\), thus making BRB the simplest convective system with boundaries exhibiting the ultimate regime of convection. A major difference with realistic higher dimensional natural convection is the absence of turbulence. The BRB dynamics is stationary at all \(Ra\) numbers above the onset of convection for all \(Pr\) values, a feature that results from a nonlinear saturation mechanism. One may object that the odd symmetry in \(w\) and \(\theta\) makes the BRB de facto a periodic system, as the fields can be expressed in terms of sine series. For this reason in the future it would be interesting to explore the dynamics of this system in a higher (two- or three-) dimensional space. In this case the convective state might be unstable due to the increased number of degrees of freedom and to the increased system symmetries available in higher dimensions.

Figure 6: Nusselt-Rayleigh scaling at \(Pr=1\). (a): \(Nu\) vs. \(Ra\).
We display the numerical simulation results (red circle symbols) and comparisons with different analytical predictions: (vertical dotted line) the critical Rayleigh number for the onset of convection \(Ra_{c}\simeq 1558\); (solid line) the Nusselt number calculated from the two-mode steady solution (33); (dashed line) the \(Nu\) expression computed via the matched asymptotic solution, \(Nu=\sqrt{Ra}/8\). (b): The same data points for \(Nu\) and theoretical predictions, here compensated with respect to \(Ra^{1/2}\).

Figure 7: Nusselt-Prandtl scaling behaviour at various Rayleigh numbers \(Ra=10^{6},10^{7},10^{8}\). (a): Nusselt as a function of Prandtl. We display the numerical simulation results (symbols) and the corresponding matched asymptotic predictions (colored dotted-dashed lines). The behaviours \(\sqrt{RaPr}/4\) for small \(Pr\) (grey dashed line) and \(\sqrt{Ra/\pi}/4\) for large \(Pr\) (black dotted line) are also shown. (b): Compensated graph \(Nu/\sqrt{Ra}\) vs. \(Pr\). The value \(1/8\) corresponding to \(Pr=1\) is also traced. One can appreciate the collapse of the measurements onto a single curve.

Whether the missing physics in the present model, namely the violation of incompressibility, the absence of the pressure field or of the spatial lateral dimensions, might be related to the realization of the ultimate regime of heat transfer remains a question for further investigations. In this sense it would be interesting, and perhaps useful, to think up a similar minimalistic model capable of reproducing the classical scaling of thermal convection (\(Nu\sim Ra^{1/3}\)), in order to see which key mathematical terms and corresponding physical features are needed for the realization of this different convection regime.

Acknowledgments. The authors are grateful to Prof. M. N. Ouarzazi for useful comments.

Figure 8: Reynolds-Rayleigh scaling at \(Pr=1\). (a): \(Re\) vs. \(Ra\). We display the numerical simulation results (red circle symbols) and comparisons with different analytical predictions: (vertical dotted line) the critical Rayleigh number for the onset of convection \(Ra_{c}\simeq 1558\); (violet solid line) the Reynolds number calculated from the two-mode steady solution (35); (black solid line) the \(Re\) expression computed via the matched asymptotic solution; (green dotted-dashed line) the upper bound value \(Re=\sqrt{Ra/12}\), eq. (55). (b): The same data points for \(Re\) and theoretical predictions, here compensated with respect to \(Ra^{1/2}\). We also include the approximated expression (56) (blue dashed line), which has a much simpler form than the \(ma\) prediction and fits the measurements at large \(Ra\) equally well.

Figure 9: Reynolds number scaling versus \(Pr\). (a) Compensated Reynolds number \(Re/\sqrt{Ra/Pr}\), equivalent to the root-mean-square value of the velocity \(\sqrt{\langle w^{2}\rangle}\), as a function of \(Pr\in[10^{-2},10^{2}]\) for \(Ra=10^{6},10^{7},10^{8}\); (b) root-mean-square value of the temperature fluctuation \(\sqrt{\langle\theta^{2}\rangle}\). In both cases we display the numerical simulation results (symbols) and comparisons with the dependencies computed upon integration of the matched asymptotic solutions (47),(48) and the upper bound \(1/\sqrt{12}\) (green dashed-dotted) that can be computed from the bulk solution. In (a) the algebraic approximation (56) (dotted) is also reported.
## Appendix A Numerical simulation method

Equations (6)-(7) in Fourier space, denoted with (\(\widetilde{\phantom{-}}\)), read:

\[\tilde{w}_{t}+\widetilde{ww_{z}} = -k^{2}\sqrt{\frac{Pr}{Ra}}\ \widetilde{w}+\tilde{\theta} \tag{10}\]
\[\tilde{\theta}_{t}+\widetilde{w\theta_{z}} = -k^{2}\frac{1}{\sqrt{PrRa}}\widetilde{\theta}+\tilde{w}. \tag{11}\]

The dissipative terms can be analytically integrated, while the nonlinear terms can be evaluated via a pseudo-spectral algorithm. In our code we adopt the standard \(2/3\) dealiasing procedure for the computation of the nonlinear terms. Furthermore, we enforce the boundary conditions by imposing that both \(w\) and \(\theta\) are, in real space, zero-mean odd functions (i.e. we use a sine transform instead of a Fourier one). The temporal discretisation with time step \(\delta t\), performed by means of a second-order Adams-Bashforth algorithm, leads to:

\[\tilde{w}_{n+1} = \left(\tilde{w}_{n}+\frac{\delta t}{2}\left(3\left[-\widetilde{ww_{z}}+\tilde{\theta}\right]_{n}-\left[-\widetilde{ww_{z}}+\tilde{\theta}\right]_{n-1}e^{-k^{2}\sqrt{\frac{Pr}{Ra}}\delta t}\right)\right)e^{-k^{2}\sqrt{\frac{Pr}{Ra}}\delta t} \tag{12}\]
\[\tilde{\theta}_{n+1} = \left(\tilde{\theta}_{n}+\frac{\delta t}{2}\left(3\left[-\widetilde{w\theta_{z}}+\tilde{w}\right]_{n}-\left[-\widetilde{w\theta_{z}}+\tilde{w}\right]_{n-1}e^{-k^{2}\frac{1}{\sqrt{PrRa}}\delta t}\right)\right)e^{-k^{2}\frac{1}{\sqrt{PrRa}}\delta t}, \tag{13}\]

where the subscript indexes indicate the discretized value of time. The time step width is chosen as \(\delta t=10^{-2}/\sigma\), where \(\sigma\) is the growth rate of the most unstable mode, eq. (18), while the spatial resolution is increased until the resulting velocity and temperature profiles become independent of the number of discretization points \(N\). However, we note that the existence of sharp variations in the solution is at odds with our discretization method based on the sine-Fourier transform, which is known to be affected by the Gibbs phenomenon. Indeed, at high \(Ra\) some Gibbs-like spurious fluctuations are seen in correspondence of the bulk to boundary-layer transition. These fluctuations do not affect the scaling laws presented in this work. The simulations of the BRB system have also been validated against a second code based on a finite-difference discretization. The temperature and velocity fields in the simulations are initialized with a spatially uncorrelated pseudo-random noise. These perturbations lead in \(50\%\) of cases to the BL-type solution and in the remaining cases to the shock solution. Although these two states correspond to different global heat-transfer modes, the resulting velocity and temperature profiles can be transformed one into another by the swap transformation, eq. (8). In the present analysis, focused on the scaling of Nusselt in the BL-type state, we take advantage of the swap transformation to maximize the number of realizations of BL solutions. However, we note that it is possible to direct the instability towards one of the two possible convective states, e.g. by adding a sinusoidal modulation to the initial white noise. A modulation of the form \(-\sin(2\pi z)\) leads to the BL state, while its opposite, \(\sin(2\pi z)\), favours the transition towards the shock-type solution.
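For concreteness, here is a minimal Python sketch of the scheme (10)-(13) (our own illustration; the resolution, time step, number of steps and noise amplitude are arbitrary illustrative choices, and the authors' production code may differ in such details):

```python
import numpy as np

# Pseudo-spectral AB2 integration of eqs. (10)-(13) with integrating factors
# and 2/3-rule dealiasing, for fields expanded in sin(2*pi*n*z) on [0, 1).
Ra, Pr, M, dt, nsteps = 1.0e5, 1.0, 512, 1.0e-3, 200_000
nu, kappa = np.sqrt(Pr / Ra), 1.0 / np.sqrt(Pr * Ra)
kz = 2.0 * np.pi * np.fft.rfftfreq(M, d=1.0 / M)      # angular wavenumbers 2*pi*n
Ew, Et = np.exp(-kz**2 * nu * dt), np.exp(-kz**2 * kappa * dt)
mask = kz < (2.0 / 3.0) * kz.max()                    # standard 2/3 dealiasing

def nonlin(wh, th):
    # N_w = -(w w_z)^ + theta^ ,  N_t = -(w theta_z)^ + w^   (eqs. (10)-(11))
    w = np.fft.irfft(wh, n=M)
    wz = np.fft.irfft(1j * kz * wh, n=M)
    tz = np.fft.irfft(1j * kz * th, n=M)
    return (-np.fft.rfft(w * wz) * mask + th,
            -np.fft.rfft(w * tz) * mask + wh)

rng = np.random.default_rng(0)
wh = 1j * 1e-3 * rng.standard_normal(kz.size)         # purely imaginary coeffs:
th = 1j * 1e-3 * rng.standard_normal(kz.size)         # real odd fields
wh[[0, -1]] = th[[0, -1]] = 0.0                       # zero mean, real Nyquist
Nw0, Nt0 = nonlin(wh, th)
for _ in range(nsteps):                               # eqs. (12)-(13)
    Nw, Nt = nonlin(wh, th)
    wh = (wh + 0.5 * dt * (3.0 * Nw - Nw0 * Ew)) * Ew
    th = (th + 0.5 * dt * (3.0 * Nt - Nt0 * Et)) * Et
    Nw0, Nt0 = Nw, Nt
w, theta = np.fft.irfft(wh, n=M), np.fft.irfft(th, n=M)   # long-time profiles
```

Keeping the Fourier coefficients purely imaginary enforces the odd, zero-mean character of \(w\) and \(\theta\), which is our stand-in for the sine transform mentioned above.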
This aspect has an analogue in the RB system, where one can control the large-scale circulation direction of the convective cells by initializing the flow with a horizontally asymmetric perturbation of the temperature field. The codes used in this study are available at [https://github.com/ecalzavarini/BurgersRB](https://github.com/ecalzavarini/BurgersRB).
2310.05656
Mode-Shell correspondence, a unifying phase space theory in topological physics -- Part I: Chiral number of zero-modes
We propose a theory, that we call the \textit{mode-shell correspondence}, which relates the topological zero-modes localised in phase space to a \textit{shell} invariant defined on the surface forming a shell enclosing these zero-modes. We show that the mode-shell formalism provides a general framework unifying important results of topological physics, such as the bulk-edge correspondence, higher-order topological insulators, but also the Atiyah-Singer and the Callias index theories. In this paper, we discuss the already rich phenomenology of chiral symmetric Hamiltonians where the topological quantity is the chiral number of zero-dimensional zero-energy modes. We explain how, in a lot of cases, the shell-invariant has a semi-classical limit expressed as a generalised winding number on the shell, which makes it accessible to analytical computations.
Lucien Jezequel, Pierre Delplace
2023-10-09T12:11:05Z
http://arxiv.org/abs/2310.05656v2
# Mode-Shell correspondence, a unifying phase space theory in topological physics -- Part I: Chiral number of zero-modes

###### Abstract

We propose a theory, that we call the _mode-shell correspondence_, which relates the topological zero-modes localised in phase space to a _shell_ invariant defined on the surface forming a shell enclosing these zero-modes. We show that the mode-shell formalism provides a general framework unifying important results of topological physics, such as the bulk-edge correspondence, higher-order topological insulators, but also the Atiyah-Singer and the Callias index theories. In this paper, we discuss the already rich phenomenology of chiral symmetric Hamiltonians where the topological quantity is the chiral number of zero-dimensional zero-energy modes. We explain how, in a lot of cases, the shell-invariant has a semi-classical limit expressed as a generalised winding number on the shell, which makes it accessible to analytical computations.

###### Contents

* 1 Introduction
* 2 Overview of the mode-shell correspondence for chiral symmetric systems
* 2.1 Chiral symmetry and chiral index
* 2.2 Role and necessity of a smooth energy filter \(f(E)\)
* 2.3 Role and necessity of a phase space filter \(\hat{\theta}_{\Gamma}\)
* 2.4 Mode-shell correspondence
* 2.5 Winding numbers as semi-classical limits of the chiral invariant in phase space
* 3 Mode-shell correspondences in \(1D\) spaces
* 3.1 The bulk-edge correspondence for \(1D\) unbounded chiral lattices
* 3.2 A _low-high wavenumber correspondence_ for bounded continuous systems
* 3.3 A mixed \(x-k\) correspondence in phase space for unbounded continuous \(1D\) systems
* 3.4 Discrete approximations of continuous/unbounded topological models
* 4 Higher dimensional chiral mode-shell correspondences
* 4.1 Expression of the general chiral index
* 4.2 Chiral Weak-insulators and flat-band topology
* 4.3 Higher-order chiral insulators and additive tensor product construction
* 5 Conclusion
* A Smoothness/fast-decay Fourier duality
* B Wigner-Weyl transform
* B.1 The continuous version
* B.2 The discrete version
* C Short-range couplings in phase space
* D Beyond 1D lattices with balanced unit cells
* E Proof of the general correspondence for the chiral index
* F Higher order insulators with hard boundary: Partial semi-classical limit and numerical programs

## 1 Introduction

The bulk-edge (or bulk-boundary) correspondence is a fundamental concept in topological physics. Very elegantly, it relates the number of robust gapless boundary modes of a physical system to a topological invariant defined from the wavefunctions in the gapped bulk of the material. For that reason, such boundary states are said to be topological, or topologically protected. The bulk-edge correspondence was first explicitly introduced in the context of the quantum Hall effect [1] in order to clarify two interpretations of the quantised transverse conductance [2, 3]: a bulk interpretation, where the transverse conductivity of an infinite quantum Hall system was theoretically shown to be proportional to a topological index of the Bloch bands [4], and an edge interpretation where unidirectional edge states were expected to exist as bent Landau levels due to edge confinement [5] and were shown to carry electric charges without dissipation along the boundaries in multi-probe (experimental) geometries [6, 7]. Remarkably, the bulk-edge correspondence turned out to be the key concept that allowed the rise of topological physics, in particular beyond quantum matter.
As initiated with photonic crystals [8, 9, 10], it was realized that the validity of the bulk-edge correspondence was much less demanding than the quantisation of a response function of the physical system, such as a conductivity. It followed that many experimental platforms, quantum and classical, emerged with the aim of engineering, probing and manipulating robust boundary modes, for instance, for robust wave guiding [11] or quantum computing [12, 13]. Another success of the bulk-edge correspondence is its validity in any dimension. For instance, the observation, through ARPES measurements, of topologically protected surface states behaving as two-dimensional (\(2D\)) massless Dirac fermions, was a convincing experimental proof of the existence of \(3D\) topological insulators [14], while no equivalent of a quantised bulk conductivity was available as an alternative signature. Actually, this success of the bulk-boundary correspondence was twofold, because those \(3D\) topological insulators also belonged to a different symmetry class than that of the quantum Hall effect for which it was originally conceived. The correspondence still holds in other symmetry classes and arbitrary dimension [15, 16, 17, 18], but the nature of the boundary modes changes: massless Dirac fermions as \(2D\) surface states of \(3D\) topological insulators [14, 19], helical Kramers pairs as \(1D\) edge states of the \(2D\) quantum spin Hall effect [20, 21, 22], Majorana quasi-particles as \(0D\) boundary modes of \(1D\) topological superconducting wires [13], are among the most famous examples. Since then, the bulk-edge correspondence has been challenged several times, and has had to adapt itself to incorporate new phenomenologies that did not fit the standard paradigm. One important development of recent years was the discovery of higher-order topological insulators, which display a richer hierarchy of boundary modes that is not predicted by the usual bulk-boundary correspondence. Such \(3D\) materials can not only host surface states, but also hinge states and corner states [23, 24, 25]. Another recent fruitful direction is the study of topological modes in continuous media, mostly motivated by classical wave physics, such as geo- and astrophysical fluids [26, 27, 28, 29], active fluids [30], plasmas [27] but also photonics [31, 32]. Of interest was the apparent failure of the bulk-edge correspondence in the absence of a lattice, which stimulated several extension works [33, 34, 35, 36, 37, 38, 39, 40]. A last stimulating development of the bulk-edge correspondence concerns non-Hermitian systems. This field of research can somehow be traced back to the rise of topological states in periodically driven (Floquet) systems [41, 42, 43, 44], quantum walks [45, 46], and scattering networks [47, 48], where a new bulk-edge [42] correspondence was found to emerge from unitary operators, such as the evolution operator, rather than the usual Hermitian Hamiltonian. In that context, the standard bulk-edge correspondence was also found to fail, but a suitable generalization was worked out. More recently, non-Hermitian topology has come to designate classical or quantum systems whose dynamics displays topological properties dictated by non-unitary evolutions, whether because of different sources of gain or loss [49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71].
These numerous upheavals and challenges have continually refined the bulk-edge correspondence's contours in order to preserve this powerful concept. This evolution goes together with the difficulty of providing a general formalism encompassing such a rich phenomenology, while being also both based on sound mathematics and of practical convenience for most physicists. Many proofs of bulk-edge correspondences exist in the literature [38, 39, 40, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82]. In this paper, we propose such a unifying framework, that we call the _mode-shell correspondence_. It states the equality between two indices: the first one is a _mode_ index, counting the topological modes localised in a gapless region, while the second one is an invariant defined on a _shell_ surrounding this _mode_ in phase space \((x,k)\in\mathds{R}^{2n}\) where the Hamiltonian (or more
generically the wave operator) is assumed to be gapped. This general and basic equality is the first brick of the theory. While the two indices can in principle always be computed numerically, it is illusory to expect them to be computable analytically in general. One may however hope to have a simpler formulation in systems with some structure. This brings us to the second step of the theory, which consists in a semi-classical approximation [83] of the shell invariant \(\mathcal{I}_{\text{shell}}\). When such an approximation is possible, \(\mathcal{I}_{\text{shell}}\) can be expressed as an integral over the shell. In that limit, one recovers well-known expressions of so-called bulk topological invariants, such as winding numbers and Chern numbers, but also of less standard invariants which are nonetheless physically relevant. In order to keep the presentation intelligible and the article length reasonable, we shall focus in this paper on Hermitian wave operators with chiral symmetry. More specifically, we shall focus only on _zero-dimensional zero-energy modes_, simply dubbed zero-modes, which are usually associated with the edge states of \(1D\) systems in the AIII symmetry class of the tenfold way classification of topological insulators and superconductors [15]. Here, our aim is not to provide another derivation of the tenfold way, but rather to extract the many topological aspects of this single and apparently simple case, through the mode-shell correspondence. We provide an explicit derivation of the correspondence in that case, and illustrate it with several detailed examples. The outline of the paper is as follows. In section 2, we present a non-technical overview of the mode-shell correspondence. In particular, we introduce the mode invariant \(\mathcal{I}_{\text{mode}}\) for chiral symmetric systems, and show how it is related to the shell invariant \(\mathcal{I}_{\text{shell}}\). We introduce the notion of the symbol Hamiltonian \(H(x,k)\), which is a phase space representation of the operator Hamiltonian \(\hat{H}(x,\partial_{x})\) through a Wigner-Weyl transform. We discuss the semi-classical approximation that simplifies the shell invariant into a general winding number for arbitrary dimensional systems. Section 3 is dedicated specifically to \(1D\) systems. The mode-shell correspondence is then derived and illustrated on models for \(1D\) lattices, continuous bounded and continuous unbounded geometries. From there, we show that the mode-shell correspondence includes the bulk-edge and bulk-interface correspondences, where zero-energy modes are localized in position \(x\)-space at a boundary or an interface, but it also describes a dual situation where the topological modes are localized in wavenumber \(k\)-space, and even a hybrid situation with a confinement in \(x-k\) phase space. Section 4 is devoted to higher dimensional chiral symmetric systems hosting such zero-modes or other apparently different modes whose topological origin can eventually be reduced to that of the chiral zero-modes described in section 3. Those cases include (but are not restricted to) weak and higher order topological insulators. Other higher dimensional chiral symmetric topological systems are expected from the tenfold way classification [15].
Those are not discussed in the present paper, but will be treated in a follow-up paper, where the mode-shell correspondence will be applied to address higher dimensional topologically protected modes, such as \(1D\) spectral flows of quantum Hall systems, \(2D\) Dirac and \(3D\) Weyl fermions.

## 2 Overview of the mode-shell correspondence for chiral symmetric systems

The aim of this section is to introduce, in a non-technical way, the mode-shell correspondence by focusing on the zero-dimensional zero-energy modes of chiral symmetric systems.

### Chiral symmetry and chiral index

In this section, we introduce an index, denoted by \(\mathcal{I}_{\text{modes}}\), that counts the number of chiral zero-energy modes. This index can be used when the Hamiltonian \(\hat{H}\) has a chiral symmetry, that is, when there exists a unitary operator \(\hat{C}\) satisfying the anti-commutation relation \(\hat{H}\hat{C}+\hat{C}\hat{H}=0\). This symmetry typically appears when the system is bipartite, with two groups of degrees of freedom \(A\) and \(B\) such that the Hamiltonian only couples \(A\) and \(B\). These two groups can, for example, be two groups of atoms that interact in a lattice through a nearest-neighbor interaction (see Figure 1). Chiral symmetry is given by a diagonal operator in the \(A-B\) block basis, with coefficients \(+1\) on A and \(-1\) on B, that is \[\hat{C}=\begin{pmatrix}\mathds{1}_{A}&0\\ 0&-\mathds{1}_{B}\end{pmatrix} \tag{1}\] where \(\mathds{1}\) denotes the identity operator. We shall call such a basis the _chiral basis_ in the following. The _chirality_ of a mode \(\ket{\psi}\) then refers to the eigenvalues of the chiral operator; it is \(+1\) for the modes \(\ket{\psi}\) satisfying \(\hat{C}\ket{\psi}=\ket{\psi}\) and \(-1\) for those satisfying \(\hat{C}\ket{\psi}=-\ket{\psi}\). The chirality is a signature of the polarisation of the modes on the A or B degrees of freedom. In the chiral basis, the Hamiltonian is off-diagonal \[\hat{H}=\begin{pmatrix}0&\hat{h}^{\dagger}\\ \hat{h}&0\end{pmatrix} \tag{2}\] where the operators \(\hat{h}\) and \(\hat{h}^{\dagger}\) encode the couplings between \(A\) and \(B\) degrees of freedom. It follows from (1) and (2) that the identity \(\hat{C}\hat{H}+\hat{H}\hat{C}=0\) is automatically satisfied.

Figure 1: Examples of chiral lattices in a) \(1D\), b) and c) \(2D\) and d) \(3D\). The example c) illustrates a disordered (amorphous) lattice which is still chiral symmetric. The red/blue dots represent the sites of opposite chirality and the grey links represent the different couplings between those sites. All these lattices can host topological modes for well-chosen Hamiltonians.

A direct consequence of chiral symmetry is that every eigenstate \(\ket{\psi}\) of \(\hat{H}\) with a non-zero eigen-energy \(E\) comes with a chiral symmetric partner \(\hat{C}\ket{\psi}\) of opposite energy \(-E\). Special attention will be paid to zero-energy modes of chiral symmetric systems (usually simply dubbed _zero-modes_). The key point is that those zero-modes are topologically protected when they are exponentially localized in _regions_ outside of which the Hamiltonian is gapped. Those regions can for instance correspond to edges, interfaces or defects in real space, and the zero-modes then correspond to various kinds of boundary states. But we will see that those regions may also more generally designate a part of phase space (position and wavenumber space).
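As a quick numerical illustration (a sketch of our own, with arbitrary random couplings), one can check that any Hamiltonian which is off-diagonal in the chiral basis anti-commutes with \(\hat{C}\), and that its spectrum is symmetric under \(E\to-E\), as stated above:

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.standard_normal((4, 4))                 # arbitrary A -> B couplings
H = np.block([[np.zeros((4, 4)), h.T],          # off-diagonal form, eq. (2)
              [h, np.zeros((4, 4))]])
C = np.diag([1] * 4 + [-1] * 4)                 # chiral operator, eq. (1)
print(np.allclose(C @ H + H @ C, 0))            # True: C H + H C = 0
E = np.linalg.eigvalsh(H)
print(np.allclose(np.sort(E), np.sort(-E)))     # True: symmetric spectrum
```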
Here we are concerned with a _chiral index_ that counts algebraically the number of localized zero-energy modes in those regions, with a sign given by their chirality. In other words, the chiral index counts the total chirality of the zero-modes and can thus be formally introduced as \[\mathcal{I}_{\text{modes}}=\#\{\text{zero-modes of chirality }+1\}-\#\{\text{zero-modes of chirality }-1\}. \tag{3}\] An alternative (although equivalent) definition of the chiral index can be found by using the off-diagonal structure (2) of \(\hat{H}\) in the chiral basis, so that the zero-modes \(\ket{\psi}=(\ket{\psi}_{A},\ket{\psi}_{B})^{t}\) must satisfy \[\hat{H}\ket{\psi}=0=\begin{pmatrix}\hat{h}^{\dagger}\ket{\psi}_{B}\\ \hat{h}\ket{\psi}_{A}\end{pmatrix}. \tag{4}\] It follows that the zero-modes \(\ket{\psi}\) of positive chirality are in bijection with the \(\ket{\psi}_{A}\) in the kernel of \(\hat{h}\). The zero-modes of negative chirality are likewise in bijection with the \(\ket{\psi}_{B}\) in the kernel of \(\hat{h}^{\dagger}\). So one can rewrite the index in the commonly used form \[\mathcal{I}_{\text{modes}}=\dim\ker(\hat{h})-\dim\ker(\hat{h}^{\dagger}) \equiv\text{Ind}(\hat{h}) \tag{5}\] where \(\text{Ind}(\hat{h})\) is known as the analytical index of the operator \(\hat{h}\). It is worth stressing here that chiral symmetry is not restricted to lattices, and is also often encountered when dealing with classical waves in continuous media [84]. In that case, the structure introduced above often follows from the time-reversal symmetry of the system, which induces a bi-partition of the degrees of freedom between those which are odd with respect to inversion of time, like velocity fields, and those that are even, like pressure fields. The operator which is \(+1/-1\) on these even/odd degrees of freedom then appears as a chiral symmetry of the Hamiltonian (see footnote 1). In the rest of the paper, we will develop a theory that applies to both discrete and continuous media, quantum or classical, and we will keep the notation \(\hat{H}\) when referring to classical wave operators, and even abusively call it "Hamiltonian" for the sake of standardizing the notations. Footnote 1: Let us stress that in classical wave systems, time-reversal symmetry is just an orthogonal symmetric matrix which is \(1\) on even degrees of freedom and \(-1\) on the odd ones. This is different from quantum mechanics where the Schrödinger equation carries a complex structure and where time-reversal symmetry is encoded as a complex conjugation or in general as an anti-unitary operator. This may cause confusion when one wants to apply to classical waves the ten-fold classification [15, 16] which was constructed with the quantum version of the time-reversal symmetry in mind.

### Role and necessity of a smooth energy filter \(f(E)\)

Actually, the definitions (3) and (5) of the chiral index only work in idealized infinite systems and are difficult to manipulate or to approximate in finite-size systems. Indeed, in finite-size systems, the zero-modes of the different regions are always coupled with each other through exponentially small but non-zero overlaps. This coupling, in general, shifts the energy of the modes such that, strictly speaking, one never obtains exact zero-energy modes. To overcome this limitation, we introduce a formulation of the chiral index that is continuous in the coefficients of \(\hat{H}\), making it easier to manipulate in practical computations and simulations.
To do so, we first assume the system to be gapped far away from the zero-mode, and we denote by \(\Delta>0\) the half-amplitude of the gap \([-\Delta,\Delta]\). Then we define the operator \(\hat{H}_{F}=f(\hat{H})\) where we choose \(f\) to be an odd function taking the value \(-1\) for negative gapped energies \(E<-\Delta\) and \(+1\) for positive gapped energies \(E>\Delta\), with a smooth transition in the gapless region in between (see Figure 2). This means that \(\hat{H}_{F}\) is the operator with the same eigenmodes as \(\hat{H}\) but with rescaled energies \(E\to f(E)\). This operation flattens the gapped bands and hence \(\hat{H}_{F}\) can be seen as a flattened Hamiltonian. Then the chiral index can be formally defined as \[\mathcal{I}_{\text{modes}}=\text{Tr}\Big{(}\hat{C}(1-\hat{H}_{F}^{2})\Big{)}. \tag{6}\] To see why (6) is indeed a meaningful definition of the chiral index, we express it in a common diagonal basis \(\{|\psi_{\lambda}\rangle\}\) of \(\hat{C}\) and \(\hat{H}^{2}\) (which is always possible since \([\hat{H}^{2},\hat{C}]=0\)) and we get \[\mathcal{I}_{\text{modes}}=\sum_{\lambda}C_{\lambda}(1-f(E_{\lambda})^{2}) \,\langle\psi_{\lambda}|\psi_{\lambda}\rangle=\sum_{\lambda}C_{\lambda}(1-f(E_ {\lambda})^{2}) \tag{7}\] with \(C_{\lambda}\) and \(E_{\lambda}^{2}\) the eigenvalues of \(\hat{C}\) and \(\hat{H}^{2}\). The term \(1-f(E_{\lambda})^{2}\) is identically zero for all the modes that do not lie in the gap \([-\Delta,\Delta]\).

Figure 2: a) Projected energy spectrum of a typical topological Hamiltonian \(\hat{H}\). Shaded stripes denote the gapped bulk bands, circles denote the isolated eigenvalues of modes localised at the edge and red circles denote the topological zero-modes. b) Dispersion relation in the bulk where edge modes cannot be seen. c) Sketch of a possible smooth flattening function. d-e) Spectrum of the operator \(\hat{H}_{F}=f(\hat{H})\) where the bulk bands are flattened. f) Spectrum of \(1-\hat{H}_{F}^{2}\), where only a finite number of non-zero excitations remains: the bulk excitations vanish and the original zero-modes are dominant.

We are thus left with the zero-modes we would like to keep (full circles in figure 2), and _a priori_ other gapless but non-zero modes (hollow circles in figure 2). As a matter of fact, the latter come in pairs of opposite chirality, due to chiral symmetry: if \(\ket{\psi}\) is an eigenmode of both \(\hat{H}^{2}\) and \(\hat{C}\) with eigenvalues \(E_{\lambda}^{2}\) and \(C_{\lambda}\), then \(\hat{H}\ket{\psi}\) is also an eigenmode with eigenvalues \(E_{\lambda}^{2}\) and \(-C_{\lambda}\), except when \(\hat{H}\ket{\psi}=0\). They therefore cancel out two by two in the sum, thanks to the introduction of the chiral operator \(\hat{C}\) in the definition of \(\mathcal{I}_{\text{modes}}\). The only contributions that remain are those of the zero-energy modes \(\hat{H}\ket{\psi_{\lambda}}=0\), for which this construction of a symmetric partner of opposite chirality fails. So we end up with \(\mathcal{I}_{\text{modes}}=\sum_{\lambda,E_{\lambda}=0}C_{\lambda}\), which is exactly the chirality of the zero-modes. The two equivalent expressions (3) and (6) of the chiral index \(\mathcal{I}_{\text{modes}}\) show that the number of zero-modes of the Hamiltonian \(\hat{H}\) is a topological quantity: (3) shows that \(\mathcal{I}_{\text{modes}}\) is an integer, while (6) shows that it depends continuously on the Hamiltonian.
\(\mathcal{I}_{\text{modes}}\) is therefore an integer that is stable under smooth variations of the coefficients of \(\hat{H}\), hence its topological nature. However, as they are written, the different expressions of \(\mathcal{I}_{\text{modes}}\) count the _total_ number of zero-modes of \(\hat{H}\). This is an issue when dealing with finite-size systems, or with numerical simulations, that involve more than one gapless region (e.g. two edges, multiple corners...). In those cases, one is more interested in the chirality of the modes localised in specific sub-regions of phase space (just counting the zero-modes near an edge/corner/...) than in the total chirality of the zero-modes of the entire system, which is also often trivial. One therefore needs a cut-off in phase space to obtain this _local_ topological information, a process we now aim at describing.

### Role and necessity of a phase space filter \(\hat{\theta}_{\Gamma}\)

In order to capture the chiral zero-modes in specific regions of phase space, one needs to add, to the definition of \(\mathcal{I}_{\text{modes}}\), a function \(\hat{\theta}_{\Gamma}\) that selects a zero-mode in phase space (sketched in red in figure 3). Such a _cut-off operator_ is close to the identity over a gapless target region that encloses the zero-mode, over a typical distance \(\Gamma\) (in green in figure 3), and then drops to zero away from it, where the Hamiltonian is gapped. We shall later refer to the domain where \(\hat{\theta}_{\Gamma}\) drops as the _shell_. In this way, the selected zero-modes are localised within the shell, while the other zero-modes remain outside (in blue in figure 3). A _local_ version of the chiral index thus reads \[\mathcal{I}_{\text{modes}}=\text{Tr}\Big{(}\hat{C}(1-\hat{H}_{F}^{2})\hat{ \theta}_{\Gamma}\Big{)} \tag{8}\] which, by construction, counts the chirality of the zero-modes in a selected region of phase space. More formally, this phase space representation of zero-modes is typically made possible thanks to a Wigner transform, which we introduce in section 2.5. The red and blue gapless regions in figure 3 are thus sketches of the amplitude of the Wigner function of the zero-modes. Importantly, the quantisation of the index does not strongly depend on the shape of the shell, nor on how the cut-off operator is explicitly defined, as long as it is close to the identity in the target gapless region (where the Wigner representation of the zero-mode is located) within the shell and close to zero in the other gapless regions, outside the shell. As we will see below, the target region, defined in phase space, is in correspondence with the localisation of the zero-modes, and many situations can be covered by the same local chiral index (8), which makes it quite general and powerful. For instance, if we are interested in finding a zero-mode localized in real space, at an edge of a \(1D\) chain, positioned around \(x\sim 0\), the cut-off operator can be chosen as \(\hat{\theta}_{\Gamma}=e^{-x^{2}/\Gamma^{2}}\) with a cut-off parameter \(\Gamma\). The corresponding target region in phase space is therefore only constrained in the \(x\) direction and not in wavenumber \(k\). This is the example illustrated in figure 3 a) and discussed in detail in section 3.1. Our formalism allows us to tackle the dual situation of the previous case on the same footing, where the zero-modes are now localized in wavenumber, for instance in the slow-varying-modes region of a continuous Hamiltonian.
A possible cut-off operator then reads \(\hat{\theta}_{\Gamma}=e^{\Delta/\Gamma^{2}}\approx e^{-k^{2}/\Gamma^{2}}\), where \(\Delta\) is the Laplacian operator, and the associated target region in phase space is represented in figure 3 b). This formalism is then similar to the so-called heat kernel approach used in the context of the Atiyah-Singer index theorem. A model displaying such zero-modes is addressed in section 3.2. More generally, the zero-modes can also be localised in a mixed way in position/wavenumber. In that case, the cut-off operator can be chosen as \(\hat{\theta}_{\Gamma}=e^{(-x^{2}+\partial_{x}^{2})/\Gamma^{2}}\approx e^{-(x^{ 2}+k^{2})/\Gamma^{2}}\) (see footnote 2). The shell enclosing the target region in phase space is then a circle (figure 3 c) and a corresponding example is shown in section 3.3. Footnote 2: We choose to work with adimensioned models, hence the adimensioned expression in \(x\) and \(k\). Finally, this approach can be generalized to higher dimensions, to address zero-modes in higher-order topological insulators with chiral symmetry. A simple example is that of corner states of a two-dimensional system. In that case, the cut-off operator can be chosen as \(\hat{\theta}_{\Gamma}=e^{-(x^{2}+y^{2})/\Gamma^{2}}\), and the target region in phase space is shown in figure 3 d). This higher dimensional case is discussed among others in section 4.

Figure 3: Sketches of the Wigner representation of zero-modes (in red/blue depending on their positive/negative chirality) embedded in different phase spaces, with examples where the zero-modes are localised a) in position but not in wavenumber, b) in wavenumber but not in position, c) in position and in wavenumber, d) in position but in a \(2D\) space. The cut-off function \(\theta\) selects a region that encloses one zero-mode by taking the value \(\theta\sim 1\) (green), while dismissing the other gapless regions (\(\theta\sim 0\) in white). The transition region where the cut-off goes from one to zero is called the shell (dark green line).

In finite systems, the necessary introduction of a cut-off operator alters the quantisation of the chiral index, which is no longer exactly an integer. However, in large systems, when the gapless regions we want to select are far away from each other in phase space and \(\Gamma\) is large, the correction to an integer value decays exponentially fast with the sizes of the system [85], and it is reasonable to still talk about a quantised index with a satisfying approximation. Moreover, in the limit case of infinite systems, the cut-off parameter \(\Gamma\) can be put to infinity, so that \(\hat{\theta}_{\Gamma}\) is replaced by the identity and we recover the previous exact index (6). In all those cases, the notion of "large system" should be understood as large compared to the typical coupling distances of the Hamiltonian \(\hat{H}\) in phase space. So, if the cut-off operator acts in position space, we need \(\hat{H}\) to be short-range in position space, and the system's size must be large compared to the typical coupling distance in position. The unbounded limit \(L\to\infty\) in figure 4 satisfies this condition. If the cut-off operator acts in wavenumber space, we need \(\hat{H}\) to be short-range in wavenumber space and the lattice wavenumber \(k_{0}=2\pi/a\) to be large compared to the typical coupling distance in wavenumber. The continuous limit \(a\to 0\) in figure 4 satisfies this condition.
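As an illustration of the local index (8), here is a minimal Python sketch of our own (anticipating the dimerised SSH chain of section 3.1; the smooth filter \(f(E)=\tanh(4E/\Delta)\) and all parameter values are our arbitrary choices). On a finite chain the total chirality of the zero-modes vanishes, while a Gaussian cut-off centred on the left edge isolates a local index \(\approx+1\):

```python
import numpy as np

L, t, tp, Gamma = 60, 0.4, 1.0, 10.0             # |t| < |t'|: topological phase
H = np.zeros((2 * L, 2 * L))
for n in range(L):
    H[2 * n + 1, 2 * n] = t                      # t  |B,n><A,n|
    if n > 0:
        H[2 * n - 1, 2 * n] = tp                 # t' |B,n-1><A,n|
H += H.T                                         # + h.c.
C = np.diag([1, -1] * L)                         # chirality +1 on A, -1 on B
x = np.repeat(np.arange(L), 2)                   # unit-cell position of each site
theta = np.diag(np.exp(-x**2 / Gamma**2))        # cut-off centred on the left edge

E, V = np.linalg.eigh(H)
f = np.tanh(4.0 * E / abs(t - tp))               # smooth odd filter, ~ +-1 in the bands
HF2 = V @ np.diag(f**2) @ V.T                    # flattened Hamiltonian, squared
print(np.trace(C @ (np.eye(2 * L) - HF2)))           # global index (6):  ~ 0
print(np.trace(C @ (np.eye(2 * L) - HF2) @ theta))   # local index  (8):  ~ +1
```

The small deviation from exact integers illustrates the exponentially small finite-size corrections mentioned above.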
Another reason why we choose \(f\) to be a smooth function of energy is that smoothness is required to extend the short-range behaviour in phase space of \(\hat{H}\) to \(\hat{H}_{F}\) (see Appendix C).

### Mode-shell correspondence

The chiral index we have introduced requires the use of a cut-off operator that embeds a gapless target region in phase space where zero-modes live. The boundary of this embedding, namely the shell, plays a crucial role in the theory, which we now want to emphasize. This is due to the fact that, up to a rearrangement of its terms, the index \(\mathcal{I}_{\text{modes}}\) can be shown to be equal to an invariant \(\mathcal{I}_{\text{shell}}\) that essentially depends on the properties of \(\hat{H}\) _on_ the shell, the region where the cut-off drops from the identity to zero. This index reads \[\mathcal{I}_{\text{shell}}=\operatorname{Tr}(\hat{C}\hat{\theta}_{\Gamma})+ \frac{1}{2}\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_ {F}]). \tag{9}\] The first term \(\operatorname{Tr}(\hat{C}\hat{\theta}_{\Gamma})\) does not depend on \(\hat{H}\). It is the polarisation in the number of degrees of freedom of positive/negative chirality, weighted by \(\theta_{\Gamma}(x)\). For example, in a lattice, this term is just the polarisation in the number of sites of positive/negative chirality (again weighted by \(\theta_{\Gamma}(x)\)). In this paper we will mostly deal with situations where the density of states of positive/negative chirality is _balanced_ and where this term therefore vanishes. If these densities of states do not compensate, then this term is necessary to recover the mode-shell correspondence [86, 87, 88]. The second term, \(\frac{1}{2}\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_ {F}])\), does depend on \(\hat{H}\). However, the trace contains the commutator \([\hat{H}_{F},\hat{\theta}_{\Gamma}]\), which vanishes both _inside_ the shell, where \(\hat{\theta}_{\Gamma}\approx\mathds{1}\) (any operator commutes with the identity), and _away_ from the shell, since there \(\hat{\theta}_{\Gamma}\approx 0\). Therefore, the non-negligible contributions to the trace only come from the shell, which is the region where \(\hat{\theta}_{\Gamma}\) goes from the identity to zero. This property explains the appellation of the index \(\mathcal{I}_{\text{shell}}\).

Figure 4: Table summarising the different categories of systems according to two infinite limits: a large length limit, where the length \(L\) of the system is considered as infinitely large, and a small length limit where the characteristic distance \(a\) between two sites becomes infinitely small and where therefore the set of possible wavenumbers becomes unbounded.

The fact that \(\mathcal{I}_{\text{modes}}\) can be re-expressed into \(\mathcal{I}_{\text{shell}}\) is proved in a few lines of algebra. In fact it suffices to use the anti-commutation relation with the chirality operator, \(\hat{C}\hat{H}_{F}=\hat{C}f(\hat{H})=f(-\hat{H})\hat{C}=-\hat{H}_{F}\hat{C}\) (remember that \(f\) is an odd function), as well as the cyclicity of the trace to rearrange the terms in the following order (see footnote 3), and we get Footnote 3: This derivation can be performed in infinite systems since the cut-off operator makes the trace finite.
\[\mathcal{I}_{\text{modes}} =\text{Tr}(\hat{C}\hat{\theta}_{\Gamma}(1-\hat{H}_{F}^{2})) \tag{10}\] \[=\text{Tr}(\hat{C}\hat{\theta}_{\Gamma})-\text{Tr}(\hat{C}\hat{ \theta}_{\Gamma}\hat{H}_{F}^{2})\] \[=\text{Tr}(\hat{C}\hat{\theta}_{\Gamma})-\frac{1}{2}\left(\text{ Tr}(\hat{H}_{F}^{2}\hat{C}\hat{\theta}_{\Gamma})+\text{Tr}(\hat{H}_{F}\hat{C} \hat{\theta}_{\Gamma}\hat{H}_{F})\right)\] \[=\text{Tr}\Big{(}\hat{C}\hat{\theta}_{\Gamma}\Big{)}+\frac{1}{2} \,\text{Tr}\Big{(}\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_{F}]\Big{)}\] which shows the equality \[\mathcal{I}_{\text{modes}}=\mathcal{I}_{\text{shell}} \tag{11}\] that we call the _mode-shell correspondence_, as it relates the number of chiral zero-modes to a property on the shell surrounding those modes in phase space. Because of this equality, we will use the notation \(\mathcal{I}\) to denote both indices. In general, the index \(\mathcal{I}\) can be computed numerically and is well suited to describe the topology of inhomogeneous or disordered systems, since its definition does not rely on any periodicity assumption. However the shell formulation of the invariant is particularly amenable to semi-classical approximations [83] in many systems, which simplifies its computation and provides another topological meaning to the index.

### Winding numbers as semi-classical limits of the chiral invariant in phase space

The index formulation we developed is made at the operator level, whereas semi-classical approximations are usually performed in phase space \((x,k)\) (\(\hbar=1\) in the quantum situation). The connection between, on one hand, operators such as the cut-off operator \(\hat{\theta}_{\Gamma}\) or the Hamiltonian \(\hat{H}\), and, on the other hand, functions in phase space, is made possible by Wigner-Weyl calculus. In particular, we will use the Wigner transform of the Hamiltonian operator, defined as (see Appendix B) \[H(x,k)=\int_{\mathds{R}}dx^{\prime}\left\langle x+\frac{x^{\prime}}{2}\right| \hat{H}\left|x-\frac{x^{\prime}}{2}\right\rangle e^{-ikx^{\prime}} \tag{12}\] with \(k\in\mathds{R}\) when the Hamiltonian \(\hat{H}=H(x,\partial_{x})\) is a differential operator that describes a continuous model, and as \[H(n,k)=\sum_{n^{\prime}}\left\langle n^{\prime}\right|\hat{H}\left|n\right\rangle e ^{-ik(n^{\prime}-n)} \tag{13}\] with periodic parameter \(k\in[0,2\pi]\) to address the discrete case, where the lattice sites (or unit cells) are labelled by an integer \(n\). Those expressions generalize straightforwardly to higher dimensions. In both cases, we will refer to \(H=H(x,k)\) as the _symbol_ of \(\hat{H}\). It is a reduced operator acting only on the internal degrees of freedom, but parametrized in phase space. Similarly, zero-modes can be represented in phase space by a Wigner transform of their density matrix, leading schematically to the red and blue spots in figure 3. The mapping of the Hamiltonian \(\hat{H}\) into a symbol Hamiltonian \(H(x,k)\) allows us to express the chiral index as a generalized winding number, given by an integral over the \(2D-1\)-dimensional shell in phase space \[\mathcal{I}\underset{\text{S-C lim}}{=}\frac{-2(D)!}{(2D)!(-2i\pi)^{D}}\int_{ \text{shell}}\text{Tr}^{\text{int}}(U^{\dagger}dU)^{2D-1}\equiv\mathcal{W}_{2D-1} \tag{14}\] where \(D\) is the dimension of the system, the trace \(\text{Tr}^{\text{int}}\) only acts on the internal degrees of freedom, and \(U\) is the off-diagonal component of \(H_{F}=\left(\begin{smallmatrix}0&U^{\dagger}\\ U&0\end{smallmatrix}\right)\).
Since, on the shell, the Hamiltonian has no gapless mode, the symbol of the flattened Hamiltonian has energies \(E_{F}=\pm 1\) and can thus be written as \(H_{F}(x,k)=H(x,k)/\sqrt{H(x,k)^{2}}\), and \(U(x,k)\) can be shown to be a unitary operator. We provide an explicit demonstration of the formula (14) in appendix E. The formula (14) can be seen as a generalization of the bulk-edge correspondence. When dealing with bounded one-dimensional (\(1D\)) lattices with open boundary conditions, \(\mathcal{I}\) can be seen as an _edge index_ that counts the chirality of the zero-modes at one boundary, while \(\mathcal{W}_{1}\) is the usual _bulk_ winding number expressed as an integral over the \(1D\) Brillouin zone in \(k\)-space. However, the formula (14) describes a much richer class of chiral systems that goes well beyond \(1D\) lattices. Indeed, the system of interest can be of higher dimension, discrete or continuous, bounded or unbounded, and the zero-modes characterized by (14) can be localized in position (such as edge states), but also in wavenumber space. The surface of integration, i.e. the shell, is a surface of dimension \(2D-1\) that encloses the chiral zero-mode in the phase space of dimension \(2D\). The shell is therefore always a surface of odd dimension, which guarantees that the integral (14) is not trivially zero for any \(D\)-dimensional chiral symmetric system (see footnote 4). This contrasts with the celebrated classification of topological insulators, where the chiral symmetric class (AIII) is known to allow topologically non-trivial phases in odd dimensions only [15]. The fact that our formula (14) predicts the existence of chiral zero-modes also in even dimension \(D\) is because the shell lives in phase space, and is therefore not restricted to the \(k\)-space Brillouin zone.

Figure 5: Summary diagram of the mode-shell correspondence. We use a smoothly flattened version \(\hat{H}_{F}\) of the Hamiltonian \(\hat{H}\) to define two indices: \(I_{\text{modes}}\) counting the chiral number of zero-modes localised in a target gapless region in phase space, a gapless property of \(\hat{H}\), and \(I_{\text{shell}}\) measuring gapped properties on the boundary enclosing the gapless region (namely the shell) and which reduces, in a semi-classical limit, to a (higher) winding number. Both indices are equal due to the mode-shell equality (11). The prefactor \(C_{D}\) of \(W_{2D-1}\) is given in (14).

The formula (14) also includes other previously existing results in topological physics that differ from the standard bulk-edge correspondence. It includes for example the formula derived by Atiyah and Singer in the 60s [89] for continuous operators when the position manifold is a torus (see footnote 5) and where the shell is therefore the unit sphere in wavenumber space tensored with the manifold in position space, \((x,p)\in\mathds{T}^{d}\times\mathds{S}^{d-1}\). Our formula also includes the formula proposed by Teo and Kane to classify topological point-defect zero-modes [90]. In that case, the shell consists of the sphere enclosing the zero-modes in position space tensored with the Brillouin zone, \((x,p)\in\mathds{S}^{d-1}\times\mathds{T}^{d}\). Finally it also includes the Callias index formula [34, 76] (also derived by Hörmander [77], generalising a result by Fedosov [78]) which deals with defects localised in position space, as in Teo and Kane's work, but for continuous operators, and where the shell is then the phase space sphere \((x,p)\in S^{2d-1}\) (localised in position and wavenumber).
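As a consistency check (our own unpacking of the prefactor, not an additional result of the paper), setting \(D=1\) in (14) gives back the familiar winding number that is used throughout section 3:

\[\mathcal{W}_{1}=\frac{-2\cdot 1!}{2!\,(-2i\pi)}\int_{\text{shell}}\operatorname{Tr}^{\text{int}}\!\left(U^{\dagger}dU\right)=\frac{1}{2i\pi}\int_{\text{shell}}\operatorname{Tr}^{\text{int}}\!\left(U^{\dagger}\partial_{k}U\right)dk.\]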
Our general formula (14) thus unifies all these results. The generality of the formula makes it more flexible and covers for example cases with both continuous and discrete dimensions, which would not fit into any of the previously cited theories. Footnote 5: This restriction comes from simplifying hypotheses in the semi-classical expansion. Manifolds with curvature lead to more complex expressions which would require a separate paper. Note that an equivalent expression of the winding number in (14) can be obtained by homotopy in terms of \(h(x,p)\), the symbol of \(\hat{h}\), as \[\mathcal{W}_{2D-1}=\frac{-2(D)!}{(2D)!(-2i\pi)^{D}}\int_{\rm shell}\mathrm{Tr} ^{\rm int}(h^{-1}dh)^{2D-1}. \tag{15}\] This expression could be of practical interest since it bypasses the computation of \(H_{F}\). Finally, we should note that the formula (14) is obtained in a certain _semi-classical limit_, hence the subscript "S-C \(\lim\)" (we shall just write \(lim\) in the rest of the paper). This limit is reached when the variations of the symbol in position \(x\) or in wavenumber \(k\) become small compared to the gap of the symbol. This hypothesis can be stated as follows (see appendix (B) for justification):

Semi-classical hypothesis: _For a given symbol \(H(x,k)\), its characteristic variation distances in position \(d_{x}\) and wavenumber \(d_{k}\) spaces can be estimated through the formula_ \[1/d_{x/k}\sim\|\partial_{x/k}H(x,k)\|/\Delta(x,k) \tag{16}\] _where \(\Delta(x,k)\) is the gap of the symbol \(H(x,k)\). The semi-classical limit is reached asymptotically near the shell when \(\epsilon\equiv 1/(d_{x}d_{k})\ll 1\)._

For example, in \(1D\) lattices, the symbol of the Hamiltonian becomes completely independent of position in the bulk, so that \(1/d_{x}\to 0\). In most of the examples treated here, we will have \(\epsilon=1/(d_{x}d_{k})=O(1/\Gamma)\). In other words, (at least) one of the characteristic distances of variation becomes small for points \((x,k)\) in phase space which are close to the shell. Hence, the semi-classical approximation becomes exact in the asymptotic limit \(\Gamma\to+\infty\). This semi-classical approximation makes the winding number \(\mathcal{W}_{2D-1}\) in general simpler to calculate than the original chiral index \(\mathcal{I}\), making the formula (14) of practical interest. All those results are recapped in figure 5.

## 3 Mode-shell correspondences in \(1D\) spaces

### 3.1 The bulk-edge correspondence for \(1D\) unbounded chiral lattices

#### General results

In this section, we discuss the particular case of Hamiltonians on \(1D\) lattices with edges and show how the usual winding number is obtained as a semi-classical approximation of the shell index and therefore counts the number of chiral zero-energy edge states: a result known as the _bulk-edge correspondence_, which is well established for \(1D\) lattices, both physically and mathematically [86, 87, 91, 92, 93, 94, 95, 96]. This derivation will serve as a pedagogical example to introduce a few key tools and concepts in more detail. We shall also treat in parallel the case of _interface_ zero-modes, in contrast with _edge_ modes. We will therefore assume that the gapless target region is either an edge, or an interface, located at \(x\sim 0\), so that the cut-off operator can be chosen as \(\hat{\theta}_{\Gamma}=e^{-x^{2}/\Gamma^{2}}\). The chirality of zero-modes localised in that region is given by the shell index (9) with that specific cut-off operator.
Let us now show how, under some assumptions, a semi-classical approximation of this index is made possible and yields a more familiar and simpler expression. In the following, \(n\in\mathcal{L}\) is the unit cell index of the lattice; it runs over \(\mathcal{L}=\mathds{N}\) if we deal with a lattice with an edge and over \(\mathcal{L}=\mathds{Z}\) in the case of an interface. We also introduce \(\alpha\) to label the (finite) internal degrees of freedom (e.g. orbital, spin...). We assume the chiral operator \(\hat{C}\) to be diagonal in the \((n,\alpha)\) basis and independent of the unit cell, and denote by \(C_{\alpha}\) the chirality of the internal degrees of freedom. We then use the discrete Weyl transform (13), where \(\langle n^{\prime}|\,\hat{H}\,|n\rangle\) is the matrix containing the couplings between the internal degrees of freedom of the unit cells \(n\) and \(n^{\prime}\). The symbol Hamiltonian \(H(n,k)\) we obtain thus acts only on the internal degrees of freedom, with parameters \((n,k)\in\mathcal{L}\times S^{1}\) living on the discrete phase space. In some sense, this discrete Wigner transform can be seen as a generalisation of the Bloch transform to non-periodic couplings on a grid. We then make the following hypothesis: we assume that the Hamiltonian \(\hat{H}\) is asymptotically periodic far from the boundary/interface. More precisely, in the case of an edge (\(\mathcal{L}=\mathds{N}\)), we assume that the symbol Hamiltonian \(H(n,k)\) converges asymptotically to a _bulk_ (i.e. position-independent) Hamiltonian \(H^{+}(k)\) when \(n\to+\infty\). Similarly, in the case of an interface (\(\mathcal{L}=\mathds{Z}\)), we ask that the symbol Hamiltonian converges toward two bulk Hamiltonians far to the left/right of the interface, that is \(H(n,k)\to H^{\pm}(k)\) when \(n\to\pm\infty\) (see footnote 6). Footnote 6: Actually, we only need the weaker assumption \(H_{F}(n,k)\xrightarrow{}H_{F,\pm}(k)\) to obtain a valid semi-classical limit, which is useful in some cases. Let us now estimate the term \(\operatorname{Tr}\!\left(\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_{F} ]\right)\) of the chiral index, with \(\hat{\theta}_{\Gamma}=e^{-x^{2}/\Gamma^{2}}\), in the limit \(\Gamma\to+\infty\). For that purpose, we first rewrite the trace as an integral in phase space by using the Moyal \(\star\) product between symbols as \[\operatorname{Tr}\!\left(\hat{A}\hat{B}\right)=\frac{1}{2\pi}\sum_{n}\int_{0} ^{2\pi}dk\operatorname{Tr}^{\text{int}}\!\left(A(n,k)\star B(n,k)\right) \tag{17}\] where \(\operatorname{Tr}^{\text{int}}\) is the trace on the internal degrees of freedom only (see appendix B). We obtain \[\frac{1}{2}\operatorname{Tr}\!\left(\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma}, \hat{H}_{F}]\right)=\frac{1}{4\pi}\sum_{n\in\mathcal{L}}\int_{0}^{2\pi}dk\, \operatorname{Tr}^{\text{int}}\!\left(C\,H_{F}\star[\theta_{\Gamma},H_{F}]_{ \star}\right)(n,k). \tag{18}\]
where \([\cdot,\cdot]_{\star}\) denotes the Moyal commutator of the symbols. This gives a phase-space representation of the main contribution to the topological index describing the chiral number of the zero-modes localised at the interface/boundary. Moreover, as \(\Gamma\to+\infty\), \(\theta_{\Gamma}(n)\) varies slower and slower with \(n\), so that we probe a region which is further and further in the bulk where \(H(n,k)\) has asymptotically no dependence in position, by hypothesis. The product of the symbols \(H(n,k)\) with \(\theta_{\Gamma}(n)\) is therefore prone to a semi-classical approximation, obtained in the limit \(\Gamma\to+\infty\). The leading term of such a semi-classical expansion is obtained by simply replacing all the Moyal products by standard products \(A\star B\sim AB\), and the Moyal commutator by a Poisson bracket \([A,B]_{\star}\sim i\{A,B\}\) (see Appendix B), so that

\[\frac{1}{2}\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_{F}])\underset{\text{lim}}{=}\frac{-1}{4i\pi}\sum_{n\in\mathcal{L}}\int_{0}^{2\pi}dk\operatorname{Tr}^{\text{int}}\!\left(CH_{F}\,\partial_{n}\theta_{\Gamma}(n)\,\partial_{k}H_{F}\right) \tag{19}\]

since the cut-off function \(\theta_{\Gamma}(n)\) depends only on position. In the limit \(\Gamma\to+\infty\), \(\partial_{n}\theta_{\Gamma}(n)\) is non-negligible only deep in the bulk, where \(H_{F}(n,k)\) can be replaced by its bulk limits \(H_{F}^{\pm}(k)\); the sum over \(n\) of \(\partial_{n}\theta_{\Gamma}\) then telescopes to \(\mp 1\) on each side of the interface, leading to

\[\frac{1}{2}\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_{F}])\underset{\text{lim}}{=}\frac{1}{4i\pi}\int_{0}^{2\pi}dk\operatorname{Tr}^{\text{int}}\!\left(CH_{F}^{+}\partial_{k}H_{F}^{+}-CH_{F}^{-}\partial_{k}H_{F}^{-}\right)=W_{\uparrow}^{+}-W_{\uparrow}^{-} \tag{20}\]

with \(W^{+}_{\uparrow}\) and \(W^{-}_{\uparrow}\) the winding numbers of \(U^{+}\) and \(U^{-}\) defined in the bulks far to the positive and negative sides of the interface respectively, and integrated over the \(1D\) Brillouin zone. The vertical arrow \(\uparrow\) specifies the direction of integration in \(k\), from \(0\) to \(2\pi\). We now need to deal with the first term \(\operatorname{Tr}\hat{C}\hat{\theta}_{\Gamma}\) of the chiral index in (9), that reads

\[\operatorname{Tr}(\hat{C}\hat{\theta}_{\Gamma})=\sum_{n}\sum_{\alpha}C_{\alpha}\theta_{\Gamma}(n). \tag{25}\]

This term vanishes when the lattice has "balanced unit cells", that is when there is an equal number of degrees of freedom of positive and negative chirality \(C_{\alpha}\) per unit cell \(n\), since then \(\sum_{\alpha}C_{\alpha}=0\). Therefore, in that case, one recovers the expected bulk-interface correspondence for \(1D\) chiral chains in the limit \(\Gamma\to+\infty\)

\[\mathcal{I}\underset{\text{lim}}{=}\ W^{+}_{\uparrow}-W^{-}_{\uparrow}. \tag{26}\]

Otherwise, if the unit-cell structure is broken at the boundary, this equality must be corrected by the term \(\operatorname{Tr}\hat{C}\hat{\theta}_{\Gamma}\) to account for the chirality of the lattice's sites [86, 87, 88].
The term \(\operatorname{Tr}\hat{C}\hat{\theta}_{\Gamma}\) is also non-zero when the bulk unit cell is unbalanced in chirality, \(\sum_{\alpha}C_{\alpha}\neq 0\). However, this case is excluded from our theory because it leads to bulk zero-modes that violate the gap hypothesis (see appendix D).

The case of an edge, rather than an interface, is obtained similarly. The only difference is that the sum in \(n\) now runs over \(\mathcal{L}=\mathds{N}\) (for a left edge) instead of \(\mathds{Z}\). As a consequence, the second term in the right hand side of the equation (20) is missing, and we end up with the bulk-edge correspondence

\[\mathcal{I}\underset{\text{lim}}{=}\ W^{+}_{\uparrow} \tag{27}\]

that relates the chirality of zero-energy edge modes, at a given edge, to a winding number in the bulk of the lattice. We now illustrate this approach on the seminal example of the dimerized chain: the so-called Su-Schrieffer-Heeger model.

#### Example: The Su-Schrieffer-Heeger (SSH) chain

A seminal example of a \(1D\) chiral symmetric lattice model exhibiting zero-energy edge modes is that of a \(1D\) dimerised chain, often referred to as the Su-Schrieffer-Heeger (SSH) model [97] (see figure 6), even though there is an overlap with other types of dimerised models, like the Shockley chain [98, 99]. In any case, the unit cell contains two internal degrees of freedom denoted A and B (being the even/odd sites \(n\) in the SSH case), and the model consists of nearest neighbour staggered couplings of amplitude \(t\) and \(t^{\prime}\) between A and B. Let us revisit this celebrated SSH/Shockley model in the light of the mode-shell correspondence. In fact, this model is simple enough to be analytically solvable, and we will thus be able to derive the bulk-edge correspondence explicitly. The corresponding Hamiltonian \(\hat{H}=\hat{H}_{\text{SSH}}\) reads

\[\hat{H}=\sum_{n}t\ket{B,n}\!\bra{A,n}+t^{\prime}\ket{B,n-1}\!\bra{A,n}+h.c. \tag{28}\]

except at the edges where the hopping term \(t^{\prime}\) leads to an empty site outside the lattice. In that case, it is put to zero (open boundary condition). Since this Hamiltonian only couples \(A\) sites with \(B\) sites, it is chiral symmetric and the chiral operator reads \(\hat{C}=\sum_{n}\ket{A,n}\!\bra{A,n}-\ket{B,n}\!\bra{B,n}\). We can therefore define a chiral index \(\mathcal{I}\). Then, far in the bulk, the Hamiltonian is invariant by translation and the Wigner-Weyl transform reduces to a discrete Fourier transform where

\[H(n,k)=\begin{pmatrix}0&t+t^{\prime}e^{-ik}\\ t+t^{\prime}e^{ik}&0\end{pmatrix}=H(k) \tag{29}\]

and whose energy spectrum \(E\) is gapped for \(t\neq t^{\prime}\). Next, we want to compute the "flattened" version of the symbol, \(H_{F}\). To do so, we use the fact that, at first order of the semi-classical expansion, the symbol of \(\hat{H}_{F}=f(\hat{H})\) is simply given by applying directly the function \(f\) to the symbol \(H(n,k)\), that is \(H_{F}=f(H(k))\). Moreover, we have chosen \(f\) such that, for gapped states of energy \(E\), we have \(f(E)=E/\sqrt{E^{2}}\) so, in the bulk, \(H_{F}(x,k)=H(x,k)/\sqrt{H(x,k)^{2}}\). Therefore, since \(H^{2}(n,k)=|t+t^{\prime}e^{ik}|^{2}\,\mathds{1}\), we deduce that

\[H_{F}(k)=\frac{1}{|t+t^{\prime}e^{ik}|}\begin{pmatrix}0&t+t^{\prime}e^{-ik}\\ t+t^{\prime}e^{ik}&0\end{pmatrix}. \tag{30}\]

This allows us to identify \(U=(t+t^{\prime}e^{ik})/|t+t^{\prime}e^{ik}|\), which is just a unit complex number here.
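As a quick numerical cross-check (ours, not the paper's code), the winding of \(U(k)\) over the Brillouin zone can be obtained by accumulating its phase increments on a grid; this reproduces the analytic result stated next:

```python
import numpy as np

def ssh_winding(t, tp, nk=2000):
    """Winding number of U(k) = (t + tp*e^{ik}) / |t + tp*e^{ik}| over the BZ."""
    k = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    u = t + tp * np.exp(1j * k)             # the normalisation drops out of the phase
    dphi = np.angle(np.roll(u, -1) / u)     # phase increment between grid points
    return int(round(dphi.sum() / (2.0 * np.pi)))

print(ssh_winding(t=0.6, tp=1.0))   # 1  (topological regime |t'| > |t|)
print(ssh_winding(t=1.0, tp=0.6))   # 0  (trivial regime |t'| < |t|)
```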
A direct computation of the winding number \(W_{\uparrow}=\frac{1}{2i\pi}\int_{k\in S^{1}}dk\,\mathrm{Tr}^{\mathrm{int}}(U^{\dagger}(k)\partial_{k}U(k))\) yields \(W_{\uparrow}=+1\) for \(|t^{\prime}|>|t|\) and \(W_{\uparrow}=0\) for \(|t^{\prime}|<|t|\).

We now turn to the computation of zero-modes localized at a single edge. We thus assume the lattice to be semi-infinite, with no boundary to the right and a left boundary at \(n=0\). The zero-modes of this model can be found analytically by searching for them in the form \(\ket{\psi}=\sum_{n\geq 0}\psi_{A,n}\ket{A,n}+\psi_{B,n}\ket{B,n}\) such that \(\hat{H}\ket{\psi}=0\). Combined with the boundary condition \(\langle B,-1|\psi\rangle=0\), we obtain the constraints

\[\begin{aligned}\bra{B,n}\hat{H}\ket{\psi}&=t\psi_{A,n}+t^{\prime}\psi_{A,n+1}=0\qquad&&n\geq 0\\ \bra{A,n}\hat{H}\ket{\psi}&=t\psi_{B,n}+t^{\prime}\psi_{B,n-1}=0\qquad&&n>0\\ \bra{A,0}\hat{H}\ket{\psi}&=t\psi_{B,0}=0\qquad&&n=0\,.\end{aligned} \tag{31}\]

If we remove the pathological case \(t^{\prime}=0\), this system implies \(\forall n,\psi_{B,n}=0\) and \(\psi_{A,n}=(\frac{-t}{t^{\prime}})^{n}\psi_{A,0}\).

Figure 6: Representation of an SSH chain of \(N=10\) unit cells delimited in purple. The left red edge is the gapless region where we want to compute the chiral number of zero-modes. The right blue edge is the other gapless region of opposite chirality that is dismissed through the cut-off function \(\theta(x)\) in green. The dark green zone is the shell where the bulk index \(\frac{1}{2}\,\mathrm{Tr}\Big{(}\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_{F}]\Big{)}\) is evaluated, since the coefficients of the trace quickly vanish away from it.

To correspond to an edge mode, this solution must be normalizable, which is only possible when \(|t^{\prime}|>|t|\). We deduce that one zero-energy edge mode of positive chirality (i.e. localised on the A sites only) exists for \(|t^{\prime}|>|t|\), leading to \(\mathcal{I}=1\), while no edge mode exists when \(|t^{\prime}|<|t|\), leading to \(\mathcal{I}=0\). As a result, in both cases we can check that \(\mathcal{I}=W_{\uparrow}\), which is an illustration of the bulk-edge correspondence in a simple but non-trivial example.

#### Validity of the semi-classical limit

Since the SSH model is invariant by translation far in the bulk, we have \(1/d_{x}\to 0\) on the shell when \(\Gamma\to+\infty\). Besides, \(1/d_{k}\) remains bounded because \(\hat{H}\) is short-range in position, which implies \(1/(d_{x}d_{k})\to 0\); therefore the semi-classical limit becomes exact in the limit \(\Gamma\to+\infty\).

### A _low-high wavenumber correspondence_ for bounded continuous systems

#### General results

In the previous section, we focused on \(1D\) discrete lattices and discussed an example where the zero-modes are related to a winding number on a shell defined along the \(k\) axis at large \(x\), away from the zero-mode. In the large distance limit the lattice can be seen as infinite, and the different zero-modes, localized at opposite boundaries, decouple and can be treated separately.

Continuous systems are another kind of system with an infinite number of degrees of freedom. This infinity does not come from the size of the system, but instead from the distance between two sites/degrees of freedom that becomes infinitesimal (see figure 4). At the Hilbert-space level, this limit can also be seen as the fast varying functions limit or, in other words, as the large wavenumber \(k\to+\infty\) limit.
In this section, we discuss how we can exploit such limits to create topologically protected zero-modes which are separated, not in position, but in wavenumber, and how the mode-shell correspondence captures this situation.

Figure 7: (left) Plot of the topological zero-modes of an SSH chain in real and Fourier space with \(t^{\prime}=1\) and \(t=0.6\) for \(N=10\) dimers. (right) The absolute value of the Wigner-Weyl transform is plotted for the same edge mode in phase space. The region selected by the cut-off is shown in green, and its boundary (the shell), of length \(\Gamma\) in real space, is highlighted by a dotted line along which the winding number is integrated.

We are concerned with \(1D\) continuous systems, where the physical quantities are encoded in a vector-valued wave-function \(|\psi(x)\rangle=(\psi_{\alpha}(x))_{\alpha}\), where \(x\) is a continuous coordinate and \(\alpha\) labels the internal degrees of freedom. These degrees of freedom can, for example, be the spin or pseudo-spin components of a quantum (quasi-)particle, like in the Dirac equation, or be a combination of classical fields, like the velocity \(v(x)\) and the pressure \(p(x)\) in the acoustic wave equation. As in the previous section, we assume that the time evolution of the wave function is encoded by a Hamiltonian \(\hat{H}\). Because we now deal with continuous systems, \(\hat{H}\) is in general a differential operator which depends on the position \(x\) and on some of its derivatives as \(\hat{H}=\sum_{n}h_{n}(x)\partial_{x}^{n}\), where \(h_{n}(x)\) are operators acting on the internal degrees of freedom.

Similarly to the discrete case, we use a Wigner transform (12) which associates, to an operator \(\hat{H}\), a symbol \(H(x,k)\) parameterised in phase space and acting on the internal degrees of freedom (see Appendix B), where now \(k\in\mathds{R}\) belongs to the whole real line, which is not a bounded set (contrary to the lattice case where \(k\) is reduced to the Brillouin zone \([0,2\pi]\)). Therefore, the major difference with the lattice case is that there is not only the limit \(x\to\pm\infty\) (i.e. far away from an interface/edge) to be considered, but also the \(k\to\pm\infty\) limit of fast varying solutions.

Since the limit in real space is similar to that discussed previously, we would like to focus only on the momentum limit. For that purpose, we consider systems where the position space is bounded. Also, we choose to consider the position space as a manifold with no edges. For example, the position space could be a circle (see figure 4), a torus, a sphere, etc., and the differential operators in the Hamiltonian \(\hat{H}\) act on continuous functions defined on those manifolds. Then, if the Hamiltonian is gapped in the large wavenumber limit (i.e. when acting on fast varying functions), one can define the chiral index \(\mathcal{I}\) (8) with \(\hat{\theta}_{\Gamma}=\exp\bigl(\Delta/\Gamma^{2}\bigr)\), which is referred to as the heat kernel associated to the Laplacian \(\Delta\) on the manifold. As we already saw, this index is equal to the chirality of zero-modes through the analytical index (5). This framework is actually that discussed in the celebrated Atiyah-Singer index theorem, as it is described in the mathematical community [79, 81, 82]. Here, we focus on the \(1D\) case where the underlying manifold is the circle, and derive the semi-classical winding number associated to chiral zero-modes.
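To make the heat-kernel cut-off concrete, here is a small sketch (ours): on the circle, the Laplacian is diagonal in the Fourier basis with eigenvalues \(-m^{2}\), so \(\hat{\theta}_{\Gamma}=\exp(\Delta/\Gamma^{2})\) simply damps each Fourier mode \(e^{imx}\) by \(e^{-m^{2}/\Gamma^{2}}\), suppressing the large wavenumber region; the mode truncation \(M\) below is our choice.

```python
import numpy as np

# Heat-kernel cut-off theta_Gamma = exp(Delta/Gamma^2) on the circle, built
# spectrally; the Fourier truncation |m| <= M is our assumption.
M, Gamma = 64, 10.0
m = np.arange(-M, M + 1)            # Fourier modes: Delta e^{imx} = -m^2 e^{imx}
theta = np.exp(-m**2 / Gamma**2)    # eigenvalues of exp(Delta/Gamma^2)

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
psi = np.cos(2 * x) + np.cos(40 * x)          # one slow and one fast component
coef = np.array([(psi * np.exp(-1j * mm * x)).mean() for mm in m])
psi_cut = sum(t * c * np.exp(1j * mm * x) for t, c, mm in zip(theta, coef, m)).real
# psi_cut keeps cos(2x) almost intact (factor ~0.96) while cos(40x) is
# suppressed by ~exp(-16): the cut-off selects the low-wavenumber region.
```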
Note that since our position space is a circle, and not just a real line, there are some subtleties in the definition of the symbol, the formula (12) being only valid in the real line case. But, as long as \(\hat{H}\) is short range compared to the topology of the manifold, we can always use the definition (12) in a local chart around \(x\) to extend it to the circle case7.

Footnote 7: In particular one can use the geodesic chart to describe the neighborhood of \(x\) as a subset of \(\mathds{R}\) (see [80] for a more formalised definition). There is however a problem for curved manifolds: the semi-classical expansion is modified in those cases. Also, our proof of the semi-classical invariants in the higher-dimensional case relies on the existence of operators verifying \([a_{i},b_{j}]=\delta_{i,j}\,\mathds{1}\), which can only be found when the phase space is \(\mathds{R}^{d}\times\mathds{R}^{d}\) or \(\mathds{R}^{d}\times\mathds{T}^{d}\). Therefore our formula (56) will only work in the case where the position manifold is an n-torus (which has no intrinsic curvature). As the general expression of the symbol index in the Atiyah-Singer theorem involves the curvature of the manifold, it is not surprising that our formula is limited to the n-torus cases, which are manifolds of zero curvature. We believe there is a way to derive the general Atiyah-Singer theorem using the fact that any manifold can in fact be embedded in \(\mathds{R}^{m}\), where our semi-classical formula could be applied, but the derivation of the formula would go beyond the scope of this paper.

In order to derive the semi-classical index, we proceed similarly to the discrete case: We first express the term \(\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_{F}])\) of the shell index in phase space through the trace identity \(\operatorname{Tr}\hat{A}=\frac{1}{2\pi}\int_{\mathds{S}}dx\int_{\mathds{R}}dk\operatorname{Tr}^{\text{int}}A(x,k)\), with an integration in position on the circle. This operation maps the commutator of operators into the Moyal commutator of their symbols. We then take the limit \(\Gamma\to\infty\) and keep the lowest order term in \(1/\Gamma\), which amounts to approximating the Moyal commutator by a Poisson bracket. This Poisson bracket contains only the term \(\partial_{k}\theta_{\Gamma}\partial_{x}H_{F}\) because here the cut-off function \(\theta_{\Gamma}=e^{-k^{2}/\Gamma^{2}}\) depends only on wavenumber and not on position. This leads to the expression

\[\mathcal{I}\underset{\text{lim}}{=}\frac{1}{4i\pi}\int_{0}^{2\pi}dx\int_{-\infty}^{+\infty}dk\operatorname{Tr}^{\text{int}}CH_{F}(x,k)\partial_{k}\theta_{\Gamma}(k)\partial_{x}H_{F}(x,k). \tag{32}\]

Next, we perform the integration over \(k\). This is not as simple as the integration over \(x\) in the discrete case, where we assumed a bulk (i.e. \(x\) independent) limit of the symbol Hamiltonian, since here \(H_{F}(x,k)\) may not be totally independent of \(k\). We can however use the fact that the right hand side of (32) does not depend on the special shape of \(\theta_{\Gamma}(k)\) (see appendix E).
Therefore, we can smoothly deform the cut-off function \(\theta_{\Gamma}=\exp\bigl(-k^{2}/\Gamma^{2}\bigr)\) into the sharper one \(\tilde{\theta}_{\Gamma}=\mathds{1}_{|k|\leq\Gamma}\), such that the derivative \(\partial_{k}\theta_{\Gamma}\) can be replaced by a \(\delta\)-Dirac distribution, which transforms the surface integral in phase space into two line integrals over \(x\) at \(k=\pm\Gamma\) as

\[\mathcal{I}\underset{\text{lim}}{=}\frac{1}{4i\pi}\int_{0}^{2\pi}dx\operatorname{Tr}^{\text{int}}(C(-H_{F}(x,\Gamma)\partial_{x}H_{F}(x,\Gamma)+H_{F}(x,-\Gamma)\partial_{x}H_{F}(x,-\Gamma))) \tag{33}\]

\[\underset{\text{lim}}{=}\frac{1}{2\pi i}\int_{0}^{2\pi}\!\!\!dx\operatorname{Tr}^{\text{int}}\left(-U^{\dagger}(x,\Gamma)\partial_{x}U(x,\Gamma)+U^{\dagger}(x,-\Gamma)\partial_{x}U(x,-\Gamma)\right). \tag{34}\]

Finally, we obtain that the chiral index is again related to a difference of winding numbers, but where the integration now runs over position space for large positive/negative wavenumbers, as depicted by horizontal dashed lines in figure 8. We will thus indicate this "horizontal" line integration in phase space by horizontal arrows, so that we get

\[\mathcal{I}\underset{\text{lim}}{=}-W_{\rightarrow}^{+}+W_{\rightarrow}^{-} \tag{35}\]

where \(\pm\) refers to \(k=\pm\Gamma\). This second application of the mode-shell correspondence in \(1D\) can be seen as dual to the lattice case previously discussed and, in particular, (35) can be compared to (26). In both cases, the shells correspond to lines in a single subspace, either \(x\) or \(k\), and they both enclose chiral zero-modes in phase space. In the present case, those modes are "located" in the _low_ wavenumber region, while the shell, in the semi-classical limit, is considered in the _high_ wavenumber limit. The mode-shell correspondence thus better translates here to a _low-high wavenumber_ correspondence, rather than to a _bulk-edge_ or _bulk-interface_ correspondence. We now illustrate this correspondence with an example.

#### Example: \(1d\) Dirac equation on the circle with varying potential and velocity

To illustrate the previous result, we propose the following model of a Dirac Hamiltonian on a circle with a spatially varying potential \(V(x)\) and a spatially varying small "local wave velocity" \(\epsilon c(x)\)

\[\hat{H}(x,\partial_{x})=\begin{pmatrix}0&V(x)+\epsilon c(x)\partial_{x}\\ V(x)-\epsilon\partial_{x}c(x)&0\end{pmatrix} \tag{36}\]

where the role of the small scaling parameter \(\epsilon\) will be explained at the end of the section. This Hamiltonian acts on a two-component wave function \(|\psi\rangle=(\psi_{A}(x),\psi_{B}(x))^{t}\). The Hamiltonian operator \(\hat{H}\) has the following symbol

\[H(x,k)=\begin{pmatrix}0&V(x)-\frac{\epsilon}{2}c^{\prime}(x)+i\epsilon kc(x)\\ V(x)-\frac{\epsilon}{2}c^{\prime}(x)-i\epsilon kc(x)&0\end{pmatrix} \tag{37}\]

where \(c^{\prime}(x)\equiv\partial_{x}c(x)\). By choosing \(V(x)=\cos(x)\) and \(c(x)=\sin(x)\), the symbol Hamiltonian \(H(x,k)\) has energies \(E_{\pm}=\pm\sqrt{(1-\frac{\epsilon}{2})^{2}\cos(x)^{2}+(\epsilon k)^{2}\sin(x)^{2}}\) and is therefore gapped uniformly for all positions \(x\) when \(k\neq 0\) and \(\epsilon\neq 2\). However, the gap of the symbol only implies a gap of the operator in the regime where the semi-classical approximation is valid, which occurs only when \(\epsilon\ll 1\) as discussed later in the paper, so the case \(\epsilon>2\) is dismissed.
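The numerics discussed below (figure 8) can be sketched as follows; this is our illustration, assuming a centred finite-difference discretisation with periodic boundary conditions, which may differ in detail from the discretisation used for the figure:

```python
import numpy as np

# Discretised circle Dirac model (36) with V(x) = cos(x), c(x) = sin(x);
# centred finite differences with periodic boundary conditions (our choice).
N, eps = 50, 0.02
a = 2.0 * np.pi / N
x = a * np.arange(N)
V, c = np.cos(x), np.sin(x)

# Antisymmetric derivative matrix: (D psi)_i = (psi_{i+1} - psi_{i-1}) / (2a).
D = (np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)) / (2 * a)

upper = np.diag(V) + eps * np.diag(c) @ D      # V(x) + eps c(x) d/dx
H = np.block([[np.zeros((N, N)), upper],
              [upper.T, np.zeros((N, N))]])    # Hermitian and chiral symmetric
C = np.diag([1.0] * N + [-1.0] * N)

E, W = np.linalg.eigh(H)
for i in np.argsort(np.abs(E))[:4]:            # the four modes closest to E = 0
    print(f"E = {E[i]:+.2e}, <C> = {W[:, i] @ C @ W[:, i]:+.2f}")
# Expected (cf. figure 8): two modes with <C> ~ +1 at low wavenumber and two
# with <C> ~ -1 at high wavenumber, so the total chirality sums to zero.
```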
The shell we introduce to compute the winding numbers is \(\{(x,k),x\in[0,2\pi],k=\pm\Gamma\}\), which consists of two circles in position at fixed \(k=\pm\Gamma\) in a cylindrical phase space. The off-diagonal component of the Hamiltonian is just \(h(x,k)=(1-\epsilon/2)\cos(x)-i\epsilon k\sin(x)\), so one can compute the winding numbers: we find \(W^{+}_{\rightarrow}=-1\) for positive \(k\) and \(W^{-}_{\rightarrow}=1\) for negative \(k\) (for \(\epsilon>0\)). Therefore, according to the mode-shell correspondence, the total chirality of zero-modes of \(\hat{H}\) should be \(\mathcal{I}=-(-1)+1=2\). One thus expects \(\hat{H}\) to have at least two zero-modes of positive chirality in the slowly varying region. This is indeed the result obtained for numerical simulations of the model (see Figure 8), where we find 2 zero-modes of positive chirality (i.e. fully polarized on the \(A\) degrees of freedom) located in the low-wavenumber \(k\sim 0\) region. Due to the discretisation procedure when solving the model numerically, we moreover find two other zero-modes localised at high wavenumber. Those additional zero-modes have together a chirality of \(-2\) (i.e. fully polarized on the \(B\) degrees of freedom) that balances the total chirality of zero-modes in this discretised version of the model. We now conclude this section by a list of technical remarks.

Figure 8: a) Zero-modes of a discretised version of (36) where continuous derivatives are replaced by their discrete counterparts, with \(\epsilon=0.02\) and \(N=50\) sites (each containing two degrees of freedom A/B) and therefore an inter-site distance of \(a=2\pi/N\). The zero-modes are shown in real space (left) and in wavenumber space (right). Those modes are stationary with an exponentially small error \(\left\|\hat{H}\left|\psi\right\rangle\right\|\sim 10^{-9}\). b) Absolute value of the Wigner-Weyl transform of the different zero-modes in phase space. The integration contours of the winding numbers correspond to the shell enclosing the low wavenumber zero-modes and are denoted by two dotted lines.

#### Remarks on the protection of zero-modes separated in wavenumber space

It is clear that, for finite size topological insulators such as the SSH chain, boundary modes localised at opposite edges can hybridise. The coupling between those modes can however be negligible whenever the lattice is sufficiently long, that is, more precisely, when the characteristic distance \(d_{x}\) of the coupling elements \(\hat{H}_{x,x^{\prime}}\) in position space remains much smaller than the size \(L\) of the lattice. Similarly to the example discussed above, the discretisation of a continuous model typically induces multiple gapless modes which are separated from each other in wavenumber. Using the duality between wavenumber and position space, we can translate the previous criterion of weak hybridization into wavenumber space, by demanding that the couplings in wavenumber space \(\hat{H}_{k,k^{\prime}}\) are short-range and decay with a characteristic distance \(d_{k}\) in wavenumber which is much smaller than the lattice wavenumber \(k_{0}=2\pi/a\) (where \(a\) is the lattice spacing). Using the Wigner-Weyl transform, this is equivalent to demanding that the symbol Hamiltonian \(H(x,k)\) varies slowly in position space (see appendix A and B): its typical variations must evolve over a much larger distance than the inter-site spacing.
One should note that this condition for the non-hybridization of the zero-modes in wavenumber space is quite different from the position case, and may be difficult to reach in practice, depending on the physical context of interest. For example, in condensed matter systems, the introduction of an impurity or a vacancy in the lattice induces variations of the electronic potential over a characteristic distance equivalent to the size of the lattice and immediately hybridises edge states separated in wavenumber, and thus gaps them [100]. Therefore, condensed matter applications would require a strict limitation of such impurities. In other physical systems, like in fluid mechanics or in acoustics, the smooth variation of the system's parameters in space is probably more naturally realised due to local homogenisation.

#### Gap condition is less restrictive than elliptic condition

We already briefly mentioned that the mode-shell correspondence intersects Atiyah-Singer index theory. Actually, in the literature about the Atiyah-Singer theorem, it is stated that in odd dimension, the index of any differential operator should be zero. Therefore, it may be surprising that a model like (36), in \(D=1\), exists. This apparent contradiction can be explained by the fact that the Atiyah-Singer theorem makes the assumption that \(\hat{H}\) is _elliptic_ [81, 79, 82], which requires that the polynomial expression of the symbol \(H\) with highest degree in \(k\), called the _principal symbol_ \(H_{\mathrm{pr}}\), is invertible (i.e. gapped in our vocabulary) when \(k\neq 0\). Indeed, a principal symbol \(H_{\mathrm{pr}}\) of order \(n\) in \(k\) always has the symmetry \(H_{\mathrm{pr}}(x,-k)=(-1)^{n}H_{\mathrm{pr}}(x,k)\) because \((-k)^{n}=(-1)^{n}k^{n}\). Therefore, when a principal symbol is gapped, we have the "asymptotic" symmetry \(H_{F}(x,-k)\underset{\lim}{=}H_{\mathrm{pr},F}(x,-k)=(-1)^{n}H_{\mathrm{pr},F}(x,k)\underset{\lim}{=}(-1)^{n}H_{F}(x,k)\) when \(|k|\to+\infty\), and thus \(U(x,-k)=(-1)^{n}U(x,k)\). Substituting this relation in the formula (35) implies \(\mathcal{I}=-\mathcal{I}\), so that the chiral index vanishes. In the more general case, given by (14), the shell can be a \(D\)-dimensional torus and we obtain in that case \(\mathcal{I}=(-1)^{D}\mathcal{I}\). We thus recover the result that the index should vanish in odd dimension.

This conclusion however does not hold for our model (36), because our theory relies on a gap assumption, which is less restrictive than an ellipticity assumption. In particular, the principal symbol of our model (36) reads \(H_{\mathrm{pr}}=i\epsilon k\sin(x)\sigma_{y}\) (with \(\sigma_{y}\) the standard Pauli matrix), which is not uniformly gapped, because the gap closes for \(k\neq 0\) at \(x=0\) and \(x=\pi\). The elliptic condition is thus broken. Instead, we have considered the full symbol \(H(x,k)\) of \(\hat{H}\), which also includes the component of order zero in \(k\) and which satisfies our gap assumption (see (37)). The topological properties of such systems are thus not captured if we impose the elliptic condition. The gap condition of the full symbol, therefore, allows for more topological models to exist.

#### Validity of the semi-classical limit

Our second remark is that, with the ellipticity assumption, \(H_{F}(x,k)\) always behaves semi-classically for large enough wavenumbers \(|k|\to+\infty\).
This is due to the fact that, if \(H(x,k)\) is elliptic of maximal order \(n\) in \(k\), then its gap is of order \(\Delta(x,k)\sim k^{n}\), while \(\partial_{k}H(x,k)\) is of maximal order \(n-1\) in \(k\). Therefore, the characteristic distance of variation \(d_{k}\) is asymptotically always large, as \(1/d_{k}\sim\|\partial_{k}H(x,k)\|/\Delta(x,k)=O(1/k)\). Moreover, \(\partial_{x}H(x,k)\) is of maximum order \(n\) in \(k\), so \(\|\partial_{x}H(x,k)\|/\Delta(x,k)\sim 1/d_{x}\) is bounded. It follows that \(1/(d_{x}d_{k})=O(1/k)\to 0\) when \(|k|\to+\infty\), meaning that the semi-classical approximation is asymptotically exact.

This fact is however no longer true for our more general gap condition, except for \(\epsilon\ll 1\). For example, in our model (36), we have \(\partial_{k}H(x,k)=i\epsilon\sin(x)\sigma_{y}\), so that \(\|\partial_{k}H(x,k)\|\sim\epsilon\), which is only small compared to the gap \(\Delta(x,k)\sim(1-\epsilon/2)\) when \(\epsilon\ll 1\). So, when \(\epsilon\) is not small - say \(\epsilon\sim 1\) - the characteristic distance in momentum varies as \(1/d_{k}\sim\|\partial_{k}H(x,k)\|/\Delta(x,k)\sim\epsilon/(1-\epsilon/2)\sim 1\). Since \(1/d_{x}\sim 1\), we have \(1/(d_{x}d_{k})\sim 1\) and therefore the semi-classical approximation is no longer valid8. To make the semi-classical approximation valid, one thus needs \(\epsilon\ll 1\).

Footnote 8: In fact, we have observed numerically in our model that there is a gap closing and a disappearance of the edge modes for \(\epsilon\) well below the predicted \(\epsilon=2\) semi-classical threshold.

### A mixed \(x-k\) correspondence in phase space for unbounded continuous \(1d\) systems

In the previous sections, we explained how the topological nature of chiral zero-modes is revealed by isolating them through large gapped regions which surround them either in position (case of unbounded \(1D\) lattices) or in wavenumber (case of bounded continuous \(1D\) systems). In the present section, we want to address the mixed case where the modes are surrounded by a gapped region both in position and momentum directions.

For that purpose, let us consider unbounded \(1D\) continuous systems. We will make use of the continuous Wigner transform (12) to map the Hamiltonian \(\hat{H}\) to the symbol \(H(x,k)\) acting on internal degrees of freedom, and parameterised in phase space \((x,k)\in\mathds{R}\times\mathds{R}\) (see Appendix B). We therefore have to deal with both limits \(x\to\pm\infty\) (i.e. far away from an interface hosting zero-modes) and \(k\to\pm\infty\) (i.e. fast varying solutions). We thus consider a mixed cut-off operator such as \(\hat{\theta}_{\Gamma}=e^{-(x^{2}-\partial_{x}^{2})/\Gamma^{2}}\) of symbol \(\theta_{\Gamma}(x,k)\approx e^{-(x^{2}+k^{2})/\Gamma^{2}}\) at first order of the semi-classical expansion. Now, the gap hypothesis means that we assume the symbol \(H(x,k)\) to be gapped both when \(|x|\to\infty\) and when \(|k|\to+\infty\) (even for \(x\) near the interface). For example, \(H(x,k)=\left(\begin{smallmatrix}0&x+ik\\ x-ik&0\end{smallmatrix}\right)\) satisfies such a requirement since its spectrum \(\pm\sqrt{x^{2}+k^{2}}\) converges uniformly toward infinity for both \(x\to\pm\infty\) and \(k\to\pm\infty\).
We can then derive the semi-classical expression of the chiral invariant by rewriting the term \(\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{\theta}_{\Gamma},\hat{H}_{F}])\) similarly to the two previous sections (the term \(\operatorname{Tr}(\hat{C}\hat{\theta}_{\Gamma})\) vanishes if we have a balanced number of degrees of freedom of each chirality, as required by the gap assumption), that is, by turning the trace into an integral over phase space and then expanding to lowest order in \(1/\Gamma\) by assuming that \(H_{F}(x,k)\) varies slowly for large \(|(x,k)|\), which leads to

\[\mathcal{I}=\frac{-1}{4i\pi}\int_{\mathds{R}}dx\int_{\mathds{R}}dk\,\mathrm{Tr}^{\mathrm{int}}(CH_{F}(\partial_{x}\theta_{\Gamma}\partial_{k}H_{F}-\partial_{k}\theta_{\Gamma}\partial_{x}H_{F})). \tag{38}\]

Note that all the terms of the Poisson bracket appear, in contrast with the winding numbers previously derived in sections 3.1 and 3.2. If we denote by \(dA=\partial_{x}A\,dx+\partial_{k}A\,dk\) the differential one-form of the symbol \(A\), the expression (38) can be written in a more compact fashion as

\[\mathcal{I}=\frac{-1}{4i\pi}\int_{\mathds{R}^{2}}\mathrm{Tr}^{\mathrm{int}}(CH_{F}d\theta_{\Gamma}\wedge dH_{F}) \tag{39}\]

where \(\wedge\) is the usual anti-symmetric wedge product. Moreover, since \(H_{F}(x,k)\) is assumed to vary slowly, the integration of \(d\theta_{\Gamma}\) can be done independently. The integration of the two-form is then reduced to the integration of a one-form on the circle of radius \(\Gamma\), which is a level set of \(\theta_{\Gamma}\) (normal to its gradient). This leads to the final result

\[\mathcal{I}=\frac{1}{4i\pi}\int_{S^{1}(\Gamma)}\mathrm{Tr}^{\mathrm{int}}(CH_{F}dH_{F}) \tag{40}\]

\[=\frac{1}{2i\pi}\int_{S^{1}(\Gamma)}\mathrm{Tr}^{\mathrm{int}}(U^{\dagger}dU)\equiv W_{\circlearrowright} \tag{41}\]

which is again a winding number, but where the integration now runs over the circle \(x^{2}+k^{2}=\Gamma^{2}\) in phase space, instead of the Brillouin zone \(k\in[0,2\pi]\) (for discrete unbounded systems) or the position space \(x\in[0,2\pi]\) (for continuous circular systems). This is therefore a different semi-classical manifestation of the mode-shell correspondence, where the circle encloses the zero-mode in phase space.

#### Example: The Jackiw-Rebbi model

The simplest example of a continuous \(1D\) Hamiltonian operator \(\hat{H}\) involving both \(x\) and \(\partial_{x}\) which is topological is given by the celebrated Jackiw-Rebbi model

\[\hat{H}=\begin{pmatrix}0&x-\partial_{x}\\ x+\partial_{x}&0\end{pmatrix}\,. \tag{42}\]

This Hamiltonian can be thought of as a one dimensional Dirac Hamiltonian \(\hat{H}=-i\partial_{x}\sigma_{y}\) with a linearly varying potential \(V(x)\sigma_{x}\) that can be seen as a mass term.9 Such a Hamiltonian can for instance be obtained in stratified and/or compressible fluids where the pressure and velocity are additionally coupled through an acoustic-buoyant frequency \(S(x)=V(x)\) [84, 28] which changes sign in space. This coupling can have many origins, for example in fluids where the sound velocity varies in space and reaches a minimum. We will also see later that this Hamiltonian can be obtained as a continuous version of an SSH model with slowly varying couplings.

Footnote 9: Usually the potential is written as \(V(x)\sigma_{z}\) but this is equivalent to our model up to a change of basis which exchanges \(\sigma_{z}\leftrightarrow\sigma_{x}\).
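Before the analytic treatment below, these statements can be checked numerically; the following sketch (ours, with an assumed grid) discretises (42) with centred finite differences on \([-L/2,L/2]\). Note that the centred difference itself introduces a spurious high-wavenumber partner mode of opposite chirality, anticipating the discussion of section 3.4:

```python
import numpy as np

# Jackiw-Rebbi model (42) on a finite grid (grid parameters are our choice).
N, L = 400, 20.0
dx = L / N
x = -L / 2 + dx * (np.arange(N) + 0.5)

D = np.zeros((N, N))                  # antisymmetric centred difference
for i in range(N - 1):
    D[i, i + 1] = 1.0 / (2 * dx)
    D[i + 1, i] = -1.0 / (2 * dx)

X = np.diag(x)
H = np.block([[np.zeros((N, N)), X - D],
              [X + D, np.zeros((N, N))]])
C = np.diag([1.0] * N + [-1.0] * N)

E, W = np.linalg.eigh(H)
for i in np.argsort(np.abs(E))[:2]:
    print(f"E = {E[i]:+.2e}, <C> = {W[:, i] @ C @ W[:, i]:+.2f}")
# One near-zero mode is the Gaussian exp(-x^2/2) on the A component (<C> ~ +1);
# the other is the spurious high-wavenumber doubler with <C> ~ -1.
```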
The Hamiltonian (42) is easily diagonalizable by introducing the bosonic creation-annihilation operators \(a=(x+\partial_{x})/\sqrt{2}\) and \(a^{\dagger}=(x-\partial_{x})/\sqrt{2}\)

\[\hat{H}=\sqrt{2}\begin{pmatrix}0&a^{\dagger}\\ a&0\end{pmatrix}\,. \tag{43}\]

One can then easily check that \(\hat{H}\) has a unique zero-mode \((e^{-x^{2}/2}/\pi^{1/4},0)^{t}\), with positive chirality in the convention \(\hat{C}=\left(\begin{smallmatrix}1&0\\ 0&-1\end{smallmatrix}\right)\). The symbol of this Hamiltonian has the simple form

\[H=\begin{pmatrix}0&x-ik\\ x+ik&0\end{pmatrix} \tag{44}\]

and its energy spectrum reads \(\pm\sqrt{k^{2}+x^{2}}\), which indeed satisfies the gap condition when \(|x|\) or \(|k|\) is large. Moreover, the symbol of the flattened Hamiltonian can be computed easily as \(H_{F}(x,k)=f(H(x,k))=H(x,k)/\sqrt{H(x,k)^{2}}\). By using \(H(x,k)^{2}=(x^{2}+k^{2})\,\mathds{1}\), we obtain

\[H_{F}(x,k)=\frac{1}{\sqrt{x^{2}+k^{2}}}\begin{pmatrix}0&x-ik\\ x+ik&0\end{pmatrix}=\begin{pmatrix}0&e^{-i\phi}\\ e^{i\phi}&0\end{pmatrix} \tag{45}\]

where \(\phi\) is the polar angle in the \((x,k)\) plane, which yields the expression \(U=e^{i\phi}\). One can then compute \(U^{\dagger}dU=id\phi\), so that the winding number \(W_{\circlearrowright}=\frac{1}{2i\pi}\int_{S^{1}}\mathrm{Tr}^{\mathrm{int}}(U^{\dagger}dU)\) gives \(W_{\circlearrowright}=1\), in agreement with the number of chiral zero-modes.

#### Validity of the semi-classical limit

In this example, we have \(\partial_{x/k}H=\sigma_{x/y}\) and therefore \(\|\partial_{x/k}H\|=1\), which does not decrease when \((x,k)\) is large. However, because the gap of \(H(x,k)\) varies as \(\sqrt{x^{2}+k^{2}}\), our definition of \(1/d_{x/k}=\|\partial_{x/k}H(x,k)\|/\Delta(x,k)\) yields \(1/d_{x/k}=O(1/\sqrt{x^{2}+k^{2}})\) and hence \(1/(d_{x}d_{k})\to 0\). So, this is an example where, even though the variations of the symbol do not vanish at infinity, we still have an exact semi-classical limit because those variations become small compared to the gap.

### Discrete approximations of continuous/unbounded topological models

In the previous section, we introduced the topological Hamiltonian (42) which acts on a continuous system that is unbounded both in position and wavenumber spaces. However, in practice, there are physical or numerical limitations which impose bounds on the validity of the model at high position/wavenumber. It is therefore instructive to study finite versions of such models with cut-offs in wavenumber and position. Such finite models are therefore defined on a lattice of lattice spacing \(a\) and size \(L\).

For example, the Hamiltonian (42) can be seen as a continuous limit of a discrete SSH Hamiltonian with varying coefficients. If one takes the symbol of the discrete SSH model (30) and replaces the constant coefficients \(t\) and \(t^{\prime}\) by \(t^{\prime}=1/a\) and \(t=-1/a+\sin(2\pi x/L)L/(2\pi)\), one obtains a discrete Hamiltonian on a finite lattice of lattice spacing \(a\) and length \(L\gg a\) with periodic boundary conditions, whose symbol reads

\[H_{\!I}(x,k)=\begin{pmatrix}0&\sin\bigl(\tfrac{2\pi x}{L}\bigr)\tfrac{L}{2\pi}+\tfrac{e^{-ika}-1}{a}\\ \sin\bigl(\tfrac{2\pi x}{L}\bigr)\tfrac{L}{2\pi}+\tfrac{e^{ika}-1}{a}&0\end{pmatrix} \tag{46}\]

and which, by construction, approximates the Jackiw-Rebbi model in the limit \(a\to 0\) and \(L\to+\infty\). We now want to determine the points \((x,k)\) of phase space where band crossings occur at zero-energy (see figure 9).
Indeed, if such singular points exist and are surrounded by sufficiently large gapped regions in phase space, their non-zero winding number would be associated with topologically protected chiral zero-modes at the operator level (see figure 10). Those points are solutions of the equation

\[\sin(2\pi x/L)L/(2\pi)+(e^{ika}-1)/a=0\iff\left\{\begin{matrix}\sin(ak)/a=0\\ (\cos(ka)-1)/a+\sin(2\pi x/L)L/(2\pi)=0\end{matrix}\right.. \tag{47}\]

This system has the expected solution \((x,k)=(0,0)\) of winding number \(W_{\circlearrowright}=+1\), consistently with the fact that this model is built in order to approximate the continuous Jackiw-Rebbi model, whose symbol (44) also has this singular point. However, due to the discretisation process, we also get another singular point \((x,k)=(L/2,0)=(-L/2,0)\) (due to the \(L\) periodicity in \(x\)), whose winding number is found to be \(W_{\circlearrowright}=-1\) (see figure 9). The two winding numbers therefore sum up to zero, as expected for finite lattices with an equal number of sites of positive/negative chirality. The existence of such a second singular point due to the discretisation process is therefore topologically constrained. Note that those two singular points are the only existing ones when \(4\pi/(aL)>1\). In the case \(4\pi/(aL)<1\), two other points also appear at \((x,k)=(\arcsin(4\pi/(aL))/(2\pi)L,\pi/a)\) and \((x,k)=(L/2-\arcsin(4\pi/(aL))/(2\pi)L,\pi/a)\), which are also characterized by non-zero winding numbers that sum up to zero. For the sake of brevity and simplicity, we shall however only focus our discussion on the case \(4\pi/(aL)>1\) that yields only two singular points.

Figure 9: (top) Energies of the symbol Hamiltonians for (left) the model \(H_{\!I}\) in the regime \(4\pi/(aL)>1\), (center) the model \(H_{\!I\!I}\) in the regime \(La/\pi>1\) and (right) the model \(H_{\!I\!I\!I}\). (bottom) Position of the gap closing points in phase space. Those positions are denoted by a red/blue dot depending on the chirality of the zero-mode associated to them at the operator level (light color is used for the equivalent periodic images). The different values of the winding number \(W\) are highlighted in red/blue/green.

In that case, the two chiral zero-modes associated, at the operator level, to these two degeneracy points of opposite winding numbers resemble the two edge states of the standard SSH model with open boundary conditions, in that they are well separated in position space, around \(x=0\) and \(x=L/2\), the only difference being that the new system displays smoother interfaces. Therefore, one can also apply the usual bulk-edge correspondence, by relating the existence of a topological zero-mode with the difference of Brillouin zone winding numbers \(W_{\uparrow}\) far to the left/right side of the mode in position space (vertical dashed lines in figure 9). The two results agree, i.e. the value of the winding number \(W_{\circlearrowright}\), when the shell circles around a zero-energy degeneracy point, corresponds to the difference of the Brillouin zone winding numbers \(W_{\uparrow}\) from each side of the interface (see figure 9). One should nevertheless notice that this equivalence is only well established here because there is no other singular mode on the vertical line (\(x=0,k\in[0,2\pi/a]\)), so that the circle surrounding a degeneracy point can be smoothly deformed into two vertical lines along the Brillouin zone without crossing another band crossing.
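These winding numbers can be verified numerically; the sketch below (ours, using the illustrative values \(L=20\), \(a=0.4\) of figure 10, so that \(4\pi/(aL)\approx 1.57>1\)) integrates the phase of the off-diagonal block \(h_{I}(x,k)\) along small counterclockwise circles around the two singular points:

```python
import numpy as np

L, a = 20.0, 0.4    # illustrative values, as in figure 10

def h_I(x, k):
    """Lower off-diagonal block of the symbol H_I(x,k) in (46)."""
    return np.sin(2 * np.pi * x / L) * L / (2 * np.pi) + (np.exp(1j * k * a) - 1) / a

def local_winding(x0, k0, r=0.3, npts=400):
    """Phase winding of h_I along a small counterclockwise circle around (x0, k0)."""
    s = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    vals = h_I(x0 + r * np.cos(s), k0 + r * np.sin(s))
    dphi = np.angle(np.roll(vals, -1) / vals)
    return int(round(dphi.sum() / (2.0 * np.pi)))

print(local_winding(0.0, 0.0))     # +1: the Jackiw-Rebbi-like singular point
print(local_winding(L / 2, 0.0))   # -1: the spurious point from discretisation
```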
This is not always the case, and a good illustration is the following dual model of (46), where position and wavenumber play inverted roles

\[H_{\textit{II}}(x,k)=\begin{pmatrix}0&-i\frac{\sin(ak)}{a}+i(e^{-i2\pi x/L}-1)\frac{L}{2\pi}\\ i\frac{\sin(ak)}{a}-i(e^{i2\pi x/L}-1)\frac{L}{2\pi}&0\end{pmatrix}. \tag{48}\]

The Jackiw-Rebbi model is again recovered in the limit \(a\to 0\), \(L\to+\infty\), but this second discretized model also exhibits two singular points (in the regime \(La/\pi>1\)), \((x,k)=(0,0)\) and \((x,k)=(0,\pi/a)\), which are now separated in wavenumber space rather than in position space. The associated chiral zero-modes, at the operator level, are thus only separated in wavenumber space, unlike in the previous discrete model. As such, the difference of Brillouin zone winding numbers \(W_{\uparrow}\) vanishes and is thus unable to detect the existence of chiral zero-modes. This is an example where the bulk-edge correspondence of a discretized version of a continuous model is not appropriate to identify chiral zero-modes, while the mixed \(x-k\) correspondence, applied locally in phase space, still is. The difference of position winding numbers \(W_{\to}\) along horizontal lines of positive/negative wavenumber - as in the bounded continuous case of section 3.2 - however coincides with the value of \(W_{\circlearrowright}\), since no low-wavenumber singular point is here to prevent the deformation of the circle contour into the horizontal one.

Finally, since the topological index only accounts for modes localised both in position and wavenumber spaces, pairs of spurious zero-modes of opposite chirality, separated in either or both position/wavenumber directions, could appear when one studies finite approximations of a continuous model.

Figure 10: Plots of the different zero-modes of the operator associated to the symbol (left) \(H_{\textit{I}}\), (center) \(H_{\textit{II}}\), and (right) \(H_{\textit{III}}\) with \(L=20\) and \(a=0.4\). We plot in red/orange the modes of positive chirality and in blue/purple the modes of negative chirality, in real space and in Fourier space. All those modes are zero-modes to a very good approximation, \(\left\|H\left|\psi\right\rangle\right\|<10^{-9}\).

A last example where zero-modes appear in both directions is provided by the model

\[H_{\textit{III}}(x,k)=\begin{pmatrix}0&-i\frac{\sin(ak)}{a}+\sin(2\pi x/L)\frac{L}{2\pi}\\ i\frac{\sin(ak)}{a}+\sin(2\pi x/L)\frac{L}{2\pi}&0\end{pmatrix} \tag{49}\]

which again converges to the Jackiw-Rebbi model in the limit \(L\to+\infty\) and \(a\to 0\), but now displays 4 singular points in phase space: 2 of winding number \(W_{\circlearrowright}=+1\) at \((x,k)=(0,0)\) and \((x,k)=(L/2,\pi/a)\), and 2 of winding number \(W_{\circlearrowright}=-1\) at \((x,k)=(L/2,0)\) and \((x,k)=(0,\pi/a)\). Therefore, the only winding numbers that can detect the presence of chiral zero-modes are the \(W_{\circlearrowright}\)'s of the mixed \(x-k\) mode-shell correspondence, which are evaluated on \((x,k)\)-circles in phase space, since the position and Brillouin zone winding numbers \(W_{\rightarrow}\) and \(W_{\uparrow}\) both vanish. Those three examples illustrate why the mode-shell correspondence is a natural and general formalism to describe in a unified fashion the existence of all the topologically protected chiral zero-modes.
The bulk-edge correspondence and the low-high-wavenumber correspondence are just particular cases which, alone, are not always able to predict the existence of topologically protected chiral zero-modes.

## 4 Higher dimensional chiral mode-shell correspondences

### Expression of the general chiral index

In the previous sections, we focused on the mode-shell correspondence in systems of dimension \(D=1\), since this is where the semi-classical invariant takes the simplest forms. However, the mode-shell correspondence generalizes to cases where chiral zero-modes are embedded in a space of higher dimension \(D>1\). While this correspondence can still easily be shown to satisfy \(\mathcal{I}_{\text{modes}}=\mathcal{I}_{\text{shell}}\), the main difficulty is to obtain a semi-classical expression of the invariant \(\mathcal{I}_{\text{shell}}\).

To understand why there is a difficulty in higher dimension, let us start in \(D=1\) dimension, but take into account the lattice polarisation term \(\operatorname{Tr}\hat{C}\hat{\theta}_{\Gamma}\) in the mode-shell correspondence (11). As we show in appendix D, the naive semi-classical expansion of \(\mathcal{I}_{\text{shell}}\) becomes

\[\mathcal{I}_{\text{shell}}=\sum_{\alpha}C_{\alpha}\Gamma+W^{+}-W^{-}+O(1/\Gamma) \tag{50}\]

which contains the expected difference of winding numbers \(W^{+}-W^{-}\) but also a diverging term in \(\Gamma\), proportional to \(\sum_{\alpha}C_{\alpha}\), which is the chiral polarisation of the sites in the unit cell. We argue that, in fact, since \(\mathcal{I}_{\text{shell}}\) is finite under the gap condition, the term \(\sum_{\alpha}C_{\alpha}\Gamma\) must vanish through the condition \(\sum_{\alpha}C_{\alpha}=0\).

Actually, this expression is reminiscent of what occurs in higher-dimensional spaces. Indeed, a naive semi-classical expansion of the shell index for \(D\geqslant 1\) (i.e. with cut-off parameter \(\Gamma\to+\infty\)) would lead to an expansion of the form

\[\mathcal{I}_{\text{shell}}=\sum_{k=0}^{D_{\mathcal{I}}}c_{k}\Gamma^{D_{\mathcal{I}}-k}+O(1/\Gamma) \tag{51}\]

where \(D_{\mathcal{I}}\) is the number of infinite dimensions (in position and in wavenumber) of the problem. In the \(1D\) case above, \(c_{0}=\sum_{\alpha}C_{\alpha}\) and \(c_{1}=W^{+}-W^{-}\). In general, because the index must converge toward an integer in the \(\Gamma\to+\infty\) limit, some cancellations must occur so that \(c_{k}=0\) for \(k<D_{\mathcal{I}}\) and only the term \(c_{D_{\mathcal{I}}}\) remains, which turns out to be a (higher dimensional) winding number. However, it is not easy to prove that \(c_{k}=0\) for all \(k<D_{\mathcal{I}}\) without demanding that the index converge toward an integer. More importantly, because we would need to carry the naive semi-classical expansion of \(\mathcal{I}_{\text{shell}}\) to higher order terms in order to capture the converging component \(c_{D_{\mathcal{I}}}\), the number of terms in the expression of \(c_{D_{\mathcal{I}}}\) would grow, which is difficult to manage and simplify.

In the appendix E, we develop a systematic method to make the cancellations appear at the level of the operators. We are therefore able to obtain an operator expression of the shell index whose semi-classical limit gives directly the coefficient \(c_{D_{\mathcal{I}}}\) as the leading term.
This allows us to obtain a meaningful semi-classical expression of the shell index, as a generalized winding number \(\mathcal{W}_{2D-1}\) in \(2D-1\) dimensions as

\[\mathcal{I}_{\text{shell}}\underset{\text{lim}}{=}\frac{-2(D!)}{(2D)!(-2i\pi)^{D}}\int_{\text{shell}}\operatorname{Tr}^{\text{int}}(U^{\dagger}dU)^{2D-1}\equiv\mathcal{W}_{2D-1} \tag{52}\]

which is the expression anticipated in the introductory general outlines (14). This is one of the key results of this paper. We now provide some elements of the proof of this formula, to give some intuition of the result, while keeping the more computationally intensive parts in the appendix E.

Consider a \(D\)-dimensional system whose Hilbert space basis is labelled by \(\mathds{Z}^{n}\times\mathds{R}^{m}\times\llbracket 1,K\rrbracket\) with \(n+m=D\), where \(n\) is the number of discrete dimensions, \(m\) is the number of continuous dimensions and \(K\) is the number of internal degrees of freedom. Sub-systems of \(\mathds{Z}^{n}\times\mathds{R}^{m}\times\llbracket 1,K\rrbracket\), such as e.g. the discrete and continuous half-planes (\(\mathds{N}\times\mathds{Z}\) and \(\mathds{R}^{+}\times\mathds{R}\) respectively) as well as finite lattices, could also be included, as we often have a natural way to extend the sub-system Hamiltonian to a larger system by introducing trivial coefficients with no inter-site coupling elsewhere.

After assuming this structure, we assign to each continuous dimension \(i\) a position operator \(x_{i}\) and a wavenumber operator \(\partial_{x_{i}}\) which satisfy \([\partial_{x_{i}},x_{i}]=\mathds{1}\). Similarly, to each discrete dimension \(i\) is assigned a position operator \(n_{i}\) and a translation operator \(T_{i}\) which satisfy \(T_{i}^{\dagger}n_{i}T_{i}-n_{i}=\mathds{1}\). To treat the continuous and discrete cases in a unified way, we can define the operator \(\hat{d}_{i}\) as \(\hat{d}_{2j}=x_{j}\) and \(\hat{d}_{2j+1}=\partial_{x_{j}}\) in the continuous case, and as \(\hat{d}_{2j}=T_{j}^{\dagger}n_{j}\) and \(\hat{d}_{2j+1}=T_{j}\) in the discrete case, so that we have the single commutation relation \([\hat{d}_{2j+1},\hat{d}_{2j}]=\mathds{1}\). Since this commutation relation is proportional to the identity, it allows us to use an "integration by parts trick". Indeed, similarly to functions, where \(\int dx\,a(x)=-\int dx\,x\partial_{x}a(x)\), we also have the following relation for operators

\[\operatorname{Tr}\hat{A}=\operatorname{Tr}\Bigl([\partial_{x},x]\hat{A}\Bigr)=-\operatorname{Tr}\Bigl(x[\partial_{x},\hat{A}]\Bigr)\,. \tag{53}\]

In the appendix E, we use such an integration by parts to make some cancellations appear at the operator level, and thus obtain another expression of the shell index which reads

\[\mathcal{I}_{\text{shell}}=\frac{(-1)^{D}D!}{2^{D}(2D)!}\left(\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{d},\hat{H}_{F}]^{2D-1}[\hat{d},\hat{\theta}_{\Gamma}])+\frac{1}{2}\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{d},\hat{H}_{F}]^{2D}[\hat{\theta}_{\Gamma},\hat{H}_{F}])\right)+O(\Gamma^{-\infty}) \tag{54}\]

where \(O(\Gamma^{-\infty})\) means that the equality is valid up to terms which decay faster than any power of \(1/\Gamma\). In this expression appears \(\hat{d}\equiv\sum_{j}\hat{d}_{2j}dx_{j}+\hat{d}_{2j+1}dk_{j}\), which is an operator-valued one-form. The equation (54) is therefore an anti-symmetrised sum over the permutations \(\sigma\) of the coefficients \(\hat{d}_{j}\).
For example

\[\operatorname{Tr}(\hat{C}\hat{H}_{F}[\hat{d},\hat{H}_{F}]^{2D-1}[\hat{d},\hat{\theta}_{\Gamma}])=\sum_{\sigma}(-1)^{\sigma}\operatorname{Tr}\left(\hat{C}\hat{H}_{F}\prod_{j=1}^{2D-1}[\hat{d}_{\sigma(j)},\hat{H}_{F}][\hat{d}_{\sigma(2D)},\hat{\theta}_{\Gamma}]\right). \tag{55}\]

The equality (54) can be obtained by assuming only that \(\hat{H}\) is gapped deep inside the shell. But if, moreover, the symbol \(H(x,k)\) admits a semi-classical limit when \(\Gamma\to+\infty\), we can show that (54) reduces to the simplified expression

\[\mathcal{I}_{\text{shell}}\underset{\lim}{=}\frac{D!}{(2D)!(2i\pi)^{D}}\int_{\text{shell}}\operatorname{Tr}^{\text{int}}(CH_{F}(dH_{F})^{2D-1}) \tag{56}\]

where \(dH_{F}\) is now the differential 1-form of \(H_{F}\) in phase space, which replaces the commutator \([\hat{d},\hat{H}_{F}]\) in the semi-classical limit, \((dH_{F})^{2D-1}\) is the \((2D-1)\)-fold wedge product of \(dH_{F}\), and the shell is the \(2D-1\) dimensional surface enclosing the zero-mode in phase space. The final result (52) is then obtained by substituting \(H_{F}\) by

\[H_{F}=\begin{pmatrix}0&U^{\dagger}\\ U&0\end{pmatrix} \tag{57}\]

in (56). Note that, by homotopy, this formula can also be transformed into

\[\mathcal{W}_{2D-1}=\frac{-2(D!)}{(2D)!(-2i\pi)^{D}}\int_{\text{shell}}\operatorname{Tr}^{\text{int}}(h^{-1}dh)^{2D-1} \tag{58}\]

where \(h(x,k)\) is the lower off-diagonal block of the symbol \(H(x,k)\) (see (2)). The homotopy invariance is obtained from the smooth deformation of \(h\) into \(U\) through the homotopic map \(h_{t}=h(1-t+t\sqrt{h^{\dagger}h})^{-1}\), with \(t\) varying from 0 to 1.

In the next two sections, we present different examples of chiral topological systems in higher (\(D>1\)) dimensions and analyse how the mode-shell correspondence stated above can be applied to those examples. In order to simplify the analysis, we focus on examples in \(D=2\) dimensions. We present two general methods that provide those simple-to-analyse higher-dimensional examples, combining lower-dimensional examples through tensor product structures. The first method, which we refer to as the _multiplicative_ tensor product construction and denote with the symbol \(\boxtimes\), yields examples of weak insulators, which exhibit a macroscopic number of boundary states, while the second method, which we refer to as the _additive_ tensor product construction and denote with the symbol \(\boxplus\), provides examples of higher-order insulators that exhibit e.g. corner states (see figure 11). The systems serving as building blocks for these constructions can be discrete or continuous, and of any dimension. Also, those constructions can be combined or used multiple times to create examples in even higher dimension (see figure 12 for \(D=3\)).

### Chiral Weak-insulators and flat-band topology

One way to engineer topological states in higher dimension is to stack \(1D\) topological systems, such as SSH chains. We would then have a number of gapless modes growing extensively with the transverse size (say \(y\)) of the sample, as it would be equal to the number of copies \(N_{y}\). The zero-modes would then gradually form a flat zero-energy band in this transverse direction, a phenomenon observed experimentally [86, 87, 91, 101]. Such stacked systems result in what is often called "weak topological insulators" in the literature [101, 102, 103, 104, 105, 106, 107].
Stacked versions of \(2D\) quantum spin Hall [20, 21, 111] or quantum Hall [112] phases are other \(3D\) examples beyond the chiral case. The adjective "weak" was originally used since the edge states were first expected not to be topologically protected against disorder or inter-layer couplings [102, 103], but it was later realized that they turn out to be robust to such kinds of perturbations [104, 105, 106, 107], making the terminology somewhat outdated nowadays. Also, a weak topological insulator is usually characterized by a topological index associated to a reduced Brillouin zone (and thus dubbed _weak invariant_), in contrast with _strong_ topological insulators whose (strong) invariants encompass the entire Brillouin zone. We recall that \(1D\) strong chiral topological insulators are the only strong insulators that are captured by the chiral index defined in this paper. The mode-shell correspondence with higher-dimensional strong invariants will be presented in a follow-up paper.

In this section, we analyse chiral weak insulators through the mode-shell correspondence. To do so, we consider \(2D\) systems, such as those depicted in figures 13 and 14, where the left and right edges host a macroscopic number of edge modes in the \(y\) direction. To select the leftmost extended edge states, we then choose a cut-off operator which is uniform in the \(y\) direction and localised near the left edge. Next, if the system is such that its bulk is gapped, and if its upper and lower edges are also gapped, then the invariant \(\mathcal{I}=\mathrm{Tr}(\hat{C}(1-\hat{H}_{F}^{2})\hat{\theta}_{\Gamma})\) can be shown to be quantised as in the \(1D\) case. The only difference with the \(1D\) case is that the index \(\mathcal{I}\) is (macroscopically) much larger and depends on the transverse length \(N_{y}\) of the lattice. The proof of the quantisation of the number of edge modes only requires chiral symmetry and is insensitive to the presence of disorder or inter-layer couplings, which shows the robustness of those modes.

Figure 11: Sketches of \(2D\) lattices built out of (a) the multiplicative tensor product construction and (b) the additive tensor product construction, from lower dimensional Hamiltonians \(H_{1}\) and \(H_{2}\). The density of the zero-modes is represented in red/blue depending on their positive/negative chirality, while grey sites carry no chirality. The thickness of the bonds represents the amplitude of the coupling.

Let us now compute the shell invariant in phase space using the Wigner-Weyl transform. One gets

\[\mathcal{I}=\frac{1}{2}\sum_{(x,y)\in\mathcal{L}}\int_{0}^{2\pi}\frac{dk_{x}}{2\pi}\int_{0}^{2\pi}\frac{dk_{y}}{2\pi}\operatorname{Tr}^{\text{int}}(C\star H_{F}\star[\theta_{\Gamma},H_{F}]_{\star}). \tag{59}\]

The next step is to perform a semi-classical expansion in terms of \(1/\Gamma\) and keep the dominant term. To be valid, this approximation requires the Hamiltonian to vary slowly in position \((x,y)\) (see section 2.5); this is the case in the major part of the shell, which is in the bulk, as we have translation invariance in both directions. However, it is not valid near the upper and lower edges, since there the Hamiltonian varies sharply in the \(y\) direction.
If we were to ignore the perturbations due to the edges, we would be allowed to perform a semi-classical expansion in both directions and we would get a bulk index \(\mathcal{I}^{b}\sim\mathcal{I}\) \[\begin{split}\mathcal{I}^{b}&=\frac{1}{2}\int_{0}^{2\pi}\frac{dk_{y}}{2\pi}\int_{0}^{2\pi}\frac{dk_{x}}{2\pi}\operatorname{Tr}^{\text{int}}(CH_{F}^{b}i\partial_{k_{x}}H_{F}^{b}\sum_{(x,y)\in\mathcal{L}}\delta_{x}\theta_{\Gamma}(x,y))\\ &=N_{y}\int_{0}^{2\pi}\frac{dk_{y}}{2\pi}\int_{0}^{2\pi}\frac{dk_{x}}{4i\pi}\operatorname{Tr}^{\text{int}}(CH_{F}^{b}\partial_{k_{x}}H_{F}^{b})\end{split} \tag{60}\] where \(N_{y}\) is the number of stacked chains and \(H_{F}^{b}\) is the symbol of \(\hat{H}_{F}\) in the bulk. On the right-hand side of the expression, the term \[\frac{1}{4i\pi}\int_{k_{x}\in[0,2\pi],k_{y}=k_{y_{0}}}dk_{x}\operatorname{Tr}^{\text{int}}(CH_{F}^{b}\partial_{k_{x}}H_{F}^{b})\equiv\mathcal{I}_{\text{weak}}(k_{y_{0}}) \tag{61}\] is known to be a topological invariant which remains constant when deforming the symbol \(H_{F}^{b}\) without closing the gap. As a result, it does not depend on the choice of \(k_{y_{0}}\), so the average \(\int_{0}^{2\pi}\frac{dk_{y}}{2\pi}\) can be replaced by the integration over any line of constant \(k_{y}\) in Fourier space. Figure 12: Sketches of possible \(3D\) topological systems obtained by applying (left) twice the additive tensor product construction, yielding zero-mode corner states, (center) both the additive and the multiplicative tensor product constructions, providing an extensive number of zero-mode states localised on hinges, and (right) twice the multiplicative tensor product construction, leading to an extensive number of zero-mode states localised on surfaces. Such an invariant is sometimes called a weak invariant because the integration only runs over a one-dimensional path while the system is two-dimensional. The bulk invariant then reads \[\mathcal{I}^{b}=N_{y}\,\mathcal{I}_{\rm weak}\,. \tag{62}\] By ignoring the effect of the edges, \(\mathcal{I}^{b}\) is in principle an approximation of \(\mathcal{I}\). To recover \(\mathcal{I}\), one thus needs to add a correction term \(\Delta_{\rm edge}\) coming from the fact that the actual Hamiltonian near the edges at \(y=0\) and \(y=L_{y}\) differs from the bulk Hamiltonian, that is \[\mathcal{I}=\mathcal{I}^{b}+\Delta_{\rm edge}\,. \tag{63}\] This correction can be computed numerically by evaluating \(\mathcal{I}\) before the semi-classical expansion. Since \(\mathcal{I}\), \(N_{y}\) and \(\mathcal{I}_{\rm weak}\) are all integers, \(\Delta_{\rm edge}\) must also be an integer and its specific value may _a priori_ depend on the boundary conditions. However, since the correction term only originates from the sites located close to a boundary, this term is bounded and thus cannot scale with \(N_{y}\). As a result, even the strangest boundary condition cannot change the fact that there is a macroscopic number of zero-modes localised on the left edge of the \(2D\) lattice. In fact, even if one has a boundary condition that closes the gap on the upper/lower boundary, the computation above mostly remains the same and we still have the relation (63). Furthermore, since the chiral index also reads \(\mathcal{I}=\operatorname{Tr}\hat{C}(1-\hat{H}_{F}^{2})\hat{\theta}_{\Gamma}\), and since \((1-\hat{H}_{F}^{2})\) is only non-zero for modes of very small energy, it follows that there should be a macroscopic number of very small energy modes on the left edge.
However, \(\mathcal{I}\) and \(\Delta_{\rm edge}\) may in that case no longer be integers and deviate from quantisation. But the massive polarisation of the zero-modes remains. They are still protected as long as \(\mathcal{I}_{\rm weak}\neq 0\), which is guaranteed since \(\mathcal{I}_{\rm weak}\) is a bulk topological invariant which cannot change as long as there is a bulk gap. It should be noted that weak invariants are also used to prove the existence of edge Fermi arcs in semi-metals like graphene and Weyl semimetals [91, 112, 113]. Figure 13: Two examples of lattices with a macroscopic number of chiral zero-modes on the left vertical edge. (Left) Stacking of topological chiral SSH chains in the vertical direction with interlayer couplings that preserve chiral symmetry. (Right) Stacking of staggered trivial and topological SSH chains that preserve chiral symmetry. This example has the advantage of involving only nearest neighbour interactions without breaking chiral symmetry. To compute the macroscopic topological index associated to these lattices, one needs to choose a cut-off which is uniform in the vertical coordinate and decreases only in the horizontal coordinate. We decided not to include this example here, in order not to lengthen the article further, but it can be understood in a similar manner as the previous presentation, the only difference being that we would have to introduce, in the definition of the index, a cut-off in wavenumber space in the direction tangent to the edge, in order to select the part of Fourier space where the Fermi arcs exist (between the edge projections of two bulk Dirac/Weyl cones where the bulk gap closes). #### Multiplicative tensor product construction \(\boxtimes\) In this section, we present a simple but general mathematical procedure to generate such stackings of chiral topological systems. At the Hamiltonian level, it simply consists of defining a Hamiltonian \(\hat{H}\) as the tensor product of two lower-dimensional gapped Hamiltonians \(\hat{H}_{1}\) and \(\hat{H}_{2}\)[114] as \[\hat{H}=\hat{H}_{1}\otimes\hat{H}_{2} \tag{64}\] where \(\hat{H}_{1}\) is chiral symmetric, while \(\hat{H}_{2}\) encodes the coupling between the stacked copies. Such a procedure was recently referred to as "multiplicative topology" in the literature [115, 116, 117]. If \(\hat{H}_{1}\) and \(\hat{H}_{2}\) are Hamiltonians on lattices or continuous spaces of dimension \(D_{1}\) and \(D_{2}\), then \(\hat{H}\) is a Hamiltonian which acts on a \(D=D_{1}+D_{2}\) dimensional space. Moreover, if we denote by \(\hat{C}_{1}\) the chiral symmetry operator of \(\hat{H}_{1}\), then \(\hat{H}\) has the chiral symmetry \(\hat{C}=\hat{C}_{1}\otimes\mathds{1}\). Importantly, the spectral properties of \(\hat{H}\) are entirely determined by those of the sub-systems \(\hat{H}_{1}\) and \(\hat{H}_{2}\). Indeed, if \(|\psi_{1}^{n}\rangle\) is an eigenbasis of \(\hat{H}_{1}\) with energies \(E_{1}^{n}\) (footnote 10) and \(|\psi_{2}^{m}\rangle\) is an eigenbasis of \(\hat{H}_{2}\) with energies \(E_{2}^{m}\), then \(|\psi_{1}^{n}\rangle\otimes|\psi_{2}^{m}\rangle\) is an eigenbasis of \(\hat{H}\) with energies \(E_{1}^{n}E_{2}^{m}\). In particular, the zero-modes of \(\hat{H}\) are those which are of the form (or a linear combination of) \(|\psi_{1}^{n}\rangle\otimes|\psi_{2}^{m}\rangle\) where either \(|\psi_{1}^{n}\rangle\) or \(|\psi_{2}^{m}\rangle\) is a zero-mode.
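To make this spectral structure concrete, here is a minimal numerical sketch (our own illustration, not taken from the paper; written in Python/NumPy, and using an SSH chain with an odd number of sites so that the chain hosts exactly one exact zero-mode) which builds \(\hat{H}=\hat{H}_{1}\otimes\hat{H}_{2}\) and checks that each zero-mode of \(\hat{H}_{1}\) gives rise to \(N\) zero-modes of \(\hat{H}\):

```python
import numpy as np

def ssh_open_chain(n_cells, t=0.4, tp=1.0):
    """Open SSH chain with 2*n_cells+1 sites (odd number), alternating
    weak/strong hoppings t and tp: it hosts exactly one zero-mode,
    localised on the left edge when |t| < |tp|."""
    n_sites = 2 * n_cells + 1
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        H[i, i + 1] = H[i + 1, i] = t if i % 2 == 0 else tp
    return H

H1 = ssh_open_chain(10)   # chiral factor with one zero-mode
N = 5
H2 = np.eye(N)            # gapped transverse factor (uncoupled stack)
H = np.kron(H1, H2)       # multiplicative construction H1 (x) H2, eq. (64)

# Eigenvalues of a tensor product are the products E1^n * E2^m, so the
# single zero-mode of H1 yields N zero-modes of H.
print(np.sum(np.abs(np.linalg.eigvalsh(H1)) < 1e-10))   # -> 1
print(np.sum(np.abs(np.linalg.eigvalsh(H)) < 1e-10))    # -> 5
```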
This means that if \(\hat{H}_{2}\) acts on a finite space with \(N\) sites, then to each zero-mode \(|\psi_{1}^{n_{0}}\rangle\) of \(\hat{H}_{1}\) one can associate \(N\) zero-modes of \(\hat{H}\), since \(|\psi_{1}^{n_{0}}\rangle\otimes|\psi_{2}^{m}\rangle\) remains a zero-mode whatever \(|\psi_{2}^{m}\rangle\) is. Footnote 10: The \(n\) in \(E_{1}^{n}\) is just a label and must not be understood as an exponent. Figure 14: Phase space representation of the zero-modes (in red and blue) and the shell (in green) for weak insulators such as those depicted in figure 13. In the bulk, where the semi-classical limit occurs, the shell invariant can be reduced to a weak invariant which is a winding number in the \(k_{x}\) direction (green arrows) and a multiplicative constant depending on the \(y\) and \(k_{y}\) directions. In particular, if \(\hat{H}_{2}\) is just the identity operator on a finite lattice of \(N_{y}\) sites, i.e. \(\hat{H}_{2}=\sum_{j=1}^{N_{y}}\ket{j}\bra{j}\), then \(\hat{H}=\hat{H}_{1}\otimes\hat{H}_{2}\) is just the Hamiltonian of \(N_{y}\) stacked copies of the topological chiral system described by \(\hat{H}_{1}\), with no coupling between the different copies. If \(\mathcal{I}_{1}\) is the non-trivial chiral topological index of \(\hat{H}_{1}\) with respect to a cut-off operator \(\hat{\theta}_{\Gamma}\), and if \(\hat{H}_{2}\) is gapped, then, as we detail below, one can check that \(\hat{H}\) also has a well-defined topological index \(\mathcal{I}\) associated to the cut-off operator \(\hat{\theta}_{\Gamma}\otimes\mathds{1}\), which is given by \[\mathcal{I}=\mathcal{I}_{1}\times N_{y}\,. \tag{65}\] We thus naturally end up with a number of chiral zero-modes that grows extensively with the size of the system in the stacking direction. The zero-modes would then gradually form a zero-energy flat band in this direction, a phenomenon observed experimentally [86, 87, 91, 101]. Examples: One can use this multiplicative construction to generate some of the lattices in figures 11 and 13. For example, the model described in the left parts of these figures can be generated from the tensor product of a topological SSH model, i.e. \(\hat{H}_{1}=\hat{H}_{\text{SSH}}\) given by (28), with a simple non-chiral model of the form \(\hat{H}_{2}=\sum_{n}\left(\ket{n}\bra{n}+t^{\prime\prime}(\ket{n}\bra{n+1}+\ket{n+1}\bra{n})/2\right)\) which, in the bulk, has the symbol \(H_{2}(n,k)=1+t^{\prime\prime}\cos(k_{y})\) and is hence gapped when \(|t^{\prime\prime}|<1\). In the multiplicative construction, the \(t^{\prime\prime}\) coefficient then creates vertical couplings between different SSH layers which preserve chiral symmetry and the existence of topological zero-modes. The symbol of the tensored Hamiltonian \(\hat{H}=\hat{H}_{1}\otimes\hat{H}_{2}\) reads \[H(n_{x},n_{y},k_{x},k_{y})=\begin{pmatrix}0&t+t^{\prime}e^{ik_{x}}\\ t+t^{\prime}e^{-ik_{x}}&0\end{pmatrix}(1+t^{\prime\prime}\cos(k_{y})) \tag{66}\] which is just the symbol of the SSH model multiplied by a scalar constant depending on \(k_{y}\). Therefore, when computing the weak index \(\mathcal{I}_{\text{weak}}(k_{y_{0}})\) as described by the formula (61), we obtain that, for \(|t^{\prime\prime}|<1\), \(\mathcal{I}_{\text{weak}}(k_{y_{0}})=\mathcal{W}_{1}\) with \(\mathcal{W}_{1}\) the winding number of the \(1D\) SSH model. The \(1D\) mode-shell correspondence gives us \(\mathcal{I}_{1}=\mathcal{W}_{1}\) and the multiplicative structure implies \(\mathcal{I}=\mathcal{I}_{1}\times N_{y}\).
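As a quick consistency check (again our own illustration, not from the paper), the weak index can be evaluated numerically from the symbol (66): since the scalar factor \(1+t^{\prime\prime}\cos(k_{y_{0}})\) never vanishes for \(|t^{\prime\prime}|<1\), the winding of the off-diagonal block along \(k_{x}\) is the same for every \(k_{y_{0}}\) and equals the SSH winding number, up to the overall sign convention chosen in (61):

```python
import numpy as np

def winding_number(h_of_k, n_k=4001):
    """Winding number (1/2i*pi) \oint h^{-1} dh of a non-vanishing
    scalar symbol h(k), computed by accumulating the phase of h."""
    k = np.linspace(0.0, 2.0 * np.pi, n_k)
    dphase = np.diff(np.angle(h_of_k(k)))
    dphase = (dphase + np.pi) % (2.0 * np.pi) - np.pi   # remove 2*pi jumps
    return int(round(dphase.sum() / (2.0 * np.pi)))

t, tp, tpp = 0.4, 1.0, 0.5   # |t| < |t'| (topological), |t''| < 1 (gapped)
for ky0 in (0.0, np.pi / 3, np.pi):
    h = lambda kx: (t + tp * np.exp(1j * kx)) * (1.0 + tpp * np.cos(ky0))
    print(winding_number(h))   # -> 1 for every ky0
```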
One can therefore verify that at the leading order in \(N_{y}\) we have \[\mathcal{I}\underset{N_{y}\to\infty}{\sim}\mathcal{I}_{\text{weak}}(k_{y_{0}})N_{y} \tag{67}\] which is indeed the result predicted by the mode-shell correspondence. In this system, this relation is in fact an equality, due to the multiplicative structure which prevents edge corrections to the formula from occurring. One problem one may have with the above example is that nearest neighbour couplings are forbidden, because they would break chiral symmetry. There are models which do not have this drawback. For example, one could first stack a topological chain with a trivial SSH chain as depicted in figure 15, and then use the tensor product construction to create a \(2D\) stack of this two-layer quasi-\(1D\) model. If one adds nearest neighbour interactions between the layers, one then obtains the model depicted in the right part of figure 13. This inter-layer coupling breaks the multiplicative structure, but the existence of chiral zero-modes only relies on the chiral symmetry and on a gap on the shell, two assumptions which are not broken by those couplings. So, as long as these inter-layer couplings are not too strong to close the gap, the macroscopic number of chiral zero-modes on the left edge remains topologically protected. Figure 15: Tensor product structure which generates the model illustrated in figure 13 b), up to the addition, _a posteriori_, of inter-layer couplings. The left model, given by \(H_{1}\), is the superposition of a topological and a trivial SSH model. The right model, given by \(H_{2}\), is a trivial chain with constant onsite coupling \(\hat{H}_{2}=\sum_{n}\ket{n}\bra{n}=\mathds{1}\). ### Higher-order chiral insulators and additive tensor product construction In this section, we discuss how the mode-shell correspondence can be applied to describe higher-order chiral insulators which exhibit zero-modes localised in more than one dimension. To generate higher-dimensional chiral topological examples which are simple to study, we follow a procedure that we call the _additive tensor product construction_ and we refer to it by the symbol \(\boxplus\). We use this method to generate two examples in \(D=2\): one lattice model and one continuous model, on which we verify, illustrate and discuss the predictions of the mode-shell correspondence theory. #### Additive tensor product construction \(\boxplus\) The additive tensor product construction is another procedure [118, 119] by which one can generate a higher-dimensional topological chiral Hamiltonian \(\hat{H}\) from two lower-dimensional Hamiltonians \(\hat{H}_{1}\) and \(\hat{H}_{2}\). It requires that both \(\hat{H}_{1}\) and \(\hat{H}_{2}\) are chiral symmetric, with chiral operators \(\hat{C}_{1}\) and \(\hat{C}_{2}\) respectively. A chiral higher-dimensional Hamiltonian can then be defined as \[\hat{H}=\hat{H}_{1}\otimes\mathds{1}+\hat{C}_{1}\otimes\hat{H}_{2} \tag{68}\] or equivalently with the roles of 1 and 2 exchanged. Although this additive construction seems a little more involved than the multiplicative one, the spectral properties of \(\hat{H}\) are still determined by those of \(\hat{H}_{1}\) and \(\hat{H}_{2}\) since \[\hat{H}^{2}=\hat{H}_{1}^{2}\otimes\mathds{1}+\mathds{1}\otimes\hat{H}_{2}^{2}+\{\hat{H}_{1},\hat{C}_{1}\}\otimes\hat{H}_{2}=\hat{H}_{1}^{2}\otimes\mathds{1}+\mathds{1}\otimes\hat{H}_{2}^{2}.
\tag{69}\] Therefore, if \(|\psi_{1}^{n}\rangle\) is an eigenbasis of \(\hat{H}_{1}\) with energies \(E_{1}^{n}\) and \(|\psi_{2}^{m}\rangle\) is an eigenbasis of \(\hat{H}_{2}\) with energies \(E_{2}^{m}\), then \(|\psi_{1}^{n}\rangle\otimes|\psi_{2}^{m}\rangle\) is an eigenbasis of \(\hat{H}^{2}\) with energies \((E_{1}^{n})^{2}+(E_{2}^{m})^{2}\), so that the eigenvalues of \(\hat{H}\) are \(\pm\sqrt{(E_{1}^{n})^{2}+(E_{2}^{m})^{2}}\). It follows that the zero-modes of \(\hat{H}\) are of the form \(|\psi_{1}^{n_{0}}\rangle\otimes|\psi_{2}^{m_{0}}\rangle\) where \(|\psi_{1}^{n_{0}}\rangle\)_and_\(|\psi_{2}^{m_{0}}\rangle\) are zero-modes respectively of \(\hat{H}_{1}\) and \(\hat{H}_{2}\). This is quite different from the multiplicative construction, since we need here both Hamiltonians \(\hat{H}_{1}\) and \(\hat{H}_{2}\) to have zero-modes, and not only one of them. As a result, this procedure generates higher-order chiral insulators with few zero-modes, in contrast with weak chiral insulators (see figure 11). Indeed, if \(\hat{H}_{1}\) and \(\hat{H}_{2}\) each have a well-defined chiral topological index \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) (with respect to the cut-off operators \(\hat{\theta}_{\Gamma,1}\) and \(\hat{\theta}_{\Gamma,2}\)), then one can check that \(\hat{H}\) also has a well-defined chiral index \(\mathcal{I}\), associated to the cut-off operator \(\hat{\theta}_{\Gamma,1}\otimes\hat{\theta}_{\Gamma,2}\). We then use the fact that the chiral polarisation of the zero-modes \(|\psi_{1}^{n_{0}}\rangle\otimes|\psi_{2}^{m_{0}}\rangle\) is the product of the chiral polarisations of each individual mode, which leads to \[\mathcal{I}=\mathcal{I}_{1}\times\mathcal{I}_{2}. \tag{70}\] Of course, since the higher-dimensional Hamiltonian \(\hat{H}\) is itself chiral symmetric, it can serve as a new building block to apply the procedure again. By induction, we get the more general formula \[\hat{H}=\hat{H}_{1}\otimes\mathds{1}^{\otimes(N-1)}\,+\hat{C}_{1}\otimes\hat{H}_{2}\otimes\mathds{1}^{\otimes(N-2)}\,+\cdots+\,\hat{C}_{1}\otimes\cdots\otimes\hat{C}_{N-1}\otimes\hat{H}_{N} \tag{71}\] of a chiral Hamiltonian resulting from the additive tensor product construction with \(N\) chiral symmetric Hamiltonians \(\hat{H}_{j}\) of chiral symmetry operators \(\hat{C}_{j}\) and chiral topological indices \(\mathcal{I}_{j}\) (\(j=1\ldots N\)). The chiral symmetry operator of \(\hat{H}\) is then given by the tensor product \(\hat{C}_{1}\otimes\cdots\otimes\hat{C}_{N}\), and the zero-modes of \(\hat{H}\) have a chiral index \(\mathcal{I}=\mathcal{I}_{1}\times\cdots\times\mathcal{I}_{N}\). The next two paragraphs are dedicated to two simple illustrations of the additive tensor product construction in \(D=2\) and to the analysis of the resulting models through the higher-dimensional mode-shell correspondence. #### Example 1: The Benalcazar-Bernevig-Hughes (BBH) model with open boundary conditions Higher-order topological insulators (HOTIs) constitute a class of systems where topological zero-modes are embedded in a higher-dimensional phase space. Those zero-modes can then be trapped at the corners of a material where the trapping potential varies sharply at the edges [23, 24, 119, 120, 121, 122, 123]. The archetypal lattice model describing such a situation is the Benalcazar-Bernevig-Hughes (BBH) model [23], depicted in figure 16, which essentially consists of "crossing" arrays of SSH models along the \(x\) and \(y\) directions such that chiral symmetry is preserved.
The resulting Hamiltonian follows the additive construction and reads \[\hat{H}=\hat{H}_{\text{SSH},x}\otimes\mathds{1}+\sigma_{z}\otimes\hat{H}_{\text{SSH},y} \tag{72}\] where \(\sigma_{z}\) is the chiral symmetry operator of the two underlying SSH models, and which, in a more explicit form, becomes \[\hat{H}=\begin{pmatrix}0&t+t^{\prime}T_{y}^{\dagger}&t+t^{\prime}T_{x}^{\dagger}&0\\ t+t^{\prime}T_{y}&0&0&t+t^{\prime}T_{x}^{\dagger}\\ t+t^{\prime}T_{x}&0&0&-(t+t^{\prime}T_{y}^{\dagger})\\ 0&t+t^{\prime}T_{x}&-(t+t^{\prime}T_{y})&0\end{pmatrix} \tag{73}\] where \(T_{x}=\sum_{n_{x},n_{y}}\left|n_{x}+1,n_{y}\right\rangle\left\langle n_{x},n_{y}\right|\) and \(T_{y}=\sum_{n_{x},n_{y}}\left|n_{x},n_{y}+1\right\rangle\left\langle n_{x},n_{y}\right|\) are the translation operators by one lattice unit along \(x\) and \(y\) respectively. We also impose open boundary conditions as in figure 16. The Hamiltonian \(\hat{H}\) inherits chiral symmetry from the two underlying chiral symmetric lower-dimensional systems, and its chiral operator reads \(\hat{C}=\sigma_{z}\otimes\sigma_{z}\). The chiral zero-modes of \(\hat{H}\) can then be easily found: using the additive chiral construction, we know that the zero-modes of \(\hat{H}\) must also be zero-modes of \(\hat{H}_{\text{SSH},x}\) and \(\hat{H}_{\text{SSH},y}\) on each part of the tensor product. They must therefore be of the form \(\psi_{x}\otimes\psi_{y}\) where \(\psi_{x/y}\) is the zero-mode in the \(x/y\) direction of the SSH model. It follows that for \(\left|t^{\prime}\right|>\left|t\right|\), where the SSH models are topological, the BBH model \(\hat{H}\) has one topological zero-mode of chirality \(+1\) in its bottom-left corner, and therefore it has \(\mathcal{I}=1\) for the cut-off \(\hat{\theta}_{\Gamma}=e^{-(x^{2}+y^{2})/\Gamma^{2}}\), which acts in both position directions. The use of the shell invariant is, however, not as straightforward as in the previous cases. The reason is that the shell, which encircles the corner of interest (see figure 16), runs not only over the bulk but also over the sharp edges where the Hamiltonian does not vary smoothly. Therefore, taking the limit \(\Gamma\to\infty\) in the shell invariant does not guarantee the semi-classical limit in every direction. As a consequence, we cannot derive an expression of the shell index which is as simple as (52). We can however still partially simplify the expression by using the translation invariance of \(\hat{H}_{F}\) in the direction parallel to each edge. In fact, one can show (see appendix F) that the index can be written as the sum of two contributions \(\mathcal{I}_{\text{shell}}=\mathcal{I}_{\text{edge},x}+\mathcal{I}_{\text{edge},y}\) where each contribution is localised at one of the two edges and where we can use the Fourier transform in the direction parallel to that edge. In particular, \(\mathcal{I}_{\text{edge},x}\) can be written as \[\mathcal{I}_{\text{edge},x}=\frac{-1}{24\pi}\int_{0}^{2\pi}dk_{x}\tilde{\text{Tr}}\left(\tilde{C}\tilde{H}_{F}[\tilde{d},\tilde{H}_{F}]^{3}\right) \tag{74}\] where the notation \(\sim\) means that the Wigner-Weyl transform is performed in the tangent direction only. The operator \(\tilde{d}\) is for example \(\tilde{d}=\partial_{x}dk_{x}+T_{y}^{\dagger}n_{y}dy-iT_{y}dk_{y}\). The expression of \(\mathcal{I}_{\text{edge},y}\) is obtained by switching the \(x\) and \(y\) coordinates.
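Before turning to the numerics of the shell index, the corner mode itself is easy to exhibit. The sketch below (our own illustration in Python/NumPy, not the code of Appendix F; it again uses open SSH chains with an odd number of sites so that each \(1D\) factor has exactly one zero-mode) builds the additive construction (68) for two SSH chains and recovers a single corner zero-mode of chirality \(+1\):

```python
import numpy as np

def ssh_open_chain(n_cells, t=0.4, tp=1.0):
    """Open SSH chain with an odd number of sites: one exact zero-mode."""
    n = 2 * n_cells + 1
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = t if i % 2 == 0 else tp
    return H

def chiral(n):
    """Sublattice chiral operator C = diag(+1, -1, +1, ...)."""
    return np.diag([(-1.0) ** i for i in range(n)])

H1 = ssh_open_chain(8); C1 = chiral(H1.shape[0])
H2 = ssh_open_chain(8); C2 = chiral(H2.shape[0])

# Additive tensor product construction, eq. (68)
H = np.kron(H1, np.eye(H2.shape[0])) + np.kron(C1, H2)
C = np.kron(C1, C2)
assert np.allclose(C @ H @ C, -H)    # chiral symmetry {H, C} = 0

E, V = np.linalg.eigh(H)
zero = np.abs(E) < 1e-10
psi = V[:, zero][:, 0]
print(zero.sum(), psi @ C @ psi)     # -> 1 zero-mode, chirality +1.0
```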
Since the semi-classical treatment is not valid in the direction perpendicular to the edge, \(\mathcal{I}_{\text{edge},x}\) (or equivalently \(\mathcal{I}_{\text{edge},y}\)) remains cumbersome to manipulate by hand. In principle, a way to compute it analytically would be to list the bulk eigenmodes of the form \(\psi(n_{y},k_{x})=\psi(k_{x})z^{n_{y}}\) with \(z\) a complex number and \(n_{y}\) the index of the unit-cell in the \(y\)-direction. Then, solving for the eigenmodes of energy \(E\) in the semi-infinite geometry (with one edge) can be done by searching for them as a sum of eigenmodes of the bulk problem with the same energy \(E\) and with \(z\) such that \(|z|\leq 1\) (not exponentially increasing), and then imposing the boundary conditions. Doing so allows for diagonalising \(H(k_{x})\), from which one can deduce \(\tilde{H}_{F}(k_{x})\) and then finally compute \(\mathcal{I}_{\text{edge},x}\). Because these computations would be long and not particularly enlightening, we prefer here to evaluate \(\mathcal{I}_{\text{edge},x/y}\) numerically. This also gives us the opportunity to show explicit examples of numerical codes that compute numerical approximations of the indices. Those can be found in Appendix F. One of these codes computes the index \(\mathcal{I}_{\text{modes}}\) directly from the initial formula (8) while the other one computes \(\mathcal{I}_{\text{shell}}\) using the formulation with partial semi-classical limit (74). For lattices of length \(L=10\) sites, both codes give \(\mathcal{I}=1\) up to a deviation of less than \(1\%\), which validates numerically the mode-shell correspondence for the BBH model. Figure 16: Lattice BBH model with chiral zero-modes of positive/negative chirality depicted in red/blue. The shell is shown in green; an example of a unit cell is shown in purple. The thickness of the links represents the strength of the coupling. The system has open boundary conditions with edges given by the horizontal and vertical lines \(x=0\), \(x=L\), \(y=0\), \(y=L\). We conclude this example by mentioning that the use of additional symmetries is common in the literature on higher-order topological insulators. Those symmetries can be interpreted as a way to reduce the complexity of the computation of the shell index by re-expressing it as a pure bulk quantity. For instance, in [23] it is claimed that, due to the \(C_{4}\) rotational symmetry which is present in the BBH model, the quadrupole moment is a bulk topological invariant related to the number of corner modes. In the next section, we consider the case of smooth interfaces rather than abrupt open boundary conditions. This will allow us to employ a full semi-classical treatment in both directions and therefore compute analytically the shell invariant as a generalized winding number. #### Example 2: \(2D\) Jackiw-Rossi model with smooth potentials We now discuss an example of the higher-dimensional mode-shell correspondence for a chiral zero-mode trapped at a domain wall where the Hamiltonian is smoothly varying. Such a situation has been studied in the literature in the context of defect modes [90, 124]. It allows for a full semi-classical limit of the shell index, leading to the Teo and Kane formula in the case of a discrete lattice [90] and the Callias index formula in the continuous case [76]. As both discrete and continuous cases are relatively similar, we made the choice to focus on the continuous case in this section.
For that purpose, we revisit the Jackiw-Rossi model [125] which follows from the same construction as the BBH model above: the two-dimensional Jackiw-Rossi Hamiltonian \(\hat{H}\) is obtained by combining, in perpendicular directions \(x\) and \(y\), two one-dimensional Jackiw-Rebbi Hamiltonians \(\hat{H}_{\rm JR}\) introduced in (42), by following the additive tensor product construction, that is \[\hat{H}=\hat{H}_{\rm JR}(x,\partial_{x})\otimes\mathds{1}+\sigma_{z}\otimes\hat{H}_{\rm JR}(y,\partial_{y}) \tag{75}\] where \(\sigma_{z}\) is the chiral symmetry operator of the two underlying Jackiw-Rebbi models, and which, in a more explicit form, reads \[\hat{H}=\begin{pmatrix}0&y-\partial_{y}&x-\partial_{x}&0\\ y+\partial_{y}&0&0&x-\partial_{x}\\ x+\partial_{x}&0&0&-(y-\partial_{y})\\ 0&x+\partial_{x}&-(y+\partial_{y})&0\end{pmatrix}. \tag{76}\] Similarly to the previous example with open boundary conditions, this Hamiltonian has a chiral symmetry with chiral operator \(\hat{C}=\sigma_{z}\otimes\sigma_{z}\). Writing \(\hat{H}^{2}=\hat{H}^{2}_{\rm JR}(x,\partial_{x})\otimes\mathds{1}+\mathds{1}\otimes\hat{H}^{2}_{\rm JR}(y,\partial_{y})\) implies that the chiral zero-modes of \(\hat{H}\) must also be chiral zero-modes of \(\hat{H}_{\rm JR}(x,\partial_{x})\) and \(\hat{H}_{\rm JR}(y,\partial_{y})\) on each part of the tensor product. Those modes must thus be of the form \(\psi_{x}\otimes\psi_{y}=(e^{-(x^{2}+y^{2})/2},0,0,0)^{t}\) where \(\psi(x)=(e^{-x^{2}/2},0)^{t}\) is the zero-mode of the Jackiw-Rebbi model. Therefore, \(\hat{H}\) has one topological zero-mode of chirality \(+1\) and thus \(\mathcal{I}=1\) for the cut-off \(\hat{\theta}_{\Gamma}=e^{-(x^{2}+y^{2}-\partial_{x}^{2}-\partial_{y}^{2})/\Gamma^{2}}\), which acts here both in position and wavenumber. The corresponding shell is a \(3D\) sphere enclosing the chiral zero-mode in \(4D\) phase space, as sketched in figure 17. The symbol \(H\) of the Jackiw-Rossi Hamiltonian reads \[H=\begin{pmatrix}0&x-ik_{x}\\ x+ik_{x}&0\end{pmatrix}\otimes\mathds{1}+\sigma_{z}\otimes\begin{pmatrix}0&y-ik_{y}\\ y+ik_{y}&0\end{pmatrix} \tag{77}\] from which we deduce \[\begin{split} H_{F}&=\frac{H}{\sqrt{x^{2}+y^{2}+k_{x}^{2}+k_{y}^{2}}}\\ &=\cos(\theta)\left(\begin{smallmatrix}0&e^{-i\phi_{1}}\\ e^{i\phi_{1}}&0\end{smallmatrix}\right)\otimes\mathds{1}+\sigma_{z}\otimes\sin(\theta)\left(\begin{smallmatrix}0&e^{-i\phi_{2}}\\ e^{i\phi_{2}}&0\end{smallmatrix}\right)\end{split} \tag{78}\] where \((\theta,\phi_{1},\phi_{2})\in[0,\pi/2]\times[0,2\pi]^{2}\) are the Hopf coordinates of \(S^{3}\). One can then compute analytically \(\int_{S^{3}}\mathrm{Tr}^{\mathrm{int}}(CH_{F}dH_{F}^{3})=-12(2\pi)^{2}\), which is exactly the normalisation needed to have \(\mathcal{I}=1\).
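One can also verify symbolically (a small check of ours, using SymPy, rather than anything taken from the paper) that \(\psi=(e^{-(x^{2}+y^{2})/2},0,0,0)^{t}\) is indeed annihilated by the operator (76), each row acting through multiplication and differentiation:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.exp(-(x**2 + y**2) / 2)
psi = [f, sp.S(0), sp.S(0), sp.S(0)]     # candidate zero-mode (f, 0, 0, 0)^t

dx = lambda g: sp.diff(g, x)
dy = lambda g: sp.diff(g, y)

# Rows of the Jackiw-Rossi operator (76) applied to psi
H_psi = [
    (y * psi[1] - dy(psi[1])) + (x * psi[2] - dx(psi[2])),
    (y * psi[0] + dy(psi[0])) + (x * psi[3] - dx(psi[3])),
    (x * psi[0] + dx(psi[0])) - (y * psi[3] - dy(psi[3])),
    (x * psi[1] + dx(psi[1])) - (y * psi[2] + dy(psi[2])),
]
print([sp.simplify(h) for h in H_psi])   # -> [0, 0, 0, 0]
```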
## 5 Conclusion We have proposed a unifying mode-shell correspondence theory, a powerful tool in topological physics and in particular in the topology of wave operators. This correspondence relates a spectral property of gapless systems in localised regions of phase space - here the chiral number of zero-modes - to another topological invariant associated to a gapped operator on the shell surrounding this region in phase space. This correspondence is particularly useful since a semi-classical limit of the shell invariant can be derived in many (but not all) situations. This limit simplifies the expression of the shell invariant into (higher-dimensional) winding numbers which are easier to calculate analytically. We have shown that this correspondence unifies several results in wave topology, from the bulk-edge correspondence and higher-order topological phases to the Callias index formula and Atiyah-Singer index theory. We provided a wide variety of examples, for discrete and continuous systems, in one and higher dimensions, to illustrate the mode-shell correspondence in concrete models. In particular, we showed how the mode-shell correspondence describes not only zero-energy edge states, but also zero-modes that can be more generally localized in a region of phase space. We also discussed two systematic methods, dubbed "additive" and "multiplicative" tensor product constructions, to easily build topological examples in higher dimensions \(D>1\). In this paper, we focused on the case where the gapless index is the chiral number of zero-modes, since it already encompasses a rich variety of situations which deserved a complete study. Similar mode-shell correspondences can be derived in cases where the gapless index \(\mathcal{I}_{\mathrm{modes}}\) is instead the \(1D\) spectral flow invariant, as for edge states of the \(2D\) quantum Hall effect, or where the index gives the number of \(2D\) Dirac or \(3D\) Weyl points, which can either be localised in wavenumber or be surface states of \(3D\) or \(4D\) insulators respectively. Those aspects of the mode-shell correspondence are planned to be addressed in a follow-up paper.
2307.06948
Self-regulating Prompts: Foundational Model Adaptation without Forgetting
Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach by: (a) regulating prompted representations via mutual agreement maximization with the frozen model, (b) regulating with self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and the textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization. We perform extensive experiments on 4 benchmarks where PromptSRC overall performs favorably well compared to the existing methods. Our code and pre-trained models are publicly available at: https://github.com/muzairkhattak/PromptSRC.
Muhammad Uzair Khattak, Syed Talal Wasim, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan
2023-07-13T17:59:35Z
http://arxiv.org/abs/2307.06948v2
# Self-regulating Prompts: Foundational Model Adaptation without Forgetting ###### Abstract Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach by: (a) regulating prompted representations via mutual agreement maximization with the frozen model, (b) regulating with self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and the textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization. We perform extensive experiments on 4 benchmarks where PromptSRC overall performs favorably well compared to the existing methods. Our code and pre-trained models are publicly available at: [https://github.com/muzairkhattak/PromptSRC](https://github.com/muzairkhattak/PromptSRC). ## 1 Introduction Vision-Language (VL) models, such as CLIP [35] and ALIGN [20], have demonstrated remarkable generalization capabilities for downstream tasks. These VL models are trained on large-scale web data with a contrastive loss, which allows them to encode open-vocabulary concepts by aligning pairs of images and texts in a shared embedding space. The resulting model is suited for downstream tasks such as open-vocabulary image recognition [23], object detection [11], and image segmentation [29]. Prompt learning has emerged as a more efficient alternative to fine-tuning large-scale models, as shown in recent studies [58, 59, 3, 17, 40, 28]. This approach introduces a few learnable prompt vectors to adapt models like CLIP for downstream tasks while keeping the pre-trained model weights fixed. However, since the prompts are optimized with respect to the task-specific objective [59], such as the cross-entropy loss for ImageNet [6] classification, the prompted model tends to overfit to the task-specific data distribution as the training progresses. This can result in the prompted model losing the original generalization capability of the frozen CLIP model towards new tasks. Therefore, learning prompts that can model both task-specific and task-agnostic representations remain a major challenge for adapting foundational VL models. This work seeks to self-regulate prompts to address the issue of prompt overfitting. To this end, we propose a self-regularizing framework that guides the prompts to jointly optimize for both task-specific and task-agnostic general representations using a three-pronged approach. 
**a)**_Regulating via Mutual Agreement Maximization:_ We observe that generalizable zero-shot knowledge is preserved within frozen pre-trained VL model features, but they lack task-specific knowledge. In contrast, prompts achieve better adaptation to a given task but with reduced generalizability to new tasks. Therefore, we propose to regulate learned prompts by maximizing the agreement between prompted and frozen VL model features while adapting them to the downstream task. **b)**_Regulating with the Self-ensemble:_ In the early epochs, prompts are not yet mature enough to capture contextual information. As the training progresses, prompts tend to become more task-specific. Therefore, we deploy a weighted prompt aggregation technique during training to regulate prompts using their self-ensemble over the training phase. The weights are sampled from a Gaussian distribution which suitably aggregates the useful knowledge learned by prompts at different training epochs. **c)**_Regulating with Textual Diversity:_ We note that unlike having multiple image samples per category for the vision encoder, there is only a single textual label available for each class. Therefore, imposing the mutual agreement constraints on multi-modal features results in sub-optimal performance due to the lack of diversity in text-side labels for the text encoder. We overcome this disparity and regulate the prompts through diverse text label templates for each class. Overall, our approach explicitly steers prompts to learn a representation space that maximizes performance on downstream tasks without compromising pre-trained CLIP generalization (Fig. 1: Left). We demonstrate the effectiveness of PromptSRC on four representative tasks. On the base-to-novel generalization benchmark across 11 datasets (Fig. 1: Middle), our method achieves average gains of +1.42% in harmonic-mean over the state-of-the-art MaPLe [22] and +8.26% over CLIP. Further, PromptSRC achieves competitive results in cross-dataset transfer, domain generalization, and few-shot image recognition (Fig. 1: Right). In summary, our self-regulating prompt learning framework has the following main contributions: * We address the inherent problem of prompt overfitting for adapting foundational models through self-regularization. Our framework explicitly guides the prompts to jointly acquire both _task-specific knowledge_ and _task-agnostic generalized knowledge_ by maximizing the mutual agreement between prompted and frozen VL model features. (§3.2.1) * We suggest a weighted self-ensembling strategy for prompts that captures their complementary features learned at different epochs during training and enhances their generalization performance. (§3.2.2) * To overcome the significant diversity mismatch between the text and visual domains, we propose text-side diversity which complements limited textual labels via multiple text augmentations and regularizes prompts to learn more generalized contexts. (§3.2.3) ## 2 Related Work **Vision Language models:** Foundational vision-language (VL) models [35, 20, 54, 49, 51] leverage both visual and textual modalities to encode rich multi-modal representations. These models are pre-trained on a large corpus of image-text pairs available on the internet in a self-supervised manner. For instance, CLIP [35] and ALIGN [20] utilize around 400M and 1B image-text pairs, respectively, to train their multi-modal networks. During pre-training, contrastive loss is commonly used as a self-supervision loss.
This loss pulls together the features of paired images and texts while pushing away the unpaired image-text features. VL models possess a strong understanding of open-vocabulary concepts, making them suitable for various downstream vision and vision-language applications [12, 56, 38, 30, 60, 13, 32, 53, 26, 36, 8]. However, transferring these foundational models for downstream tasks without compromising on their original generalization ability still remains a major challenge. Our work aims to address this problem by proposing a novel regularization framework to adapt VL models via prompt learning. Figure 1: **(Left): Existing prompt learning approaches rely on task-specific objectives that restrict prompt learning to learn a feature space suitable only for downstream tasks and consequently lose the generalized knowledge of CLIP (shown in purple). Our self-regulating framework explicitly guides the training trajectory of prompts towards the closest point between two optimal solution manifolds (solid line) to learn task-specific representations while also retaining generalized CLIP knowledge (shown in green). (Middle): Averaged across 11 image recognition datasets, PromptSRC surpasses existing methods on the base-to-novel generalization setting. (Right): We evaluate our approach on four diverse image recognition benchmarks and it overall shows competitive results compared to the previous state-of-the-art.** **Prompt learning:** Prompt learning is an alternative fine-tuning method for transferring a model towards downstream tasks without re-learning the trained model parameters. This approach adapts a pre-trained model by adding a small number of new learnable embeddings at the input known as prompt tokens. Due to its efficiency in terms of parameters and convergence rate, prompt learning is found to be of great interest for adapting foundational models like CLIP for vision [21, 57, 45, 46] and vision-language tasks [59, 58, 61, 7]. CoOp [59] fine-tunes CLIP by optimizing a continuous set of prompt vectors in its language branch for few-shot image recognition. Bahng _et al._[1] perform visual prompt tuning on CLIP by learning prompts on the vision branch. [3] and [28] propose to learn multiple sets of prompts for learning different contextual representations. CoCoOp [58] highlights the overfitting problem of CoOp and proposes to condition prompts based on visual features for improved performance on generalization tasks. MaPLe [22] proposes a multi-modal prompt learning approach by learning hierarchical prompts jointly at the vision and language branches of CLIP for better transfer. Our approach builds on a variant [37] where prompts are learned at both the vision and language encoder of CLIP. **Network regularization:** Incorporating regularization techniques in neural networks has been proven to enhance their generalization capabilities [25]. Regularization strategies can be broadly classified into two streams. The first stream consists of constraint-based regularization methods, such as weight decay [27] and adversarial training [50]. These techniques introduce additional constraints to the learning process, which helps to prevent overfitting. The second stream of regularization techniques involves modifying the inputs, model parameters, or annotations. This category includes methods such as data augmentations [52, 55, 5], dropout [42], model ensembling [18, 47], label smoothing [43] and batch normalization [19].
Our method aims to enhance the generalization performance of learned prompts via a multi-stage regularization framework, which takes inspiration from both streams of regularization techniques mentioned above. However, to the best of our knowledge, this is the first effort to regularize prompts during adaptation by jointly attending to the original VL model feature space, the training trajectory of prompts, as well as the diversity of textual inputs for the multi-modal models. ## 3 Proposed Method Prompt learning aims to adapt the general knowledge of VL foundational models like CLIP without full fine-tuning [59, 58, 3]. Since prompts are the only learnable vectors, this strategy aims to retain the pretrained generalized feature representations of CLIP while re-purposing them for downstream task-specific data via prompts. Although effective, they are susceptible to overfitting on the supervised downstream task (see Fig. 2) and their generalization towards new classes and datasets reduces as compared to the original zero-shot pre-trained CLIP. Figure 2: Naively training prompts with standard supervised objectives improves supervised class performance but leads to poor generalization as the training schedule increases. Our PromptSRC method with explicit prompt consistency constraints improves on base classes as well as shows improvements on novel classes. Our work seeks to address the overfitting behavior of prompts. Unlike prior prompting approaches that improve generalization mainly from the model architecture perspective [58, 22], we motivate our work from the regularization perspective. As evidenced by the strong zero-shot performance, pre-trained CLIP features possess robust generalization characteristics. However, naively training prompts with the supervised task-specific loss struggles to retain these general attributes from the frozen CLIP. To this end, we propose a self-regularizing framework to explicitly guide the training trajectory of prompts to maximize its interaction with the pre-trained knowledge stored in the frozen CLIP. Fig. 3 shows our overall methodology which optimizes the prompts as follows. **a)**_Regularization through mutual agreement maximization:_ We impose an explicit consistency constraint between prompted features and the pre-trained CLIP features within the CLIP embedding space. **b)**_Regularization through prompt self-ensembling:_ To further reduce overfitting, we propose a Gaussian weighted average of the prompt vectors learned at different training epochs. This ensemble-level regularization aggregates information from learned prompts across different epochs for improved generalization. **c)**_Regularization through textual diversity:_ Unlike having multiple images for each class, the text labels during fine-tuning are limited and bounded by the number of class categories. We incorporate textual augmentations by defining multiple text label templates for a given class. The ensemble of textual labels regularizes the prompts for better generalization during optimization. We now continue by explaining our methodology in detail. We first revisit CLIP and CLIP-based prompt learning in Sec. 3.1. This is followed by the explanation of our self-regulating prompt learning approach in Sec. 3.2. ### 3.1 Preliminaries We denote the CLIP image and text encoders as \(f\) and \(g\), respectively, and their pretrained parameters as \(\theta_{\text{CLIP}}=\{\theta_{f},\theta_{g}\}\) where \(\theta_{f}\) and \(\theta_{g}\) refer to the image and text encoder parameters, respectively. The input image \(\mathbf{X}\in\mathbb{R}^{C\times H\times W}\) is divided into \(M\) patches followed by a projection to produce patch tokens. Further, a learnable class token \(\mathbf{e}_{cls}\) is appended with the input patches as \(\mathbf{\tilde{X}}=\{\mathbf{e}_{cls},\mathbf{e}_{1},\mathbf{e}_{2},\cdots,\mathbf{e}_{M}\}\). The image encoder \(f\) encodes the input patches via multiple transformer blocks to produce a latent visual feature representation \(\mathbf{\tilde{f}}=f(\mathbf{\tilde{X}},\theta_{f})\), where \(\mathbf{\tilde{f}}\in\mathbb{R}^{d}\). Next, the corresponding class label
Further, a learnable class token \(\mathbf{e}_{cls}\) is appended with the input patches as \(\mathbf{\tilde{X}}=\{\mathbf{e}_{cls},\mathbf{e}_{1},\mathbf{e}_{2},\cdots,\mathbf{e}_{M}\}\). The image encoder \(f\) encodes the input patches via multiple transformer blocks to produce a latent visual feature representation \(\mathbf{\tilde{f}}=f(\mathbf{\tilde{X}},\theta_{f})\), where \(\mathbf{\tilde{f}}\in\mathbb{R}^{d}\). Next, the corresponding class label Figure 2: Naively training prompts with standard supervised objectives improves supervised class performance but leads to poor generalization as training schedule increases. Our PromptSRC method with explicit prompts consistency constraints improves on base classes as well as shows improvements on novel classes. \(y\) is wrapped within a text template such as 'a photo of a {class label}' which can be formulated as \(\mathbf{\tilde{Y}}=\{\mathbf{t}_{SOS},\mathbf{t}_{1},\mathbf{t}_{2},\cdots,\mathbf{t}_{L},\mathbf{c}_{k},\mathbf{t}_{EOS}\}\). Here \(\{\mathbf{t}_{i}|_{l=1}^{L}\}\) and \(\mathbf{c}_{k}\) are the word embeddings corresponding to the text template and the class label, respectively while \(\mathbf{t}_{SOS}\) and \(\mathbf{t}_{EOS}\) are the learnable start and end token embeddings. The text encoder \(g\) encodes \(\mathbf{\tilde{Y}}\) via multiple transformer blocks to produce the latent textual feature as \(\mathbf{\tilde{g}}=g(\mathbf{\tilde{Y}},\mathbf{\theta}_{g})\), where \(\mathbf{\tilde{g}}\in\mathbb{R}^{d}\). For zero-shot inference, textual features of text template with class labels \(\{1,2,\cdots,C\}\) are matched with image feature \(\mathbf{\tilde{f}}\) as \(\frac{\exp(\text{sim}(\mathbf{\tilde{g}},\mathbf{\tilde{f}})\tau)}{\sum_{i=1}^{C} \exp(\text{sim}(\mathbf{\tilde{g}},\mathbf{\tilde{f}})\tau)}\), where \(\text{sim}()\) denotes the cosine similarity and \(\tau\) is the temperature. **Prompt Learning for CLIP:** Prompt learning approaches append learnable prompt tokens at either the text [59, 58] encoder or image [1] encoder. We use a simple baseline method [37] that learns hierarchical prompt tokens on both the text and image encoders separately, named as Independent Vision-Language Prompting (IVLP). Specifically, we append learnable \(T\) language and \(V\) visual prompts given as \(\mathbf{P_{t}}=\{\mathbf{p}_{t}^{1},\mathbf{p}_{t}^{2},\cdots,\mathbf{p}_{t}^{T}\}\) and \(\mathbf{P_{v}}=\{\mathbf{p}_{v}^{1},\mathbf{p}_{v}^{2},\cdots,\mathbf{p}_{v}^{V}\}\) with the textual and visual input tokens, respectively. Therefore, the image encoder processes the following input tokens \(\mathbf{\tilde{X}_{p}}=\{\mathbf{P_{v}},\mathbf{c}_{cls},\mathbf{e}_{1},\mathbf{e}_{2},\cdots,\mathbf{ e}_{M}^{T}\}\) to generate prompted visual feature represented as \(\mathbf{\tilde{f}_{p}}=f(\mathbf{\tilde{X}_{p}},\theta_{f})\). Similarly, textual feature is obtained as \(\mathbf{\tilde{g}_{p}}=g(\mathbf{\tilde{Y}_{p}},\theta_{g})\), where \(\mathbf{\tilde{Y}_{p}}=\{\mathbf{t}_{SOS},\mathbf{P_{t}},\mathbf{t}_{1},\mathbf{t}_{2},\cdots,\mathbf{ t}_{L},c_{k},\mathbf{t}_{EOS}\}\). In contrast to shallow prompting where learnable prompts are introduced only at the first transformer block of the image and text encoders, our approach uses deep prompting which learns separate sets of prompts at every transformer block. The vision and language prompts are jointly represented as \(\mathbf{P}=\{\mathbf{P_{v}},\mathbf{P_{t}}\}\). The feature representations obtained using these learnable prompts are referred to as _prompted features_. 
For image classification on a downstream dataset \(\mathcal{D}\), prompts \(\mathbf{P}\) interact with pre-trained and frozen \(\theta_{f}\) and \(\theta_{g}\) and are optimized with the cross-entropy loss, \(\mathcal{L}_{\text{CE}}\), as: \[\mathcal{L}_{\text{CE}}=\text{arg}\min_{\mathbf{P}}\mathbb{E}_{(\mathbf{X},y)\sim\mathcal{D}}\mathcal{L}(\text{sim}(\mathbf{\tilde{f}_{p}},\mathbf{\tilde{g}_{p}}),y). \tag{1}\] Figure 3: Our proposed PromptSRC framework for self-regulating prompt learning. CLIP encoders are used to generate prompted (\(\mathbf{\tilde{f}_{p}},\mathbf{\tilde{g}_{p}}\)) and pre-trained (\(\mathbf{\tilde{f}},\mathbf{\tilde{g}}\)) features at the image and text sides. First, we introduce textual diversity (§3.2.3) and define textual augmentations to produce a diverse set of frozen VL textual features, which are averaged to represent the pre-trained VL text features (\(\mathbf{\tilde{g}}\)). Next, we employ Mutual Agreement Maximization constraints (\(\mathcal{L}_{\text{SCL}}\)) to regulate the prompts, which ensure that the prompted features align well with the pre-trained VL representations at both the feature and logit levels (§3.2.1). As CLIP is frozen, we use the same VL encoders to obtain both types of features. Further, our prompt self-ensembling combines the strengths of prompts learned at different epochs (\(P_{1},P_{2}\cdots P_{E}\)) during training via Gaussian weighted sampling (§3.2.2). The ensembled visual and textual prompts are then used for the final inference. ### 3.2 Self-Regularization for Prompt Learning The \(\mathcal{L}_{\text{CE}}\) objective employs ground truth labels to optimize the prompts for the downstream task. As a result, the prompts adapt and learn _task-specific knowledge_. During training, prompts interact with pre-trained and frozen CLIP tokens through self-attention layers in the transformer blocks. This interaction of prompt tokens with pre-trained CLIP weights \(\theta_{\texttt{CLIP}}\) provides implicit regularization and encourages retaining the _task-agnostic generalized knowledge_ within learned prompts. However, as shown in Fig. 2, prompts tend to overfit on the supervised task and drift away from the generalized CLIP space as the training schedule increases. Consequently, new task performance is degraded, despite the fact that the CLIP image and text encoder weights \(\theta_{f}\) and \(\theta_{g}\) are kept frozen. As prompts undergo further training, the implicit generalization constraint becomes weaker against the task-specific \(\mathcal{L}_{\text{CE}}\) objective. One naive approach to address this issue is to reduce the training schedule to balance the performance between the base and new tasks. However, training the prompts for fewer iterations to prevent losing generalization comes at the cost of relatively lower performance on the supervised task. Here, we present a prompt learning approach that maximizes supervised task performance without sacrificing performance on novel tasks and classes. We propose to anchor prompt training with self-regularization, which constitutes three main components as discussed below. #### 3.2.1 Mutual agreement maximization As discussed above, the strong downstream dataset transfer constraint imposed by \(\mathcal{L}_{\text{CE}}\) causes the prompts to over-fit on task-specific data, and they struggle to effectively utilize the general information from the frozen CLIP.
We propose to explicitly guide the training trajectory by imposing a constraint that maximizes the mutual agreement between the prompted and the frozen CLIP features. We achieve this by explicitly conditioning the prompted features to be consistent with the CLIP features obtained without learnable prompts. As we do not require any second model for such conditioning, we call this regularizing constraint a self-consistency loss (SCL). For a given input sample and its corresponding textual label, we obtain the prompted and pre-trained visual features, \(\mathbf{\tilde{f}_{p}}\) and \(\mathbf{\tilde{f}}\), within the frozen CLIP latent space. Similarly, we obtain the textual features \(\mathbf{\tilde{g}_{p}}\) and \(\mathbf{\tilde{g}}\). We then impose a constraint on the prompted visual and text features to ensure their consistency with the CLIP pre-trained features as follows, \[\mathcal{L}_{\text{SCL-image}}=\sum_{i=1}^{d}|\mathbf{\tilde{f}_{p}}-\mathbf{\tilde{f}}|,\ \mathcal{L}_{\text{SCL-text}}=\sum_{i=1}^{d}|\mathbf{\tilde{g}_{p}}-\mathbf{\tilde{g}}|. \tag{2}\] As shown in Eq. 2, we utilize the \(L1\) loss to impose the feature-level consistency. Note that our self-consistency constraint is also compatible with other variants of matching losses such as cosine similarity or MSE loss, which we study in our ablations (Sec. 4.7). To further complement the regularization constraint and maximize the alignment between the general features and the prompted features, we impose logit-level self-consistency regularization and condition the prompted logits distribution on the pre-trained CLIP logits distribution by minimizing the Kullback-Leibler divergence as follows, \[\mathcal{L}_{\text{SCL-logits}}=\mathcal{D}_{\mathcal{KL}}(\text{sim}(\mathbf{\tilde{f}_{p}},\mathbf{\tilde{g}_{p}}),\text{sim}(\mathbf{\tilde{f}},\mathbf{\tilde{g}})). \tag{3}\] Overall, the self-consistency training objectives guide the prompts to gain complementary knowledge from pre-trained CLIP features, therefore providing strongly generalized prompts, \[\mathcal{L}_{\text{SCL}}=\lambda_{1}\mathcal{L}_{\text{SCL-image}}+\lambda_{2}\mathcal{L}_{\text{SCL-text}}+\mathcal{L}_{\text{SCL-logits}}, \tag{4}\] where \(\lambda_{1}\) and \(\lambda_{2}\) are loss balancing hyper-parameters. Our overall training objective thus becomes, \[\mathcal{L}_{\text{final}}=\mathcal{L}_{\text{CE}}+\mathcal{L}_{\text{SCL}}. \tag{5}\] **Discussion on \(\mathcal{L}_{\text{final}}\):** The \(\mathcal{L}_{\text{SCL}}\) loss guides the prompts to converge to solutions that are generalized. On the other hand, \(\mathcal{L}_{\text{CE}}\) guides the prompts to maximize performance on the downstream supervised tasks. The combination of these losses conditions the prompts to maximize their performance on supervised tasks and at the same time guides the prompts' learning trajectory toward a weight space that is consistent with the CLIP zero-shot features. As shown in Fig. 2, our proposed methodology maximizes the supervised tasks' performance while also improving the generalization. This shows that the proposed training objectives for the prompt learning setup are complementary to each other.
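A minimal PyTorch sketch of the combined objective may help fix ideas (our own paraphrase of Eqs. (2)-(5), not the authors' released code; it also folds in the textual-diversity ensemble of §3.2.3 by averaging the frozen text features over \(N\) templates, and all features are assumed L2-normalized):

```python
import torch
import torch.nn.functional as F

def scl_loss(f_p, g_p, f, g_templates, tau=0.01, lam1=10.0, lam2=25.0):
    """Self-consistency loss L_SCL, Eq. (4).
    f_p, f: (B, d) prompted / frozen image features (L2-normalized)
    g_p: (C, d) prompted text features; g_templates: (N, C, d) frozen
    text features for N prompt templates (textual diversity, Sec. 3.2.3)."""
    g = g_templates.mean(dim=0)                       # ensembled frozen text features
    l_image = (f_p - f).abs().sum(dim=-1).mean()      # Eq. (2), L1 on image features
    l_text = (g_p - g).abs().sum(dim=-1).mean()       # Eq. (2), L1 on text features
    logits_p = f_p @ g_p.t() / tau                    # prompted logits
    logits = f @ g.t() / tau                          # frozen-CLIP logits
    l_logits = F.kl_div(F.log_softmax(logits_p, -1),  # Eq. (3), KL at logit level
                        F.softmax(logits, -1), reduction='batchmean')
    return lam1 * l_image + lam2 * l_text + l_logits

# Overall objective, Eq. (5):
#   loss = F.cross_entropy(logits_p, labels) + scl_loss(f_p, g_p, f, g_templates)
```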
#### 3.2.2 Regularization with prompt self-ensembling The second component in our self-regularizing framework enforces regularization using prompt self-ensembling. Model ensembling in the weight space has been shown to improve both the performance and generalization of a model [47, 18]. However, it has not been actively studied in the context of prompt learning, where the prompts are the only learnable parameters and the model parameters are kept frozen. To effectively utilize the prompt knowledge from previous training iterations, we propose prompt aggregation for a generalizable solution. For a training schedule with \(E\) total epochs, the prompts at every epoch are given by \(\{\mathbf{P}_{t}\}_{t=1}^{E}\). Aggregated prompts (AP) are then calculated as, \[\mathbf{P}^{\text{AP}}=\sum_{t=1}^{E}\frac{w_{t}\,\mathbf{P}_{t}}{\sum_{i=1}^{E}w_{i}}, \tag{6}\] where \(w_{t}\) is the weight assigned to the prompts at epoch \(t\). In the early epochs, prompts are not yet mature enough to capture contextual information due to their random initialization. During aggregation, they should be given less weight as they act as noise which is carried along with the input tokens. On the other hand, the prompts learned in the last few epochs are task-specific and highly favour the supervised downstream task distribution. We propose to perform Gaussian weighted prompt aggregation (GPA), where small aggregation weights are given to prompts at initial epochs, higher weights to prompts at middle epochs, and relatively lower weights to prompts at final epochs, resulting in optimal prompt representations that improve generalization to downstream tasks. GPA provides optimal weight values \(w_{i}\) by sampling from a Gaussian distribution \(w_{i}\sim\mathcal{N}(\mu,\,\sigma^{2})\), where \(\sigma^{2}\) and \(\mu\) are hyper-parameters and \(\sum_{i=1}^{E}w_{i}=1\). The Gaussian distribution is defined over the epochs and its mean is dictated by the epoch number. We formulate this weighting as a moving average to avoid saving multiple copies of prompts, by keeping one additional copy which is updated via aggregation at every epoch \(i\), \[\mathbf{P}^{\text{GPA}}=\sum_{i=1}^{E}w_{i}\,\mathbf{P}_{i}. \tag{7}\]
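The Gaussian weighting and the moving-average update of Eqs. (6)-(7) can be sketched as follows (our own illustration; the mean and width used here are placeholders, not the tuned hyper-parameters of the paper):

```python
import math

def gpa_weights(E, mu, sigma):
    """Gaussian weights over epochs 1..E, normalized so sum(w) = 1, Eq. (6)."""
    w = [math.exp(-0.5 * ((t - mu) / sigma) ** 2) for t in range(1, E + 1)]
    total = sum(w)
    return [wi / total for wi in w]

E = 20
w = gpa_weights(E, mu=0.5 * E, sigma=0.2 * E)   # placeholder mu and sigma

# Moving-average update of Eq. (7): keep one extra copy of the prompts and,
# at the end of every epoch t, update it in place as
#     P_gpa += w[t-1] * P_t
# so that noisy early prompts and overly task-specific final prompts
# contribute little, while mid-training prompts dominate the ensemble.
print(round(sum(w), 6), min(w) < max(w))        # -> 1.0 True
```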
#### 3.2.3 Regulating prompts with textual diversity

Through the \(\mathcal{L}_{\text{SCL}}\) loss, the prompted visual features instill _diverse generalized contexts_ from the pre-trained CLIP visual features, as multiple image samples are present for each label category. This provides a natural source of augmentations on the image side and promotes additional regularization. However, as opposed to having multiple images per category, we note that the text space during fine-tuning is limited, and the prompted features are learned from pre-trained CLIP text features with only one feature representation per category. This mismatch between the diversity available on the image and text sides leads to sub-optimal learning of the prompted textual features. To address the diversity mismatch, we incorporate textual diversity in the text encoder. Specifically, we use a pool of \(N\) textual prompt templates \(\{PT_{l}\}_{l=1}^{N}\) to form multiple text features per category. The pre-trained CLIP textual features are then obtained as an ensemble over the templates, \(\mathbf{\tilde{g}}=\frac{1}{N}\sum_{l=1}^{N}\mathbf{\tilde{g}}^{l}\). As the pre-trained CLIP textual features are now represented by an ensemble of multiple augmentations for each label, the prompted textual features learn more _diverse generalized contexts_ from the frozen CLIP. We note that the proposed textual diversity is different from the standard prompt ensembling technique explored by the CLIP authors. CLIP uses an ensemble of text prompts at inference time for classification. In contrast, we utilize the templates during training for self-regularization, by enforcing mutual agreement between the ensembled features and the prompted features; only the prompted features are used at inference. Next, we show the efficacy of our proposed components via the comprehensive experiments provided below.

## 4 Experiments

### Evaluation settings

We extensively evaluate our approach and present comparisons with other methods on four benchmark settings. **Base-to-novel class generalization:** In this setting, we split each dataset equally into base and novel classes. The model is trained on the base classes and evaluated on both the base and the novel classes. This benchmark evaluates the generalization ability of a method within a dataset. **Few-shot learning:** We use this setting to compare the learning capacity of models under extremely limited supervision and to verify whether our approach learns complementary task-specific and task-agnostic knowledge. For each dataset, we test the model's generalization with \(K\) shots per category, where \(K=1,2,4,8,16\). **Domain generalization setting:** We train a source model on ImageNet [6] and evaluate it on out-of-distribution datasets to test performance under domain shifts. **Cross-dataset evaluation:** In cross-dataset transfer, we train the models on ImageNet [6] and evaluate them directly on other datasets without any dataset-specific fine-tuning.

**Datasets:** For base-to-novel class generalization, the few-shot setting, and cross-dataset evaluation, we follow CoOp [59] and CoCoOp [58] and use 11 image recognition datasets. \begin{table} \end{table} Table 1: Base-to-novel generalization. Comparison of CLIP, CoOp, CoCoOp, ProDA, MaPLe, and PromptSRC on base and novel classes across the 11 recognition datasets. The datasets cover multiple recognition tasks, including ImageNet [6] and Caltech101 [10], which consist of generic objects; OxfordPets [34], StanfordCars [24], Flowers102 [33], Food101 [2], and FGVCAircraft [31] for fine-grained classification; SUN397 [48] for scene recognition; UCF101 [41] for action recognition; DTD [4] for texture classification; and EuroSAT [14], which consists of satellite images. For the domain generalization benchmark, we use ImageNet [6] as the source dataset and ImageNet-A [16], ImageNet-R [15], ImageNet-Sketch [44], and ImageNetV2 [39] as out-of-distribution datasets.

**Implementation details:** We use a ViT-B/16 based CLIP model in our experiments and report results averaged over 3 runs. We use deep prompting with \(V=T=4\) VL prompts, and train for 50 epochs in the few-shot setting and for 20 epochs in the remaining three benchmarks. For domain generalization and cross-dataset evaluation, we train the ImageNet source model on all classes with \(K=16\) shots using \(V=T=4\) VL prompts in the first 3 transformer layers. For the few-shot and base-to-novel settings, prompts are learned in the first 9 transformer layers. Prompts are randomly initialized from a normal distribution, except the text prompts of the first layer, which are initialized with the word embeddings of "a photo of a". We fix the learning rate to 0.0025. We set \(\lambda_{1}=10\) and \(\lambda_{2}=25\) to weight \(\mathcal{L}_{\text{SCL-image}}\) and \(\mathcal{L}_{\text{SCL-text}}\), respectively. These hyper-parameters are fixed across all datasets and benchmarks. For textual diversity, we use a total of \(N=60\) standard prompt templates provided in [35]. For the comparison with ProDA [28], we report their results as produced by [7]. Refer to Appendix A for additional implementation details.
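As an illustration of the textual diversity of Sec. 3.2.3 with the \(N=60\) templates mentioned above, the sketch below averages frozen-CLIP text features over a pool of templates. The function name and the `encode_text`/tokenizer interface follow common open-source CLIP usage and are our assumptions rather than a prescribed API.

```python
import torch

@torch.no_grad()
def ensembled_text_features(clip_model, classnames, templates, tokenizer):
    """Frozen-CLIP text features averaged over N prompt templates.

    `templates` is a list of strings such as "a photo of a {}.".
    Returns a (num_classes, dim) tensor of L2-normalized features."""
    features = []
    for name in classnames:
        tokens = tokenizer([t.format(name) for t in templates])
        emb = clip_model.encode_text(tokens)        # (N, dim)
        emb = emb / emb.norm(dim=-1, keepdim=True)  # normalize each template
        mean = emb.mean(dim=0)                      # ensemble over templates
        features.append(mean / mean.norm())         # renormalize the mean
    return torch.stack(features)
```

The resulting features play the role of \(\mathbf{\tilde{g}}\) in the \(\mathcal{L}_{\text{SCL}}\) terms, while the prompted text features remain the ones used for prediction.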
### Effectiveness of Self-regulating Prompts

We first disentangle the regularization components in our self-regulating prompting framework and show their individual contributions in Table 2. The baseline IVLP provides high base-class performance but suffers from poor generalization (row 1). Enforcing mutual agreement through \(\mathcal{L}_{\text{SCL}}\) (row 2) significantly increases novel-class performance, by 3.95%, while maintaining the base-class gains. This suggests that \(\mathcal{L}_{\text{SCL}}\) explicitly enforces the prompts to capture the generalizable features of frozen CLIP. Integrating GPA (row 3), which suitably aggregates prompts across the training cycle, further reduces overfitting and improves novel-class performance. Finally, combined with textual diversity to overcome the diversity mismatch between the text and visual domains (row 4), PromptSRC achieves improvements on both base and novel classes, leading to average novel-class and harmonic-mean gains of +4.31% and +2.46%, respectively. The results averaged over 11 datasets are summarized in Table 2. Note that even small improvements in these metrics correspond to significant gains. We refer the reader to Appendix B for results on the individual datasets.

### Base-to-Novel Generalization

We compare the performance of our approach with zero-shot CLIP [35], CoOp [59], CoCoOp [58], ProDA [28], and MaPLe [22] in Table 1. Overall, all existing approaches outperform zero-shot CLIP on the base classes but, with the exception of MaPLe, show inferior performance on the novel classes. This suggests that they tend to lose the generalizable features stored in the frozen CLIP model. In contrast, PromptSRC significantly improves base-class performance while also improving on the zero-shot CLIP novel-class accuracy, by 1.88%. This shows the importance of the explicit guidance provided by PromptSRC in learning complementary task-specific and task-agnostic representations, which aid the base and novel classes respectively. CoOp is heavily trained on the base classes and consequently compromises its generalization. For instance, on EuroSAT [14], CoOp attains a substantial base-class accuracy of 92.19% but an inferior novel-class accuracy of 54.74%. On the other hand, PromptSRC, which learns self-regulating prompts, provides the highest base and novel class accuracies on EuroSAT, of 92.90% and 73.90% respectively. In comparison to CoCoOp and ProDA, PromptSRC shows gains on 10 of the 11 datasets. Against the recent MaPLe approach, PromptSRC improves performance on 8 of the 11 datasets while using 77x fewer tunable parameters (3.55M for MaPLe vs. 46K for PromptSRC). In terms of the averaged results, PromptSRC provides the best base-class accuracy, novel-class accuracy, and harmonic mean, of 84.26%, 76.10%, and 79.97% respectively.

### Few-shot Experiments

To explicitly verify whether our regularization framework restricts the prompts from learning task-specific knowledge, we compare our few-shot results with existing methods in Fig. 4. In general, all prompt learning approaches perform better than the linear probe, especially in scenarios with fewer shots, i.e., \(K=1,2,4\). PromptSRC provides consistent improvements over all existing methods for all shots. When compared with the best existing method, MaPLe, PromptSRC consistently provides absolute gains of 3.05%, 2.72%, 2.59%, 1.80%, and 1.07% on 1, 2, 4, 8, and 16 shots respectively, averaged over 11 datasets.
Furthermore, we note that our approach achieves relatively larger gains in minimal-data cases, such as \(K=1,2\), for almost all datasets. This demonstrates that PromptSRC regulates the prompts against overfitting without restricting them from learning task-specific knowledge.

\begin{table} \begin{tabular}{l c c|c} \hline \hline Method & Base Acc. & Novel Acc. & HM \\ \hline 1: Independent V-L prompting & 84.21 & 71.79 & 77.51 \\ 2: + \(\mathcal{L}_{\text{SCL}}\) & 84.21 & 75.38 & 79.55 \\ 3: + GPA & 84.16 & 75.69 & 79.70 \\ \hline 4: + Textual diversity & **84.26** & **76.10** & **79.97** \\ \hline \hline \end{tabular} \end{table} Table 2: Effect of our proposed regularization techniques. Results are averaged over 11 datasets. HM refers to the harmonic mean.

### Cross Dataset Evaluation

We compare our cross-dataset performance with previous methods in Table 3. On the source dataset, PromptSRC performs comparably to the other methods. In comparison with CoOp and CoCoOp, PromptSRC shows competitive performance and achieves better generalization on 8/10 and 7/10 datasets respectively. Compared with MaPLe, PromptSRC shows improved performance on 5/10 datasets while using significantly fewer tunable parameters (46K vs. 3.55M).

### Domain Generalization Experiments

Table 4 summarizes the results of PromptSRC and previous methods on out-of-distribution datasets. We directly evaluate our model trained on ImageNet. On the target datasets, PromptSRC consistently outperforms all existing methods, with the highest overall average accuracy of 60.65%. This suggests that our self-regulating framework favors better generalization on datasets with domain shifts.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{1}{c}{**Source**} & \multicolumn{4}{c}{**Target**} & \\ \cline{2-2} \cline{3-6} & ImageNet & -V2 & -S & -A & -R & Avg. \\ \hline CLIP & 66.73 & 60.83 & 46.15 & 47.77 & 73.96 & 57.18 \\ CoOp & **71.51** & 64.20 & 47.99 & 49.71 & 75.21 & 59.28 \\ CoCoOp & 71.02 & 64.07 & 48.75 & 50.63 & 76.18 & 59.91 \\ MaPLe & 70.72 & 64.07 & 49.15 & 50.90 & 76.98 & 60.27 \\ \hline PromptSRC & 71.27 & **64.35** & **49.55** & **50.90** & **77.80** & **60.65** \\ \hline \hline \end{tabular} \end{table} Table 4: Domain generalization. Prompt learning methods are trained on ImageNet and evaluated on datasets with domain shifts.

Figure 4: PromptSRC performance comparison in the few-shot image recognition setting. All methods are trained on a ViT-B/16 CLIP backbone using their best settings. PromptSRC demonstrates consistent improvements over existing methods, specifically for fewer shots, i.e., \(K=1,2,4\). On average, PromptSRC provides the highest performance gains for all shots. These results demonstrate that PromptSRC learns complementary task-agnostic general features from frozen CLIP without being restricted from learning downstream task representations.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{1}{c}{**Source**} & \multicolumn{10}{c}{**Target**} & \\ \cline{2-2} \cline{3-12} & ImageNet & Caltech101 & OxfordPets & StanfordCars & Flowers102 & Food101 & Aircraft & SUN397 & DTD & EuroSAT & UCF101 & Avg. \\ \hline CoOp & **71.51** & 93.70 & 89.14 & 64.51 & 68.71 & 85.30 & 18.47 & 64.15 & 41.92 & 46.39 & 66.55 & 63.88 \\ CoCoOp & 71.02 & **94.43** & 90.14 & 65.32 & 71.88 & 86.06 & 22.94 & **67.36** & 45.73 & 45.37 & 68.21 & 65.74 \\ MaPLe & 70.72 & 93.53 & **90.49** & 65.57 & **72.23** & **86.20** & **24.74** & 67.01 & 46.49 & **48.06** & 68.69 & **66.30** \\ \hline PromptSRC & 71.27 & 93.60 & 90.25 & **65.70** & 70.25 & 86.15 & 23.90 & 67.10 & **46.87** & 45.50 & **68.75** & 65.81 \\ \hline \hline \end{tabular} \end{table} Table 3: Cross-dataset benchmark evaluation. PromptSRC achieves overall favourable performance.

### Ablative Analysis

**Embedding consistency loss ablation:** In Table 5, we ablate over the choice of matching metric used in our proposed feature-level \(\mathcal{L}_{\text{SCL}}\) constraints. For simplicity, we only incorporate \(\mathcal{L}_{\text{SCL-image}}\) and \(\mathcal{L}_{\text{SCL-text}}\) on top of the IVLP baseline. Generally, the distance-based matching metrics outperform the cosine-similarity metric in terms of generalization, as they impose a much harder constraint. Overall, the \(L1\) matching metric provides the highest harmonic mean.

**Prompt ensembling:** Table 6 shows an ablation over various prompt ensembling techniques. Using equal weights for the prompts reduces the base-class results, as the prompts from the initial epochs are not mature enough. In contrast, our proposed Gaussian-weighted prompt aggregation yields the highest performance. Detailed ablation experiments for the other hyper-parameters are provided in Appendix C.

**Training and inference compute cost analysis:** In Table 7, we analyze the compute cost of our approach and compare it with other prompting methods. PromptSRC's overall training GFLOPs are only 0.13x higher than those of the baseline IVLP, while it maintains the same GFLOPs and throughput during inference. The pre-trained CLIP textual features are pre-computed, and only a single additional forward pass through the image encoder is required to compute the pre-trained CLIP visual features for our mutual-agreement maximization technique. The training time of PromptSRC is 9.3% longer than that of IVLP, which is significantly lower than CoCoOp. We use 4 vision and text prompts, the same as IVLP.

**Prompt length:** Fig. 5 (right) shows the effect of the prompt token length on the harmonic mean. Overall, performance increases as the prompt length increases; using 4 vision-language prompts provides the highest harmonic mean.

**No. of templates in textual diversity:** In Fig. 5 (left), we ablate over the number of text prompt templates used for textual diversity. We note that increasing the number of templates generally increases performance. This suggests that adding textual diversity through multiple templates for the pre-trained features provides richer supervision for the learned prompted features.

## 5 Conclusion

Prompt learning has emerged as an effective paradigm for adapting foundational VL models like CLIP.
However, the prompts learned by the majority of existing methods inherently tend to overfit the task-specific objective and consequently compromise the inherent generalization ability of CLIP. Our work proposes a self-regulating prompt learning framework that addresses the prompt overfitting problem for better generalization. We show that it is critical to guide the training trajectory of the prompts by explicitly encouraging their mutual agreement with the frozen model through self-consistency constraints, supplemented by incorporating textual diversity. We also propose a self-ensembling strategy for prompts that appropriately aggregates them via a Gaussian-weighted approach over the course of training. Extensive evaluations on multiple benchmarks show the benefit of our self-regulating approach for prompt learning.
2310.14983
Causal clustering: design of cluster experiments under network interference
This paper studies the design of cluster experiments to estimate the global treatment effect in the presence of network spillovers. We provide a framework to choose the clustering that minimizes the worst-case mean-squared error of the estimated global effect. We show that optimal clustering solves a novel penalized min-cut optimization problem computed via off-the-shelf semi-definite programming algorithms. Our analysis also characterizes simple conditions to choose between any two cluster designs, including choosing between a cluster or individual-level randomization. We illustrate the method's properties using unique network data from the universe of Facebook's users and existing data from a field experiment.
Davide Viviano, Lihua Lei, Guido Imbens, Brian Karrer, Okke Schrijvers, Liang Shi
2023-10-23T14:30:46Z
http://arxiv.org/abs/2310.14983v2
# Causal clustering: design of cluster experiments under network interference ###### Abstract This paper studies the design of cluster experiments to estimate the global treatment effect in the presence of spillovers on a single network. We provide an econometric framework to choose the clustering that minimizes the worst-case mean-squared error of the estimated global treatment effect. We show that the optimal clustering can be approximated as the solution of a novel penalized min-cut optimization problem computed via off-the-shelf semi-definite programming algorithms. Our analysis also characterizes easy-to-check conditions to choose between a cluster or individual-level randomization. We illustrate the method's properties using unique network data from the universe of Facebook's users and existing network data from a field experiment. _Keywords:_ Experimental Design, Spillover Effects, Causal Inference, Cluster Designs. _JEL Codes:_ C10, C14, C31, C54

## 1 Introduction

Consider a (large) population of \(n\) individuals connected under a single observed network. Researchers are interested in conducting an experiment to estimate the global average treatment effect, i.e., the difference between the average effect of treating all versus none of the individuals in the population. Treating an individual may generate spillovers to her friends in the network. To capture such effects, researchers conduct a cluster experiment. Individuals are first partitioned into clusters. Within a cluster, either all units are assigned to the treatment or all units are assigned to the control group. Finally, researchers estimate treatment effects by taking a difference between the average outcomes of treated and control units (possibly adjusting for baseline covariates). The cluster design does not require modeling the dependence of individual outcomes on neighbors' assignments, but it requires a choice of clusters and some assumptions on the extent of the spillovers along the network. For example, cluster experiments on online platforms require choosing a partition of the social network, and field experiments require choosing the unit of randomization, such as villages or regions. This raises the question of how many and which clusters to use in experiments. Typical approaches in economic research assume prior knowledge of many independent clusters. There are many settings where this information is not available, and units in the population instead have different degrees of connection.1 This paper provides an econometric framework to choose when and how to design the _clusters_ in cluster experiments. Different from existing clustering algorithms geared towards community detection, we motivate the choice of the clusters based on the task of estimating global treatment effects. The choice of clustering must balance two competing objectives: the larger the clusters (and the smaller the number of clusters), the smaller the bias of the estimated global effect, but the larger its variance. We introduce an algorithmic procedure - entitled _Causal Clustering_ - to choose the clustering that minimizes a weighted combination of the worst-case bias and variance as a function of the network and clusters. The worst-case approach encodes uncertainty over the dependence of individual outcomes on neighbors' assignments. We study (i) _whether_ to run a cluster-level instead of individual-level randomization; (ii) _how_ to cluster individuals (and how many clusters to use). 
Footnote 1: For example, when using villages as clusters, individuals may interact in the same and nearby villages. See Egger et al. (2022) for an example in cash-transfer programs. We focus on a class of models where spillover effects are small relative to the outcomes' variance but possibly non-negligible for inference. This is formalized in a novel framework of local asymptotics where individual outcomes depend arbitrarily on neighbors' treatments, and, as \(n\) grows, spillovers from neighbors (and possibly also direct effects) converge to zero, but at an arbitrarily slow rate (e.g., slower than \(n^{-1/2}\)). This framework encodes the researchers' uncertainty about the presence (and magnitude) of spillover effects by modeling first-order neighbors' effects as local to zero, with the convergence rate capturing the expected magnitude of spillovers.2 The local asymptotic framework we study is consistent with settings with small (but non-negligible) treatment and spillover effects, typical, for instance, in online experiments (e.g., Karrer et al., 2021). We characterize the optimal clustering as a function of the expected magnitude of the largest spillover effects that the experiment can generate. The largest size of the spillover effects is a key input in our algorithms; its characterization, in practice, is necessary for the design of the experiment but can be challenging. This parameter can be informed by previous experiments, in the same spirit as the minimum detectable effects used in power analysis (e.g. Baird et al., 2018), or by using particular modeling assumptions. We provide guidance to practitioners in Section 6. Footnote 2: The assumption of spillovers within first-order neighbors can be relaxed here by assuming that higher-order spillovers are an order of magnitude smaller than first-order spillovers. Our analysis proceeds as follows. First, we provide a formal characterization of the worst-case bias and variance. We show that the worst-case bias is closely related to a particular notion of between-cluster connectedness, defined as the per-individual average number of friends in other clusters. The worst-case variance can potentially be an arbitrary function of within-cluster and between-cluster covariances: individuals in the same cluster have identical assignments, and individuals in different clusters may share common neighbors. We show that the variance only depends on the average squared cluster size, up to an asymptotically negligible error. This result formalizes the intuition that a larger number of clusters, with a small variance in cluster size, decreases the variance of the estimator. We draw the implications of these results for choosing between a cluster experiment (for a given clustering) and assigning treatments independently across individuals (i.e., a Bernoulli design or completely randomized design). Suppose the magnitude of the spillover effects is smaller than the inverse of the square root of the number of clusters. In that case, the variance component dominates the bias, and a Bernoulli design is preferred (where a Bernoulli design is a special case of a cluster design with clusters containing a single unit). Vice versa, a cluster design is preferred if the bias dominates the variance. Intuitively, because our objective trades off the bias and variance of the estimator, whenever the number of clusters is small, it is best to run a Bernoulli design for any value of spillover effects local to zero. 
On the other hand, if the number of clusters is sufficiently large, and the cluster design appropriately controls the bias of the estimator, a cluster design is preferred. We provide practitioners with a simple decision rule for choosing between cluster and Bernoulli designs that only depends on the number of clusters and the expected magnitude of the spillover effects. We then turn to the design of the optimal clustering, where the choice is not whether to run a cluster or Bernoulli design, but rather which clustering to use. The choice of the optimal clustering reduces to a novel penalized minimum-cut optimization problem, with a penalty that depends on the variation in cluster sizes. The program admits a convenient formulation as a sequence of trace-optimization problems, each solved via off-the-shelf semidefinite programming algorithms. We provide an empirical application using unique network data from the universe of Facebook users. We show that our procedure provides an explicit ranking between different clustering algorithms implemented at scale at Facebook. We also illustrate trade-offs in the choice of the clustering algorithm and the properties of the graph. A second application using network data from Cai et al. (2015) illustrates the method's advantages for choosing clusters in field experiments. We show that the choice of clusters based on village identity is sub-optimal in this application, because the number of village clusters is too small relative to the optimal clustering. We present an alternative choice of clusters. This paper connects to the literature on experimental design, causal inference with spillover effects, and clustering. Existing methods for experimental designs with spillover effects include cluster and saturation designs, studied in Baird et al. (2018), Basse and Feller (2016), Karrer et al. (2021), Pouget-Abadie (2018), Taylor and Eckles (2018), Viviano (2020b), among others. These papers provide an analysis of the properties of particular designs for a given clustering or clustering algorithm. Different from the current paper, these references either do not study the question of the optimal clustering algorithm for experimental design, or only provide heuristic comparisons of different clustering methods. A particular class of clustering algorithms is that of \(\varepsilon\)-net clustering algorithms, which sequentially assign individuals in the same neighborhood to the same cluster (Eckles et al., 2017; Ugander et al., 2013). Variants of these algorithms have been recently studied in Leung (2022) for spatial spillovers and Faridani and Niehaus (2022) in more general non-Euclidean spaces. These papers provide an optimal rate for the clusters' size as a function of how interference decays in space for approximately unbiased estimators. Here, instead, we provide an explicit trade-off and a novel characterization of the bias and variance, leveraging local asymptotics. This allows us to characterize the optimal clustering as the solution of a trace-optimization program (different from \(\varepsilon\)-net clustering). The comparison of cluster and Bernoulli designs through local asymptotics is also a novel contribution. Additional references on experiments with networks in the _absence_ of cluster experiments are Basse and Airoldi (2018), and Jagadeesan et al. 
(2020) for estimating direct instead of global average treatment effects studied here; Kang and Imbens (2016), who study encouragement designs, without focusing on the problem of mean-squared-error optimal designs; Viviano (2020a) for experiments on networks using information from a pilot study; Basse and Airoldi (2018) discuss limitations of design-based causal inference under interference. None of these papers study (optimal) cluster designs. The literature on treatment effects under network interference includes Aronow and Samii (2017), Hudgens and Halloran (2008), Manski (2013), Leung (2020), Athey et al. (2018), Goldsmith-Pinkham and Imbens (2013), Savje et al. (2021), Ogburn et al. (2017), Manresa (2013), Li and Wager (2022), Leung (2022), among others. The assumption that an individual depends on first-order degree connections is most closely related to Leung (2020).3 None of the above references study experimental designs. Finally, we relate more broadly to the literature on clustering in economics - see Wooldridge (2003), Leung (2023), Abadie et al. (2017), and references therein - with the difference that here we focus on optimal clustering rather than taking the clusters as given, and to the literature on graph clustering, including Von Luxburg (2007), Newman (2013, 2013), Lei (2019), Lei et al. (2020), Li et al. (2022), and references therein. Different from this last set of papers on graph clustering, this paper focuses on treatment effect estimation instead of community detection. Footnote 3: Our local asymptotic framework extends to settings with higher-order dependence (Leung, 2022), assuming that higher-order neighbors generate spillovers of smaller order compared to first-order neighbors, and that the network is sufficiently sparse. The remainder of the paper is organized as follows: Section 2 presents the setup; Section 3 characterizes the bias and variance and contrasts cluster and Bernoulli designs; Section 4 formalizes the optimization problem for the optimal clustering; Section 5 presents two applications and numerical studies; and Section 6 contains recommendations for practice. ## 2 Setup We consider a setting with \(i\in\{1,\cdots,n\}\) units. Let \(Y_{i}\in\mathbb{R}\) denote the observed outcome of interest for individual \(i\), \(D_{i}\in\{0,1\}\) denote a binary treatment assignment, and \(\mathbf{D}\in\{0,1\}^{n}\) the vector of treatment assignments of each unit. Define \(\mathbf{A}\) as a symmetric adjacency matrix with \(\mathbf{A}_{i,j}\in\{0,1\}\), and \(Y_{i}(\mathbf{d})\), \(\mathbf{d}\in\{0,1\}^{n}\), as the potential outcome as a function of the entire vector of treatment assignments, with \(Y_{i}=Y_{i}(\mathbf{D})\). We implicitly condition on \(\mathbf{A}\), observed to the researchers, and on the (unobserved) potential outcomes \(Y(\mathbf{d})\), unless otherwise specified (see Remark 5 for an extension with unobserved \(\mathbf{A}\)). We define \(\mathcal{N}_{i}=\{j:\mathbf{A}_{i,j}=1\}\) as the set of individuals connected to \(i\). Let \(\mathcal{N}_{n,\max}=\max_{i\in\{1,\cdots,n\}}|\mathcal{N}_{i}|\) denote the maximum degree. Our primary focus is on estimating the global average treatment effect, \[\tau_{n}=\frac{1}{n}\sum_{i=1}^{n}\Big{[}Y_{i}(\mathbf{1})-Y_{i}(\mathbf{0})\Big{]}. \tag{1}\] The overall effect is the effect of all individuals receiving the treatment compared to none of the individuals receiving the treatment. Table 1 summarizes the notation. ### Spillover effects Next, we impose restrictions on the spillover effects. 
**Assumption 1** (First-order local interference).: For \(i\in\{1,\cdots,n\}\), \[Y_{i}(\mathbf{d})=\mu_{i}(\mathbf{d}_{i},\mathbf{d}_{\mathcal{N}_{i}}),\quad\forall\mathbf{d}\in\{0,1\}^{n},\] for some functions \(\mu_{i}(1,\cdot)\in\mathcal{M}_{1,i},\mu_{i}(0,\cdot)\in\mathcal{M}_{0,i}\) for some sets of functions \(\mathcal{M}_{1,i},\mathcal{M}_{0,i}\). Assumption 1 states that spillovers occur between neighbors and allows for arbitrary dependence of potential outcomes on neighbors' assignments. One-degree neighborhood dependence follows similarly to Leung (2020), Li and Wager (2022), and it is consistent with models often used in applications, e.g., Cai et al. (2015), Sinclair et al. (2012), Muralidharan and Niehaus (2017). Athey et al. (2018) provide a framework for testing Assumption 1. The reader may refer to Remark 2 for higher-order interference. Higher-order interference can be accommodated, although, in practice, higher-order effects can be small and difficult to detect. We will refer to \(\tau_{n,\mu}\) as the overall effect in Equation (1) to make the dependence of \(\tau_{n}\) on \((\mu_{1},\cdots,\mu_{n})\) explicit. We do not assume that we know or can estimate \(\mu_{i}\) consistently. (Because the functions \(\mu_{i}\) and their classes \(\mathcal{M}_{0,i},\mathcal{M}_{1,i}\) are indexed by \(i\), such functions cannot be consistently estimated.) Instead, we allow for _arbitrary_ classes \(\mathcal{M}_{1,i},\mathcal{M}_{0,i}\) of potential outcome functions, as long as such classes satisfy the conditions below. These classes have to be both sufficiently large in order to accommodate rich structures in the data, as well as satisfy some restrictions to be able to estimate features of causal effects. **Assumption 2** (Sufficiently _large_ function class).: For all \(i\in\{1,\cdots,n\},d\in\{0,1\}\), 1. \(\mu_{i}(d,\cdot)\in\mathcal{M}_{d,i}\Rightarrow-\mu_{i}(d,\cdot)\in\mathcal{M}_{d,i}\), i.e., \(\mathcal{M}_{d,i}\) is centro-symmetric; 2. There _exist_ functions \(\mu_{i}(d,\cdot)\in\mathcal{M}_{d,i}\) such that \(\forall i\in\{1,\cdots,n\},\mathbf{d}\in\{0,1\}^{|\mathcal{N}_{i}|}\), \(\mu_{i}(d,\mathbf{d})=\sqrt{\underline{\psi}}/2\) for some constant \(\underline{\psi}>0\); 3. \(\mathcal{M}=\prod_{i=1}^{n}\mathcal{M}_{0,i}\times\mathcal{M}_{1,i}\), i.e., \(\mathcal{M}\) is the product space of the classes of potential outcome functions. Assumption 2 imposes three conditions. Condition (i) states that \(\mathcal{M}_{d,i}\) contains each function and its opposite. This restriction does not impose restrictions on exposure mappings and allows for arbitrary sign flips.4 Condition (ii) states that there exists one function in the admissible function class that is constant and positive. Condition (ii) is a regularity condition that guarantees non-degenerate solutions in worst-case scenarios studied in the following section. Condition (iii) considers a product space of potential outcome functions. These conditions are not restrictive, and they hold if \(\mathcal{M}_{0,i},\mathcal{M}_{1,i}\) can contain arbitrary functions. Below we impose the main _restrictions_ on the function class. Footnote 4: Our results also hold if instead of imposing (i) we assume that the _differences_ \(\mu_{i}(1,\mathbf{d})-\mu_{i}(\mathbf{1})\) and \(\mu_{i}(0,\mathbf{d})-\mu_{i}(\mathbf{0})\) can be symmetric for all \(\mathbf{d}\in\{0,1\}^{n}\). **Assumption 3** (Restricted class of potential outcome models).: For all \(i\in\{1,\cdots,n\}\), \(d\in\{0,1\}\) 1. 
\(\mathcal{M}_{d,i}\) contains bounded functions, i.e., \(||\mu_{i}(d,\cdot)||_{\infty}\leq M,\forall\mu_{i}(d,\cdot)\in\mathcal{M}_{d,i}\) for some (unknown) \(M<\infty\); 2. for all \(\mathbf{d}\in\{0,1\}^{|\mathcal{N}_{i}|}\), for some (unknown) \(\alpha_{i}\in[\underline{\alpha},1],\underline{\alpha}>0,\bar{\phi}_{n}\in\mathbb{R}_{+}\), with \(\max_{i}\alpha_{i}=1\), \[\sup_{\mu_{i}(0,\cdot)\in\mathcal{M}_{0,i}}\Big{|}\mu_{i}(0,\mathbf{d})-\mu_{i}(0,\mathbf{0})\Big{|} =\bar{\phi}_{n}\frac{\alpha_{i}}{|\mathcal{N}_{i}|}\sum_{k\in\mathcal{N}_{i}}\mathbf{d}_{k},\] (2) \[\sup_{\mu_{i}(1,\cdot)\in\mathcal{M}_{1,i}}\Big{|}\mu_{i}(1,\mathbf{d})-\mu_{i}(1,\mathbf{1})\Big{|} =\bar{\phi}_{n}\frac{\alpha_{i}}{|\mathcal{N}_{i}|}\sum_{k\in\mathcal{N}_{i}}\Big{(}1-\mathbf{d}_{k}\Big{)}.\] Assumption 3 imposes two restrictions. Condition (i) states that potential outcomes are uniformly bounded.5 Condition (ii) is our main restriction on the exposure mapping. Condition (ii) is attained if \(\mathcal{M}_{1,i},\mathcal{M}_{0,i}\) are Lipschitz function classes in the share of treated neighbors. Condition (ii) states that potential outcomes vary in the share of neighbors' treatments by _at most_ \(\alpha_{i}\bar{\phi}_{n}\). Here \(\bar{\phi}_{n}\) captures the magnitude of the (largest) spillovers, and \(\alpha_{i}\) captures individual-level heterogeneity. Footnote 5: This restriction is common in the literature, e.g., Kitagawa and Wang (2021), and can be relaxed by assuming random sub-gaussian potential outcomes. The component \(\bar{\phi}_{n}\) depends on \(n\) and will play an important role in our asymptotic analysis as \(n\to\infty\). We focus on settings where \(\bar{\phi}_{n}\) is small (i.e., \(\bar{\phi}_{n}=o(1)\) as \(n\to\infty\)), in the spirit of a local asymptotic framework (e.g. Hirano and Porter, 2009), but its convergence rate can be arbitrarily slow. These scenarios formalize the idea that spillover effects (and possibly but not necessarily also overall treatment effects \(\tau_{n}\)) are local to zero. For example, in an information campaign, we might expect spillover (and direct) effects to be small but non-negligible for inference, as often occurs in online experiments (Karrer et al., 2021). We show how different magnitudes of spillover effects justify different designs. **Assumption 4** (Local and sparse network asymptotics).: We consider asymptotic scenarios with a sequence of \(\left(\mathbf{A},(Y_{i}(\cdot))_{i=1}^{n}\right)\), indexed by \(n\), where \(\bar{\phi}_{n}\min\{n,\mathcal{N}_{n,\max}^{2}\}=o(1)\). Assumption 4 formalizes the local asymptotic framework considered here: spillovers converge to zero, but the rate can be arbitrarily slow up to a term that depends on the squared maximum degree.6 Assumption 4 holds for arbitrary rates of convergence of the spillover effects \(\bar{\phi}_{n}\) for networks with bounded degree (e.g. De Paula et al., 2018, where \(\mathcal{N}_{n,\max}\) is bounded), and requires faster rates for \(\bar{\phi}_{n}\) (smaller spillover effects) for dense networks. Footnote 6: The restriction on the maximum degree squared can be sharpened by imposing restrictions on the average second-order degree, omitted for expositional convenience. We conclude this discussion with examples and remarks. 
**Example 2.1** (Linear exogenous peer effects).: Consider a class of functions of the form \(\mu_{i}(\mathbf{d})=\mu(T_{i}(\mathbf{d}))+\varepsilon_{i}\), with \(T_{i}(\mathbf{d})=\left[\mathbf{d}_{i},(1-\mathbf{d}_{i})\times\frac{\sum_{j\neq i}\mathbf{A}_{i,j}\mathbf{d}_{j}}{\sum_{j\neq i}\mathbf{A}_{i,j}},\mathbf{d}_{i}\times\frac{\sum_{j\neq i}\mathbf{A}_{i,j}\mathbf{d}_{j}}{\sum_{j\neq i}\mathbf{A}_{i,j}}\right]\), and \(\mu(t)=t^{\top}\beta\) for \(\beta\in[-\bar{\Delta}_{n},\bar{\Delta}_{n}]\times[-\bar{\phi}_{n},\bar{\phi}_{n}]^{2}\), for some arbitrary \(\bar{\Delta}_{n},\bar{\phi}_{n}\), and \(\varepsilon_{i}\) that is not a function of \(\mathbf{d}\). Then Assumption 3 holds. **Remark 1** (Local asymptotics and direct effect).: Our local asymptotic framework also allows direct treatment effects (and global effects) to be local to zero. Specifically, it is possible that \(\tau_{n}=o(1)\) at an arbitrary rate, e.g., at the _same_ rate as \(\bar{\phi}_{n}\). Therefore, our local asymptotic assumption does not require that spillover effects are local to the global effect (since \(\tau_{n}\) can also converge to zero). Instead, our local asymptotics formalizes settings where the noise-to-signal ratio decreases more slowly than \(n^{-1/2}\). **Remark 2** (Higher-order interference and endogenous peer effects).: Our setting generalizes to higher-order interference in two scenarios. First, suppose that friends up to degree \(d<\infty\) generate spillovers of magnitude similar to first-degree friends. Our results extend after we define the set of friends as the set of friends up to degree \(d\), and \(\bar{\phi}_{n}\) as the largest effect that such friends generate. The sparsity restrictions on the largest degree in Assumption 4 are then with respect to the number of friends up to degree \(d\). In the second scenario, suppose that the assumption of first-order effects approximates higher-order effects up to a term of smaller order than first-order effects. Specifically, suppose that \(Y_{i}(\mathbf{d})=\mu_{i}(\mathbf{d}_{i},\mathbf{d}_{\mathcal{N}_{i}})+\mathcal{O}(h_{n})\), for some \(h_{n}\to 0\). Our results hold if \(h_{n}=o(1/n)\), capturing the idea that first-order effects \(\bar{\phi}_{n}\) are larger than second-order effects. In Appendix B we provide an example and sufficient conditions for the approximation error due to higher-order interference being of order \(o(1/n)\) in the presence of endogenous peer effects (Bramoulle et al., 2009; Manski, 1993), where the individual outcome depends on other units' outcomes. In practice, higher-order effects can be small, leading to under-powered studies, especially when individuals have many friends and first-order effects capture most of the spillovers. This motivates our focus on first-order effects. ### Experimental design and estimation Next, we turn to the class of designs and estimators considered here. Define a clustering of size \(K_{n}\) as a set of sets of indicators satisfying \[\mathcal{C}_{\mathrm{C},n}=\left\{c_{k}\subseteq\{1,\cdots,n\},k\in\{1,\cdots,K_{n}\},\bigcup_{k}c_{k}=\{1,\cdots,n\},c_{k}\bigcap c_{k^{\prime}}=\emptyset\text{ for }k\neq k^{\prime}\right\}.\] Here, \(\mathcal{C}_{\mathrm{C},n}\) denotes a particular partition of the units in the population with \(K_{n}\) exclusive clusters. With an abuse of notation, let \(c(i)\subseteq\{1,\cdots,n\}\) denote the cluster assigned to unit \(i\), and \(|c_{k}|=n_{k}\) the number of individuals in cluster \(k\). 
For ease of exposition, we focus our discussion and the assumptions below on the case of a given clustering \(\mathcal{C}_{\mathrm{C},n}\), and return to choosing the optimal clustering in Section 4. **Assumption 5** (Cluster designs).: \(\mathcal{C}_{\mathrm{C},n}\) is measurable with respect to \(\mathbf{A}\) and is such that the number of units in cluster \(k\) is \(n_{k}=\gamma_{k}\frac{n}{K_{n}}\) with \(\max_{k}\gamma_{k}\leq\bar{\gamma}<\infty\), for some \(\bar{\gamma}<\infty\). Assume that \(D_{i}=\tilde{D}_{\mathrm{c}(i)}\) almost surely, where \(\tilde{D}_{\mathrm{c}(i)}|\mathbf{A},\{Y_{i}(\mathbf{d})\}_{i\in\{1,\cdots,n\},\mathbf{d}\in\{0,1\}^{n}}\sim\text{Bern}(0.5)\) are independent across clusters. Assumption 5 states that the clustering is constructed using information from the adjacency matrix only (i.e., it is independent of potential outcomes), that clusters are proportional in size7, and that individuals in a given cluster are all assigned either to treatment or to control with equal probability. Assumption 5 restricts the class of designs to cluster designs, motivated by our focus on overall treatment effects and by empirical practice.8 Footnote 7: This restriction is sufficient but not necessary for our analysis, and it is imposed for expositional convenience, as the rates of convergence only depend on \(n\) and \(K_{n}\) (population size and number of clusters) instead of also the dimension of the largest and smallest cluster. It is possible to relax such an assumption by assuming that \(\sum_{k=1}^{K_{n}}\gamma_{k}^{2}/K_{n}=\mathcal{O}(1)\). Footnote 8: Saturation designs (e.g. Baird et al., 2018) would also be interesting to study if researchers are interested in estimands other than the overall treatment effect, such as the effect of treating a certain percentage of individuals in a given cluster. We leave saturation designs to future research. Motivated by standard practice both in industrial applications and in field experiments with clusters (Baird et al., 2018), we consider estimators obtained by a simple difference in means between treated and control clusters. Because \(P(D_{i}=1)=1/2\), we construct a (biased) estimator of treatment effects as \[\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})=\frac{2}{n}\sum_{i=1}^{n}\Big{[}D_{i}Y_{i}-(1-D_{i})Y_{i}\Big{]}. \tag{3}\] The estimator \(\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})\) is a simple difference in means between treated and control units that normalizes by the probability of treatment. Therefore, \(\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})\) depends on the clustering \(\mathcal{C}_{\mathrm{C},n}\) _only_ because the distribution of the treatments depends on the clusters under Assumption 5. Studying the estimator in Equation (3) is a natural starting point for the analysis of cluster experiments. Variants of difference-in-means estimators (possibly also with the regression adjustments discussed in Remark 3) are often used or studied in practice (Holtz et al., 2020; Karrer et al., 2021; Savje et al., 2021). 
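To fix ideas, the following sketch simulates the design of Assumption 5 and computes the estimator of Equation (3) on toy data; the outcome model and all function names are our illustrative choices.

```python
import numpy as np

def cluster_assignment(cluster_of, rng):
    """Assign each cluster to treatment w.p. 1/2 and broadcast to units;
    cluster_of[i] is the cluster label of unit i (Assumption 5)."""
    labels = np.unique(cluster_of)
    coin = dict(zip(labels, rng.binomial(1, 0.5, size=labels.size)))
    return np.array([coin[c] for c in cluster_of])

def difference_in_means(y, d):
    """Estimator of Equation (3): (2/n) * sum_i [D_i Y_i - (1 - D_i) Y_i]."""
    return 2.0 / y.size * np.sum(d * y - (1 - d) * y)

rng = np.random.default_rng(0)
n, K = 1000, 50
cluster_of = np.arange(n) % K      # K equally sized clusters
d = cluster_assignment(cluster_of, rng)
y = rng.normal(size=n) + 0.2 * d   # toy outcomes with a direct effect of 0.2
print(difference_in_means(y, d))   # close to 0.2 on average
```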
One could also normalize each sum in Equation (3) by the number of treated and control units (instead of using knowledge of the treatment probability). This would improve the stability of the estimators, but complicate the analysis of the estimators' properties when the number of treated units is stochastic (e.g., when clusters have different sizes).9 Footnote 9: With a stochastic number of treated units, the estimators with and without normalization are equivalent up to an error of order \(\mathcal{O}(1/\sqrt{K_{n}})\). **Remark 3** (Covariate adjustment).: Our framework directly generalizes to settings that use covariate adjustment for baseline outcomes. Denote by \(\bar{\mu}_{i}\) an arbitrary predictor of \(\mu_{i}(\mathbf{0})\) that only uses information from some arbitrary baseline observable characteristics (i.e., it does not depend on the treatments or end-line outcomes in the experiment). The estimator with such an adjustment takes the form \[\frac{2}{n}\sum_{i=1}^{n}(Y_{i}-\bar{\mu}_{i})(2D_{i}-1). \tag{4}\] Our analysis continues to hold after defining the outcome of interest as \(Y_{i}-\bar{\mu}_{i}\). **Remark 4** (Alternative estimators).: Alternative estimators studied in the literature are inverse probability weighting estimators (e.g., Aronow and Samii, 2017; Ugander et al., 2013). Unless researchers impose additional restrictions on the exposure mapping, in the network context these estimators can be subject to instability of the propensity score, because they reweight by the inverse probability that _all_ friends of a given individual are either under treatment or control. A different alternative is model-based estimators, which, however, can be subject to model misspecification. ## 3 (When) should you cluster? This section studies when a given clustering \(\mathcal{C}_{\mathrm{C},n}\) should be chosen over a simple Bernoulli design where treatments are randomized independently across individuals. This is often a central question in experiments (e.g. Holtz et al., 2020). To study this question, we first characterize the bias and variance of the estimator \(\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})\). We denote a Bernoulli design as a special cluster design with clusters \(\mathcal{C}_{\mathrm{B},n}=\{c_{k}=\{k\},k\in\{1,\cdots,n\}\}\). Denote by \(\mathbb{E}_{\mu}[\cdot]\) the expectation for given potential outcome functions \(\mu\) (conditional on \(\mathbf{A}\)). The components \(\mathcal{O}(\cdot),o(\cdot)\) in the following lemmas and theorems hold uniformly over all cluster designs with \(K_{n}\) many clusters that satisfy Assumption 5. Our goal is to design experiments that minimize the worst-case variance, while controlling the worst-case bias. For a given clustering \(\mathcal{C}_{\mathrm{C},n}\) we can write the (dual of the) optimization problem as: \[\mathcal{B}_{n}(\mathcal{C}_{\mathrm{C},n},\lambda)=\sup_{\mu\in\mathcal{M}}\mathbb{E}_{\mu}\Big{[}\Big{(}\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})-\mathbb{E}_{\mu}[\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})]\Big{)}^{2}\Big{]}+\lambda\sup_{\mu\in\mathcal{M}}\Big{(}\tau_{\mu}-\mathbb{E}_{\mu}[\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})]\Big{)}^{2}, \tag{5}\] where the supremum is taken over \((\mu_{i})_{i=1}^{n}\), and \(\mathcal{M}\) is the product space of potential outcome functions as in (iii) in Assumption 2. The parameter \(\lambda\) is user-specified and denotes the relative importance weight assigned to the worst-case bias. In Section 4.2 we show that, under slightly stronger restrictions on \(\mathcal{M}\), \(\mathcal{B}_{n}(\mathcal{C}_{\mathrm{C},n},\lambda=1)\) coincides with the worst-case mean squared error. 
### Worst-case bias

As a first step, we characterize the worst-case bias of the estimator \(\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})\). **Lemma 3.1** (Worst-case bias).: _Let Assumptions 1, 2, 3, 5 hold. Then_ \[\sup_{\mu\in\mathcal{M}}\Big{|}\tau_{n,\mu}-\mathbb{E}_{\mu}[\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})]\Big{|}=\frac{\bar{\phi}_{n}}{n}\sum_{i=1}^{n}\frac{\alpha_{i}}{|\mathcal{N}_{i}|}\Big{|}\mathcal{N}_{i}\bigcap\Big{\{}j:c(i)\neq c(j)\Big{\}}\Big{|},\] _where \(\left\{j:c(i)\neq c(j)\right\}\) denotes the set of units \(j\) in a different cluster from unit \(i\)._ Proof.: See Appendix C.1.

Figure 1: Example of a clustering design with a single network. The network is partitioned into three clusters. Elements in a given cluster are assigned the same color.

Lemma 3.1 shows that the worst-case bias can be expressed as the average over units \(i\) of the number of friends of \(i\) in a different cluster, reweighted by the overall number of friends of \(i\). The size of the worst-case bias also depends on the magnitude of the spillover effects \(\bar{\phi}_{n}\). Lemma 3.1 thus shows how notions of between-cluster connectedness relate to the worst-case bias of the treatment effect estimator. The equality is attained for a model as in Example 2.1, with the coefficient multiplying the spillover effects equal to \(\bar{\phi}_{n}\). We denote the worst-case bias as \[b_{n}(\mathcal{C}_{\mathrm{C},n})=\sup_{\mu\in\mathcal{M}}\Big{|}\tau_{n,\mu}-\mathbb{E}_{\mu}[\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})]\Big{|}. \tag{6}\]

### Worst-case variance

In the following lines, we study the variance of the estimator \(\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})\). Observe that \[\mathbb{E}_{\mu}\Big{[}\Big{(}\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})-\mathbb{E}_{\mu}[\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})]\Big{)}^{2}\Big{]}=\frac{4}{n^{2}}\sum_{i,j}\mathrm{Cov}\Big{(}\mu_{i}(D_{i},\mathbf{D}_{-i})[2D_{i}-1],\mu_{j}(D_{j},\mathbf{D}_{-j})[2D_{j}-1]\Big{)}.\] We study each covariance component separately. **Lemma 3.2** (Zero covariances).: _Suppose that Assumptions 1, 5 hold. Then for all \(i\in\{1,\cdots,n\}\)_ \[\mathrm{Cov}\Big{(}\mu_{i}(D_{i},\mathbf{D}_{-i})\Big{[}2D_{i}-1\Big{]},\mu_{j}(D_{j},\mathbf{D}_{-j})\Big{[}2D_{j}-1\Big{]}\Big{)}=0,\quad\forall j\not\in\Big{\{}B_{i}\cup G_{i}\Big{\}},\] _where_ \[B_{i} =\Big{\{}v\in\{1,\cdots,n\}:\text{ either }c(v)=c(i)\text{ or }c(v)=c(v^{\prime}),\text{ for some }v^{\prime}\in\mathcal{N}_{i}\Big{\}},\] \[G_{i} =\Big{\{}g\in\{1,\cdots,n\}:\mathcal{N}_{g}\cap B_{i}\neq\emptyset\Big{\}}\] Proof of Lemma 3.2.: See Appendix C.2. Lemma 3.2 states that the outcomes of two individuals \(i\) and \(j\) have zero covariance if (i) they are in two different clusters, neither of which contains a friend of the other individual, and (ii) they are not friends and do not share a common friend, and no friend of \(j\) lies in a cluster that contains a friend of \(i\) (the set \(G_{i}\)). Note that Lemma 3.2 is equivalent to saying that \(\mu_{i}(D_{i},\mathbf{D}_{-i})[2D_{i}-1],\mu_{j}(D_{j},\mathbf{D}_{-j})[2D_{j}-1]\) have zero covariance if \(B_{i}\cap B_{j}=\emptyset\). Next, we analyze the covariances for the remaining units. **Lemma 3.3** (Non-zero covariances).: _Suppose Assumptions 1, 3, 5 hold. 
Then, for each \(\mu\in\mathcal{M}\), \(i\in\{1,\cdots,n\}\)_ \[\Big{|}\mathrm{Cov}\Big{(}\mu_{i}(D_{i},\mathbf{D}_{-i})\Big{[}2D_{i}-1\Big{]},\mu_{j}(D_{j},\mathbf{D}_{-j})\Big{[}2D_{j}-1\Big{]}\Big{)}\Big{|}=\mathcal{O}\Big{(}b_{n}(\mathcal{C}_{\mathrm{C},n})\Big{)}\quad\forall j:c(j)\neq c(i), \tag{7}\] _with \(b_{n}(\mathcal{C}_{\mathrm{C},n})\) as in Equation (6)._ _In addition, for each \(\mu\in\mathcal{M}\), for \(c(i)=c(j)\)_ \[\begin{split}\mathrm{Cov}\Big{(}\mu_{i}(D_{i},\mathbf{D}_{-i})\Big{[}2D_{i}-1\Big{]},\mu_{j}(D_{j},\mathbf{D}_{-j})\Big{[}2D_{j}-1\Big{]}\Big{)}=&\frac{1}{4}\Big{(}\mu_{i}(\mathbf{1})+\mu_{i}(\mathbf{0})\Big{)}\Big{(}\mu_{j}(\mathbf{1})+\mu_{j}(\mathbf{0})\Big{)}\\ &+\mathcal{O}\Big{(}b_{n}(\mathcal{C}_{\mathrm{C},n})\Big{)}.\end{split} \tag{8}\] Proof.: See Appendix C.3. Lemma 3.3 characterizes the covariance between individuals in different clusters (Equation (7)) and individuals in the same cluster (Equation (8)). For individuals in different clusters, the covariance is of the _same_ order as the bias, whereas for individuals in the _same_ cluster the covariance is \(\mathcal{O}(1)\). The leading component in Equation (8) captures the covariance between individuals in the same cluster up to the bias \(b_{n}(\mathcal{C}_{\mathrm{C},n})\). The covariance between individuals in different clusters is zero if there are no individuals with neighbors in a different cluster, since in this case the within-cluster covariances capture all the covariances between individuals. **Lemma 3.4** (Worst-case variance).: _Let Assumptions 1, 2, 3, 4, 5 hold, and let_ \[\bar{\psi}:=\sup_{i\in\{1,\cdots,n\},\mu_{i}(0,\cdot)\in\mathcal{M}_{0,i},\mu_{i}(1,\cdot)\in\mathcal{M}_{1,i}}\Big{(}\mu_{i}(\mathbf{1})+\mu_{i}(\mathbf{0})\Big{)}^{2}, \tag{9}\] _with \(\underline{\psi}>0\) as defined in Assumption 2. Then as \(K_{n}\to\infty\)_ \[\sup_{\mu\in\mathcal{M}}\mathbb{E}_{\mu}\Big{[}\Big{(}\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})-\mathbb{E}[\hat{\tau}_{n}^{cl}(\mathcal{C}_{\mathrm{C},n})]\Big{)}^{2}\Big{]}\in\frac{1}{n^{2}}\sum_{k=1}^{K_{n}}n_{k}^{2}\times\Big{[}\underline{\psi}+o(1),\bar{\psi}+o(1)\Big{]}.\] Proof.: See Appendix C.4. Lemma 3.4 characterizes lower and upper bounds on the worst-case variance as the number of clusters \(K_{n}\) grows (at an arbitrary rate). Such bounds depend on the average squared cluster size, \(\frac{1}{n^{2}}\sum_{k=1}^{K_{n}}n_{k}^{2}\), up to a positive constant. By leveraging the local asymptotic framework in Assumption 4, this result shows that the variance is mostly driven by the within-cluster correlations instead of the cross-cluster connections. **Remark 5** (Unobserved **A**).: Suppose that \(\mathbf{A}\) is unobserved or _partially_ observed, and researchers have a prior over \(\mathbf{A}\). In this case, the characterizations of the bias and variance continue to hold once we take _expectations_ with respect to the distribution of \(\mathbf{A}\), where the prior over \(\mathbf{A}\) may depend on partial network information (e.g. Breza et al., 2020). ### Comparison with a Bernoulli design We now compare a given class of clustering designs to a Bernoulli design \(\mathcal{C}_{\mathrm{B},n}\). **Theorem 3.5**.: _Suppose that Assumptions 1, 2, 3, 4, 5 hold. 
Then for any \(\lambda\in(0,\infty)\), bounded away from zero and infinity, as \(K_{n}\to\infty\),_ \[\begin{split}\lim_{n\to\infty}\Big{(}\mathcal{B}_{n}(\mathcal{C}_{\mathrm{C},n},\lambda)-\mathcal{B}_{n}(\mathcal{C}_{\mathrm{B},n},\lambda)\Big{)}&\geq 0\quad\text{ if }\sqrt{K_{n}}\bar{\phi}_{n}\to 0\text{ and }K_{n}/n=o(1),\\ \lim_{n\to\infty}\Big{(}\mathcal{B}_{n}(\mathcal{C}_{\mathrm{C},n},\lambda)-\mathcal{B}_{n}(\mathcal{C}_{\mathrm{B},n},\lambda)\Big{)}&\leq 0\quad\text{ if }\sqrt{K_{n}}\bar{\phi}_{n}\to\infty,\text{ and }b_{n}(\mathcal{C}_{\mathrm{C},n})\leq\delta b_{n}(\mathcal{C}_{\mathrm{B},n}),\end{split} \tag{10}\] _for some constant \(\delta\in[0,1)\)._ Proof.: See Appendix C.5. Theorem 3.5 compares a Bernoulli design and a cluster design with an asymptotically negligible bias. Theorem 3.5 states that we should _not_ run a cluster experiment if the size of the spillover effects \(\bar{\phi}_{n}\) goes to zero at an order faster than \(1/\sqrt{K_{n}}\), where \(K_{n}\) denotes the number of clusters, and the number of clusters is sufficiently smaller than the sample size (at an arbitrary rate).10 We should instead run a cluster design if the spillover effects are larger in magnitude than \(1/\sqrt{K_{n}}\) and the bias of the cluster design is smaller than the bias of a Bernoulli design (encoded in the assumption that \(b_{n}(\mathcal{C}_{\mathrm{C},n})\leq\delta b_{n}(\mathcal{C}_{\mathrm{B},n})\)). The assumption that \(b_{n}(\mathcal{C}_{\mathrm{C},n})\leq\delta b_{n}(\mathcal{C}_{\mathrm{B},n})\) is equivalent to assuming that \(\sum_{i=1}^{n}\frac{\alpha_{i}}{|\mathcal{N}_{i}|}|\mathcal{N}_{i}\bigcap\{j:c(j)\neq c(i)\}|\leq\delta\sum_{i=1}^{n}\alpha_{i}\), for some \(\delta\in[0,1)\), i.e., that the clustering decreases the bias. Footnote 10: The condition \(K_{n}/n=o(1)\) can be relaxed by a finite sample condition \(K_{n}\leq n\delta^{\prime}(\underline{\psi}/\bar{\psi})\) for some \(\delta^{\prime}\in[0,1)\). In particular, under the assumptions in Section 4.2, \(\underline{\psi}=\bar{\psi}\) and the condition is equivalent to requiring that a fixed fraction of clusters have more than one observation. The choice between a cluster and a Bernoulli design must depend on (i) the number of clusters and (ii) the _size_ of the spillovers. Table 1 provides explicit recommendations for researchers. Suppose that spillovers are of order \(n^{-1/3}\), therefore vanishing at a _slower_ rate than \(n^{-1/2}\). Then we should not run a cluster design if \(K_{n}\) is of smaller order than \(n^{2/3}\). Therefore, even if spillovers are "not very small", cluster designs are sub-optimal if the number of clusters is "not very large", formally characterized by the rates of convergence. The second result, vice versa, illustrates when to run a cluster design. Returning to the example of spillovers of order \(n^{-1/3}\), suppose now the number of clusters is of order \(n\) (e.g., clusters contain a few individuals each). Then the cluster design is optimal. In the following theorem we provide an explicit rule of thumb for a cluster design. **Theorem 3.6** (Rule of thumb).: _Suppose that Assumptions 1, 2, 3, 4, 5 hold. Let \(\underline{\gamma}=\frac{1}{K_{n}}\sum_{k=1}^{K_{n}}\gamma_{k}^{2}\), where \(\gamma_{k}=n_{k}K_{n}/n\), and assume that \(\underline{\alpha}=1\). 
Then_ \[\lim_{n\to\infty}\mathcal{B}_{n}(\mathcal{C}_{\mathrm{B},n},\lambda)-\mathcal{B}_{n}(\mathcal{C}_{\mathrm{C},n},\lambda)\geq 0\quad\text{ if }\frac{\bar{\psi}}{\lambda\bar{\phi}_{n}^{2}}\leq\frac{1-(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{|\mathcal{N}_{i}|}|j\in\mathcal{N}_{i}:c(i)\neq c(j)|)^{2}}{\underline{\gamma}K_{n}^{-1}}.\] Proof of Theorem 3.6.: See Appendix C.6.

Theorem 3.6 provides a rule of thumb for choosing a cluster design with given clusters \(\mathcal{C}_{\mathrm{C},n}\) over a Bernoulli design \(\mathcal{C}_{\mathrm{B},n}\). The right-hand side depends on three _observables_: (i) \(\underline{\gamma}\), (ii) the expected bias of the clustering method (as a function of \(\mathbf{A}\)), and (iii) the number of clusters \(K_{n}\). The left-hand side equals \[\xi_{n}:=(\lambda\bar{\phi}_{n}^{2}/\bar{\psi})^{-1}.\] For \(\lambda=1\) and known \(\bar{\psi}\), the rule of thumb provides the smallest spillover effects that would guarantee that the cluster design dominates the Bernoulli design. The last column in Table 1 collects the implications of the rule of thumb, assuming (i) equally sized clusters, (ii) a bias of the clustering of at most \(50\%\) as a conservative upper bound, and (iii) outcomes bounded between zero and one (in which case \(\bar{\psi}\leq 4\)). In this setting, researchers should run a cluster experiment when \(\bar{\phi}_{n}\sqrt{K_{n}}\) is larger than \(2.3\) (for \(\bar{\psi}=4\)). Figure 2 illustrates the rule of thumb as a function of the bias and the number of clusters.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Description & Formulation & Implication: Run & Rule of thumb (\(\bar{\psi}=4\)) & (\(\bar{\psi}=3\)) \\ \hline Small spillovers, small \# of clusters & \(\sqrt{K_{n}}\bar{\phi}_{n}=o(1)\) & Bernoulli design & & \\ Not that small spillovers, very small \# of clusters & \(\bar{\phi}_{n}=o(1)\), \(K_{n}=\mathcal{O}(1)\) & Bernoulli design & & \\ Non-negligible spillovers, large \# of clusters & \(\sqrt{K_{n}}\bar{\phi}_{n}\to\infty\) & Cluster design & \(\bar{\phi}_{n}\sqrt{K_{n}}>2.30\) & \(\bar{\phi}_{n}\sqrt{K_{n}}>2\) \\ Small spillovers, very large \# of clusters & \(\bar{\phi}_{n}\propto n^{-1/3}\), \(K_{n}\propto n\) & Cluster design & \(\bar{\phi}_{n}\sqrt{K_{n}}>2.30\) & \(\bar{\phi}_{n}\sqrt{K_{n}}>2\) \\ \hline \end{tabular} \end{table} Table 1: Practical implications of Theorem 3.5. The rule of thumb is computed for \(\lambda=1\), in the presence of equally sized clusters with outcomes taking values between zero and one, and the bias of the clustering equal to (or smaller than) \(50\%\) (i.e., for each individual, \(50\%\) of her connections are in her same cluster). Here \(\bar{\psi}\leq 4\) when outcomes are binary.
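As an illustration, the check below evaluates the inequality of Theorem 3.6 directly from an adjacency matrix and a candidate clustering. The function name and the guard for isolated units are our choices; \(\bar{\psi}\), \(\bar{\phi}_{n}\), and \(\lambda\) must be supplied by the researcher, e.g., informed by previous experiments as discussed above.

```python
import numpy as np

def prefer_cluster_design(A, cluster_of, psi_bar, phi_bar, lam=1.0):
    """Rule of thumb of Theorem 3.6: True if the cluster design dominates.

    A: (n, n) symmetric 0/1 adjacency matrix; cluster_of[i] = cluster of i."""
    n = A.shape[0]
    deg = np.maximum(A.sum(axis=1), 1)          # guard for isolated units
    # Average share of each unit's neighbors lying in other clusters (the bias term).
    different = cluster_of[:, None] != cluster_of[None, :]
    bias = np.mean((A * different).sum(axis=1) / deg)

    labels, sizes = np.unique(cluster_of, return_counts=True)
    K = labels.size
    gamma_bar = np.mean((sizes * K / n) ** 2)   # underline-gamma

    lhs = psi_bar / (lam * phi_bar ** 2)        # xi_n
    rhs = (1 - bias ** 2) / (gamma_bar / K)
    return lhs <= rhs
```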
Equation (11) holds with equality if \(\sup_{\mu_{i}\in\mathcal{M}_{i},\mu_{j}\in\mathcal{M}_{j}}(\mu_{i}(\mathbf{1}) +\mu_{i}(\mathbf{0}))(\mu_{j}(\mathbf{1})+\mu_{j}(\mathbf{0}))=\bar{\psi}\) for all \((i,j)\)._ Proof of Theorem 4.1.: See Appendix C.7. Theorem 4.1 characterizes the objective function as a function of the covariance between units in the same cluster, bounded by \(\bar{\psi}\), the between-clusters variation (average of \(n_{k}^{2}\)), the size of the spillover effects \(\bar{\phi}_{n}\), and the "cluster impurity", i.e., the average number of friends of a given individual assigned to a different cluster. The constant \(\lambda\) defines the relative importance weight of the bias compared to the variance. Figure 2: Minimum spillover effects' size that justifies running a cluster experiment instead of a Bernoulli design in the presence of equally sized clusters (\(\gamma_{k}=1\) for all \(k\in\{1,\cdots,K_{n}\}\)). The y-axis reports in log-scale the minimum size of the spillover effects \(\bar{\phi}_{n}/\bar{\psi}\) that justify running a cluster experiment instead of a Bernoulli design according to the rule in Theorem 3.6. The x-axis reports the number of clusters. Different colors report different values of the bias \(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{|\mathcal{N}_{i}|}|j\in\mathcal{N}_{i}:c(i) \neq c(j)|\in[0,1]\). After simple re-arrangement, and assuming that the _worst-case_ spillover effects are homogeneous across units (\(\alpha_{i}=1\) for all \(i\))11, Theorem 4.1 provides a simple-to-compute metric for ranking (a few) _given_ clusters Footnote 11: All our results extend to different choices of \(\alpha_{i}\), although we think that \(\alpha_{i}=1\) is a natural choice in practice, which mimics applications where individuals are assumed to depend on the share of treated friends. \[\frac{\xi_{n}}{n^{2}}\sum_{k=1}^{K_{n}}n_{k}^{2}+\Big{(}\frac{1}{n}\sum_{i=1}^{ n}\frac{1}{|\mathcal{N}_{i}|}\Big{|}j\in\mathcal{N}_{i}:c(i)\neq c(j)\Big{|} \Big{)}^{2},\quad\xi_{n}=\frac{\bar{\psi}}{\bar{\phi}_{n}^{2}\lambda} \tag{12}\] In practice, we recommend always drawing the frontier in Equation (12) as a function of the bias \(\Big{(}\frac{1}{n}\sum_{i=1}^{n}\frac{1}{|\mathcal{N}_{i}|}\Big{|}j\in \mathcal{N}_{i}:c(i)\neq c(j)\Big{|}\Big{)}^{2}\) and the variance \(\frac{1}{n^{2}}\sum_{k=1}^{K_{n}}n_{k}^{2}\), with multiplier over the variance \(\xi_{n}=(\lambda\bar{\phi}_{n}^{2}/\bar{\psi})^{-1}\). Clusters that perform reasonably well compared to other clusters for a large set of values of \(\xi_{n}\) should be preferred for implementation (see Section 6). However, Equation (12) is computationally difficult to optimize over a _large_ space of clusters. We present a feasible relaxation in the following section. ### Causal clustering: objective and algorithm The task of estimating the best clustering over a large class is challenging because the number of connections between different clusters enters non-linearly in Equation (12). As a first step, we rewrite the objective function as a function of the absolute (instead of squared) bias. **Corollary 1**.: _Let the conditions in Theorem 4.1 hold.
Then as \(K_{n}\to\infty\)_ \[\mathcal{B}_{n}(\mathcal{C}_{\mathrm{C},n},\lambda)\leq\bar{\phi}_{n}^{2} \left(\frac{\xi_{n}}{n^{2}}\sum_{k=1}^{K_{n}}n_{k}^{2}\Big{[}1+o(1)\Big{]}+ \frac{1}{n}\sum_{i=1}^{n}\frac{1}{|\mathcal{N}_{i}|}\Big{|}\mathcal{N}_{i} \bigcap\Big{\{}j:c(i)\neq c(j)\Big{\}}\Big{|}\right), \tag{13}\] _for \(\xi_{n}=(\lambda\bar{\phi}_{n}^{2}/\bar{\psi})^{-1}\)._ Corollary 1 shows that Equation (13) with \(\xi_{n}=(\lambda\bar{\phi}_{n}^{2}/\bar{\psi})^{-1}\), as a function of the absolute instead of squared bias, is a surrogate (upper bound) loss of the objective function. To gain further intuition, note that we can interpret the objective in Equation (12) as minimizing the worst-case variance under _constraints_ on the squared worst-case bias (whose dual depends on the multiplier \(\lambda\)). Whenever we are interested in the _frontier_ that trades off the bias and variance over different values of \(\xi_{n}\), Equations (12) and (13) have the same dual representation for (different) values of the multipliers \(\xi_{n}\) in each equation. Corollary 1 shows that we can choose the same \(\xi_{n}\) to obtain an upper bound on the original objective. Following Corollary 1, we optimize over \((K_{n},c(\cdot))\) as a function of a constant \(\xi_{n}\), \[\frac{\xi_{n}}{n^{2}}\sum_{k=1}^{K_{n}}n_{k}^{2}+\frac{1}{n}\sum_{i=1}^{n}\frac{1 }{|\mathcal{N}_{i}|}\Big{|}j\in\mathcal{N}_{i}:c(i)\neq c(j)\Big{|}. \tag{14}\] Equation (14) solves a min-cut problem with an additional _penalization_ term that depends on the variance of the clusters' size. The optimization problem in Equation (14) is similar in spirit to, though different from, the minimum normalized-cut problem (e.g., Ling and Strohmer, 2020; Shi and Malik, 2000), whose objective is \(\sum_{k=1}^{K}\sum_{i:c(i)=k}|j\in\mathcal{N}_{i}:c(i)\neq c(j)|/\sum_{i:c(i)= k}|\mathcal{N}_{i}|.\) Different from our proposal, standard min-cut problems use a different denominator (since the objective is not motivated by the bias of treatment effect estimators), and do not account for the additional variance component. Here, researchers should balance the bias and variance in the choice of the clusters. To the best of our knowledge, the problem in Equation (14) has not been studied before. In the following lines we exploit Corollary 1 to find the clustering that minimizes Equation (14) over a large space of clusters. Let \(\mathbf{V}=\mathrm{diag}(\mathbf{A}\mathbf{1}_{n})=\mathrm{diag}(|\mathcal{N} _{1}(1)|,\ldots,|\mathcal{N}_{n}(1)|)\). Define the left-normalized Laplacian \[\mathbf{L}=\mathbf{V}^{-1}\mathbf{A}.\] Given a set of \(K\) clusters (i.e., fixing the number of clusters), for any cluster mapping \(c:\{1,\cdots,n\}\mapsto\{1,\cdots,K\}\), let \(\mathbf{M}_{c}(K)\in\mathbb{R}^{n\times K}\) with \[\mathbf{M}_{c,ik}(K)=1\{c(i)=k\},\quad i\in\{1,\cdots,n\},k\in\{1,\cdots,K\},\] where \(\mathbf{M}_{c}(K)\) is a function of the number of clusters \(K\). The entries \(\mathbf{M}_{c,ik}(K)\) indicate whether unit \(i\) belongs to cluster \(k\). **Theorem 4.2**.: _Let_ \[c^{\star}\in\arg\max_{c:\{1,\cdots,n\}\mapsto\{1,\cdots,K\}}\mathrm{tr}\Big{(} (n\mathbf{L}-\xi_{n}\mathbf{1}_{n}\mathbf{1}_{n}^{T})\mathbf{M}_{c}(K) \mathbf{M}_{c}^{T}(K)\Big{)},\] _where \(\mathrm{tr}(\cdot)\) denotes the trace operator. Then \(c^{\star}\) minimizes Equation (14)._ Proof of Theorem 4.2.: See Appendix C.8. Theorem 4.2 formalizes the optimization problem as a trace-optimization program, for a fixed number of clusters \(K\).
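To make the trace formulation concrete, the following is a minimal sketch (ours, not the authors' code) that evaluates the penalized min-cut objective (14) and the trace objective of Theorem 4.2 for a hard assignment, assuming \(\alpha_{i}=1\) and that every unit has at least one neighbor. The two agree through the identity \(\mathrm{tr}\big{(}(n\mathbf{L}-\xi_{n}\mathbf{1}_{n}\mathbf{1}_{n}^{T})\mathbf{M}_{c}\mathbf{M}_{c}^{T}\big{)}=n^{2}(1-\text{objective})\), so maximizing the trace minimizes (14).

```python
import numpy as np

def objective_14(A, labels, xi):
    """Penalized min-cut objective (14): variance penalty plus absolute bias."""
    n = A.shape[0]
    deg = A.sum(axis=1)                      # |N_i|; assumes deg > 0 for all i
    same = (labels[:, None] == labels[None, :]).astype(float)
    cut_i = (A * (1.0 - same)).sum(axis=1)   # neighbors in a different cluster
    bias = (cut_i / deg).mean()
    _, counts = np.unique(labels, return_counts=True)
    return xi * (counts.astype(float) ** 2).sum() / n**2 + bias

def trace_objective(A, labels, xi):
    """tr((n L - xi 11^T) M_c M_c^T) with L = V^{-1} A, as in Theorem 4.2."""
    n = A.shape[0]
    L = A / A.sum(axis=1, keepdims=True)
    M = (labels[:, None] == np.unique(labels)[None, :]).astype(float)
    return np.trace((n * L - xi * np.ones((n, n))) @ M @ M.T)

# Toy check: two triangles joined by one edge, split into their natural clusters.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels, xi = np.array([0, 0, 0, 1, 1, 1]), 1.0
assert np.isclose(trace_objective(A, labels, xi),
                  6**2 * (1 - objective_14(A, labels, xi)))
```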
Theorem 4.2 does not characterize a convex optimization program, but it provides a natural starting point to study convex relaxations of the proposed optimization problem. To obtain a convex relaxation, we relax the constraint on the matrix \(\mathbf{X}(K)=\mathbf{M}_{c}(K)\mathbf{M}_{c}^{T}(K)\). We propose solving the following semi-definite programming (SDP) problem (for a given number of clusters \(K\)): \[\max_{\mathbf{X}(K)}\operatorname{tr}(\mathbf{L}_{\xi_{n}}\mathbf{X}(K)),\quad \text{s.t.}\quad\operatorname{diag}(\mathbf{X}(K))=\mathbf{1}_{n},\ \mathbf{X}(K)\succeq 0,\quad\mathbf{L}_{\xi_{n}}=n \mathbf{L}-\xi_{n}\mathbf{1}_{n}\mathbf{1}_{n}^{T}, \tag{15}\] where Equation (15) defines a sequence of semi-definite optimization programs, each indexed by the number of clusters \(K\). The main distinction between Equation (14) and Equation (15) is that the matrix \(\mathbf{X}(K)\) must be positive semi-definite, but its entries need not be binary. Let \(\hat{\mathbf{X}}(K)\) be the solution of (15) for a given \(K\), which can be obtained using off-the-shelf optimization routines. We then apply the \(K\)-means algorithm (Bradley et al., 2000) on the first \(K\) eigenvectors of \(\hat{\mathbf{X}}(K)\) to retrieve the mapping \(c\).12 Finally, we compare solutions for different values of \(K\) and choose the clustering with the largest objective. Equation (15) is a convex relaxation of the problem in Theorem 4.2 because it replaces the original constraint that \(\mathbf{X}\) contain binary entries with a semi-definite constraint, and then retrieves the clusters via \(K\)-means clustering on the matrix \(\mathbf{X}(K)\) directly. Such convex relaxations are common in the clustering literature and have been widely studied from both theoretical and numerical perspectives; see Hong et al. (2021) for a review. Footnote 12: \(K\)-means on the \(K\) largest eigenvectors of \(\hat{\mathbf{X}}(K)\) is a well-studied problem in the literature on spectral clustering algorithms (Von Luxburg, 2007). In summary, the complete algorithm (Algorithm 1) solves a sequence of semi-definite trace-optimization problems, each indexed by a different value of \(K\), and reports the clustering (and corresponding number of clusters) that leads to the largest objective. **Remark 6** (Spectral relaxation).: Unlike the minimum normalized cut, there is no simple _spectral_ relaxation of the optimization problem in Theorem 4.2, unless all clusters are equally sized. For the special case where all clusters are equal-sized, (15) can be relaxed to \[\max_{K,\mathbf{U}}\operatorname{tr}(\mathbf{L}_{\xi_{n}}\mathbf{U}(K)\mathbf{ U}^{T}(K)),\quad\text{s.t.}\ \mathbf{U}^{T}(K)\mathbf{U}(K)=\mathbf{I}_{K},\] where \(\mathbf{I}_{K}\) denotes the identity matrix of dimension \(K\). We first symmetrize the objective function via \[\operatorname{tr}(\mathbf{L}_{\xi_{n}}\mathbf{U}(K)\mathbf{U}^{T}(K))= \operatorname{tr}\left(\frac{\mathbf{L}_{\xi_{n}}+\mathbf{L}_{\xi_{n}}^{T}}{2 }\mathbf{U}\mathbf{U}^{T}\right).\] The solution to the above problem, \(\hat{\mathbf{U}}(K)\in\mathbb{R}^{n\times K}\), is given by the matrix of top-\(K\) eigenvectors of \((\mathbf{L}_{\xi_{n}}+\mathbf{L}_{\xi_{n}}^{T})/2\). Then we can run the constrained \(K\)-means algorithm (Bradley et al., 2000) with equal-sized clusters to recover the clusters.
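A minimal sketch of the relaxation-and-rounding step for a fixed \(K\) is given below; the solver (cvxpy with its default SDP backend) and the \(K\)-means implementation (scikit-learn) are our choices, not prescribed by the text, which only specifies the program (15) and \(K\)-means on the top-\(K\) eigenvectors.

```python
import numpy as np
import cvxpy as cp
from sklearn.cluster import KMeans

def causal_clustering_sdp(A, xi, K):
    """Solve the semi-definite relaxation (15) for fixed K, then round by
    K-means on the top-K eigenvectors of the solved X(K)."""
    n = A.shape[0]
    L = A / A.sum(axis=1, keepdims=True)       # left-normalized Laplacian V^{-1} A
    L_xi = n * L - xi * np.ones((n, n))
    X = cp.Variable((n, n), PSD=True)          # relaxes X = M_c M_c^T
    cp.Problem(cp.Maximize(cp.trace(L_xi @ X)), [cp.diag(X) == 1]).solve()
    w, V = np.linalg.eigh(X.value)
    top = V[:, np.argsort(w)[-K:]]             # top-K eigenvectors
    return KMeans(n_clusters=K, n_init=10).fit_predict(top)

# Outer loop of Algorithm 1: scan K and keep the assignment with the largest
# trace objective (trace_objective from the sketch above).
# best = max((causal_clustering_sdp(A, xi, K) for K in range(2, 11)),
#            key=lambda lab: trace_objective(A, lab, xi))
```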
### Worst-case mean squared error

We conclude this section by extending our results (Theorem 4.2) to show (i) how to allow for heterogeneity in covariances between individuals and (ii) that the worst-case mean-squared error coincides with the sum of the worst-case bias and variance. Before doing so, we need to introduce some additional notation. Without loss of generality, for unit \(i\), we decompose the potential outcome function \(\mu_{i}(\mathbf{d}_{i},\mathbf{d}_{-i})\) into four components \[\mu_{i}(\mathbf{d}_{i},\mathbf{d}_{-i})=\begin{cases}\mu_{i}(\mathbf{1})-\mu_{ 1i}(\mathbf{d}_{-i})&\mathbf{d}_{i}=1\\ \mu_{0i}(\mathbf{d}_{-i})+\mu_{i}(\mathbf{0})&\mathbf{d}_{i}=0\end{cases}, \tag{16}\] where \[\mu_{1i}(\mathbf{d}_{-i})=\mu_{i}(\mathbf{1})-\mu_{i}(1,\mathbf{d}_{-i}), \quad\mu_{0i}(\mathbf{d}_{-i})=\mu_{i}(0,\mathbf{d}_{-i})-\mu_{i}(\mathbf{0}).\] We define, for given \(\mathcal{M}_{1i},\mathcal{M}_{0i},\mathcal{U}_{i}\), \[\mathcal{M}_{i}=\bigg{\{}\mu_{i}:\{0,1\}^{n}\mapsto\mathbb{R}\text{ with }\mu_{i}(\mathbf{d}_{i},\mathbf{d}_{-i})\text{ satisfying Equation (16), with }\mu_{1i}\in\mathcal{M}_{1i},\ \mu_{0i}\in\mathcal{M}_{0i},\ (\mu_{i}(\mathbf{1}),\mu_{i}(\mathbf{0}))\in\mathcal{U}_{i}\bigg{\}}.\] For the other two spaces, let \[\mathcal{M}_{1i}=\left\{h:\{0,1\}^{n-1}\mapsto\mathbb{R}\text{ such that }|h(d_{1},\ldots,d_{n-1})|\leq\bar{\phi}_{n}\frac{\alpha_{i}}{|\mathcal{N}_{i}|} \sum_{k\in\mathcal{N}_{i}}\left(1-\mathbf{d}_{k}\right)\right\}, \tag{20}\] and \[\mathcal{M}_{0i}=\left\{h:\{0,1\}^{n-1}\mapsto\mathbb{R}\text{ such that }|h(d_{1},\ldots,d_{n-1})|\leq\bar{\phi}_{n}\frac{\alpha_{i}}{|\mathcal{N}_{i}|} \sum_{k\in\mathcal{N}_{i}}\mathbf{d}_{k}\right\}. \tag{21}\] We normalize \(\alpha_{i}\) such that \[\max_{i}\alpha_{i}=1. \tag{22}\] Note that \(\mathcal{U}_{i}\) and \(\alpha_{i}\) are allowed to vary with \(i\) to reflect heterogeneous prior information. In the following theorem we provide an exact characterization of the objective function defined as the worst-case _mean squared error_ under heterogeneity. **Theorem 4.3**.: _Assume that either \(\psi_{i}^{+}\geq\psi_{i}^{-}\) for all \(i\) or \(\psi_{i}^{+}\leq\psi_{i}^{-}\) for all \(i\), and \(\underline{\psi}^{1/2}\leq\psi_{i}\leq\bar{\psi}^{1/2}\) for all \(i\) for some \(0<\underline{\psi}<\bar{\psi}<\infty\). Let Assumptions 1, 4, 5 hold. Then as \(K_{n}\to\infty\)_ \[\sup_{\mu\in\mathcal{M}^{*}}\mathbb{E}_{\mu}\Big{[}\Big{(}\hat{\tau}_{n}^{d}( \mathcal{C}_{\mathrm{C},n})-\tau_{\mu}\Big{)}^{2}\Big{]}=\mathcal{B}_{n}^{*}( \mathcal{C}_{\mathrm{C},n},\lambda=1)\cdot(1+o(1)),\] _where_ \[\mathcal{B}_{n}^{*}(\mathcal{C}_{\mathrm{C},n},\lambda=1)=\sum_{k=1}^{K_{n}} \frac{n_{k}^{2}}{n^{2}}\left(\frac{1}{n_{k}}\sum_{i\in c_{k}}\psi_{i}\right)^{ 2}+\bar{\phi}_{n}^{2}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\alpha_{i}}{| \mathcal{N}_{i}|}\Big{|}\mathcal{N}_{i}\bigcap\left\{j:c(i)\neq c(j)\right\} \Big{|}\right)^{2}.\] Proof.: See Appendix C.9. The following corollary shows when the worst-case mean-squared error and the sum of the worst-case bias and variance coincide and when (11) holds with equality. The latter is important because it justifies the objective function (12) as an ex-ante assessment of the worst-case mean-squared error for a given clustering. **Corollary 2**.: _Let the conditions in Theorem 4.3 hold and \(\psi_{i}=\bar{\psi}\) for all \(i\). Then \(\mathcal{B}_{n}(\mathcal{C}_{\mathrm{C},n},\lambda=1)=\mathcal{B}_{n}^{*}( \mathcal{C}_{\mathrm{C},n},\lambda=1)[1+o(1)]\) as \(K_{n}\to\infty\).
Furthermore, if \(\alpha_{i}=1\) for all \(i\), then (11) holds with equality with \(\lambda=1\)._ The surrogate objective function under full heterogeneity is given by \[\frac{1}{n}\sum_{i=1}^{n}\frac{\alpha_{i}}{|\mathcal{N}_{i}|}\Big{|}\mathcal{ N}_{i}\bigcap\left\{j:c(i)\neq c(j)\right\}\Big{|}+\frac{\xi_{n}}{n^{2}}\sum_{k=1}^{K_{ n}}\left(\sum_{i\in c_{k}}\phi_{i}\right)^{2}, \tag{23}\] where \(\xi_{n}=\frac{1}{\bar{\phi}_{n}^{2}}\). **Theorem 4.4**.: _Let \(\alpha=(\alpha_{1},\ldots,\alpha_{n})^{T}\), \(\phi=(\phi_{1},\ldots,\phi_{n})^{T}\), and_ \[c^{\star}\in\arg\max_{c:\{1,\cdots,n\}\mapsto\{1,\cdots,K\}}\mathrm{tr}\Big{(}( n\mathrm{diag}(\alpha)\mathbf{V}^{-1}\mathbf{A}-\xi_{n}\phi\phi^{T})\mathbf{M}_{c} (K)\mathbf{M}_{c}^{T}(K)\Big{)},\] _where \(\mathrm{tr}(\cdot)\) denotes the trace operator. Then \(c^{\star}\) minimizes Equation (23)._

## 5 Empirical illustration and numerical studies

In this section we illustrate the properties of the procedure in two empirical applications, one using unique data from the Facebook friendship and messaging graphs, and one using data from an experiment conducted in rural China by Cai et al. (2015). We provide further evidence supporting our theoretical analysis using simulated networks in Section 5.3. ### Clustering on Facebook graphs We first study the procedure's properties for two clustering algorithms implemented at scale on social networks owned by Meta: the Louvain algorithm (Blondel et al., 2008) and Balanced Partitioning (Kabiljo et al., 2017). We consider two graphs owned by Meta. In each graph, edges are continuous variables. In the first graph, edges capture the strength of the friendship in the Facebook graph, and in the second graph they capture connections based on messaging on Facebook; in both cases the data were aggregated and de-identified. We compute our statistics by setting an edge to zero if its weight is below the \(5^{th}\), \(10^{th}\), or \(50^{th}\) percentile. We refer to the graphs obtained after thresholding as dense, moderate (mod), and sparse (all graphs have a bounded maximum degree). We report the bias and variance, with the variance weighted by a parameter \(\xi_{n}\), defined respectively as \[\frac{1}{n}\sum_{i=1}^{n}\frac{\big{|}\{k\in\mathcal{N}_{i}:c(k)\neq c(i)\}\big{|}}{|\mathcal{N}_{i}|},\quad\frac{\xi_{n}}{n^{2}}\sum_{k=1}^{K_{n}}n_{k}^{2}.\] Louvain and Balanced Partitioning produce a hierarchical clustering structure with a growing number of clusters. We consider three "types" of such algorithms, corresponding to three levels in the clustering structure hierarchy (i.e., different numbers of clusters), defined as Type 1, 2, and 3, respectively. For Balanced Partitioning, higher-order types denote more clusters, whereas it is the opposite for Louvain clustering. For each clustering, we report \(\log(n/K_{n})\), the log-ratio of the population size over the number of clusters. Table 2 collects each of these algorithms' worst-case bias and variance and this log-ratio. The Louvain algorithm dominates Balanced Partitioning in both worst-case variance and bias. A larger number of clusters increases the bias but decreases the variance, which we observe throughout all graphs. The bias increases for sparser networks because the denominator \(|\mathcal{N}_{i}|\) decreases. Figure 3 reports the objective function for different graphs, Louvain algorithms, and degrees of sparsity.
For \(\xi_{n}\approx 1\), the Louvain algorithm with the smallest number of clusters (Type 3) dominates the other two algorithms in all but one case. This result provides some interpretable comparisons for the number of clusters. We also observe a trade-off between the number of clusters and the degree of sparsity of the adjacency matrix as \(\xi_{n}\) decreases. Louvain clustering with the largest number of clusters (Type 1) is optimal for a dense graph with \(\xi_{n}\gg 1\). Such clustering, however, is sub-optimal as the graph becomes more sparse. This result is intuitive: for denser networks, a larger number of clusters may best control the estimator's bias relative to the bias induced by other clustering algorithms. These comparisons motivate trade-offs in the choice of the graph (and its density) and the clustering algorithm. We recommend practitioners compare clusterings by averaging objective functions over different sparsity thresholds. For example, it is possible to compare clusterings based on the average objective across different graphs using some priors on the degree of sparsity without affecting our theoretical results. Finally, note that using Theorem 3.6 it is possible to compute the _smallest_ values of the spillover effects that would motivate using a cluster instead of a Bernoulli design. From Theorem 3.6, researchers should prefer a cluster over a Bernoulli design if \[\xi_{n}\leq\bar{\xi}_{n}:=\left(1-\Big{(}\frac{1}{n}\sum_{i=1}^{n}\frac{\big{|}\{k\in\mathcal{N}_{i}:c(k)\neq c(i)\}\big{|}}{|\mathcal{N}_{i}|}\Big{)}^{2}\right)\Big{/}\Big{(}\frac{1}{n^{2}}\sum_{k=1}^{K_{n}}n_{k}^{2}\Big{)}.\] Because \(\xi_{n}=\bar{\psi}/\bar{\phi}_{n}^{2}\) for \(\lambda=1\), researchers should run a cluster design if \[\bar{\phi}_{n}^{2}\geq\bar{\psi}/\bar{\xi}_{n}.\] For example, for a Louvain clustering of Type 1 and the Friendship graph, \(\bar{\xi}_{n}=873\), which implies that the researcher should run a cluster design if spillovers satisfy \(\bar{\phi}_{n}^{2}\geq\bar{\psi}/873\). This comparison suggests that a cluster design with a Louvain clustering may be preferred over a Bernoulli design over a wide range of values of spillover effects. In summary, our results shed light on using clustering algorithms for large-scale implementation on online platforms. These results suggest that Louvain clustering with possibly many clusters performs best in practice. This confirms intuitive arguments in Karrer et al. (2021), who also recommend Louvain clustering based on AA-tests. ### Clustering in the field We consider as an application the problem of informing individuals to increase insurance take-up studied in Cai et al. (2015). The authors collected network information in approximately 184 villages in rural China from 48 larger regions. We use network data collected by Cai et al. (2015) to study the properties of the proposed method, where we assume that two individuals are connected if at least one of the two indicates the other as a friend. Because we do not require information from end-line outcomes, we construct a network using information from surveyed individuals as well as their friends.
The network has in total 7649 nodes, once we also include individuals who are friends of surveyed individuals (but who do not necessarily live in the same village or were surveyed in the experiment). \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Friendship} & \multicolumn{3}{c}{Type 1} & \multicolumn{3}{c}{Type 2} & \multicolumn{3}{c}{Type 3} \\ \cline{2-10} & dense & mod & sparse & dense & mod & sparse & dense & mod & sparse \\ \hline \hline \multicolumn{10}{|c|}{Balanced Partitioning Algorithm} \\ \cline{2-10} 100 Bias & 38.41 & 38.54 & 39.95 & 55.21 & 55.17 & 55.89 & 69.00 & 68.92 & 69.37 \\ 10000 Variance & 9.76 & 9.76 & 9.85 & 0.30 & 0.30 & 0.30 & 0.01 & 0.01 & 0.01 \\ \cline{2-10} \(\log(n/K_{n})\) & 14.41 & 14.37 & 14.17 & 10.95 & 10.91 & 10.70 & 7.48 & 7.44 & 7.24 \\ \hline \hline \multicolumn{10}{|c|}{Louvain Algorithm} \\ \cline{2-10} 100 Bias & 0.17 & 3.64 & 16.55 & 0.01 & 2.34 & 12.43 & 0.01 & 2.29 & 12.17 \\ 10000 Variance & 0.03 & 0.03 & 0.03 & 0.05 & 0.05 & 0.06 & 0.06 & 0.06 \\ \cline{2-10} \(\log(n/K_{n})\) & 2.30 & 2.28 & 2.16 & 4.51 & 4.56 & 4.59 & 5.32 & 5.50 & 6.05 \\ \hline \hline \multirow{2}{*}{Messaging} & \multicolumn{3}{c}{Type 1} & \multicolumn{3}{c}{Type 2} & \multicolumn{3}{c}{Type 3} \\ \cline{2-10} & dense & mod & sparse & dense & mod & sparse & dense & mod & sparse \\ \hline \hline \multicolumn{10}{|c|}{Balanced Partitioning Algorithm} \\ \cline{2-10} 100 Bias & 17.26 & 18.10 & 22.57 & 26.94 & 27.62 & 31.00 & 37.06 & 37.46 & 39.06 \\ 10000 Variance & 97.65 & 97.65 & 97.71 & 3.05 & 3.05 & 3.05 & 0.09 & 0.09 & 0.09 \\ \cline{2-10} \(\log(n/K_{n})\) & 16.04 & 16.02 & 15.76 & 12.57 & 12.55 & 12.30 & 9.10 & 9.09 & 8.83 \\ \hline \multicolumn{10}{|c|}{Louvain Algorithm} \\ \cline{2-10} 100 Bias & 1.83 & 3.34 & 11.24 & 1.48 & 2.76 & 9.45 & 1.18 & 2.18 & 7.12 \\ 10000 Variance & 0.02 & 0.02 & 0.02 & 0.05 & 0.05 & 0.05 & 0.44 & 0.44 & 0.45 \\ \cline{2-10} \(\log(n/K_{n})\) & 3.50 & 3.49 & 3.50 & 6.05 & 6.09 & 6.19 & 6.45 & 6.52 & 6.88 \\ \hline \hline \end{tabular} \end{table} Table 2: Worst-case bias and variance for Balanced Partitioning and Louvain clusterings, and for two different graphs owned by Meta. Different types correspond to algorithms with an increasing number of clusters for Balanced Partitioning and a decreasing number of clusters for Louvain. Individuals have on average 50% of their connections _outside_ their own village. On the other hand, individuals have 99% of their connections within their own region (there are approximately 50 regions). We use a "weak ties" adjacency matrix, where two individuals are connected if either indicates the other as a friend. We study clustering within each region and report results for different clustering algorithms. Clustering within each region is performed as in Algorithm 1, where we first estimate \(\hat{\mathbf{X}}(K)\) via semi-definite programming, use the \(K\)-means algorithm to retrieve the clusters from \(\hat{\mathbf{X}}(K)\), and iterate to estimate the optimal number of clusters \(K\). We report the frontier with respect to the absolute bias, consistently with Algorithm 1. Figure 3: Cluster comparisons for Louvain clustering. Different Types correspond to different numbers of clusters (with Type 1 having the largest number of clusters).
Different panels correspond to different graphs where two individuals are not connected if the connection (measured with a continuous variable) is below the \(5^{th},10^{th},50^{th}\) percentile (dense, moderate, and sparse graph). The two graphs in the panels are Facebook friendship and Facebook messaging. #### 5.2.1 How many clusters should we choose? Figure 4 presents the main results. The left-hand side panel reports the average number of clusters estimated by the proposed method, divided by the population size in a given region. Different choices of \(\xi_{n}\) justify a different number of clusters. For \(\xi_{n}=1\), the number of clusters is 7% of the total population size, whereas for \(\xi_{n}=15\), the number of clusters is roughly half the population size. To gain further intuition about its practical implications, take \(\bar{\psi}=0.24\), approximately equal to the outcomes' variance in Cai et al. (2015), and \(\bar{\phi}_{n}=0.27\) as in Table 2 in Cai et al. (2015). A recommended choice of \(\xi_{n}\) is approximately \(\bar{\psi}/\bar{\phi}_{n}^{2}=3.29\). This choice puts slightly more weight on the bias than the variance when \(\lambda=1\), and therefore can be considered a conservative choice for the number of clusters.13 Even with such a conservative choice, the suggested number of clusters is large relative to the population size. In this case, the optimal number of clusters is around 15% of the overall population size, approximately 600 clusters with a surveyed sample size of 3600 individuals. This number is consistent in scale with the number of clusters chosen in other development applications, such as Egger et al. (2022) and Alatas et al. (2012), but larger than the number of villages in the application in Cai et al. (2015). Figure 4: Example of the number of clusters as a function of \(\xi_{n}\) (left panel, with the dotted line corresponding to 3.2) and the objective function in Theorem 4.2 for different clusterings. Algorithm 1 corresponds to causal clustering. Data from Cai et al. (2015), where we report the average result across 47 regions in the dataset. #### 5.2.2 Comparison with other clustering methods Next, we compare the procedure (denoted as "Causal Clustering") with the following alternative clustering algorithms: \(\varepsilon\)-net clustering as in Eckles et al. (2017), with \(\varepsilon=3\) as suggested there; spectral clustering with a fixed number of clusters equal to \(n/3\), where \(n\) is the population size in a given region (note that spectral clustering does not optimize over the number of clusters); Louvain clustering with default parameters selected by the R package igraph; and clustering based on the village identity of each individual (creating one additional cluster for those individuals whose village is missing in the data). We contrast the objective function with the alternative algorithms in the right-hand panel of Figure 4. The proposed method achieves the smallest objective function across all competitors, providing suggestive evidence that Algorithm 1 (with a semi-definite relaxation) works particularly well in practice. For a smaller number of clusters the method is closer to the Louvain clustering algorithm, while for a larger number of clusters the objective of the algorithm is close to the objective of the spectral algorithm. Interestingly, clustering based on village identity leads to the largest loss. This is because individuals are connected both within and between villages in this application.
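For readers who wish to replicate this kind of comparison on their own data, a minimal sketch follows; it scores candidate clusterings with `objective_14` from the earlier sketch and uses the Louvain and spectral implementations in networkx and scikit-learn as illustrative stand-ins (the \(\varepsilon\)-net and village-based clusterings require the survey data and are omitted, and the karate-club graph is a placeholder for one region's network).

```python
import numpy as np
import networkx as nx
from sklearn.cluster import SpectralClustering

def labels_from_communities(comms, n):
    """Convert a list of community sets into a label array of length n."""
    labels = np.empty(n, dtype=int)
    for k, comm in enumerate(comms):
        for i in comm:
            labels[i] = k
    return labels

G = nx.karate_club_graph()                    # stand-in for one region's network
A = nx.to_numpy_array(G)
n, xi = A.shape[0], 3.29                      # xi calibrated as in the text

louvain = labels_from_communities(
    nx.algorithms.community.louvain_communities(G, seed=0), n)
spectral = SpectralClustering(n_clusters=n // 3, affinity="precomputed",
                              random_state=0).fit_predict(A)
for name, lab in [("louvain", louvain), ("spectral", spectral)]:
    print(name, objective_14(A, lab, xi))
```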
Figure 5: Mean-squared error (in logs) simulated by calibrating the model to data from Cai et al. (2015), averaged over 47 regions. The first three plots vary the variance of the residuals in the outcome model, \(\sigma^{2}\in\{1/4,1/2,1\}\), calibrating the remaining parameters to the model in Cai et al. (2015), Table 2, Column 4, where outcomes are functions of neighbors' treatments. The last plot calibrates the simulations to settings where the outcomes are functions of the neighbors' _outcomes_, violating Assumption 1, with \(\sigma^{2}=1/4\). Finally, in Figure 5 (first three panels), we report the mean-squared error obtained by simulating the model in Cai et al. (2015) (Table 2, Column 4), varying the variance of the residuals \(\sigma^{2}\in\{1/4,1/2,1\}\) to emulate settings with high, medium, and low signal-to-noise ratio. The model in Cai et al. (2015) assumes that individual outcomes depend on neighbors' treatments, consistently with our Assumption 1. Our method uniformly outperforms competitors in terms of mean-squared error, with the exception of the Louvain algorithm when the signal-to-noise ratio is particularly high (but not in the remaining settings). In the last panel, we consider settings where the local interference assumption is violated, and we calibrate our model to the specification in Cai et al. (2015) (Column 4, Table 5), where the individual outcome depends on the neighbors' outcomes, violating Assumption 1 and introducing global dependence. We fix the residuals' variance to \(\sigma^{2}=1/4\); the results are robust as we increase \(\sigma^{2}\). In this setup, we observe a larger mean-squared error for all methods, with the proposed method achieving the lowest mean-squared error. ### A numerical study Next, we illustrate the properties of the method in numerical studies. We consider three different data generating processes for the network formation: a geometric network, an Albert-Barabasi network, and an Erdos-Renyi graph. The geometric network takes the form \(A_{i,j}=1\{|X_{i,1}-X_{j,1}|/2+|X_{i,2}-X_{j,2}|/2\leq r_{n}\}\), where \(r_{n}=\sqrt{4/(2.75n)}\), similarly to the simulations in Leung (2020). Here, \(X_{i,1},X_{i,2}\) are drawn independently from a uniform distribution on \([-1,1]\). For the Albert-Barabasi network we first draw \(n/5\) edges uniformly according to an Erdos-Renyi graph with probability \(p=10/n\), and then sequentially draw connections of the new nodes to the existing ones, with probability equal to the number of connections of each pre-existing node divided by the overall number of connections. Finally, the Erdos-Renyi graph is generated with probability of connection \(p=2/n\). We study the properties of the proposed method and of the same competitors discussed in Section 5.2. Figure 6 shows that the proposed procedure leads to the lowest objective function. This result is consistent with our theoretical findings. Figure 7 shows that the number of clusters increases in the sample size, varies substantially across different data generating processes, and increases in \(\xi_{n}\). Figure 6: Objective function in Theorem 4.2 (in log scale) as a function of \(\xi\) over 100 replications for different network formation models. \(N\) denotes the size of the network. The proposed clustering method is labeled "minimax". Figure 7: Average number of clusters as a function of \(\xi_{n}\) over 100 replications and three different network formation models.
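The three network-formation designs can be reproduced with a short sketch; the Albert-Barabasi generator below uses networkx's standard preferential-attachment routine as a stand-in for the two-step construction described in the text, so it is an approximation rather than an exact replication.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def geometric_network(n):
    """A_ij = 1{ |X_i1 - X_j1|/2 + |X_i2 - X_j2|/2 <= r_n }, r_n = sqrt(4/(2.75 n))."""
    X = rng.uniform(-1, 1, size=(n, 2))
    D = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2) / 2
    A = (D <= np.sqrt(4 / (2.75 * n))).astype(float)
    np.fill_diagonal(A, 0)
    return A

def albert_barabasi_network(n):
    # networkx's generator as a stand-in for the sequential scheme in the text.
    return nx.to_numpy_array(nx.barabasi_albert_graph(n, m=2, seed=0))

def erdos_renyi_network(n):
    # p = 2/n; isolated nodes can occur and must be handled before clustering,
    # since the objective divides by |N_i|.
    return nx.to_numpy_array(nx.erdos_renyi_graph(n, p=2 / n, seed=0))
```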
## 6 Recommendations for practice

Our algorithm depends on the choice of the network and the size of spillover effects. We conclude with a summary of our method and explicit recommendations for practice. **Choosing the adjacency matrix.** The network choice must depend on researchers' prior knowledge of which dimension spillovers propagate over. When researchers only have some prior distribution over which network matters the most, our framework directly extends to these settings, where the objective must average over the distribution of the network. In this case researchers can compute the _expected_ bias and _expected_ variance over multiple networks by first computing the bias and variance for each network separately as we discuss in Section 3, and then taking a weighted average with some pre-specified weights (e.g., equal weights). **Choosing the range of magnitude for the spillover effects.** Our method also depends on the choice of \(\xi_{n}=(\bar{\phi}_{n}^{2}/\bar{\psi})^{-1}\) (the size of spillover effects \(\bar{\phi}_{n}\) relative to the outcomes' largest squared deviation \(\bar{\psi}\)). Given \(\bar{\phi}_{n}\), our **recommended choice for a given spillover effects size**\(\bar{\phi}_{n}\in\mathcal{S}_{n}\) is to consider values for \(\xi_{n}\) within a neighborhood around \(c\bar{\sigma}^{2}/\bar{\phi}_{n}^{2}\), \(\bar{\phi}_{n}\in\mathcal{S}_{n}\), \(c\in[1,4]\), where \(\mathcal{S}_{n}\) defines the range of values that spillover effects might take, \(\bar{\sigma}^{2}\) is the baseline variance of the residuals after removing the covariate adjustment, and \(c\) is a constant. Algorithm 2 presents a description of the choice of the tuning parameter.14 In our empirical applications, we observe that certain clustering algorithms uniformly outperform others for a large range of values of \(\xi_{n}\), suggesting that studying clustering over ranges of values of \(\xi_{n}\) can be informative. Footnote 14: To gain further intuition on this choice, let \(\lambda=1\). Let \(\bar{\mu}_{i}\) be a prediction for \(\mu_{i}(\mathbf{0})\) as in Remark 3 and consider an estimator as in Equation (4). When using regression adjusted estimators, we have \(\bar{\psi}\leq\sup_{i,\mu_{i}}\tau_{i}^{2}+4\Big{(}\mu_{i}(\mathbf{0})-\bar{ \mu}_{i}\Big{)}^{2}\), where \(\tau_{i}=\mu_{i}(\mathbf{1})-\mu_{i}(\mathbf{0})\). We bound \(\bar{\psi}\leq 4\bar{\sigma}^{2}+\bar{\tau}^{2}\), with \(\bar{\sigma}^{2}=\max_{i}\Big{(}\mu_{i}(\mathbf{0})-\bar{\mu}_{i}\Big{)}^{2},\bar{\tau}^{2}=\max_{i}\tau_{i}^{2}\). As a (rough) approximation to this bound, we take \(\bar{\sigma}^{2}\) to be the residual variance of baseline outcomes after the regression adjustment, and \(\bar{\tau}^{2}\approx 0\) for small global effects relative to the outcomes' variance (as in our local asymptotics framework).
```
Input: Set of values of spillover effects \(\bar{\phi}_{n}\in\mathcal{S}_{n}\)
1: Estimate \(\bar{\mu}_{i}\) by regressing baseline outcomes on arbitrary covariates;
2: Estimate \(\bar{\sigma}^{2}\), the variance of the residuals from this regression;
3: Study optimal clustering over the range of values \(\xi_{n}\in[\bar{\sigma}^{2}/\bar{\phi}_{n}^{2},4\bar{\sigma}^{2}/\bar{\phi}_{n}^{2}]\), \(\bar{\phi}_{n}\in\mathcal{S}_{n}\)
```
**Algorithm 2** Practical choice of the range of the tuning parameter \(\xi_{n}\) The choice of \(\bar{\phi}_{n}\) can be based on spillover effects observed in previous experiments, in the spirit of minimum detectable effects [Baird et al., 2018].
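As a concrete illustration, a minimal sketch of Algorithm 2 follows, assuming baseline outcomes and covariates are available as numpy arrays (the covariate set and the grid \(\mathcal{S}_{n}\) are the user's choice).

```python
import numpy as np

def xi_range(y_baseline, covariates, phi_grid):
    """Algorithm 2: regress baseline outcomes on covariates and use the residual
    variance to bracket xi_n in [sigma^2/phi^2, 4 sigma^2/phi^2] for each phi."""
    X = np.column_stack([np.ones(len(y_baseline)), covariates])
    beta, *_ = np.linalg.lstsq(X, y_baseline, rcond=None)
    sigma2 = np.var(y_baseline - X @ beta)
    return [(sigma2 / p**2, 4 * sigma2 / p**2) for p in phi_grid]
```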
When instead such experiments are not available, our **recommended choice when the range of values of spillover effects \(\mathcal{S}_{n}\) is unknown** is to regress the _baseline_ outcomes on neighbors' outcomes and observable covariates as in Appendix B (fixing the treatments to be equal to a baseline of zero for all units). Obtain an estimate of \(\gamma_{n}\), the coefficient of the individual outcome on the other units' outcomes, and consider a range of values \(\bar{\phi}_{n}\in\left[|\gamma_{n}|^{2},\bar{\beta}|\gamma_{n}|\right]\), where the lower bound \(\gamma_{n}^{2}\) assumes that the direct effect equals first-order spillover effects, and \(\bar{\beta}\) denotes the largest value that the direct effect may take (e.g., \(\bar{\beta}\leq 1\) for binary outcomes); a sketch of this construction is given at the end of this section. **Choosing between a cluster or Bernoulli design: an explicit rule of thumb.** Given the range of values of \(\xi_{n}\) and the choice of the network, Theorem 3.6 provides a rule of thumb to choose between a cluster or Bernoulli design. Table 1 suggests a rule of thumb \(\bar{\phi}_{n}\sqrt{K_{n}}>2.3\) for binary outcomes when \(\bar{\psi}=4\). Figure 2 provides a wider and more specific range of values of the spillover effects that motivate running a cluster experiment, based on the share of neighbors assigned to different clusters and the number of clusters. **Choosing the optimal clusters.** Finally, once researchers decide to run a cluster experiment, Algorithm 1 provides an explicit procedure to estimate the optimal clusters via semi-definite programming, choosing both the clustering and the number of clusters.
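Returning to the case without prior experiments, a minimal sketch of the \(\gamma_{n}\)-based construction of \(\mathcal{S}_{n}\) is given below; the use of the neighbors' average outcome as the regressor is our reading of the Appendix B specification and should be treated as an assumption.

```python
import numpy as np

def phi_range_from_baseline(y, A, covariates, beta_bar=1.0):
    """Regress baseline outcomes on the neighbors' average outcome (plus
    covariates), estimate gamma_n, and return [gamma_n^2, beta_bar*|gamma_n|]."""
    neigh_mean = (A @ y) / np.maximum(A.sum(axis=1), 1)   # guard isolated nodes
    X = np.column_stack([np.ones(len(y)), neigh_mean, covariates])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    gamma = coef[1]
    return gamma**2, beta_bar * abs(gamma)
```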
2308.08138
Data-Driven Adversarial Online Control for Unknown Linear Systems
We consider the online control problem with an unknown linear dynamical system in the presence of adversarial perturbations and adversarial convex loss functions. Although the problem is widely studied in model-based control, it remains unclear whether data-driven approaches, which bypass the system identification step, can solve the problem. In this work, we present a novel data-driven online adaptive control algorithm to address this online control problem. Our algorithm leverages the behavioral systems theory to learn a non-parametric system representation and then adopts a perturbation-based controller updated by online gradient descent. We prove that our algorithm guarantees an $\tilde{\mathcal{O}}(T^{2/3})$ regret bound with high probability, which matches the best-known regret bound for this problem. Furthermore, we extend our algorithm and performance guarantee to the cases with output feedback.
Zishun Liu, Yongxin Chen
2023-08-16T04:05:22Z
http://arxiv.org/abs/2308.08138v2
# Online Control for Linear Dynamics: A Data-Driven Approach ###### Abstract This paper considers an online control problem over a linear time-invariant system with unknown dynamics, bounded disturbance, and adversarial cost. We propose a data-driven strategy to reduce the regret of the controller. Unlike model-based methods, our algorithm does not identify the system model; instead, it leverages a single noise-free trajectory to calculate the accumulation of disturbance and makes decisions using the accumulated disturbance action controller we design, whose parameters are updated by online gradient descent. We prove that the regret of our algorithm is \(\mathcal{O}(\sqrt{T})\) under mild assumptions, suggesting that its performance is on par with model-based methods. Data-driven control, Online control ## I Introduction In recent years, the significance of online control has been growing with the flourishing of online learning. As in optimal control, in online control the learner seeks to design a control policy to solve the following problem: \[\min J=\sum_{t=0}^{T-1}c_{t}(x_{t},u_{t}) \tag{1a}\] \[\text{s.t.}\quad x_{t+1}=g_{t}(x_{t},u_{t})+w_{t} \tag{1b}\] where \(x_{t}\) is the state, \(u_{t}\) is the control, and \(w_{t}\) is the disturbance. At each time \(t\), the learner makes the decision based on her observations and memory, and then receives the next state \(x_{t+1}\) as well as the instantaneous cost \(c_{t}\). According to how \(w_{t}\) is generated, online control can be divided into two categories. One is online stochastic control, where \(w_{t}\) is sampled from a sub-Gaussian distribution. There have been rich studies in this branch, such as [1, 2, 3, 4] using model-based methods and [5, 6, 7] in a data-driven manner. The other is online nonstochastic control, where no assumptions are placed on the statistical properties of the disturbance. There have been many works solving online nonstochastic control in a wide range of scenarios, such as quadratic cost functions [8], nonlinear systems [9, 10], and time-varying systems [11, 12]. Most of them are model-based and very few consider data-driven methods. Departing from model-based control methods, which identify an explicit model of the system and then design a control policy based on it, data-driven schemes bypass the system identification step and learn control actions by leveraging the data directly. In applications that generate and store huge amounts of process data at every time instant, leveraging these data would be beneficial [13]. Over the last few years, data-driven control methods have been utilized in various control problems, including data-driven stabilization [14, 15, 16, 17], data-driven LQR [5, 18, 19, 6], data-driven model predictive control (MPC) [20, 21, 22], and so on. However, existing studies on online nonstochastic control with data-driven approaches either only guarantee closed-loop stability [15] or rely on a finite controller pool [16]; the performance and regret guarantee of data-driven online nonstochastic control with a continuous controller pool is still unclear. **Contributions.** In this work we address problem (1) on a linear time-invariant (LTI) system in a data-driven manner. We propose an online nonstochastic control algorithm by integrating a controller called ADAC with a data-driven system representation. Instead of performing system identification, we arrange a noise-free trajectory into Hankel matrices and update the controller directly from data and observations.
To the best of our knowledge, this is the first algorithm that introduces data-driven methods into online nonstochastic control. Theoretically, we prove that our algorithm achieves an \(\tilde{\mathcal{O}}(\sqrt{T})\) regret, which is on par with model-based online nonstochastic control. The rest of this paper is organized as follows. In Section II we formulate the problem and introduce some tools used in our algorithm. We present our method and the associated analysis in Section III. _Notations._ For a vector \(x\), we use \(\|x\|\) to denote its Euclidean norm. For a matrix \(A\), \(A[-1,:]\) denotes its last row, \(A^{T}\) denotes its transpose, \(\rho(A)\) denotes its spectral radius, and \(\|A\|\) denotes its operator norm. For a variable \(M=\{M^{(1)},\ldots,M^{(L)}\}\), \(M_{t}\) means \(M\) is calculated in the \(t\)-th iteration, and \(M_{t}^{(a:b)}=\{M_{t}^{(a)},\ldots,M_{t}^{(b)}\}\). For a set \(\mathbb{X}\), denote \(\dim{(\mathbb{X})}=\max_{x,y\in\mathbb{X}}\|x-y\|\). For a sequence \(x=\{x_{k}\}_{k=0}^{N-1}\), \(x_{a:b}\) denotes \(\{x_{k}\}_{k=a}^{b}\) listed in a column, and we define a Hankel matrix associated with \(x\) with length \(L\) as: \[H_{L}(x)=\begin{bmatrix}x_{0}&x_{1}&\cdots&x_{N-L}\\ x_{1}&x_{2}&\cdots&x_{N-L+1}\\ \vdots&\vdots&\ddots&\vdots\\ x_{L-1}&x_{L}&\cdots&x_{N-1}\end{bmatrix} \tag{2}\] We adopt \(\mathcal{O}\) notation in the theoretical analysis. Given a trajectory \(\{x_{0},u_{0},\ldots,x_{N-1},u_{N-1}\}\) of (3), we say that \(u=\{u_{0},\ldots,u_{N-1}\}\) is persistently exciting of order \(L\) if \(\mathrm{rank}(H_{L}(u))=mL\). For a control signal \(u_{t}\) and a given \(K\), we denote \(u_{t}^{(c)}=u_{t}-Kx_{t}\), meaning that \(u_{t}^{(c)}\) is generated by a well-designed controller. ## II Problem Formulation and Preliminaries ### _Problem Formulation_ Consider the online control problem on an LTI system \[\min J=\sum_{t=0}^{T-1}c_{t}(x_{t},u_{t})\] (3a) s.t. \[x_{t+1}=Ax_{t}+Bu_{t}+w_{t}, \tag{3b}\] where \(x_{t}\in\mathbb{R}^{n}\), \(u_{t}\in\mathbb{R}^{m}\), and \(w_{t}\in\mathbb{R}^{n}\) is a disturbance. Without loss of generality, assume \(n\geq m\). Here the true dynamics \(A\in\mathbb{R}^{n\times n}\) and \(B\in\mathbb{R}^{n\times m}\) are unknown and we have no prior knowledge on how \(w_{t}\) is generated other than its boundedness. We also assume \((A,B)\) is controllable. **Assumption 1**.: _The pair \((A,B)\) is controllable._ **Assumption 2**.: _The disturbance \(w_{t}\) is bounded by \(\|w_{t}\|\leq\varepsilon\)._ At every time step \(t\), the learner plays \(u_{t}\) and then observes \(x_{t+1}\) and the instantaneous cost \(c_{t}(x_{t},u_{t}):\mathbb{R}^{m+n}\to[0,\infty)\). It should be noted that \(c_{t}\) is revealed only after playing \(u_{t}\), meaning it could be given in an adversarial form. **Assumption 3**.: _At any time step \(t\), \(c_{t}(x_{t},u_{t})\) is a convex and differentiable function of \(x_{t}\) and \(u_{t}\), and \(\|\nabla_{x_{t},u_{t}}c_{t}\|\leq G\) with some finite constant \(G>0\)._ Since we have no assumption on the statistical properties of \(w_{t}\), it is impossible to chase the "minimal" total cost \(\sum_{t=1}^{T}c_{t}(x_{t},u_{t})\) over a finite horizon \(T\) [23].
However, if we fix a reference policy class \(\Pi\) and suppose that there exists an oracle that knows everything a priori and is able to choose the best policy \(\pi^{*}=\arg\min_{\pi\in\Pi}\sum_{t=1}^{T}c_{t}(x_{t}^{\pi},u_{t}^{\pi})\) based on this prior knowledge, then we can use the _policy regret_ between \(\mathcal{A}\) and \(\Pi\) to measure how good \(\mathcal{A}\) is compared to \(\pi^{*}\). The policy regret is defined as follows, and the \(\Pi\) used throughout this paper will be given in Section III-B. **Definition II.1** (**Policy Regret**).: _Given a policy class \(\Pi\), the policy regret between the learner's policy \(\mathcal{A}\) and \(\Pi\) over \(T\) steps is defined as_ \[\mathtt{rgt}_{T}(\mathcal{A},\Pi)=\sum_{t=1}^{T}c_{t}(x_{t},u_{t})-\min_{\pi \in\Pi}\sum_{t=1}^{T}c_{t}(x_{t}^{\pi},u_{t}^{\pi}) \tag{4}\] _where \(x_{t}\) is the actual state, \(u_{t}\) is generated by \(\mathcal{A}\), and \((u_{t}^{\pi},x_{t}^{\pi})\) are the artificial state sequence and controls under the policy \(\pi\), i.e., \(x_{t+1}^{\pi}=Ax_{t}^{\pi}+Bu_{t}^{\pi}+w_{t}\), \(x_{0}^{\pi}=x_{0}\), \(u_{t}^{\pi}=\pi(x_{t}^{\pi})\)._ Our goal is to design a data-driven online control policy that is able to reach a sub-linear regret, i.e., \(\mathtt{rgt}_{T}(\mathcal{A},\Pi)\leq\tilde{\mathcal{O}}(T^{\alpha})\) with \(\alpha<1\). Before presenting our method, we introduce some preliminaries used in our algorithm. ### _Data-driven Representation of LTI Systems_ In this paper we consider the case where we are able to acquire a noise-free trajectory with \(w_{t}=0\) before putting the system into practice. This case is widely considered in much of the research on data-driven control, such as [21, 22]. As a result, we can give a precise data-driven representation of the system before starting the control task. Given a noise-free trajectory \(\{x,u\}=\{(x_{0},u_{0}),\ldots,(x_{N-1},u_{N-1})\}\) from (3), the following theorem on the data-driven representation of (3) holds. **Theorem 1**.: _[_24_]_ _Suppose \(\{x,u\}\) is a noise-free trajectory of the system (3), where \(\{u\}\) is persistently exciting of order \(L+n\). Then, \(\{\hat{u},\hat{x}\}=\{(\hat{u}_{1},\hat{x}_{1}),\ldots,(\hat{u}_{L},\hat{x}_{L })\}\) is a noise-free trajectory of system (3) if and only if there exists \(\alpha\in\mathbb{R}^{N-L+1}\) such that_ \[\begin{bmatrix}H_{L}(u)\\ H_{L}(x)\end{bmatrix}\alpha=\begin{bmatrix}\hat{u}\\ \hat{x}\end{bmatrix} \tag{5}\] The above theorem indicates that if we can obtain a noise-free trajectory \(\{x,u\}\) of an LTI system, then the Hankel matrices \(H_{L}(u),H_{L}(x)\) give a precise data-based representation of the system dynamics. Throughout this paper, we use this data-driven representation of LTI systems for online control. ### _Disturbance Action Controllers_ For the optimal control problem (3), it is well known that if we apply a linear controller \(u_{t}=Kx_{t}\), then \(J(K)=\sum_{t=0}^{T-1}c_{t}(x_{t}(K),u_{t}(K))\) is a nonconvex function of \(K\) [5, Lemma 2], making it hard to handle. A good substitute in such a case is the disturbance action controller (DAC) proposed in [11].
**Definition II.2** (**Disturbance Action Controller (DAC)**).: _Given the LTI system (3) and one of its stabilizing \(K\), i.e., \(\rho(A+BK)<1\), if we have historical access to \(w_{\tau}\), \(\tau\leq t\) at each time \(t\), then the DAC parameterized by \(M=[M^{(1)},\ldots,M^{(L)}]\) and \(K\) is_ \[u_{t}^{(DAC)}=Kx_{t}+\sum_{i=1}^{L}M^{(i)}w_{t-i} \tag{6}\] It can be shown that if \(c_{t}(x_{t},u_{t})\) is convex for all \(t\) and \(u_{t}\) is generated by the DAC, then \(J(M)=\sum_{t=0}^{T-1}c_{t}(x_{t}(M),u_{t}(M))\) is a convex function of \(M\) [12]. ## III Data-Driven Online Nonstochastic Control ### _Structure of Data-Driven Online Nonstochastic Control_ Our design has three stages: stabilize the system, build the data-driven representation, and update the control policy. The first two stages are carried out in a noise-free environment and the control task is set in a noisy environment. **Stage 0: Online stabilization.** In this stage we run the system (3) in a noise-free environment for \(N_{0}\) iterations with randomly chosen \(u_{t}\) and calculate a stabilizing \(K\). By [15] and [25], it turns out that if \(N_{0}\geq m+n\) and the \(u_{t}\) are independent of each other, then we are able to find a stabilizing \(K\) by _semi-definite programming_, in a data-driven manner. **Stage 1: Build data-driven representation.** In this stage we choose \(u_{t}=Kx_{t}+u_{t}^{(c)}\) with random \(u_{t}^{(c)}\) and collect an \(N\)-step trajectory \(\{x^{d},u^{d(c)}\}=\{x_{0},u_{0}^{(c)},\ldots,x_{N-1},\,u_{N-1}^{(c)}\}\). We use these data to build the Hankel matrices \(H_{L}(x^{d})\) and \(H_{L}(u^{d(c)})\). We use \(H_{L}(u^{d(c)})\) rather than \(H_{L}(u^{d})\) because the \(Kx_{t}\) part in \(u_{t}\) can be merged into \(\tilde{A}=A+BK\) and has no effect on the regret analysis. **Stage 2: Run control task and update the policy \(\mathcal{A}\).** In this stage we handle the control task in a noisy environment. At every time step \(t\) we make the decision \(u_{t}\) with the controller we design and play it, then observe \(x_{t+1}\) and \(c_{t}(x_{t},u_{t})\), and update the controller based on the observation. This structure is similar to the explore-then-commit (ETC) structure [26, 27] in online learning, where the learner first identifies the parameter \(\hat{\theta}=\text{SysId}(x_{1:N},u_{1:N})\) and then regards \(\hat{\theta}\) as the true \(\theta\) and runs a properly designed policy \(\pi(\cdot;\hat{\theta})\). The main difference between this data-driven framework and model-based ETC is that there is no estimation of \((A,B)\) during the exploration stage of our scheme, while model-based ETC relies on a good estimation. By putting the exploration stage into a noise-free environment, we can obtain a noise-free trajectory in Stage 1. Therefore, \(H_{L}(x^{d})\) and \(H_{L}(u^{d})\) can be treated as an "accurate" representation of the system dynamics. To establish a sub-linear regret bound, we require \(H_{L}(x^{d})\) and \(H_{L}(u^{d})\) to have full row rank and a suitable size, as captured in the following assumption. **Assumption 4**.: \(L\geq n\)_, \((n-1)L+1<N\leq\sqrt{T}\)._ This assumption also implies that if the policy \(\mathcal{A}\) can reach an \(\tilde{\mathcal{O}}(\sqrt{T})\) regret bound w.r.t. the reference class we choose, then the first two stages do not affect the order of the regret bound. To achieve this, the core problems are how to choose the reference policy class and how to update \(\mathcal{A}\).
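A minimal sketch (ours, not the authors' code) of Stage 1 follows: simulate a noise-free trajectory, check the persistent-excitation condition of Theorem 1, and store the Hankel matrices. For simplicity the toy system is already stable, so we take \(K=0\) and \(u_{t}=u_{t}^{(c)}\); the true \((A,B)\) is known only to the simulator.

```python
import numpy as np

def hankel(traj, L):
    """Block Hankel matrix (2): traj has shape (N, d); output (d*L, N-L+1)."""
    N, d = traj.shape
    return np.column_stack([traj[j:j + L].reshape(d * L) for j in range(N - L + 1)])

def persistently_exciting(u, L):
    """u is persistently exciting of order L iff rank(H_L(u)) = m L."""
    return np.linalg.matrix_rank(hankel(u, L)) == u.shape[1] * L

rng = np.random.default_rng(0)
n_dim, m_dim, N, L = 3, 1, 60, 6
A = np.diag([0.9, 0.5, 0.2])                 # stable, so K = 0 suffices
B = rng.standard_normal((n_dim, m_dim))
x, u = np.zeros((N, n_dim)), rng.standard_normal((N, m_dim))
for t in range(N - 1):
    x[t + 1] = A @ x[t] + B @ u[t]           # noise-free rollout of (3b)

assert persistently_exciting(u, L + n_dim)   # order L + n, as in Theorem 1
H_u, H_x = hankel(u, L), hankel(x, L)        # the data-driven representation
```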
### _Accumulated Disturbance Action Controller_ We now introduce the policy class considered in the control task. To begin with, recall that \(u_{t}^{(DAC)}=Kx_{t}+\sum_{i=1}^{L}M^{(i)}w_{t-i}\). Although we can find a stabilizing \(K\) of \((A,B)\) with \(\tilde{\mathcal{O}}(n)\) noise-free data, one challenge for the DAC in our setting, departing from the model-based one, is that it is intractable to calculate \(w_{t}\) using \(x_{t+1}\), \(x_{t}\), \(u_{t}\), \(H_{L}(x^{d})\) and \(H_{L}(u^{d(c)})\), so the DAC with disturbance input no longer works. However, we can get access to the "accumulated" disturbance \(w^{\prime}_{t}\) by leveraging the properties of Hankel matrices. Set the initial state of Stage 2 as 0. Let \(u_{t}=Kx_{t}+u_{t}^{(c)}\) and define \(\tilde{A}=A+BK\); then we have \[x_{t}=\sum_{i=1}^{t}\tilde{A}^{i-1}Bu_{t-i}^{(c)}+\sum_{i=0}^{t-1}\tilde{A}^{i }w_{t-1-i}. \tag{7}\] Define \(w^{\prime}_{t}=\sum_{i=0}^{t}\tilde{A}^{i}w_{t-i}\) and \(x^{z}_{t}=\sum_{i=1}^{t}\tilde{A}^{i-1}Bu_{t-i}^{(c)}\); then we have \(x_{t}=x^{z}_{t}+w^{\prime}_{t-1}\). Moreover, \(w^{\prime}_{t}\) can be regarded as the accumulation of \(w_{\tau}\) from \(\tau=0,\ldots,t\), and \(x^{z}_{t}\) satisfies \[x^{z}_{t}=\tilde{A}x^{z}_{t-1}+Bu_{t-1}^{(c)},\quad x^{z}_{0}=x_{0}=0. \tag{8}\] In other words, \(\{x^{z}_{\tau},u^{(c)}_{\tau}\}_{\tau\leq t}\) is a noise-free trajectory of the system \((\tilde{A},B)\). Observe that \(\{x^{d},u^{(c)}\}\) obtained during Stage 1 is also a trajectory of \((\tilde{A},B)\), thus by Theorem 1, we know there exists an \(\alpha\in\mathbb{R}^{N-L+1}\) such that \[\begin{bmatrix}u^{(c)}_{t-L+1:t}\\ x_{t-L+1:t}-w^{\prime}_{t-L:t-1}\end{bmatrix}=\begin{bmatrix}H_{L}(u^{d(c)}) \\ H_{L}(x^{d})\end{bmatrix}\alpha \tag{9}\] This provides a way to calculate \(w^{\prime}_{t}\) iteratively, which will be shown in Algorithm 2. With access to \(w^{\prime}_{t}\), we can define a class of controllers named _Accumulated Disturbance Action Controller_ in a formulation similar to the DAC. An assumption is given hereafter to make gradient-based methods feasible. **Definition III.1** (**Accumulated Disturbance Action Controller (ADAC)**).: _Given the LTI system (3), one of its stabilizing \(K\) and a set \(\mathbb{M}\), if we have historical access to \(w^{\prime}_{\tau}=\sum_{i=0}^{\tau}{(A+BK)^{i}w_{\tau-i}}\), \(\tau<t\) at each time \(t\), then the ADAC parameterized by \(M=[M^{(1)},\ldots,M^{(L)}]\in\mathbb{M}\) and \(K\) is_ \[u_{t}^{(ADAC)}=Kx_{t}+\sum_{i=1}^{L}M^{(i)}w^{\prime}_{t-i}=Kx_{t}+u_{t}^{(c)},\quad\text{where }u_{t}^{(c)}:=\sum_{i=1}^{L}M^{(i)}w^{\prime}_{t-i} \tag{10}\] **Assumption 5**.: \(\mathbb{M}\) _is a convex set with \(\dim{(\mathbb{M})}<\infty\)._ Notice that the input of the ADAC is composed of state feedback plus a bounded control signal \(u_{t}^{(c)}\). The boundedness of \(u_{t}^{(c)}\) is shown in the proof of Lemma III.3. Therefore, the ADAC stabilizes the system as long as \(K\) is stabilizing. Moreover, Section III-D shows that \(J(M)=\sum_{t=0}^{T-1}c_{t}(\tilde{x}_{t}(M),\tilde{u}_{t}(M))\) is a convex function w.r.t. \(M\), which implies that the ADAC shares all the good properties of the DAC. Throughout this paper, we set the ADAC as the reference policy class, and consider the regret \(\texttt{rgt}_{T}(\mathcal{A},\Pi^{\text{ADAC}})\), where \(u_{t}\) is generated from \(\mathcal{A}\) and the length of \(M\) used in \(\Pi^{\text{ADAC}}\) is \(L\). ### _Adaptive Controller with OGD_ We next design our online controller based on the ADAC.
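As a building block for the controller below, the following is a sketch of the AccNoise computation implied by (9) and Lemma III.1. It assumes the block-row Hankel layout of the previous sketch; the least-squares solve is our choice, and any \(\alpha\) that fits the constraints exactly yields the same prediction, since the noise-free dynamics are deterministic.

```python
import numpy as np

def acc_noise(u_c_window, x_first, x_next, w_prev, H_u, H_x, n_dim, m_dim):
    """Recover w'_t from (9). u_c_window stacks u^(c)_{t-L+2:t} (L-1 inputs);
    x_first = x_{t-L+2}; w_prev = w'_{t-L+1}; x_next = x_{t+1}."""
    L = H_x.shape[0] // n_dim
    # alpha reproduces the known inputs and the first noise-free state ...
    lhs = np.vstack([H_u[:m_dim * (L - 1)], H_x[:n_dim]])
    rhs = np.concatenate([u_c_window.reshape(m_dim * (L - 1)), x_first - w_prev])
    alpha, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    # ... and the last block row of H_L(x^d) then predicts x^z_{t+1}.
    return x_next - H_x[-n_dim:] @ alpha      # w'_t = x_{t+1} - x^z_{t+1}
```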
The core idea of our algorithm is to use an adaptive ADAC whose parameter \(M\) is updated by online gradient descent (OGD) as \[u_{t} =Kx_{t}+\sum_{i=1}^{L}M_{t}^{(i)}w^{\prime}_{t-i} \tag{11a}\] \[M_{t+1} =\text{Proj}_{\mathbb{M}}\big{(}M_{t}-\lambda\nabla_{M_{t}}f_{t}(M_{t})\big{)} \tag{11b}\] with stepsize \(\lambda\). As discussed in [23, 10], a crucial problem here is how to define \(f_{t}(M)\). The goal of using OGD is to let \(M_{t}\) chase \(M^{*}\), where \(M^{*}=\arg\min_{M\in\Pi^{\text{ADAC}}}\sum_{t=1}^{T}c_{t}(x^{\pi}_{t},u^{\pi}_{t})\), so we define \(f_{t}(M)\) as \[f_{t}(M)=c_{t}(\tilde{x}_{t}(M),\tilde{u}_{t}(M)) \tag{12a}\] \[s.t. \begin{cases}\tilde{x}_{\tau+1}(M)=A\tilde{x}_{\tau}(M)+B\tilde{u}_{ \tau}(M)+w_{\tau}\\ \tilde{u}_{\tau}(M)=K\tilde{x}_{\tau}(M)+\sum_{i=1}^{L}M^{(i)}w^{\prime}_{\tau-i }\\ \tilde{x}_{0}(M)=x_{0}\end{cases}\tau\leq t \tag{12b}\] In other words, (12b) represents a trajectory starting from the initial state of Stage 2, stopping at \(t\) and driven by a stationary policy \(\pi\in\{\Pi^{\text{ADAC}}|M\}\), and \(f_{t}(M)\) is the terminal instantaneous cost of this trajectory. Note that \(f_{t}(M_{t})\) and \(f_{t}(M^{*})\) have the same expression, which is not equal to \(c_{t}(x_{t},u_{t})\). To obtain \(f_{t}(M_{t})\), we should "simulate" \(\tilde{x}_{t}(M_{t})\) and \(\tilde{u}_{t}(M_{t})\) as in (12b) while replacing \(M\) by \(M_{t}\), and take the terminal instantaneous cost as \(f_{t}(M_{t})\). Although we do not have an estimate of \((A,B)\), there is still a data-driven way to simulate this trajectory by applying (9) with a stationary \(M_{t}\) iteratively. The pseudo-code of this process is given in Algorithm 3. Finally, we are ready to present our policy \(\mathcal{A}\) in Algorithms 1, 2, and 3. We design the controller based on the ADAC, which is also chosen as the reference policy class in the regret analysis, and update its parameters by online gradient descent. The function AccNoise() calculates \(w_{t}^{\prime}\), and PiTraj() calculates \(\tilde{x}_{\tau}(M_{t})\), \(\tau\leq t\), defined in (12b). Both are data-driven. ```
Input: Time horizons \(N\), \(T\), dimensions \(m\), \(n\), length \(L\), set \(\mathbb{M}\), gradient bound \(G\) in Assumption 3, stabilizing \(K\), Hankel matrices \(H_{L}(u^{d(c)})\), \(H_{L}(x^{d})\), initial points \(x^{st}\), \(u^{st}\).
1 Initialization: \(t=0\), \(x_{0:L}=x^{st}\), \(u_{0:L-1}=u^{st}\), \(w_{0:L-1}^{\prime}=0\), \(M_{i}^{(j)}=\texttt{rand}\left(\texttt{m},\texttt{n}\right)\), \(\forall 1\leq i,j\leq L\), \(\lambda=\frac{\dim\left(\mathbb{M}\right)}{G\sqrt{T}}\).
2 for \(t=L,L+1,\ldots,T+L-1\) do
3 Set \(u_{t}=Kx_{t}+u_{t}^{(c)}\), where \(u_{t}^{(c)}=\sum_{i=1}^{L}M_{t}^{(i)}w_{t-i}^{\prime}\).
4 Receive \(x_{t+1}\) and \(c_{t}(x_{t},u_{t})\).
5 Calculate \(w_{t}^{\prime}=\text{AccNoise}(u_{t-L+2:t}^{(c)}\), \(x_{t-L+2}\), \(x_{t+1}\), \(w_{t-L+1}^{\prime}\), \(H_{L}(u^{d(c)})\), \(H_{L}(x^{d}))\).
6 Calculate \(\tilde{x}_{t}(M_{t})=\text{PiTraj}(w_{0:t-1}^{\prime}\), \(M_{t}\), \(K,x_{0:L}\), \(u_{0:L-1}^{(c)}\), \(H_{L}(u^{d(c)})\), \(H_{L}(x^{d}),t)\), \(\tilde{u}_{t}(M_{t})=K\tilde{x}_{t}(M_{t})+u_{t}^{(c)}\) and \(f_{t}(M_{t})=c_{t}(\tilde{x}_{t}(M_{t}),\tilde{u}_{t}(M_{t}))\).
7 OGD: \(M_{t+1}=\text{Proj}_{\mathbb{M}}\big{(}M_{t}-\lambda\nabla f_{t}(M_{t})\big{)}\).
8 end for
```
**Algorithm 1** Data-Driven Online Nonstochastic Control Policy \(\mathcal{A}\) with Noise-free Exploration ### _Theoretical Results_ The following is the main theoretical result of this paper. **Theorem 2** (\(\mathcal{O}(\sqrt{T})\) Regret).: _Suppose that Assumptions 1, 2, 3, 4 are satisfied. Let \(C_{1}=\dim\left(\mathbb{M}\right)G\Big{(}\frac{\|B\|\rho(1+\|K\|)\sqrt{L}}{(1-\rho)^{3}}+1\Big{)}\), where \(\rho\) denotes \(\rho(\tilde{A})\). Then the control policy [Algorithm 1], denoted as \(\mathcal{A}\), guarantees that_ \[\texttt{rgt}_{T}(\mathcal{A},\Pi^{\text{ADAC}})\leq C_{1}\sqrt{T}\] This result proves that, over the three stages displayed in Section III-A, our algorithm guarantees an \(\mathcal{O}(m+n+N+\sqrt{T})=\tilde{\mathcal{O}}(\sqrt{T})\) regret bound. As a comparison, [11] proves that if Assumption 3 holds and the learner is able to acquire an accurate model of the linear system, then model-based online nonstochastic control achieves an \(\mathcal{O}(\sqrt{T})\) regret bound. Moreover, from \(C_{1}\) we see that our algorithm has a polynomial dependence on other fixed parameters, which model-based methods also enjoy. Therefore, data-driven online nonstochastic control has the same performance as model-based methods. Here we provide a sketch of the proof of Theorem 2; more details can be found in Appendix A. To begin with, we prove the following lemma. **Lemma III.1**.: _Suppose that Assumptions 1, 2, 4 hold. Then, for any \(L\leq t\leq T+L-1\), the following two statements about a sequence \(\{x_{t-L+2},u_{t-L+2},\ldots,x_{t+1},u_{t+1}\}\) are equivalent._ * \(\{x_{t-L+2},u_{t-L+2},\ldots,x_{t+1},u_{t+1}\}\) _is a trajectory of (_3_)_ * _There exists an_ \(\alpha\in\mathbb{R}^{N-L+1}\) _such that_ \[\begin{bmatrix}u_{t-L+2:t+1}^{(c)}\\ x_{t-L+2}-w_{t-L+1}^{\prime}\end{bmatrix}=\begin{bmatrix}H_{L}(u^{d(c)})\\ H_{L}(x^{d})[1,:]\end{bmatrix}\alpha\] (13) \[x_{t-L+i+1}-w_{t-L+i}^{\prime}=H_{L}(x^{d})[i,:]\alpha,\ \ 1\leq i\leq L\] (14) The proof of Lemma III.1 is given in Appendix A. This lemma tells us that the output of Algorithm 2 at time \(t\) is nothing but \(w_{t}^{\prime}=\sum_{i=0}^{t}\tilde{A}^{i}w_{t-i}\), and the return of Algorithm 3 at time \(t\) is the terminal state of the trajectory produced by (12). Therefore, \(f_{t}(M_{t})\) has the same formula as \(f_{t}(M^{*})\). Building on this, we prove that \(f_{t}(M_{t})\) is convex, and then bound \(\sum f_{t}(M_{t})-\sum f_{t}(M^{*})\) by convex optimization techniques. **Lemma III.2** (Bound of OGD).: _Suppose that Assumptions 1, 2, 3, 4, 5 hold and \(\Pi^{\text{ADAC}}\) is defined in (10). Then \(f_{t}(M_{t})\) is a convex function and Algorithm 1 guarantees the following bound:_ \[\sum_{t=0}^{T-1}f_{t}(M_{t})-\min_{M^{*}\in\Pi^{\text{ADAC}}}\sum_{t=0}^{T-1}f_{ t}(M^{*})\leq\dim{(\mathbb{M})}G\sqrt{T} \tag{15}\] The proof of this lemma is given in Appendix B. This lemma connects \(f_{t}(M_{t})\) and \(f_{t}(M^{*})\), which are essentially \(c_{t}(\tilde{x}_{t}(M_{t}),\tilde{u}_{t}(M_{t}))\) and \(c_{t}(\tilde{x}_{t}(M^{*}),\tilde{u}_{t}(M^{*}))\). To reach the goal of connecting \(c_{t}(x_{t},u_{t})\) and \(c_{t}(\tilde{x}_{t}(M^{*}),\tilde{u}_{t}(M^{*}))\), we need to quantify the difference between \(x_{t}\) and \(\tilde{x}_{t}(M_{t})\). **Lemma III.3** (Bound of \(x_{t}-\tilde{x}_{t}\)).: _Suppose that Assumptions 1, 2, 3, 4, 5 hold.
Then \(x_{t}\) and \(\tilde{x}_{t}(M_{t})\) generated by Algorithm 1 satisfy the following relationship for some constant \(C\):_ \[\|x_{t}-\tilde{x}_{t}(M_{t})\|\leq\frac{C}{\sqrt{T}} \tag{16}\] The proof of this lemma is given in Appendix C. Equipped with these lemmas, we finally give the regret bound on Stage 2 and the whole process; see Appendix D. ## IV Conclusions In this work we investigate an online nonstochastic control problem over a linear time-invariant system with unknown dynamics and bounded disturbance. We first design a controller termed the ADAC and then propose a data-driven algorithm based on it. One important extension of this work involves the case where a noise-free trajectory is not available for the data-driven representation of the system. In [12], a model-based algorithm is proposed to reach \(\tilde{\mathcal{O}}(T^{\frac{5}{4}})\) regret, but how to solve this in a data-driven manner is unclear. Some other interesting directions worth further research in data-driven online nonstochastic control include data-driven LQR with adversarial noise, data-driven online nonstochastic control with safety constraints, and so on.